There Will Never Be a Conscious Robot, Part 2

[Photo via @500px]

Wow, indeed 🙂

Dr. Stuart Hameroff has discovered the mechanism envisioned by Roger Penrose.  What I’m saying is astonishing: there is a structure in the brain that operates on the grain level of quantum gravity! Think about that for a minute. In all the experiments I have ever seen involving quantum processes, one major factor has been the extremely cold temperatures and highly insulated environments necessary to create and maintain observable quantum effects, as with the D-Wave computer, for example. Living tissue is far too warm and ‘noisy’ for the maintenance of quantum superposition coherency. Not so, according to Drs. Hameroff and Penrose.

Stuart Hameroff is an anesthesiologist as well as a professor at the University of Arizona and Director of the Center for Consciousness Studies, so consciousness is basically his middle name. In the course of his studies, he discovered that anesthesia seems to operate by disrupting a very specific type of electrical activity in the brain: the higher frequency gamma synchronies. These electrical potentials arise in a very different manner from the neuronal axonal spikes associated with dendritic chemical synapses.

Gamma synchrony, first of all, is a synchrony; that is, the neurons are not firing consecutively, as with the chemically induced synaptic spikes, but simultaneously, across several hundred thousand neurons at a time. How do they do this? It appears that there is another kind of synaptic junction in brain dendrites, in addition to the familiar chemical type involving neurotransmitters and electrolytes. Gamma synchronies travel as quantum superpositions, tunneling instantaneously through gap junctions (directly conjoined portals in the dendritic membranes) rather than firing consecutively.

How these superpositions arise will be discussed, but first let us understand that this is not the typical kind of dendritic activity we all learned in grade school. Rather than an exchange of neurotransmitters at synaptic junctions across a space between the neurons, one by one, the gamma potential arises simultaneously in a large group of neurons and is shared directly from neuron to neuron through the gap junctions with no space in between:

[Diagram: a gap junction directly joining two dendrites]

The gamma potentials arise from lattice structures inside the brain dendrites, microtubules, seen as the horizontal structures inside the circle in the diagram above. The diagram also shows the microtubules emitting the high frequency gamma waves and how the waves pass through the gap junction. These waves constitute the synchronies, which occur at a frequency of 40-80+ per second. Each superposition potential, each individual wave, is able to quantum tunnel because of the time symmetry available as the wave rises into the density matrix (nonlocality), thereby eliminating the need for consecutive spikes. This phenomenon was demonstrated in ‘Libet’s Case of Backwards Referral in Time’ (Dennett, Consciousness Explained, 153-166) but denied because of the lack of an appropriate physical mechanism for backwards time referral of perception. Here we have it!
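
As an aside, the timing alone shows why consecutive firing could never carry a brain-wide synchrony. Here is a back-of-the-envelope sketch in Python; the per-synapse delay and the neuron count are rough illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: serial chemical relay vs. one gamma cycle.
# The figures below are rough illustrative assumptions.

CHEMICAL_SYNAPTIC_DELAY_S = 0.5e-3   # ~0.5 ms per chemical synapse (typical textbook figure)
N_NEURONS = 300_000                  # "several hundred thousand neurons" per synchrony

serial_relay_s = N_NEURONS * CHEMICAL_SYNAPTIC_DELAY_S
gamma_cycle_s = 1 / 40               # one cycle of 40 Hz gamma

print(f"Serial relay across {N_NEURONS:,} synapses: {serial_relay_s:.0f} s")
print(f"One 40 Hz gamma cycle: {gamma_cycle_s * 1e3:.0f} ms")
# ~150 s vs. 25 ms: consecutive firing is far too slow, which is why
# direct gap-junction coupling is invoked instead.
```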

Now here’s where it gets weird.

Each microtubule is made of individual protein dimers called tubulin. The tubulin dimers arrange themselves in precise geometric patterns to form Fibonacci spiral lattice tubes. Tubulin is a kind of protein containing aromatic rings around which the protein folds, creating hydrophobic pockets in the interior of the folds. These are electrically insulated areas: water cannot enter, so the noisy electrical interference associated with water cannot disturb the more subtle electrical processes inside the hydrophobic pockets of the dimer. As I said, the dimers arrange themselves in a precise geometrical pattern so that their connections are regular and their mass-energies consistent. This is important for the formation and maintenance of sustained coherent gamma potentials along the individual microtubules and among the microtubules en masse. The diagram below is a beautiful illustration of the geometrical order of a portion of a microtubule:

[Diagram: the geometric lattice of tubulin dimers in a portion of a microtubule]
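
That geometrical order can be sketched in a few lines of code. A minimal sketch, assuming the standard textbook microtubule figures (13 protofilaments, roughly 25 nm outer diameter, an 8 nm tubulin dimer repeat, a 3-start helix); the function name is mine:

```python
import math

N_PROTOFILAMENTS = 13   # the standard microtubule lattice
DIMER_RISE_NM = 8.0     # tubulin dimer repeat along one protofilament
HELIX_STARTS = 3        # 3-start helix: one trip around climbs 3 dimer rows
RADIUS_NM = 12.5        # ~25 nm outer diameter

def dimer_position(protofilament: int, row: int):
    """(x, y, z) in nm of one tubulin dimer on the cylindrical lattice."""
    theta = 2 * math.pi * protofilament / N_PROTOFILAMENTS
    # stepping around the cylinder also climbs: the source of the spiral
    z = row * DIMER_RISE_NM + protofilament * HELIX_STARTS * DIMER_RISE_NM / N_PROTOFILAMENTS
    return (RADIUS_NM * math.cos(theta), RADIUS_NM * math.sin(theta), z)

# one ring of 13 dimers: each neighbour sits a little higher than the last
for pf in range(N_PROTOFILAMENTS):
    x, y, z = dimer_position(pf, row=0)
    print(f"pf {pf:2d}: x={x:6.2f}  y={y:6.2f}  z={z:5.2f} nm")
```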

Inside the hydrophobic pockets there form electric dipoles, induced by adjacent dimers. The electron clouds inside one dimer repel the electrons in the neighboring dimer to produce an induced dipole. This is called a London force, a type of van der Waals force. It is a quantum physical effect, not a chemical one. The result is a kind of electrical ‘switch’ in which the poles oscillate back and forth. The above diagram looks like a set of polarized tubulin dimers…that is, if you imagine you could actually see the London dipoles. These quantum forces may also exist in a state of superposition, that is, a state of being in both polarities at the same time. The diagrams below illustrate the London van der Waals force and how it creates the electrical switching potential among tubulin dimers.

[Diagrams: the London van der Waals force between adjacent dimers, and the resulting electrical switching among tubulin dimers]
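
The ‘switch in superposition’ idea can be written down directly. A minimal sketch in Python, treating one dimer’s London dipole as a two-state quantum system (a qubit); the basis labels are mine:

```python
import numpy as np

# The two classical polarities of one dimer's London dipole
POLE_A = np.array([1, 0], dtype=complex)
POLE_B = np.array([0, 1], dtype=complex)

# The quantum 'switch' in superposition: both polarities at once
psi = (POLE_A + POLE_B) / np.sqrt(2)

# Born rule: the squared modulus of each amplitude is the probability
# of finding that polarity if the superposition is reduced
print("P(pole A) =", abs(np.vdot(POLE_A, psi)) ** 2)   # 0.5
print("P(pole B) =", abs(np.vdot(POLE_B, psi)) ** 2)   # 0.5
```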

The gamma potential, then, arises via the superposition state of London force dipoles, tunneling instantaneously through hundreds of thousands of neurons simultaneously. O.M.G. Hameroff calls this kind of network a hyperneuron or dendritic web.

The big problem with this theory is how the superposition resolves on its own, without an objective measurement or measurer. Penrose calls this phenomenon ‘objective reduction’ and has coined the term ‘orchestrated objective reduction’ for the extremely organized and precise process occurring in the brain among the gorgeously, amazingly, mind-blowingly beautifully arranged material substrate of the dendritic microtubules. As discussed in Part 1, the objective reduction of the tunneling gamma superposition wave occurs on the basis of the total mass-energy involved in the superposition separation (i.e. tubulin mass), and the amount of time the superposition can be maintained, given a Planck length superposition separation distance: E = ℏ/t:

[Diagrams: the relation E = ℏ/t between superposition mass-energy and reduction time]

High amplitude gamma potentials resolve more quickly, resulting in a greater number of arguably more intense conscious moments per second (closer to 80, say), possibly with a resultant perception of time passing more slowly, much as increased frames per second yield slow motion in film photography.
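
The arithmetic here is easy to check. A minimal sketch, assuming nothing beyond ℏ and the moment rates quoted above:

```python
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def self_energy(frames_per_second: float) -> float:
    """Invert E = hbar / t: the superposition self-energy implied
    by a given rate of conscious moments."""
    t = 1.0 / frames_per_second      # lifetime of each superposition
    return HBAR / t

for hz in (40, 80):
    print(f"{hz} moments/s -> t = {1000 / hz:.1f} ms, E = {self_energy(hz):.2e} J")
# Doubling the energy halves the lifetime: higher-amplitude potentials
# resolve faster, giving more frames per second, as described above.
```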

This process of orchestrated objective reduction of gamma synchronies arising as quantum superpositions in dendritic microtubules is the production of conscious frames of reality, at a rate of 40-80+ per second:

Consciousness is a process in fundamental spacetime geometry, coupled to brain function.

Penrose suggests that Platonic information embedded in Planck scale geometry pervades the universe and is accessible to our conscious process.

~ Stuart Hameroff

Some thoughts on Part 2: on the basis of the above theory, it seems that our conscious process might work in concert with Planck scale geometry to create meaning. I would associate the gamma synchrony input with the ‘immediacy’ concept of Sartre and the further processing of this information with the ‘reflective’ process defined by Sartre. The conscious pilot described by Hameroff seems to be related to the ‘narrative center of gravity’ envisioned by Daniel Dennett.

Finally, what is the potential for quantum nanotechnology, the D-Wave chip, to run an algorithm for consciousness? Stay tuned for Part 3.

There Will Never Be A Conscious Robot: Part 1

[Photo via @500px]

You can find Part 2 here.

A while back, I began to explore the origin of consciousness in the work of Roger Penrose and Stuart Hameroff; check out my blog posts, entitled Is You Is O’ Is You Ain’t Conscious?, A Brief History of the Density Matrix, The Density Matrix, and A Note on R and the 2nd Law of Thermodynamics. These notes approach the problem from the point of view of mathematics, first of all, and second, specifically from the model given by quantum physics.

In his books, The Emperor’s New Mind and Shadows of the Mind, Penrose took me on a journey through the limitations of mathematical knowledge in terms of creating an algorithm, that is, an organized, logical, consistent ‘formal’ system, for producing consciousness. On the basis of Kurt Gödel’s awesome, all-powerful Incompleteness Theorem, Penrose concludes that consciousness is not computable, and he points out that there are many, many concepts that are not computable; this is nothing new, Turing’s ‘halting problem’ being the primary example. In a nutshell, then, Penrose uses Gödel’s mighty reductio ad absurdum to demonstrate that there is no such thing as a provably true, consistent formal system: this sentence, indeed, is false.

This raises the question as to how we can know truth, since we cannot discover it through any formal logical system whatsoever. For instance, the formal proof that 1+1=2 is quite lengthy, and the Incompleteness Theorem shows that its consistency is not provable; while it is a proof within its system, it is not a provably certain one. And this is, in fact, the upshot of Gödel’s theorem, according to Penrose; i.e., that the Incompleteness Theorem is an unassailable refutation of the provable consistency of any and every formal system whatsoever, and therefore no formal system is reliably capable of generating that kind of intuitive grasp of truth that we all seem to possess. In effect, truth does not emerge from mathematics but rather, mathematics emerges from truth. Truth is important in that any algorithm that might generate consciousness should generate a true perception of the world. Without a mathematics whose consistency is provably true, we could never know for sure if our algorithm is reliable.

The point is that our intuitive grasp of truth, how we do know that 1+1=2, is obviously not on the basis of, i.e. not generated by, any formal mathematical system or any algorithm based on any such system, and never will be, because of the great Kurt Gödel. Further, Penrose supposes, perhaps a new, more powerful mathematics will be necessary to approach the problem of how consciousness arises from ordinary matter, but this does not mean that the solution will be computable. This obviously has strong implications for AI and the possibility of developing a conscious computer.

What Penrose does find is unbelievably fascinating. He raises the question whether there is some non-computable property of ordinary matter…photons, electrons, atoms…you know…that has been overlooked by science to this point. He asks whether such a property might be employed in the production of consciousness and, therefore, in the engineering of a conscious machine. This is an astounding question, the question of an innovator, the question a child might ask! I mean, most of us would probably assume that there couldn’t be such an unknown property, given the plethora of work that’s been done in theoretical physics. Even more astoundingly, he then finds this property in the standard quantum mechanical model.

Penrose proposes that the specific material property demonstrated by the phenomenon of quantum superposition involves a resolution or reduction process in the identification of a particle from out of the undifferentiated complex nonlocality of the density matrix, a deliberately ‘fuzzy’ mathematical terminology (|ψ⟩ + |φ⟩), into the well defined classical state we discover upon measurement. He argues that this resolution process (R) of the deterministic Schrödinger wave (U), which left to itself would continue evolving indefinitely, is not time-symmetric; that is, it cannot run both forwards and backwards in time. This is revealed because the R procedure is arbitrary and is not derived from the deterministic equation (U), so it must be an approximation of some process yet unknown. It is also revealed by the obvious common sense absurdities arising from a time-reversal of the process once R has occurred, including, for instance, the emission of a photon from a non-light source. Penrose reasons that this is similar and perhaps related to that other non-time-symmetric process in physics, the second law of thermodynamics involving entropy. Gravity provides a constraint or obstacle in spacetime so that entropy cannot flow backwards in time; likewise, quantum superpositions must be constrained by some kind of gravitational action.

Penrose brilliantly theorizes that the two-peaked state vector of the superposition is associated with two completely separate spacetime geometries, something for which Einstein’s theory of General Relativity has no expression. There is no way to mathematically express the relationship between the two separate spacetime geometries of the quantum superposition on Einstein’s relativistic curved spacetime tensor, making the mathematical formulation “profoundly obscure” and lending weight to the conjecture that it is not computable. Penrose argues that because of this separation of gravitational spacetime geometries, the superposition state is unstable, and that the energy required to maintain the superposition separation is inversely proportional to the time that the state can be maintained; that is, the greater the energy, the shorter the time. Further, the quantum superposition separation distance is supposed to be on the Planck scale, resulting in a quantum gravitational measurement for the resolution process, yielding extremely reasonable mathematical results when applied to the tiny masses involved: E = ℏ/t. Wow.
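
To see roughly why the numbers come out ‘reasonable’, here is a crude order-of-magnitude sketch. Taking E_G ~ Gm²/d for a mass m whose superposed copies are displaced by d is my simplification of Penrose’s displacement measure, and the tubulin figures are rough assumptions:

```python
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def or_lifetime_s(mass_kg: float, displacement_m: float) -> float:
    """Order-of-magnitude Penrose lifetime t = hbar / E_G, with the
    gravitational self-energy of the displacement taken as ~ G m^2 / d."""
    e_gravity = G * mass_kg ** 2 / displacement_m
    return HBAR / e_gravity

# rough assumption: one tubulin dimer (~110 kDa) displaced by its own ~8 nm size
tubulin_kg = 110_000 * 1.66e-27
t = or_lifetime_s(tubulin_kg, 8e-9)
print(f"single dimer alone: t ~ {t:.1e} s ({t / 3.15e7:.0f} years)")
# Tiny masses give absurdly long lifetimes; only a large orchestrated
# aggregate of dimers brings t down to the brain's millisecond scale.
```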

While I’m catching my breath, Sir Roger goes on to describe how consciousness can arise from this kind of objective reduction of a quantum superposition, in that each resolution  (R) of the superposition would be a conscious moment, like a little captured frame of reality…REAL reality, not just a reproduction or simulation.  The ‘decision’ that is made when the resolution occurs is the snapshot, the production, of a tiny frame of reality.  On this basis, he wonders whether there might be some structure or process in the brain that can produce and maintain quantum superpositions at the appropriate amplitudes.  He reasons that while low amplitude quantum superpositions might exist in the universe and might yield low-grade conscious moments individually, a sustained series of high amplitude objective reductions would require an insulated environment.

So, although mathematics is not a generator of consciousness, appropriately designed devices might make use of the rudimentary conscious perception inherent, at this most basic level, in particulate matter. Mathematics seems to be the thing that shapes consciousness, analogous to the manifold of string theory; the gravitational field is this mathematical thing, in reality, and quantum gravity is the mathematics of objectively resolving trajectories in this field. According to Penrose’s colleague, Stuart Hameroff, Director of the Center for Consciousness Studies at the University of Arizona, one could make the astounding assertion that,

…we are built into the universe, I mean, these objective reductions are…reorganizations, a reshuffling of the makeup of the universe…of material reality as it’s forming; we are part of that.

Again, wow.

Stuart Hameroff on Quantum Consciousness

Professor Hameroff discusses the quantum mechanism of consciousness in the brain and its implications, for instance, in terms of time-symmetrical processes (that is, processes running backwards in time) taking place in the brain, and the potential for quantum coherence of the mind outside the body.

A Note on R and the 2nd Law of Thermodynamics

Photo taken from NASA video, Solar Dynamics Observatory

What is really cool is that Penrose challenges the time-symmetry of quantum mechanics from two directions: the non-reversibility of the reduction of the Schrödinger wave and the irreversibility of entropy according to the second law of thermodynamics. He shows that while the linear superposition exists going forward in time for a particle in the classic, simple half-silvered mirror scenario, when one attempts to reverse the process once R has occurred, absurdities arise, such as the emission of a photon from the non-light source that absorbed it. R seems, therefore, to be related to entropy, another irreversible process.

Penrose discusses how entropy is related to gravity using the idea of light cones. In the theory of general relativity, he proposes that the gravitational field is a kind of all-pervasive refracting medium, tilting light cones in space, thereby influencing the direction of light (by curving it), and thus, influencing the relations of cause and effect. He reminds us that causal structure matters under general relativity in a way it does not under Newtonian physics, because in relativity theory light has a speed limit beyond which paradoxes occur in spacetime. This is crucial when one is considering quantum superpositions, as there are also spacetime paradoxes involved in their current mathematical description.

In The Emperor’s New Mind, Penrose states:

“…there was a huge gain in entropy due to gravitational contraction…all the remarkable lowness of entropy that we find about us – and which provides this most puzzling aspect of the second law – must be attributed to the fact that vast amounts of entropy can be gained through the gravitational contraction of diffuse gas into stars.” (417)

In fact, what is really bizarre is the assertion that such a low entropy state as the original singularity could have existed spontaneously, as the “natural” state of matter is a high-entropy state of thermal equilibrium. In discussing why entropy is not time-symmetric, Penrose notes that, on the basis of the phase-space model of entropy:

“Our phase-space argument gave us completely the wrong answer when we tried to apply it in the reverse direction of time!…What that argument actually showed was that for a given low-entropy state (say for a gas tucked in a corner of a box), then, in the absence of any other factors constraining the system, the entropy would be expected to increase in both directions in time away from the given state…The argument has not worked in the past direction in time precisely because there were such factors. There was indeed something constraining the system in the past. The tendency towards high entropy in the future is no surprise. The high-entropy states are, in a sense, the ‘natural’ states which do not need further explanation. But the low-entropy states in the past are a puzzle. What constrained the entropy of our world to be so low in the past? The common presence of states in which the entropy is absurdly low is an amazing fact of the actual universe that we inhabit – though such states are so commonplace and familiar to us that we do not normally tend to regard them as amazing. We ourselves are configurations of ridiculously low entropy!” (410)

So, entropy and gravity are related.
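
The phase-space counting in that quote can be made concrete with a toy gas. A minimal sketch, assuming N distinguishable particles that can sit in either half of a box; entropy here is just the log of the number of microstates compatible with the coarse count:

```python
from math import comb, log

N = 100  # toy gas of distinguishable particles in a two-halved box

def entropy(n_left: int) -> float:
    """Boltzmann entropy S = ln W for a given occupancy of the left half."""
    return log(comb(N, n_left))

print(f"all tucked in one half: S = {entropy(N):.2f}")    # W = 1, so S = 0
print(f"spread evenly:          S = {entropy(N // 2):.2f}")
# The counting argument says entropy should grow toward the 'natural'
# even split -- and, applied blindly, it says the same thing toward the
# past. That symmetric prediction is exactly the puzzle Penrose raises.
```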

Penrose notes that in the space-time of general relativity there is an ‘obstruction,’ called the WEYL tensor, which is the conformal part of the relativistic equations. This obstruction prevents uniformity of space-time in terms of his illustration using light cones; that is, the light cones cannot be aligned perfectly with one another because of WEYL:

“The tensor WEYL describes just half of the information – the ‘conformal’ half – that is contained in the full Riemann curvature tensor of space-time…Only if WEYL is zero can we rotate all the light cones into the Minkowskian arrangement [i.e. perfectly aligned with one another]. The tensor WEYL measures the gravitational field – in the sense of the gravitational tidal distortion – so it is precisely the gravitational field, in this sense, that provides the obstruction…” (Shadows of the Mind, 224)

The point about light cone tilting is that this action, this character of gravity had gone unnoticed in classical physics and was only identified in Einstein’s theory. Recent observations of gravitational lensing have provided good evidence for this hitherto “invisible” aspect of gravity. Analogously, perhaps there is some unseen, non-computational aspect of physical matter that is organized in biological design for the purpose of producing consciousness.

Penrose argues that instances of quantum superposition in nature are rare and unstable. The occurrence of coherent, prolonged quantum superposition in a biological organism must be the result of design and constitutes a novel use, in nature, of such special properties of matter that are not well understood. Stuart Hameroff alludes to this when he mentions that the objective reduction time of an individual particle in space, assuming it avoided decoherence, would be on the order of 10 million years, and that it would be of low frequency and low intensity. The implication is that for consciousness to occur on the basis of the objective reduction of coherent quantum superpositions, a special design which organizes this phenomenon is necessary.

The Density Matrix

[Photo via 500px.com]

“…mathematical understanding is something different from computation and cannot be completely supplanted by it. Computation can supply extremely valuable aid to understanding, but it never supplies actual understanding itself.” (Penrose, Shadows of the Mind, 199)

Basically, mathematics is a descriptive language, like any other language, and as such is not a generator of conscious perception.

Penrose’s discussions of quantum physics explore its mathematical “ability” to describe reality. This line of inquiry appears to be motivated by his conjecture that there must be some property of physical reality that is related to the production of consciousness which science has either overlooked or not discovered yet:

“…the phenomenon of consciousness can arise only in the presence of some non-computational physical process taking place in the brain. One must presume, however, that such (putative) non-computational processes would also have to be inherent in the action of inanimate matter…First, why is it that the phenomenon of consciousness appears to occur, as far as we know, only in (or in relation to) brains…Second, we must ask how it could be that such a seemingly (putative) ingredient as non-computational behaviour, presumed to be inherent – potentially at least – in the actions of all material things, so far has entirely escaped the notice of physicists?” (SotM, 216)

Penrose finds this ingredient in the diaphanous vicissitudes of the gravitational field because, “gravity actually influences the causal relationships between space-time events, and it is the only physical quantity that has this effect” (SotM, 219). Gravity really alters the geometry of space-time and of all particulate matter found within it. Because particles (or “lumps” of matter with specific mass-energies) in superposition also have a gravitational field which also must be part of the superposition, “the state involves a superposition of two different gravitational fields. According to Einstein’s theory, this implies that we have two different space-time geometries superposed!” (SotM, 337)

“The point is that we really have no conception of how to consider linear superpositions of states when the states themselves involve different space-time geometries. A fundamental difficulty with ‘standard theory’ is that when the geometries become significantly different from each other, we have no absolute means of identifying a point in one geometry with any particular point in the other – the two geometries are strictly separate spaces – so the very idea that one could form a superposition of the matter states within these two separate spaces becomes profoundly obscure.” (SotM, 337)

This is where the rubber really hits the road for the brilliant Sir Roger. It is stunning, awesome and totally amazing to witness the invention, right before our very eyes, of the beginning of a new mathematical description of reality.

The density matrix becomes important at this point because it is the mathematics of the density matrix, rather than simply the state vector, ψ, that is involved in the state-vector reduction or “measurement” process. The density matrix is a deliberately fuzzy description of multiple state vectors, a “probability mixture”:

“…with a density matrix, there is a (deliberate) confusion, in this description, between these classical probabilities, occurring in this probability-weighted mixture and the quantum-mechanical probabilities that would result from the R-procedure. The idea is that one cannot operationally distinguish between the two, so a mathematical description – the density matrix – which does not distinguish between them is operationally appropriate.” (SotM, 317)

As a description, Penrose calls the density matrix “elegant” and useful “for all practical purposes” (FAPP); however, as a complete description of reality, it will not do:

“The fact that the physicist considers that the state of his detector is described by the density matrix D does not in any way explain why he always finds that the detector is either in a YES state…or else in a NO state…For precisely the same density matrix would be given if the state were an equal-probability-weighted combination of classical absurdities…(…the quantum linear absurdities ‘YES plus NO’ and ‘YES minus NO’)!…The upshot of all this is that merely knowing that the density matrix is some D does not tell us that the system is a probability mixture of some particular set of states that give rise to a particular D. There are always numerous completely different ways of getting the same D, most of which would be ‘absurd’ from the common-sense point of view. Moreover, this kind of ambiguity holds for any density matrix whatsoever.” (SotM, 326-7)

What is being said here is that there is no reason whatsoever given by the current, state-of-the-art mathematical description why a quantum system assumes some particular real, observable (even in principle), classical answer to the experimental question: where is the particle now? Even more bizarrely, one cannot ascertain why, on the basis of the density matrix, one ever finds a real answer, a real position, a real particle, at all!
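
Penrose’s ambiguity claim is easy to verify numerically. A minimal sketch with numpy: build D once from an honest classical YES/NO mixture and once from the ‘absurd’ YES ± NO mixture, and compare:

```python
import numpy as np

yes = np.array([1, 0], dtype=complex)
no = np.array([0, 1], dtype=complex)

def projector(state: np.ndarray) -> np.ndarray:
    """|state><state| for a normalized state vector."""
    return np.outer(state, state.conj())

# Ensemble 1: an honest 50/50 classical mixture of YES and NO
d_classical = 0.5 * projector(yes) + 0.5 * projector(no)

# Ensemble 2: a 50/50 mixture of the 'absurd' states YES +/- NO
plus = (yes + no) / np.sqrt(2)
minus = (yes - no) / np.sqrt(2)
d_absurd = 0.5 * projector(plus) + 0.5 * projector(minus)

print(np.allclose(d_classical, d_absurd))  # True: one D, many mixtures
```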

What this really means, argues Sir Roger, is that the R procedure cannot and does not follow from the unitary evolution of the wave equation, and seems to represent a completely independent and as yet not understood process of which R is only an approximation. R must be some kind of gravitational or gravitationally-related process; in fact, it must be a quantum gravitational process:

“[in a quantum superposition] when are two geometries to be considered as actually ‘significantly different’ from one another? It is here, in effect, that the Planck scale of 10^-33 cm comes in. The argument would roughly be that the scale of the difference between these geometries has to be, in an appropriate sense, something like 10^-33 cm or more for reduction to take place. We might, for example, attempt to imagine that these two geometries are trying to be forced into coincidence, but when the measure of the difference becomes too large, on this kind of scale, reduction R takes place – so, rather than the superposition involved in U being maintained, Nature must choose one geometry or the other.” (SotM, 337)

The reasoning, it seems, is that we don’t have a mathematics of quantum gravity and this must be why scientists have not found a non-computable physical process as described above. So, Penrose sets out to develop one for us!

Hence, the need for The New Criterion. Penrose simply and elegantly surmises that the reduction of a quantum superposition is analogous to the spontaneous decay of atomic nuclei in that it is unstable. He calculates the simple gravitational displacement, in absolute units (see 338-9) and:

“…we ask that there be a rate of state-vector reduction determined by such a difference measure. The greater the difference, the faster would be the rate at which reduction takes place…In general, when we consider an object in a superposition of two spatially displaced states, we simply ask for the energy that it would take to effect this displacement, considering only the gravitational interaction between the two. The reciprocal of this energy measures a kind of ‘half-life’ for the superposed state. The larger this energy, the shorter would be the time that the superposed state could persist.” (SotM, 339, 341)
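
A minimal sketch of the nuclear-decay analogy, reading ℏ/E_G as a lifetime and assuming simple exponential survival (this is my reading of “a kind of ‘half-life’”, not Penrose’s exact formulation):

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def survival_probability(e_gravity_j: float, elapsed_s: float) -> float:
    """Chance the superposed state is still unreduced after elapsed_s,
    treating tau = hbar / E_G as the lifetime of an exponential decay."""
    tau = HBAR / e_gravity_j
    return math.exp(-elapsed_s / tau)

# The larger the displacement energy, the faster the state decays:
for e in (1e-34, 1e-33, 1e-32):
    print(f"E_G = {e:.0e} J -> tau = {HBAR / e:.2f} s, "
          f"P(survive 1 s) = {survival_probability(e, 1.0):.3f}")
```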

Penrose goes on to explain that the numbers at the Planck scale for this descriptive equation, E = ℏ/t, correspond well with observations of nature: “It is reassuring that this provides very ‘reasonable’ answers in certain simple situations.” (340) (cf. Diósi) In terms of biological systems,

“A biological system, being very much entangled with its environment…would have its own state continually reduced because of the continual reduction of its environment. We may imagine, on the other hand, that for some reason it might be favourable to a biological system that its state remain unreduced for a long time, in appropriate circumstances. In such cases it would be necessary for the system to be, in some way, very effectively insulated from its surroundings.” (SotM, 343)

Here we have the rudiments of a mathematical language for consciousness. We will now have to wait till I finish the book to see how this all ties in with brain science. For a preview, see Stuart Hameroff’s YouTube video, A New Marriage of Brain and Computer.

A Brief History of the Density Matrix

NASA Cassini Image

What I like about the physics sections of Sir Roger’s two books, The Emperor’s New Mind and Shadows of the Mind, is that they are not the standard layman’s or “popular” versions of the discussion. Not that these are by any means less interesting [two of my very favorites are Michio Kaku’s Hyperspace and John Gribbin’s Schrödinger’s Kittens] but I was looking for something new and something more, and I found it in these two books. I especially like that Penrose is not afraid of the quantum superposition; rather, he bravely, brazenly faces the two-peaked amplitude without blinking and calculates the extremely reasonable squared modulus of the amplitude to obtain the probability. He uses the unit circle in complex space to show why it is the square of the modulus that is the “real” probability, but we all know that doesn’t really explain anything.

There is such a richness and depth of mathematical knowledge in these two books that I know I will be returning to them again in the future, and I cannot hope to cover every detail here. Instead, I will try to carry a thread through some of the more relevant concepts, at least as I see them at this point. The point about Sir Roger’s discussion of the deterministic wave equation (U) is that the reduction (R) of this equation appears arbitrary and cannot be derived from (U). There is no law of physics and no mathematical rule for when to apply R. This is the impetus for his assertion that the wave equation is an incomplete description of reality and that a new, greater mathematics, a grand unified theory (GUT), is necessary, especially in order to derive R.

The importance of this problem is fundamental. If R is not applied, quantum entities remain in superposition…indefinitely, according to ψ (Psi is the symbol denoting the quantum state, or “state vector” of a particle, whether in superposition or not). What this means is hotly debated in the field. No overall consensus exists regarding the meaning of a particle in superposition [aside from the significant fact that unless a waveform resolves, there is no particle, no ‘reality,’ for waves are intangible; this places objective reduction at the centre of the question of universal reality and how it is involved in productions for perception], even though the mathematics provide an unparalleled predictive accuracy demonstrated by endless experimental evidence. What the wave function (ψ) shows is that a measurable, “real” entity seems to exist in two (or more) separate, distinct places at exactly the same time, but only until its state is “reduced” (R) or, as Penrose sees it, “magnified to the classical level” by a measurement (R), by observation. This is ridiculous from the point of view of a mathematician. A valid mathematical description of reality should provide a specific location (not a probable one) of a particle for every time (τ), independent of observation. ψ does not give us this information and it is only upon the arbitrarily applied magnification by measurement, or observation, that the operation R takes place and a specific location at a specific time, τ, is obtained.

According to the Copenhagen interpretation, it is said that at this point the wave function “collapses” and all probabilities are cancelled except the one that is measured. The implication is that of wave-particle duality, and for many physicists and philosophers, this is good enough. Not so for Sir Roger, whose reasons are encompassed by his theory of consciousness, which involves the necessity of an objective reduction of the wave function, independent of measurement or observation. It is this objective reduction that produces consciousness. Later we will see the biological evidence, discovered by Hameroff et al. For now, let us explore the problem of duality and quantum superposition as they relate to basic ontology and objective reality.

According to Victor Lenzen, “Einstein’s Theory of Knowledge,” Albert Einstein, Philosopher-Scientist, Volume II (Paul Arthur Schilpp, ed., Harper & Brothers, New York, 1959):

“Einstein in an essay on Maxwell appears to have accepted the realist doctrine, for he says, “The belief in an external world independent of the percipient subject is the foundation of all science. But since sense-perceptions inform us only indirectly of this external world, of Physical Reality, it is only by speculation that it can become comprehensible to us.” In his essay on the method of theoretical physics he expresses the conviction that pure mathematical construction is the method of discovering the concepts and laws for the comprehension of nature…[Einstein says] “Our experience up to date justifies us in feeling sure that in Nature is actualized the ideal of mathematical simplicity.”” (p. 363)

The implications of the quantum superposition in ψ, as I said, have been hotly debated among physicists, philosophers, mathematicians…let’s just say it is the foremost scientific mystery of our time. What the theory seems to say is that the processes which underlie and produce productions for reality are themselves not real, for if they were, we would be left with the embarrassing problem of explaining why there are “classical” states of reality. Here is what Sir Roger says:

“I do not see how reality can transform itself from a complex (or real) linear superposition of two alternatives into one or the other of these alternatives, on the basis merely of the evolution of U….”(The Emperor’s New Mind, p.380).

“I have made no bones of the fact that I believe that the resolution of the puzzles of quantum theory must lie in our finding an improved theory…But even if one believes that the theory is somehow to be modified, the constraints on how one might do this are enormous. Perhaps some kind of “hidden variable” viewpoint will eventually turn out to be acceptable. But the non-locality that is exhibited by the EPR-type experiments severely challenges any ‘realistic’ description of the world that can comfortably occur within an ordinary space-time of the particular type that has been given to us to accord with the principles of relativity – so I believe that a much more radical change is needed. Moreover, no discrepancy of any kind between quantum theory and experiment has ever been found – unless, of course, one regards the evident absence of linearly superposed cricket balls as contrary evidence. In my own view, the non-existence of linearly superposed cricket balls actually is contrary evidence!…Somewhere in between, I would maintain, we need to understand the new law, in order to see how the quantum world merges with the classical. I believe, also, that we shall need this new law if we are ever to understand minds!” (ENM, p.385-6).

So what did Einstein think of the quantum superposition? In his “Reply to Criticisms: Remarks Concerning the Essays Brought Together in this Volume,” (op. cit., Albert Einstein, pp. 665-688), Einstein emphatically states:

“…I reject the basic idea of contemporary statistical quantum theory, insofar as I do not believe that this fundamental concept will provide a useful basis for the whole of physics…I am, in fact, firmly convinced that the essentially statistical character of contemporary quantum theory is solely to be ascribed to the fact that this [theory] operates with an incomplete description of physical systems.”

To emphasize his repugnance toward the metaphysical implications of ψ, Einstein says,

“Whenever the positivistically inclined modern physicist hears such a formulation his reaction is that of a pitying smile. He says to himself: “there we have the naked formulation of a metaphysical prejudice, moreover, the conquest of which constitutes the major epistemological achievement of physicists within the last quarter-century. Has any man ever perceived a ‘real physical situation’? How is it possible that a reasonable person could today still believe that he can refute our essential knowledge and understanding by drawing up such a bloodless ghost?” (p.667)

Well, I have no response to that.

Einstein rails against quantum uncertainty. There must be a real, objective universe for me to be conscious of, not a universe created by the observation of consciousness. Yet, he admits that any construct of reality, including a mathematical one, can emerge only from pure consciousness and not from empirical evidence. Here is where Sir Roger has come to rescue us! And from his scabbard he draws, of all things, quantum gravity! We have come full circle.

Now, I don’t know about you, but I remember thinking that quantum entanglement seemed strangely like consciousness. The uncanny awareness exhibited by the particles themselves, the instantaneous nature of this awareness over spacetime, the strange irreverence for the laws of physics demonstrated by this phenomenon, the time symmetry, all seemed unreal in the illogical, intuitive, surreal way that my own consciousness seems unreal, able to defy the universe by, say, imagining purple cows. When I wrote my essay on the Star Trek robot, Data, I was feeling around in the dark, looking for something in the quantum material structure of his brain that utilized or organized this quantum awareness [see my blog entitled, Still Don’t Want To Talk About It]. Was I astounded or what when I stumbled upon Stuart Hameroff’s YouTube video discussions (A New Marriage of Brain and Computer) about quantum consciousness?

Stuart Hameroff identifies a biological quantum process of neurotransmission, gamma synchrony, that is associated directly with consciousness in that consciousness is present with gamma synchrony and not present without it (i.e., under anesthesia, of all brain activity it is only the gamma synchronies, “coherent 40 Hz” high frequency activity, that disappear). These are produced at body temperature in dendritic microtubules, whose protein components are called tubulin. Inside each tubulin dimer there are hydrophobic pockets, which are insulated regions produced by folding proteins. These protein chains are induced to fold around aromatic rings, forming hydrophobic pockets in which the quantum dipoles, produced by van der Waals forces, combine in concert to produce high frequency potentials. It is found that these high frequency wave potentials (gamma synchronies, about 40-80 per second) maintain coherency and transmit non-local, instantaneous signals via gap junctions in the dendrites.

Gap junctions are connections between dendrites that are not involved in the sequential, chemically induced potentials at dendritic junctions. Rather, gamma synchronies transmit simultaneously, instantaneously, across many thousands of dendrites at a time through the gap junctions, thus strengthening the coherence of the high frequency waves. This is achieved by quantum superposition of the electric dipoles inside the tubulin dimers, so that they become quantum bits. These individual bits in concert create interference patterns over the cylindrical tubules, thought to be possible code for quantum computational activity. These potentials maintain coherence over several thousand neurons simultaneously, and each single potential is thought to be a conscious moment.

So here we have brain activity on the grain level of quantum processes. Hameroff theorizes that the collapse of each of these gamma potentials is a conscious moment. This process is described by the deterministic wave function (U) for the rise of the potential, and the probabilistic collapse of the wave (R) for each potential. The potential itself involves coherent quantum superposition across many individual tubulin dimer proteins. The problem addressed by Penrose is to explain the collapse of the wave function objectively: why does the gamma potential reduce all by itself, without an observer? And what has this got to do with consciousness?

Enter, the Density Matrix.

Is You Is O’ Is You Ain’t Conscious?

Are you an algorithm? More specifically, is there an algorithm for consciousness?

My own copy of Vincent’s Starry Night

Roger Penrose, Emeritus Rouse Ball Professor of Mathematics at Oxford University, and Stuart Hameroff, Director of The Center for Consciousness Studies at the University of Arizona, are approaching this question from two very different, yet strangely compatible angles.

Penrose’s two books, The Emperor’s New Mind and Shadows of the Mind, discuss the mathematical ins and outs of consciousness as a formal, logical system. This is an amazing and fascinating, albeit convoluted and technical, refutation of the assertions of AI specialists that a conscious machine is possible. Penrose leaves no options for AI by invoking the devastating theorem of Kurt Gödel, the Incompleteness Theorem. This powerful piece of mathematical artillery absolutely (and I mean, absolutely in the fullest sense of the term) destroys the possibility of there being a knowable, sound, consistent formal mathematical (or “logical”) system as the basis for conscious understanding. If there is no knowably true formal system, how is it that human consciousness has derived and understood many fundamental laws of science? Well, certainly not by means of any mathematical system, since Gödel’s reductio ad absurdum shows that no mathematical system can prove its own consistency. Therefore, consciousness is something which overflows the bounds of formal systems, including those from which any and all algorithms might be derived.

Penrose uses, as an example, Turing’s famous “stopping problem.” This problem asks if there is an algorithm which can determine whether any given computation does or does not stop, in that there seem to be some computations which do not stop but just carry on and on, never rendering an “answer.” According to Gödel’s theorem, there can be no algorithm to determine this, as any formal system which can be shown to soundly and consistently render an answer is demonstrated, by reductio ad absurdum, to also not render an answer; in effect, the contradiction exposed by the argument is that a certain computation stops if and only if it does not stop. The analogous concept is rendered in plain language by the proposition, “This sentence is false.” If the sentence is true, it’s false, and if it’s false, it’s true.
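
The argument is short enough to sketch directly in code. Suppose, for contradiction, that a total decider existed; the names below are mine:

```python
def halts(program, data) -> bool:
    """Hypothetical total decider: True iff program(data) eventually stops.
    The point of the argument is that no such algorithm can exist."""
    raise NotImplementedError("no such total algorithm exists")

def trouble(program):
    """Do the opposite of whatever `halts` predicts about self-application."""
    if halts(program, program):
        while True:      # predicted to halt? then loop forever
            pass
    else:
        return "halted"  # predicted to loop? then halt immediately

# Feeding trouble to itself: trouble(trouble) halts if and only if it
# does not halt -- the reductio that rules out any such decider.
```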

Penrose spends a great deal of time in both books on this argument. Somehow, says Sir Roger, consciousness is able to become aware of mathematical truth even in the face of the powerful Gödelian assertion that truth cannot be formally derived (or “proven”). Even a child can discover that 1+1=2, but this awareness, this understanding of truth, is shown by Gödel’s theorem to be not computable, and therefore, it cannot be the result of an algorithmic computation. In Shadows of the Mind, Penrose adds nails to the coffin by raising every argument imaginable from AI and blowing them all completely out of the water. There is no algorithmic scenario that can overcome the awe inspiring purview of Gödel’s omnipotent theorem.

I know that I, for one, have never thought of it this way. Penrose notes that even Gödel himself did not believe that consciousness is fundamentally material, but that it is completely beyond and separate from materialism. Turing, though slightly less metaphysical, made the remarkable assertion that whatever algorithm might run consciousness, it would be an imperfect one, capable of making mistakes, learning, forgetting, etc. However, Penrose also has this covered, and even such “random” or “unknowable” algorithms are ruled out.

He then moves on to the altogether more interesting part of his thesis, involving the origin of consciousness in quantum processes. The upshot of his argument is that we do not have a mathematical system that is adequate to describe reality, and this is the reason we do not have a science of consciousness. Schrödinger’s tatty equation is dragged out once again, and the huge, insurmountable problem of Objective Reduction becomes the focus of the second part of both books. Some fascinating mathematical digressions are made, especially with respect to the Second Law of Thermodynamics and Weyl’s theory, which places the problem of the original singularity, or “big bang” theory, in very sharp perspective. Simply reversing Schrödinger’s equation is not going to cut it for Sir Roger.

So, both forwards and backwards, coming and going, Penrose takes the wave function to task. It is inadequate and cannot be considered a complete description of reality. What has this got to do with consciousness? In the coming days I will attempt to illuminate what I feel is the most important contribution to the scientific study of consciousness yet made in the combination of Penrose’s theory of Objective Reduction with Stuart Hameroff’s discoveries in brain science.

Stay tuned…same bat time, same bat channel.