Leonard Susskind explains how the Englert-Brout-Higgs boson imparts mass #waycool 😀

# science

# There Will Never Be a Conscious Robot, Part 2

Wow, indeed 🙂

Dr. Stuart Hameroff has discovered the mechanism envisioned by Roger Penrose. What I’m saying is *astonishing*: there is a structure in the brain that operates at the grain level of quantum gravity! Think about that for a minute. In every experiment I have seen involving quantum processes, one major factor has been the extremely cold temperatures and highly insulated environments necessary to create and maintain observable quantum effects, as with the D-Wave computer, for example. Living tissue is far too warm and ‘noisy’ to maintain quantum superposition coherence. Not so, according to Drs. Hameroff and Penrose.

Stuart Hameroff is an anesthesiologist as well as a Professor at the University of Arizona and Director of its Center for Consciousness Studies, so consciousness is basically his middle name. In the course of his studies, he discovered that anesthesia seems to operate by disrupting a very specific type of electrical activity in the brain, the higher-frequency gamma synchronies. These electrical potentials arise in a very different manner from the neuronal axonal spikes associated with dendritic chemical synapses.

Gamma synchrony, first of all, is a *synchrony*; that is, the neurons are not firing consecutively, as with the chemically induced synaptic spikes, but rather *simultaneously*, across several hundred thousand neurons at a time. How do they do this? It appears that there is another kind of synaptic junction in brain dendrites, in addition to the familiar chemical type involving neurotransmitters and electrolytes. Gamma synchronies travel as quantum superpositions, tunneling instantaneously, *not* firing consecutively, through *gap junctions*, that is, directly conjoined portals in the dendritic membranes.

How these superpositions arise will be discussed, but first let us understand that this is not the typical kind of dendritic activity we all learned in grade school. Rather than an exchange of neurotransmitters at synaptic junctions across a space between the neurons, one by one, the gamma potential arises simultaneously in a large group of neurons and is shared directly from neuron to neuron through the gap junctions with no space in between:

The gamma potentials arise from lattice structures inside the brain dendrites, *microtubules*, seen as the horizontal structures inside the circle in the diagram above. The diagram also shows the microtubules emitting the high-frequency gamma waves and how the *waves* pass through the gap junction. These waves constitute the synchronies, which occur at a frequency of 40–80+ per second. Each superposition potential, each individual wave, is allowed to quantum tunnel because of the time symmetry allowed as the wave rises into the density matrix (nonlocality), thereby eliminating the need for consecutive spikes. This phenomenon was demonstrated in ‘Libet’s Case of Backwards Referral in Time’ (Dennett, *Consciousness Explained*, 153–166) but denied because of the lack of an appropriate physical mechanism for backwards time referral of perception. Here we have it!

Now here’s where it gets weird.

Each microtubule is made of individual protein *dimers* of *tubulin*. The tubulin dimers arrange themselves in precise geometric patterns to form Fibonacci spiral lattice tubes. Tubulin is a kind of protein containing *aromatic rings*, so that the protein folds and creates *hydrophobic pockets* in the interior of the folds. These are electrically insulated areas: water cannot enter, so the noisy electrical interference associated with water cannot disturb the more subtle electrical processes inside the hydrophobic pockets of the dimer. As I said, the dimers arrange themselves in a precise geometrical pattern so that their connections are regular and their mass energies consistent. This is important for the formation and maintenance of sustained coherent gamma potentials along the individual microtubules and among the microtubules en masse. The diagram below is a beautiful illustration of the geometrical order of a portion of a microtubule:

Inside the hydrophobic pockets there form electric dipoles, induced by adjacent dimers. The electron clouds inside one dimer repel the electrons in the neighboring dimer to produce an electronic dipole. This is called a London force, a type of van der Waals force. It is a *quantum physical effect*, not a chemical one. The result is a kind of electrical ‘switch’ in which the poles oscillate back and forth. The above diagram looks like a set of polarized tubulin dimers…that is, if you imagine you could actually see the London dipoles. These quantum forces may also exist in a state of superposition, that is, a state of being in both polarities *at the same time*. The diagrams below illustrate the London van der Waals force and how it creates the electrical switching potential among tubulin dimers.

The gamma potential, then, arises via the superposition state of London force dipoles, *tunneling instantaneously* through hundreds of thousands of neurons simultaneously. O.M.G. Hameroff calls this kind of network a *hyperneuron* or dendritic web.

The big problem with this theory is how the superposition resolves on its own, without an objective measurement or *measurer*. Penrose calls this phenomenon ‘*objective reduction*’ and has coined the term ‘*orchestrated* objective reduction’ for the extremely organized and precise process occurring in the brain among the gorgeously, amazingly, mind-blowingly beautifully arranged material substrate of the dendritic microtubules. As discussed in Part 1, the objective reduction of the tunneling gamma superposition wave occurs on the basis of the total mass energy involved in the superposition separation (i.e. tubulin mass), and the amount of time the superposition can be maintained, given a Planck-length superposition separation distance: E=ℏ/t:

High-amplitude gamma potentials resolve more quickly, resulting in a greater number of arguably more intense conscious moments per second (closer to 80, say), possibly with a resultant perception of time passing more slowly, exactly like the increased frames per second of high-speed photography in slow-motion film.
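
As a rough sanity check on the E=ℏ/t relation and the 40–80 frames-per-second rate, here is a minimal Python sketch (my own illustration, not from Hameroff or Penrose) of the superposition energy implied by each collapse interval:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def reduction_energy(t_seconds):
    """E = hbar / t: the superposition energy whose objective
    reduction time is t, per Penrose's criterion."""
    return HBAR / t_seconds

# Gamma synchrony at 40 Hz and 80 Hz -> one collapse every 25 / 12.5 ms
for hz in (40, 80):
    t = 1.0 / hz
    print(f"{hz} Hz -> t = {t * 1e3:.1f} ms, E = {reduction_energy(t):.2e} J")
```

Doubling the gamma frequency halves the collapse time and doubles the required energy, which is exactly the ‘higher amplitude resolves faster’ relation described above.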

This process of orchestrated objective reduction of gamma synchronies arising as quantum superpositions in dendritic microtubules is the production of conscious frames of reality, at a rate of 40-80+ per second:

Consciousness is a process in fundamental spacetime geometry, coupled to brain function.

Penrose suggests that Platonic information embedded in Planck scale geometry pervades the universe and is accessible to our conscious process.

~ Stuart Hameroff

Some thoughts on Part 2: on the basis of the above theory, it seems that our conscious process might work in concert with Planck scale geometry to create *meaning*. I would associate the gamma synchrony input with the ‘immediacy’ concept of Sartre and the further processing of this information with the ‘reflective’ process defined by Sartre. The conscious pilot described by Hameroff seems to be related to the ‘narrative center of gravity’ envisioned by Daniel Dennett.

Finally, what is the potential for quantum nanotechnology, such as the D-Wave chip, to run an algorithm for consciousness? Stay tuned for Part 3.

# A Show About Nothing

A panel of scientists debate the existence of nothing.

# There Will Never Be A Conscious Robot: Part 1

You can find Part 2 here.

A while back, I began to explore the origin of consciousness in the work of Roger Penrose and Stuart Hameroff; check out my blog posts entitled Is You Is O’ Is You Ain’t Conscious?, A Brief History of the Density Matrix, The Density Matrix, and A Note on R and the 2nd Law of Thermodynamics. These notes approach the problem first from the point of view of mathematics, and second, specifically from the model given by quantum physics.

In his books *The Emperor’s New Mind* and *Shadows of the Mind,* Penrose took me on a journey through the limitations of mathematical knowledge in terms of creating an algorithm, that is, an organized, logical, consistent ‘formal’ system, for producing consciousness. On the basis of Kurt Gödel’s awesome, all-powerful Incompleteness Theorem, Penrose concludes that consciousness is not computable, and he points out that non-computable concepts are nothing new; Turing’s ‘halting problem’ is the primary example. In a nutshell, then, Penrose uses Gödel’s mighty reductio ad absurdum to demonstrate that no consistent formal system rich enough to contain arithmetic can prove its own consistency: this sentence, indeed, is false.
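
Turing’s halting problem can even be sketched in a few lines of Python. This toy version is entirely my own illustration; the infinite loop is represented by a returned string so the code can actually run, but the diagonal logic is the real argument: any claimed halting decider is defeated by a program that does the opposite of whatever the decider predicts about it.

```python
def halts(program, arg):
    """A candidate halting decider. Turing proved that no correct,
    total implementation can exist; this naive stub just answers True."""
    return True

def paradox():
    """Diagonal program: do the opposite of whatever halts() predicts
    about paradox itself."""
    if halts(paradox, None):
        return "loops forever"  # stands in for an actual infinite loop
    return "halts immediately"

# Whatever halts() claims about paradox, paradox does the opposite,
# so every candidate decider is wrong about at least one program.
print(paradox())
```

Swap in any other candidate `halts` and `paradox` still defeats it, which is why the problem is non-computable rather than merely hard.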

This raises the question as to how we can know truth, since we cannot discover it through any formal logical system whatsoever. For instance, the formal proof that 1+1=2 is quite lengthy, and the Incompleteness Theorem shows that the system in which it is carried out cannot prove its own consistency; the proof holds within the system, but the system itself is not *provably* consistent. And this is, in fact, the upshot of Gödel’s theorem, according to Penrose: the Incompleteness Theorem is an unassailable refutation of the provable consistency of any and every formal system whatsoever, and so no formal system is reliably capable of generating that kind of intuitive grasp of truth that we all seem to possess. In effect, truth does not emerge from mathematics but rather, mathematics emerges from truth. Truth is important in that any algorithm that might generate consciousness should generate a *true* perception of the world. Without a mathematics whose consistency is provably true, we could never know for sure if our algorithm is reliable.

The point is that **our intuitive grasp of truth, how we *do* know that 1+1=2, is obviously not on the basis of, i.e. not generated by, any formal mathematical system or any algorithm based on any such system, and never will be**, thanks to the great Kurt Gödel. Further, Penrose supposes, perhaps a new, more powerful mathematics will be necessary to approach the problem of how consciousness arises from ordinary matter, but this does not mean that the solution will be computable. This obviously has strong implications for AI and the possibility of developing a conscious computer.

What Penrose *does* find is unbelievably fascinating. He raises the question whether there is some *non-computable* property of ordinary matter…photons, electrons, atoms…you know…that has been overlooked by science to this point. He asks whether such a property might be employed in the production of consciousness and, therefore, in the engineering of a conscious machine. This is an astounding question, the question of an innovator, the question a child might ask! I mean, most of us would probably assume that there couldn’t be such an unknown property, given the vast body of work that’s been done in theoretical physics. Even more astoundingly, he then *finds* this property in the standard quantum mechanical model.

Penrose proposes that the specific material property demonstrated by the phenomenon of quantum *superposition* involves a *resolution* or reduction process in the identification of a particle from out of the undifferentiated complex nonlocality of the density matrix, a deliberately ‘fuzzy’ mathematical terminology (|ψ⟩ + |φ⟩), into the well-defined classical state we discover upon measurement. He argues that this resolution process (R) of the deterministic Schrödinger wave evolution (U), which left to itself would continue indefinitely, reveals that the reduction is not time-symmetric; that is, it cannot run both forwards *and* backwards in time. This is revealed because the R procedure is arbitrary and is not derived from the deterministic equation (U), so it must be an approximation of some process yet unknown. It is also revealed by the obvious common-sense absurdities arising from a reversal of U, including, for instance, the emission of a photon from a non-light source. Penrose reasons that this is similar and perhaps related to that other non-time-symmetric process in physics, the second law of thermodynamics involving entropy. Gravity provides a constraint or obstacle in spacetime so that entropy cannot flow backwards in time; likewise, quantum superpositions must be constrained by some kind of gravitational action.

Penrose brilliantly theorizes that the double-spiked state vector of the superposition is associated with *two completely separate spacetime geometries*, something for which Einstein’s theory of General Relativity has no expression. There is no way to mathematically express the relationship between the two separate spacetime geometries of the quantum superposition on Einstein’s relativistic curved spacetime tensor, making the mathematical formulation “profoundly obscure” and lending weight to the conjecture that it is not computable. Penrose argues that because of this separation of gravitational spacetime geometries, the superposition state is unstable and that the energy required to maintain the superposition separation is inversely proportional to the time that the state can be maintained; that is, the greater the energy, the shorter the time. Further, the quantum superposition separation *distance* is supposed to be on the Planck scale, resulting in a *quantum gravitational* measurement for the resolution process, yielding extremely reasonable mathematical results when applied to the tiny masses involved: E=ħ/t. Wow.
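
Penrose’s relation can be read the other way round as a lifetime: t = ħ/E, with E the gravitational self-energy of the separated mass distribution. Here is a crude order-of-magnitude Python sketch (my own illustration; E_G ≈ Gm²/r is only a rough stand-in for the exact, geometry-dependent self-energy):

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def superposition_lifetime(mass_kg, separation_m):
    """t = hbar / E_G, with E_G ~ G * m^2 / r as a rough estimate of the
    gravitational self-energy of a mass held in superposition across a
    distance r."""
    e_g = G * mass_kg ** 2 / separation_m
    return HBAR / e_g

# The greater the mass energy, the shorter the superposition survives:
for m in (1e-20, 1e-15, 1e-10):  # kg, illustrative values only
    print(f"m = {m:.0e} kg -> t ~ {superposition_lifetime(m, 1e-9):.2e} s")
```

The inverse relation is visible directly: since t scales as 1/m², each thousand-fold increase in mass shortens the lifetime by a factor of a million.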

While I’m catching my breath, Sir Roger goes on to describe how consciousness can arise from this kind of *objective reduction* of a quantum superposition, in that each resolution (R) of the superposition would be a *conscious moment*, like a little captured frame of reality…REAL reality, not just a reproduction or simulation. The ‘decision’ that is made when the resolution occurs is the snapshot, the *production*, of a tiny frame of reality. On this basis, he wonders whether there might be some structure or process in the brain that can produce and maintain quantum superpositions at the appropriate amplitudes. He reasons that while low amplitude quantum superpositions might exist in the universe and might yield low-grade conscious moments individually, a sustained series of high amplitude objective reductions would require an insulated environment.

So, although it is not a generator, mathematics might *make use* of the rudimentary conscious perception inherent in particulate matter, at this most basic level, in appropriately designed devices. Mathematics seems to be the thing that *shapes* consciousness, analogous to the manifold of string theory; the gravitational field is this mathematical thing, in reality, and quantum gravity is the mathematics of objectively resolving trajectories in this field. According to Penrose’s colleague, Stuart Hameroff, Director of the Center for Consciousness Studies at the University of Arizona, one could make the astounding assertion that,

…we are built into the universe, I mean, these objective reductions are…reorganizations, a reshuffling of the makeup of the universe…of material reality as it’s forming; we are part of that.

Again, wow.

# Six interesting findings from recent benchmarking results

D-Wave’s official blog, Hack The Multiverse #checkitout

# Stuart Hameroff on Quantum Consciousness

Professor Hameroff discusses the quantum mechanism of consciousness in the brain and its implications, for instance, in terms of time-symmetric processes taking place in the brain (that is, processes running *backwards* in time), and the potential for quantum coherence of the mind *outside* the body.