Some Thoughts on Consciousness

Two weeks ago I attended NIPS, one of the leading conferences on machine learning and AI. This was my first time at NIPS, but I got the impression that they always like to have a sprinkling of neuroscience talks swimming in the sea of machine learning presentations. This year, they had no fewer than four talks with some variation of “consciousness” or “brain” in the title. They were given by Giulio Tononi, Scott Aaronson, Stanislas Dehaene, and Terrence Sejnowski. Despite the ambitious-sounding titles, most of these talks unfortunately did not really tackle the fundamental basis of consciousness; Giulio Tononi’s was the exception. On the other hand, while all the other talks were reasonably accessible, I found Giulio’s talk to be largely impenetrable. It was intriguing enough, however, that I will try to summarize my understanding of his theory and offer some thoughts of my own on the subject. I should warn that I am not going to say anything particularly coherent, and I may in fact be grossly misrepresenting what Giulio intended to say because of my limited understanding of his talk.

Three of his claims stood out:

1. Consciousness is substrate-agnostic. It is more about the patterns than what the patterns are made of.

2. Consciousness is a continuous quantity, resulting from some kind of complex, non-local set of interactions.

3. Consciousness is independent from the ability to sense the outside world or to affect it.

As you may be able to guess, claim 2 is the one I’m most hazy about. I will come back to it later, but Giulio claims that consciousness is a quantitatively measurable thing, and he even proposed a specific mathematical function, “Phi” (presumably a function of the connectivity and firing patterns of neurons in the brain), that literally yields a number quantifying the degree of consciousness. The larger the number, the more conscious something is. This part of the talk was the least clear, invoking a lot of non-standard jargon that made it difficult to decipher; unfortunately, it was also the most poorly presented.

But before I get ahead of myself, let me start with claim 1. The notion here is that consciousness is more akin to a wave than to a particle. It is an arrangement or pattern that can be embodied by many substrates, just as waves can be made up of air particles, water particles, etc. It is unclear what the specific arrangement or pattern is, but I find the general notion to be plausible. The stuff of consciousness can be anything. What determines whether a phenomenon is conscious or not is a property that is emergent from that phenomenon. In other words, it is a systems-level property.

Now coming back to claim 2, my sense is that his consciousness-measuring function has to do with how tightly integrated a set of interacting parts is. To make a graph-theoretic analogy, a complete graph would be more conscious than a sparsely connected one. In particular, he seems to be making the claim that the brain only becomes conscious when all of its parts are firing together, when there are global and not just short-range interactions localized to regions of the brain. Actually, I think he is doing more than just claiming this. He offered experimental support in the form of neuroimaging studies (I forgot the method) that exhibited much stronger global-level neuronal activation patterns in the brain when a person is conscious versus when s/he is not. Of course, there is a question of sufficiency vs. necessity here. Arguably he established that having such global connectivity, at least in the human brain, is sufficient (under normal, non-drugged circumstances) for consciousness to emerge. But it is unclear why this implies necessity. Must conscious phenomena always emerge from such global connectivity patterns? And what does it even mean to say the connectivity pattern is global? I.e., why is a localized activation pattern in the brain, if it’s dense enough, any less global than an activation pattern that spans the entire brain? Where does one draw the line? In particular, what if the localized regions of the brain are individually conscious, but we only get to detect the top-level global consciousness because it is the one that processes information from the outside world, and the one that controls the body’s response to this information?
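To make the graph-theoretic analogy a little more concrete, here is a toy sketch in Python. To be clear, this is not Tononi’s actual Phi (which is defined over system partitions and cause-effect structure and is far more involved); it simply uses the algebraic connectivity of a graph Laplacian as a crude proxy for how hard it is to split a network into weakly coupled pieces, so a complete graph scores much higher than two cliques joined by a single edge.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian.

    Zero for a disconnected graph, larger for graphs that are hard to
    split into weakly coupled pieces. Used here only as a toy stand-in
    for 'how integrated' a network is -- it is NOT Tononi's Phi.
    """
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigenvalues = np.linalg.eigvalsh(laplacian)  # ascending order
    return eigenvalues[1]

n = 8

# A complete graph: every node interacts with every other node.
complete = np.ones((n, n)) - np.eye(n)

# Two 4-node cliques joined by a single edge: mostly local interactions.
two_cliques = np.zeros((n, n))
two_cliques[:4, :4] = 1
two_cliques[4:, 4:] = 1
np.fill_diagonal(two_cliques, 0)
two_cliques[3, 4] = two_cliques[4, 3] = 1

print("complete graph:  ", algebraic_connectivity(complete))    # n = 8.0
print("two weak cliques:", algebraic_connectivity(two_cliques)) # much smaller
```

On this toy measure, the densely interconnected graph comes out far more “integrated” than the one dominated by local structure, which is at least the flavor of the claim as I understood it.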

Specifically, and this brings me to claim 3, one can conceive of conscious phenomena as existing independently from the outside world. There’s a sort of “I/O” that goes on between our brain and the outside world. We receive stimuli from the outside world, the “inputs,” and based on these stimuli we effect changes in our body that correspond to moving our hands, articulating speech with our mouth, etc., the “outputs.” But Giulio made the (uncontroversial) point that consciousness is independent of I/O. We are conscious during our dreams, when there’s little input or output, and it is clear that even if a waking person were deprived of all external senses and the ability to move their body, i.e. if someone were fully paralyzed with their eyes and ears shut, they would still be conscious. Such a consciousness may not be stable, in the sense that if one were to persist in that state for a long time the quality of one’s consciousness may start to deteriorate, but nonetheless it is clear that the two are fundamentally separable.

This brings me back to the question of whether there really is one consciousness in the brain. We call someone conscious when we detect their I/O, their ability to interact with and affect the outside world, and Giulio’s research suggests that a person is only in that state when there are global activation patterns across the whole brain. But this does not rule out the possibility that subparts of the brain are conscious, but are not plugged into the I/O of the person, and thus escape our detection. One can even engage in a fanciful thought experiment about the consciousness of a single neuron, say one in the visual cortex. Its entire existence, its I/O, may be limited to detecting flashes of light and deciding on whether to initiate a pulse or not. We would have no way of communicating with this neuron, and to be sure it would not have anything resembling a personality, given the lack of memory, etc, but it may nonetheless be minimally conscious.

In other words, is it the case that every brain has a single consciousness, or is it instead the case that a single consciousness gets to control the I/O of the brain? Maybe one of the consciousnesses in our brain is special, in that its state/internal patterns are affected by outside inputs, and what happens inside it actually affects our outputs, i.e. our movements, speech, etc. But does this necessarily imply that only a single consciousness exists in the brain?

I would like to follow this train of thought for a little longer, but before I do, another idea on consciousness merits mentioning here. In Mind and Matter, Erwin Schrödinger philosophizes on the nature of consciousness. One idea he proposes is that consciousness is not something that is there or not there. Rather, it is a property of all matter, and something even as simple as the humble electron has it. If one were to take this view seriously, it would suggest that not only would the brain as a whole be conscious, but so would every subset of it, including individual neurons like I alluded to earlier. At face value this seems unlikely, in particular because we do act as a single integrated entity, at least our brains do, which suggests that there’s a certain discreteness to it that is altogether separate from the individual actions of single neurons or collections of neurons in the brain. If all these things were in fact individually conscious, a person would not behave as a person at all, but as an amalgamation of conflicting opinions and actions. However, this alone is not sufficient grounds to dismiss the idea. That’s because we are only discussing consciousness, not free will. By the very structure of the brain, all the parts are consistent with the whole (at least for the most part). If the brain as a whole is instructing the body to move the left arm, then the part of the brain responsible for moving the arm cannot be in a different state. (This is actually not strictly true, because we find such dissonance when one’s subconscious reflex conflicts with one’s intended conscious actions, but that only lends further support to the notion that subparts of the brain may be individually conscious.) And so while we may have the appearance of a single conscious entity, this may only be so because there’s a sort of built-in consistency check that ensures everything works together. And in fact, during Giulio’s talk it was mentioned that there are individuals who appear to possess multiple consciousnesses, which, with careful teasing apart, can be communicated with separately. (This is a different question though, because presumably these multiple consciousnesses still exist on the global brain level.)

The reason I find the notion of “consciousness intrinsic to all matter” attractive is that it would eliminate the seemingly arbitrary distinction between conscious and unconscious things, much like the arbitrary and ultimately non-biological distinction between living and non-living things drawn by vitalism, which was popular in the mid-19th century before it was finally understood that all of biology is just chemistry. It would be nice if the phenomenon of consciousness were simply physics, and physics as we know it. It would also eliminate the need for Giulio’s function that quantifies the degree to which something is conscious, because everything would be conscious, and the only special thing about our consciousness is that it is wired to a brain that senses the outside world and is able to respond to it. I/O becomes the unique property of our brain, and not consciousness.

Unfortunately, I don’t think this idea is quite workable yet. That’s because while the arguments I put forth above are valid, they overlook the fact that the global brain-wide consciousness, the one that controls the I/O of the brain, is not always on. It is not always present. It fades in and out of existence, depending precisely on the activation pattern of the brain. And so if consciousness were something truly intrinsic to matter, and not an emergent systems-level phenomenon, we would not see this appearance and disappearance. So we are left with the following: consciousness is a substrate-agnostic phenomenon that arises when matter takes on a certain pattern of interactions. It is not dependent on sensing or affecting the outside world, but in the particular case of our brains, this conscious phenomenon gets wired to the outside world, which enables our detection of it. We are left to speculate on precisely what this “pattern of interactions” is. Giulio appears to have a specific view in mind that I hope to explore further in the future, and there are other candidates, such as Douglas Hofstadter’s self-referential loops. Finally, coming back to Schrödinger’s view that matter is intrinsically conscious, the above discussion suggests a sort of synthesis: maybe matter has the potential for consciousness, but is not always conscious in its inert state. Only when it participates in a larger pattern of interactions does that consciousness surface. This is in fact another way of saying that consciousness is substrate-agnostic.

