Philosopher David Chalmers gets the credit for naming the so-called hard problem of consciousness, but the mystery it describes is about as ancient and perplexing as they come: What is consciousness — and how and why does it arise?
In other words, how or what causes, maintains and houses sentience? What causes and maintains that sense you have of experiencing yourself, your sensations, your thoughts and everything and everyone around you?
Or, as biologist T.H. Huxley put the question:
“… and how it is that anything so remarkable as a state of consciousness comes about as the result of irritating nervous tissue is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp in the story, or as any other ultimate fact of nature.”
At an upcoming AAAI symposium on machine consciousness, Hanson Robotics and SingularityNET researchers will discuss one possible framework for measuring emerging and real sentience in robots like Sophia, based largely on the Phi measure developed by University of Wisconsin neuroscientist Giulio Tononi. Read an early version here — or read it in full and in place below the fold.
But before you do, let’s back up for a minute. Consciousness isn’t just a big question. In many ways, it’s the question. And it’s rife with controversy.
What is Consciousness Anyway?
Why are we conscious at all? And what is sentience, anyway? Notable thinkers like Gottfried Leibniz, John Locke and Isaac Newton have famously weighed in here, but in many ways we are still no closer to answers than we were the day Descartes gave us cogito, ergo sum in 1641.
It’s true that in the last few years there has been a flurry of furious consciousness research, much of it fueled by daring work from Nobel biologist Francis Crick and neuroscientist Christof Koch.
It’s true also that, thanks to the advent of such technologies as magnetic resonance imaging, neuroscientists have been making meteoric progress in better understanding the brain and its activities. Especially in the last decade, huge progress has been made in identifying the so-called neural correlates of consciousness, or NCCs. These link single measurable perceptions to specific processes and mechanisms in the brain.
For instance, we now know what in the brain is responsible for the experience of seeing red or hearing a door slam.
But why and where does the subjective experience of seeing red or hearing a loud slam come from?
We still don’t know.
The thing is, there is still no formal agreement on what sentience is, much less on what anatomical mechanisms might express and maintain it.
“We have no objective, rational method, no step-by-step procedure, to determine whether a given organism has subjective states, has feelings,” bemoaned Koch in 2008.
“The situation,” he added, “is scandalous. We have a detailed and very successful framework for matter and for energy, but not for the mind-body problem.”
That may be changing, thanks in part to a swell of interest coming from artificial intelligence theorists and roboticists who aim to build artificially sentient systems and beings.
As Marcello Massimini glibly points out, once humans figure out what processes and structures give rise to sentience, and maybe only then, we can re-engineer parts or all of it in the machines we create.
This begins to explain the present excitement around an evolving framework for studying and measuring consciousness called the Integrated Information Theory.
Created by University of Wisconsin psychiatrist and neuroscientist Giulio Tononi in 2004, IIT is an evolving system and calculus for studying and quantifying consciousness.
A unique blend of phenomenology and information theory, it is strikingly Cartesian in how it approaches the problem. That’s because it doesn’t attempt to investigate consciousness by looking at neurons and neurological networks. Rather, it begins by examining the lived experience of being conscious.
It then builds an understanding of what consciousness must require neurologically from there.
The IIT that emerges from that work is a detailed, complex system that describes how consciousness behaves and is organized. Its centerpiece is Phi, Tononi’s mathematical quantifier for consciousness.
Phi is based on the number and quality of interconnections a given entity has between bits of information. The resulting number — the Phi score — corresponds directly to how conscious that entity is: the more interconnections, the more consciousness.
Consciousness, in this model, doesn’t rely on a network of information. It is the network. As such, it doesn’t discriminate based on whether the subject is organic or electronic.
Put simply, a high Phi score means more consciousness — more sentience — no matter who or what you are.
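To make the intuition concrete, here is a toy sketch in Python inspired by the early (circa 2004) IIT idea of measuring integration as the information shared across the system’s weakest bipartition. This is an illustrative assumption, not Tononi’s full calculus: the names (`toy_phi`, `mutual_information`) and the use of simple joint distributions over binary units are all simplifications for demonstration.

```python
# Toy sketch of the intuition behind Phi, NOT Tononi's full IIT calculus.
# Idea: a system is "integrated" to the extent that no way of cutting it
# in two leaves the halves informationally independent. We take the
# minimum mutual information over all bipartitions as a Phi-like score.
from itertools import product, combinations
from math import log2

def marginal(joint, keep):
    """Marginal distribution over the subset `keep` of unit indices."""
    out = {}
    for state, p in joint.items():
        sub = tuple(state[i] for i in keep)
        out[sub] = out.get(sub, 0.0) + p
    return out

def mutual_information(joint, part_a, part_b):
    """I(A;B) in bits between the two halves of a bipartition."""
    pa = marginal(joint, part_a)
    pb = marginal(joint, part_b)
    mi = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        a = tuple(state[i] for i in part_a)
        b = tuple(state[i] for i in part_b)
        mi += p * log2(p / (pa[a] * pb[b]))
    return mi

def toy_phi(joint, n):
    """Minimum mutual information over all bipartitions of n units."""
    units = list(range(n))
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part_a in combinations(units, k):
            part_b = tuple(u for u in units if u not in part_a)
            best = min(best, mutual_information(joint, part_a, part_b))
    return best

# Two perfectly correlated binary units: the only bipartition carries
# a full bit of shared information, so the Phi-like score is 1.0.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: no bipartition shares any information,
# so the score is 0.0 — lots of activity, zero integration.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}

print(toy_phi(correlated, 2))   # -> 1.0
print(toy_phi(independent, 2))  # -> 0.0
```

The contrast between the two example systems is the point: both contain two active units, but only the correlated one is irreducible to its parts, and only it scores above zero.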
IIT is a complicated theory. It involves axioms one must accept about consciousness, postulates about the interrelatedness of information, theories and rules about the physical substrate in which connections are created and maintained, and the method one must use to calculate Phi.
And as critics point out and Tononi himself concedes, Phi and its correlates are exceedingly difficult to calculate.
Yet, Tononi’s original IIT concepts and predictions do appear to be bearing out in various neurological studies.
In 2013, Adenauer Casali and colleagues completed a study showing that it was possible to use the IIT framework within a TMS-EEG paradigm to measure consciousness in some patients.
Also, IIT constructs seem to fit well with recent neurological insights and discoveries.
IIT’s tenets readily accommodate findings about why the cerebral cortex and thalamus are more critical to consciousness than the more neuron-rich cerebellum. The cerebral cortex, fMRI data reveals, contains elaborate interconnections, including dense connections with the thalamus.
Sedation and Contention
Animal research also bears out some of Tononi’s original IIT predictions about how anesthesia blots out consciousness: it works, in fact, by shorting out interconnections in that heavily interconnected corticothalamic complex.
The field of consciousness research is contentious, though. Even Chalmers’s original description of the hard problem of consciousness is a matter of debate, with some theorists complaining that it isn’t the right problem to consider at all.
Still other theorists, like new mysterian Colin McGinn, decry the entire effort as futile: the human intellect, they argue, is simply not capable of investigating any relation between physical and phenomenal structures.
“Importantly,” responds Tononi, “the more convincingly IIT can be validated under conditions in which it is relatively easy to assess how consciousness changes, the more it will help to make inferences about consciousness in hard cases, such as brain-damaged patients with residual areas of high-level function, and machines that simulate human cognition.” Tononi views IIT as a “principled, empirically testable and clinically useful account of how three pounds of organized excitable matter support the central fact of our existence — subjective experience.
Time will tell whether this account is anywhere near the mark.”
No one said it would be easy. Here is Isaac Newton commenting on the problem in a 17th-century letter to Royal Society secretary Henry Oldenburg.
“To determine by what modes or actions light produceth in our minds the phantasms of colours is not so easie.”
Photo: Adobe Stock