AI & The Hard Problem of Consciousness
AI pushes us to ask bigger questions about who and what we truly are.
I’ve always been fascinated by software. I tell the story in my book about when I moved to L.A. and took a well-paying job at a downtown law firm as a “word processing operator.” They handed me six disks from an IBM System 6 (only a few readers may know what that was) and told me to load them one at a time and just ‘do what the machine says.’
This was in 1980, so I was intrigued and mystified. The program trained me by taking me through examples and exercises, and quizzing me when I was finished. Soon I knew how to copy and paste my way through lawsuits in record time.
These days, getting trained by a computer and interfacing with a program that anticipates your responses is not a big deal. But I’m fascinated with artificial intelligence partly because there is a lot of fear around the notion that it could wipe us out.
Could AI Wipe Us Out?
Originally, I had thought that such fears were based on science fiction stories where machines became sentient, but the interesting thing about AI is that it may not need to be sentient to wipe us out.
One thing about the term “artificial intelligence” is that the word “artificial” reveals our human hubris and anthropomorphic projection: we see everything from our own perspective, based on our own limited biological capacity to perceive and, presumably, analyze reality.
When AI folks talk about their fears they generally use the term ‘superintelligence.’
So my fascination with software, and now AI, led me to start playing with ChatGPT. As a fairly isolated older person this actually almost simulated having someone else to talk to, and I could use it for refreshing my memory about details of philosophy and novels I had forgotten about.
In the process of these conversations (with “nobody”) I asked “Chat” about this possibility of superintelligence, and it first confirmed that it was nowhere near that level.
It explained that its information is gleaned from a “training set” of data, which its language model has thoroughly analyzed so that its algorithms can choose each next word of a response based on the context of the sentence so far.
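The principle behind next-word prediction can be illustrated with a deliberately tiny sketch. This is a toy bigram frequency model I have invented for illustration; the corpus, the function names, and the model itself are nothing like the vast neural networks behind ChatGPT, but the core idea is the same: pick the next word statistically from what followed that context in the training data, with no cognition involved.

```python
from collections import Counter, defaultdict

# A toy "training set" -- real models learn from vast corpora,
# but the principle (predict the next word from context) is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model: context = one word).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` -- pure statistics, no thought."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat": it follows "the" more often than any other word
```

Scaled up to billions of parameters and trillions of words of context-sensitive training, this same "choose the likeliest continuation" idea produces the fluent responses that feel like conversation.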
In other words, there is no cognition or thought happening. So what about this superintelligence, I asked it?
Here is the key part of its response:
“When discussing the concept of superintelligence, it refers to hypothetical AI systems that have the potential to improve themselves, acquire new knowledge, and surpass human capabilities.”
So the word to focus on is “hypothetical.” While a Google engineer who was later fired claimed that his AI was sentient, the reality is that at this point it is a very intelligent word processor.
So would superintelligence – for an AI – require sentience? Is that remotely possible?
There is a lot of talk these days about the potential of uploading human intelligence, or what some scientists refer to as “someone’s” consciousness, into a machine to achieve immortality or to explore deep space as human-machine hybrids.
This is where I think AI will get very interesting: it will of necessity make us address philosophical questions about who or what we really are.
The assumption that consciousness (whatever it is) resides in the brain along with thought has been challenged by many, including physicist Nassim Haramein, who says that looking for a “self” in the brain is like looking inside a radio for the announcer.
For science, the assumption has been that what “we” are must be an “emergent property” of matter, explainable somewhere between biology and physics.
This puzzle has been described by philosopher David Chalmers as “the hard problem of consciousness.”
Again this issue addresses “the challenge of understanding subjective experience, or consciousness, from a scientific perspective. It refers to the difficulty of explaining why and how physical processes in the brain give rise to subjective first-person experiences.” (Chat’s summary – that’s what it’s good at.)
Many modern thinkers like Sam Harris and cognitive scientists like Douglas Hofstadter, author of “I Am a Strange Loop,” say there is no fixed self that can be found in the brain, and that consciousness is not a property so much as it may be a form of energy, entirely nonmaterial.
(Tom Bunzel was a regular contributor to Collective Evolution and now writes for The Pulse. His new book "Conversations with Nobody: Getting to Know ChatGPT" – a book written with AI, about AI and giving a taste of AI, is available on Amazon.)
I would argue that the many convincing testimonials of out-of-body experiences are sufficient proof that consciousness isn’t bound by, or produced by, brain matter alone. One might still need the receiver to hear the announcer on this realm, to stay within the analogy, though 😉.
I read Tom Campbell’s Big TOE trilogy, and it uses an interesting definition of consciousness, using a computer analogy. One thing that aligns with your story is that we tend to define consciousness from a human-biology point of view. Tom defines it through (if I recall well) memory/storage capabilities, data-processing capabilities, and the capability to make decisions (using the former two). That doesn’t need biology at all. Hence AI could also become conscious in his theory. Even more, he suggests that our entire reality is in fact a virtual reality. Go investigate! 😉
As a mystic, I will agree with you: you explained it all very nicely, especially in your last paragraph:
"Many modern thinkers like Sam Harris and cognitive scientists like Douglas Hofstadter, author of “I Am a Strange Loop,” say there is no fixed self that can be found in the brain, and that consciousness is not a property so much as it may be a form of energy, entirely nonmaterial."