by Tom Bunzel, The Pulse:
I’ve always been fascinated by software. I tell the story in my book about when I moved to L.A. and took a well-paying job at a downtown law firm as a “word processing operator.” They handed me six disks from an IBM System 6 (only a few readers may know what that was) and told me to load them one at a time and just “do what the machine says.”
This was in 1980, so I was intrigued and mystified. The program trained me by taking me through examples and exercises, then quizzing me when I was finished. Soon I knew how to copy and paste my way through lawsuits in record time.
These days, getting trained by a computer and interfacing with a program that anticipates your responses is not a big deal. But I’m fascinated with artificial intelligence partly because there is a lot of fear around the notion that it could wipe us out.
Could AI Wipe Us Out?
Originally, I had thought that such fears were based on science fiction stories where machines became sentient, but the interesting thing about AI is that it may not need to be sentient to wipe us out.
One thing about the term “artificial intelligence”: the word “artificial” betrays our human hubris and anthropomorphic projection. We see everything from our own perspective, based on our own limited biological capacity to perceive and, presumably, to analyze reality.
When AI folks talk about their fears, they generally use the term “superintelligence.”
So my fascination with software, and now AI, led me to start playing with ChatGPT. As a fairly isolated older person, I found it almost simulated having someone else to talk to, and I could use it to refresh my memory about details of philosophy and novels I had forgotten.
In the course of these conversations (with “nobody”) I asked “Chat” about this possibility of superintelligence, and it first confirmed that it was nowhere near that level.
It explained that its information is gleaned from a “training set” of data: a language model, having thoroughly analyzed that data, chooses each next word of its response based on the context of the words that came before.
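To make that concrete, here is a deliberately tiny sketch of the idea “Chat” described. It is my own illustration, not anything resembling ChatGPT’s actual architecture: real systems train neural networks on billions of words and weigh far more context than a single preceding word, but the underlying principle of picking the next word from statistical patterns in a training set is the same. The toy corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

# A toy "training set": a few sentences of legal boilerplate.
training_set = (
    "the court granted the motion . "
    "the court denied the motion . "
    "the court granted the appeal ."
).split()

# "Training": count how often each word follows each preceding word.
# (Real language models learn vastly richer statistics than these
# one-word-of-context counts, but the spirit is similar.)
follows = defaultdict(Counter)
for prev, nxt in zip(training_set, training_set[1:]):
    follows[prev][nxt] += 1

def next_word(context):
    """Pick the statistically most common word to follow `context`."""
    candidates = follows[context]
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("court"))  # -> "granted" (seen twice, vs. "denied" once)
print(next_word("the"))    # -> "court" (its most frequent follower)
```

A one-word-of-context model like this is of course a caricature, but it shows how next-word prediction can produce fluent-looking output by tallying and replaying patterns.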
In other words, there is no cognition or thought happening. So I asked it: what about this superintelligence?
Here is the key part of its response:
“When discussing the concept of superintelligence, it refers to hypothetical AI systems that have the potential to improve themselves, acquire new knowledge, and surpass human capabilities.”
So the word to focus on is “hypothetical.” While a Google engineer who was later fired claimed that his AI was sentient, the reality is that at this point it is a very intelligent word processor.
So would superintelligence – for an AI – require sentience? Is that remotely possible?
There is a lot of talk these days about uploading human intelligence, or what some scientists refer to as “someone’s” consciousness, into a machine, whether to achieve immortality or to explore deep space as a human-machine hybrid.
This is where I think AI will get very interesting… It will of necessity make us address philosophical issues about who or what we really are.
The assumption that consciousness (whatever it is) resides in the brain along with thought has been challenged by many, including physicist Nassim Haramein, who likens looking for a “self” in the brain to looking inside a radio for the announcer.
For science, the assumption has been that whatever “we” are must be an “emergent property” of matter, explainable somewhere between biology and physics.
This tension is what philosopher David Chalmers has called “the hard problem of consciousness”: explaining how subjective experience could arise from physical matter at all.