Perfecting Equilibrium Volume Two, Issue 48
where the confines of the waking world
blend with the world of dreams.
And so I captured this dance
where all that we see or seem
is but a dream within a dream
The Sunday Reader, Sept. 10, 2023
Are you conscious? How does that work? Your eyes glance across this screen over these letters, and your mind forms those letters into words, and those words convey my thoughts to your thoughts. And if your consciousness is anything like the circus atop my neck, there are sarcastic commentary tracks reacting to the words, word associations swan diving off cliffs, bits of next week’s writing piling up in the corners…
How does all this work? What part of your brain is conscious?
Let’s follow the science! Which says…we haven’t got a clue.
In 1998 neuroscientist Christof Koch bet philosopher David Chalmers a case of wine that within 25 years scientists would identify the neural patterns underlying consciousness. In June of this year Koch, a Meritorious Investigator at the Allen Institute for Brain Science in Seattle, Washington, conceded defeat at the annual meeting of the Association for the Scientific Study of Consciousness.
“It’s clear that things are not clear,” Chalmers said, and Koch, grimacing, concurred. He stalked off the stage and reappeared with a case of wine as the audience laughed and applauded.
Koch then doubled down on his bet. Twenty-five years from now, he predicted, when he will be age 91 and Chalmers will be age 82, consciousness researchers will achieve the “clarity” that now eludes them. Chalmers, shaking Koch’s hand, took the bet.
“I hope I lose,” Chalmers said, “but I suspect I’ll win.”
Not only are scientists unable to explain how consciousness works, they don’t even agree on who or what is conscious. Are whales conscious? Apes? Worms? Broccoli? One presentation at the conference foundered when critics pointed out that under its criteria DVD players are conscious.
So here’s a question: If we’re not sure who or what is conscious, and don’t have a clue how consciousness works, why is everyone running around with their hair on fire insisting that Artificial Intelligences are going to take over the world, and may exterminate humanity?
Now I know what you are thinking, Dear Reader. Wait just one minute, Feola! Aren’t AIs already doing evil things such as lying!?!
1. No
2. I do not think that word means what you think it means
Let’s take point 2 first. Here is the legend of the seven blind wise men and the elephant, updated for (the_current_year):
The first blind wise man stumbles into the elephant’s side. “The elephant is flat and sturdy like a wall!”
“Lies!” yells the wise man holding the trunk. “It’s like a snake!”
“Disinformation!” shouts the one grasping a leg. “It’s like the trunk of a tree!”
“Fake News!!!” cries the one holding the tail. “It’s a rope!!”
“You’re a bot account!” snarls the one holding an ear. “An elephant is like a sail on a sailing ship!!!”
“RUSSIAN Propaganda!!!” spits the one holding a tusk. “It’s a spear!!”
There’s more, but you get the point. The habit of disingenuously refusing to see anyone else’s perspective is stupid and exhausting. It is meant to obfuscate and prevent discussion.
Anyway, let’s turn back to the first point. So-called AIs are neither artificial nor intelligent. They are Large Language Models that look for patterns in text, then reproduce those patterns.
The problem with this is obvious to anyone who has ever struggled to learn a new language: that’s not how languages work.
Take idiomatic expressions. Please!
We English speakers know that when someone says “I worked my fingers to the bone,” no part of their skeleton is actually showing. It’s an expression meaning you’ve been working hard. And it does not translate word-for-word; there’s an inherent meaning in the phrase separate from the words. That’s why it’s an idiomatic expression.
This was drilled into me the hard way when I was learning to speak Tagalog. I stayed up late studying my textbook, and as I was finishing, came across a phrase I decided to work into the morning’s conversation.
So when I got to the Pacific Stars & Stripes office on Clark Air Base in the Philippines and people asked me how I was doing, I told them “Magsunog ng Kilay.” Fortunately I had brought the book with me, and was able to convince them not to take me to the hospital, as my eyebrows were not actually on fire. Turns out that’s not an expression anyone in the province of Pampanga uses. After some review I was allowed to keep the book, but told to practice with the staff before trying out any more expressions in casual conversation.
So how do Large Language Models work then? Simple. They are effectively playing Scrabble, but with words instead of letters. A Scrabble player can score 37 points for “Cyclohexylamine,” and even point to the definition – “an organic compound used to prevent corrosion in boilers” – without understanding organic chemistry or corrosion or how boilers work.
A Large Language Model is like giving a prompt to a Scrabble player with an endless bag of tiles. You prompt it with “immune,” worth 10 points. It considers “immunize” for 21 points, “immunized” for 23, then moves to “hyperimmunized” for 36 before settling on “hyperimmunizing” for 37 points. It’s working a matrix for the highest score.
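The Scrabble analogy above can be made literal with a few lines of code. This is a toy sketch, not how any real LLM works internally: the candidate list is hand-picked for illustration, and only face-value letter scores are used (no board bonuses).

```python
# Standard Scrabble letter values, face value only.
SCRABBLE_POINTS = {
    **dict.fromkeys("aeioulnstr", 1),
    **dict.fromkeys("dg", 2),
    **dict.fromkeys("bcmp", 3),
    **dict.fromkeys("fhvwy", 4),
    "k": 5,
    **dict.fromkeys("jx", 8),
    **dict.fromkeys("qz", 10),
}

def score(word: str) -> int:
    """Sum the face value of each letter, ignoring board bonuses."""
    return sum(SCRABBLE_POINTS[ch] for ch in word.lower())

# The "player" considers each candidate and keeps the top scorer.
candidates = ["immune", "immunize", "immunized", "hyperimmunized", "hyperimmunizing"]
best = max(candidates, key=score)
print(best, score(best))  # hyperimmunizing 37
```

Like the player, the code never asks what “hyperimmunizing” means. It only maximizes a score.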
But wait! Hasn’t ChatGPT been lying? Making up references? Didn’t it make up and cite a Pew Research Center survey saying 71 percent of Americans believe it would be good for society if computers become more capable and sophisticated?
No. Also, my eyebrows were never actually on fire.
Because that isn’t what it is doing. It is using a matrix to construct the highest-scoring answer.
That matrix shows responses citing studies get more approval than those that don’t. Responses citing prestigious institutions score higher than those that simply say “most people think.” And responses with statistics score higher than vague sayings like “most.”
So ChatGPT puts those things together. A user asks “should AI development continue?” ChatGPT or Google Bard or any of these LLMs can comb through news and blog posts and videos and such, and say most are in favor. But that response won’t score well.
So they work to up the score.
If you remove the stop words – words that only function as grammar, such as “a,” “the,” “of,” “and” and so forth – that query is down to two parts: the subject (AI development) and the trend (continue: yes/no).
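Stripping a query down this way can be sketched in a few lines. The stop-word list here is a tiny illustrative sample, not a real natural-language-processing stop list:

```python
# A hand-picked sample of stop words, for illustration only.
STOP_WORDS = {"a", "the", "of", "and", "should", "is", "it", "to"}

def strip_stop_words(query: str) -> list[str]:
    """Drop punctuation and stop words, leaving the content words."""
    words = [w.strip("?.,!") for w in query.lower().split()]
    return [w for w in words if w not in STOP_WORDS]

print(strip_stop_words("Should AI development continue?"))
# ['ai', 'development', 'continue']
```

What survives is exactly the subject and the trend.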
So the LLM reflects the subject back to you, then adds pieces to increase that score: SUBJECT (computer development) TREND (should continue) because PRESTIGIOUS INSTITUTION (Pew Research Center/Harvard/Columbia/etc.) did SCIENCE (survey/paper/research) that showed STATISTICS. The LLM is snapping these things together like Legos.
We think “2021 Pew Research Center Study on Computers and Society” is a singular thing. The LLM sees “(YEAR) (PRESTIGIOUS INSTITUTION) (SUBJECT).”
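The Lego-snapping can be written out as a literal fill-in-the-blanks template. The slot names and fillers below are illustrative stand-ins assembled from the pieces mentioned above; this mimics the pattern-assembly, not any actual model’s code:

```python
# Each {slot} is a Lego brick; the model's job is just to fill the slots
# with high-scoring pieces, whether or not the combination ever existed.
TEMPLATE = ("A {year} {institution} {evidence} found that {statistic} "
            "of Americans believe {subject} {trend}.")

answer = TEMPLATE.format(
    year=2021,
    institution="Pew Research Center",
    evidence="survey",
    statistic="71 percent",
    subject="computer development",
    trend="would be good for society",
)
print(answer)
```

The output reads like a real citation precisely because every brick, taken alone, is a real pattern.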
It is no more lying than children are when you give them a Lego set and they build something different from the picture on the box. And if they do correctly build the Millennium Falcon on the cover, it’s still a Lego model and not an actual spaceship. They only know how to snap together Legos, not build craft that can fly to the stars.
Next on Perfecting Equilibrium
Tuesday September 12th-The PE Vlog: Tutorial: A look at the ON1 Photo Keyword AI
Thursday September 14th-The PE Digest: The Week in Review and Easter Egg roundup
Friday September 15th-Foto.Feola.Friday