Perfecting Equilibrium Volume Two, Issue 61
Just need a little brain salad surgery
I got to cure my insecurity
But I've been in the wrong place
But it must have been the right time
I been in the right place
But it must have been the wrong song
I been in the right vein
But it seems like a wrong arm
I been in the right world
But it seems like wrong, wrong, wrong, wrong, wrong
The Sunday Reader, Jan 14, 2024
Last year’s Artificial Intelligence roller coaster ran on oh-so-familiar tracks. First: It’s AMAZING! It’s the end of the world as we know it! Second: It’s AWFUL! It’s the end of the world as we know it! Third: Silence, as everyone gets bored and moves on to the next roller coaster – cryptocurrency Exchange-Traded Funds or some such.
That’s when the real work begins, and the real change starts.
We’ve covered how so-called AIs are actually Large Language Models that absorb enormous amounts of data and discern patterns. And when you feed them the mountains of data that comprise the Internet, you are feeding them mountains of garbage. So it should be no surprise that the answers are garbage.
Garbage in. Garbage out. It’s fundamental.
But therein is also the solution. Feed LLMs good, clean, curated data, and they will discern patterns and produce analytics that have never before been possible.
Now there are two important things about that statement:
1. It is absolutely true.
2. It is absolutely true in the way Monty Python told the truth in How To Play The Flute: First you blow in one end, and then you move your fingers up and down the outside.
Ummm….THAT’S THE HARD PART. People spend years learning how to move their fingers up and down the outside of a flute so the right notes come out!
Trying to get the right tool into the right place at the right time often leads to confusion and failure. Look at the current controversies over whether plug-in electric vehicles have any value. “What in the heck are you talking about?” say the thousands of golf courses and senior communities all over the world where herds of electric golf carts buzz everywhere by day and rest and recharge by night, and have been doing it fuss-free for decades. And plug-in electric cars are an excellent fit for similar usage profiles: if you commute to work and shopping, and have a charger in your garage, an EV is a great fit.
Road-tripping in an EV…not so much. The Internet is jammed with horror stories by drivers who went crazy and drove an EV out of their neighborhood, which apparently is somewhat akin to trying to reach the North Pole with a wooden sailing ship and a couple of dogsleds. Jack Baruth has reviewed hundreds of vehicles; here’s his report on a road trip in a Tesla Model Y Long Range and how its accompanying Supercharger plan - routing between chargers - became more important than the trip itself:
This is likely the worst road trip vehicle created in… well, decades. Certainly since the Fifties. Early in our journey, I went 23 miles out of the way to look at the airliner boneyard north of Tucson. Doing that changed our Supercharger plan and basically added 70 minutes to the trip over and above the actual detour. You quickly learn that any change in direction or destination has to be accompanied by serious thought and planning.
In fairness to the Model Y, I didn’t just hate the way it bumbled and paused through over-the-road journeys that could easily be accomplished in two-thirds the time by a Honda CB300 or Chevy Spark. I also hated everything about the way it drove. The divided highways of Arizona are full of crossovers. The Tesla will “see” a vehicle crossing over an eighth of a mile away and panic-brake everything from the back seat into the front seat. The “Autopilot” requires that you jerk on the wheel at random intervals to keep it auto-piloting, but no worries; to keep things fair, the car will occasionally jerk the wheel itself for no reason. Sometimes this will be in the direction of oncoming traffic. Using the regular cruise control is not much better; it will panic-brake you even after you’ve configured it not to. The seats are miserable, worse than anything I’ve driven since the Nineties. The A/C uses more power than the Hoover Dam can generate but it is about as effective as waving a piece of damp paper in someone’s general direction. The stereo is trash.
Apparently the headlights are quite nice, though using them does cut down your range…
So if feeding Large Language Models the mess that is the Internet is their version of road-tripping an EV, what’s the right place and right time for LLMs? Use them internally as analytical tools, and feed them only corporate data.
A Large Language Model will discern even tiny patterns in enterprise data. The chatbot-style, no-code interface means employees will use it far more frequently and make better use of the data. And Large Language Models trained only on corporate data avoid the data-provenance and spurious-output problems that plague LLMs trained on public data.
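To make that concrete, here is a minimal sketch of the discipline in Python. The records and the ask_internal_model function are invented placeholders, not any particular product’s API; the point is only that data gets validated and curated before the model ever sees it.

```python
# Minimal sketch: curate corporate records before handing them to an internal model.
# ask_internal_model() is a hypothetical placeholder for whatever in-house model
# or analytics endpoint an enterprise actually runs; the curation step is the point.

from datetime import date

RAW_RECORDS = [
    {"order_id": "A-1001", "division": "Northeast", "amount": 1250.00, "closed": date(2023, 11, 3)},
    {"order_id": "A-1002", "division": "Northeast", "amount": -99999.0, "closed": date(2023, 11, 4)},  # obvious garbage
    {"order_id": "A-1003", "division": "",          "amount": 840.50,  "closed": date(2023, 11, 7)},  # missing division
    {"order_id": "A-1004", "division": "Southwest", "amount": 2310.75, "closed": date(2023, 11, 9)},
]

def is_clean(record):
    """Simple validation rules: every field present, amount in a sane range."""
    return bool(record["division"]) and 0 < record["amount"] < 1_000_000

def curate(records):
    """Split records into clean data for the model and rejects for human review."""
    clean = [r for r in records if is_clean(r)]
    rejects = [r for r in records if not is_clean(r)]
    return clean, rejects

def ask_internal_model(question, records):
    # Placeholder: in a real deployment this would call the enterprise's own
    # internally hosted model, over the curated records only.
    return f"(model would answer {question!r} using {len(records)} curated records)"

if __name__ == "__main__":
    clean, rejects = curate(RAW_RECORDS)
    print(f"curated: {len(clean)} records, rejected for review: {len(rejects)}")
    print(ask_internal_model("Which division is growing fastest?", clean))
```

The dull part, the validation rules, is the finger-work from the flute lesson above.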
In plain language: Large Language Models trained on public data suffer from Garbage In, Garbage Out. LLMs trained on clean corporate data reward you with Good Data In, Good Analytics Out.
So it’s magic! Every company everywhere should do this immediately!
Alas, no. The rewards scale with the data.
In other words, if you are a small mom-and-pop pizzeria running your company with a single Point-of-Sale cash register and a copy of Microsoft Excel…keep on trucking! No Large Language Model will see anything you won’t see faster just by eyeballing that amount of data.
On the other hand, if you’re the Chief Financial Officer of a corporation with dozens of divisions and operating units in dozens of states, a Large Language Model will pay for itself lickety-split.
Not only will large enterprises see the most benefit, they’ve already done much of the prep work. Large Language Models can feed on existing data marts and data lakes. And the governance work already done to build those repositories, such as Master Data Management, can be put straight to work training Large Language Models.
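As a rough illustration of how that prep work carries over, here is a hedged sketch with invented table contents: the MDM “golden record” of division names is used to conform raw data-lake rows before anything reaches the model, and rows that don’t match go back to a data steward instead.

```python
# Sketch: reuse Master Data Management work to conform data-lake rows before
# they reach the model. The master list and raw rows are invented examples.

MASTER_DIVISIONS = {          # MDM golden record: canonical division names
    "NE": "Northeast",
    "NORTHEAST": "Northeast",
    "SW": "Southwest",
    "SOUTHWEST": "Southwest",
}

raw_rows = [
    {"division": "ne",        "revenue": 1_200_000},
    {"division": "Northeast", "revenue": 950_000},
    {"division": "sw",        "revenue": 700_000},
    {"division": "Gulf",      "revenue": 400_000},   # not in the master list -> flag it
]

conformed, unmatched = [], []
for row in raw_rows:
    key = row["division"].upper()
    if key in MASTER_DIVISIONS:
        conformed.append({**row, "division": MASTER_DIVISIONS[key]})
    else:
        unmatched.append(row)

print("ready for the model:", conformed)
print("needs a data steward:", unmatched)
```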
The good news is that in such a setup Large Language Models will supercharge enterprise analytics, revealing patterns and correlations that have never before been available.
The important thing to remember is that Large Language Models do not have intelligence, artificial or otherwise. LLMs will definitely uncover new patterns in your data. Most of these will be extremely useful, and will give LLM-powered enterprises speed and foresight over their competition. But some of these patterns will actually be discoveries of problems: bad data, system bugs, problematic processes, and the like.
At the end of the day, Large Language Models – indeed, all analytics – give you questions, not answers. But do not doubt the value of those questions.
Next on Perfecting Equilibrium
Tuesday January 16th - The PE Vlog: We’re spending a couple of weeks developing marketing graphics for Feola Factory as an exercise in understanding how and when AI tools are useful. This week we’re going to see how Adobe Firefly does. Second in a series.
Thursday January 18th - The PE Digest: The Week in Review and Easter Egg roundup
Friday January 19th - Foto.Feola.Friday
Even corporate data is riddled with errors... often systematic errors. Far better than, say, X, but dangerous nonetheless. I would not rely on people or machines to totally get things right ALL of the time.
Just finished a weeklong job looking at 10 million medical records, culled from 40 million. The data runs, using neural network routines at MIT, identified just over 20,000 corrupted records; there are certainly more. (LLMs use the same neural-network machinery to get their job done.) As a QA/QC guy, I was specifically looking for those records and specifically wanted them culled for further examination. An error rate around 0.002 (0.2%, or 1 in 500) is far lower than typical in corporate or government data, but it can have an outsized effect when you are searching for a rare phenomenon, as we have been on this project.
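The arithmetic behind that worry is easy to run. The 10 million and 20,000 figures come from the project described above; the prevalence of the rare phenomenon and the rate at which corrupted records spuriously match are invented purely for illustration.

```python
# Back-of-the-envelope: why a 0.2% corruption rate matters when hunting rarities.
# The 10M / 20k figures come from the comment above; the prevalence and the
# spurious-match rate below are assumptions made up for this sketch.

total_records = 10_000_000
corrupted     = 20_000
error_rate    = corrupted / total_records              # 0.002 = 0.2% = 1 in 500
print(f"error rate: {error_rate:.1%} (1 in {round(1 / error_rate)})")

prevalence     = 1 / 10_000   # assume the rare phenomenon is in 1 of 10,000 records
spurious_match = 0.05         # assume 5% of corrupted records look like a match

true_hits  = total_records * prevalence        # ~1,000 genuine cases
false_hits = corrupted * spurious_match        # ~1,000 corrupted look-alikes
print(f"genuine hits: {true_hits:.0f}, corrupted look-alikes: {false_hits:.0f}")
print(f"share of hits that are garbage: {false_hits / (true_hits + false_hits):.0%}")
```

Under those assumed numbers, half of everything the search turns up is corruption, not signal, even though 99.8% of the records are fine.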
Look at Boeing's predicament right now. Airframe manufacture in theory has very tight specs (design standards that design-in quality) and more-than-typical inspection regimes. But a lot of what happens on the factory floor is never documented in any corporate record, except in factories that are 100% automated.
Typical protection against that is close-at-hand management (a management level high enough to instantly allocate significant funds to fix an observed shortcoming). The 737 MAX 9 fuselages are made in Wichita, Kansas, and final assembly is in Washington State. And where has Boeing top management been since 2001? Chicago, and since 2022, northern Virginia. The door plugs were supposed to be secured with 4 high-strength bolts -- which are usually quite brittle. Drop them onto a hard floor and they may crack internally. A typical "fix" would be to have the floor below the plug-install area padded and STILL throw away dropped bolts. Or queue up the bolts in the assembly. Every once in a while a bad bolt would sneak through, but the other three would do the job. The extra pad costs MONEY. Not much, but just a little. And it requires LABOR, which is in short supply.
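For what it’s worth, the redundancy logic in “the other three would do the job” is easy to sketch, with the caveat that the bad-bolt rate below is an invented figure and real cracked bolts from a single dropped batch would not be independent.

```python
# Rough illustration of four-bolt redundancy: all four bolts must be bad
# before the redundancy is lost. The 1-in-500 bad-bolt rate is an invented
# figure, and independence is assumed, so treat this as a sketch only.

p_bad = 1 / 500
p_all_four_bad = p_bad ** 4
print(f"chance a single bolt is bad:   {p_bad:.4%}")
print(f"chance all four bolts are bad: {p_all_four_bad:.2e}")
# ...which is why redundancy guards against the occasional random bad bolt,
# but not against a shortcut on the floor that affects all four at once.
```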
BTW, I shot an episode of Invisible for Oprah Winfrey Network years ago at the Tucson aircraft boneyard. Worth the trip.