Perfecting Equilibrium Volume Three, Issue 34
You been tellin' me you're a genius since you were seventeen
In all the time I've known you I still don't know what you mean
The weekend at the college didn't turn out like you planned
The things that pass for knowledge I can't understand
The Sunday Reader, March 9, 2025
It was clear I’d lost them.
They’d stayed with me through Smart Data Objects, and Managing Distributed Data, and even Data That Searches for Users.
And then their nerd asked the question I’d been dreading:
What’s the backend?
So…
I told The Tale of Jini & the Missing Objects. They collapsed, word by word, until by the time I said “Self-Negotiating Object Model” the entire brain trust of a Fortune 100 energy giant was in a coma.
Figures. Predictable, really, I suppose. It was an act of purest optimism to have attempted to answer the question in the first place.
After all, I’d spent months meeting with Andrew Hopkins and his team in the Fishbowl.
After the Belo Interactive tech team won a stack of awards and was named Number 13 on the InfoWorld 100 for VelocIT – our first Distributed Data Management System – upper management had a brilliant idea:
Hey, maybe we can commercialize this thing!
So they gave a six-figure chunk of money to Accenture to come in and evaluate the commercial potential of VelocIT.
Accenture took the money and sent a team to our office. They were convinced they were going to listen very nicely to our enthusiasm for our little home-grown document management system, pat us on the head, tell us that market was already locked up by established players, and collect their money.
Easy peasy!
My staff, knowing my fairly complete lack of filters and proclivity for abusing people who I felt were wasting my time, kept me well out of things until the Accenture team was settled into a conference room called The Fishbowl.
The Fishbowl was a long, narrow oval conference room with one wall of semi-translucent glass brick that gave a distorted view in – hence the name. A giant whiteboard ran the entire length of one of the long walls.
When they were settled I walked in and said Hi! Then I picked up a dry-erase marker and started writing at the left edge of the whiteboard.
I talked for about two hours. I started with The Tale of Jini & the Missing Objects, and how we’d written our own new tale with Smart Objects and created VelocIT, a Distributed Data Management System. I explained Self-Negotiating Object Models, and how that enabled us to scale horizontally instead of vertically.
I may have said non-Markovian reward models once or twice – OK, like two dozen times – a phrase that makes Andrew twitch reflexively to this very day.
After two hours I’d reached the end of my story and the right edge of the whiteboard. I said “Lemme know if you have any questions,” and headed back to my office. Andrew shut the door behind me.
That door remained shut for days. When it finally opened Andrew headed straight to my office and said in his sumptuous South African accent:
Could you do that again?
So back to The Fishbowl I went. Except now the Accenture team was larger; reinforcements had been brought in to figure out what in the heck we were up to at Belo Interactive.
This cycle repeated over the ensuing weeks: the door would open after a few days to reveal a larger Accenture team, reinforced with specialists from increasingly distant offices; intros to more Dallas people were augmented by those from, say, Chicago. After a half-dozen of my whiteboards, Accenture began asking my lieutenants to present, and got the same thing.
Finally Andrew showed up in my office and pulled up a chair.
This isn’t a document management system. This changes everything.
Truth. Let’s dig into why that’s true, starting with the problem with current data architectures, and how these problems are solved by Distributed Data Management Systems.
Current centralized IT data architectures have been overwhelmed by the tsunami of edge data. The data has to be ETL’d from sources (Extract, Transform, Load), then correlated, coordinated, and synchronized with those sources for updates. Processing and managing the data is a Sisyphean task; just one such process, Master Data Management, consumes $11 billion annually.
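To make that treadmill concrete, here’s a minimal sketch of the extract-transform-load-and-reconcile cycle; the two sources, the schema, and the records are all invented for illustration:

```python
# A minimal sketch of the centralized ETL treadmill. The sources, schema,
# and the matching problem below are all invented for illustration.

# Two edge systems holding the same customer, spelled differently.
crm_records = [{"name": "ACME Corp.", "revenue": 6_000_000}]
billing_records = [{"name": "Acme Corporation", "revenue": 6_000_000}]

def extract(source):
    """Pull raw records from one edge system."""
    return list(source)

def transform(records):
    """Force each record into the warehouse's master schema."""
    return [{"name": r["name"].lower().rstrip("."), "revenue": r["revenue"]}
            for r in records]

def load(warehouse, records):
    warehouse.extend(records)

warehouse = []
for source in (crm_records, billing_records):
    load(warehouse, transform(extract(source)))

# Even after ETL, "acme corp" and "acme corporation" still don't match.
# Deciding they're the same entity is the Master Data Management step:
# the reconciliation that consumes those billions every year.
print(warehouse)
```

And that loop never ends: every change at the edge means another pull and another round of reconciliation.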
This architecture is based on computing’s Model T: the centralized mainframe-style system. The earliest computers were entirely centralized; users had to set up jobs on punch cards and have them run as time permitted. Next, mainframes were ringed with “dumb terminals” that allowed users to interact with the mainframe via keyboard and screen; they were called dumb terminals because they had no computing power of their own.
When Personal Computers came along they were used to replace dumb terminals. Which made sense; the earliest PCs had so little computing power they were almost toys. PCs did have enough power to run clients, and client-server computing was born. Running a client locally on a PC was a much better user experience than that offered by dumb terminals, while the servers did the heavy computing. Client-server systems were so successful that the number of clients exploded, launching a hunt for a “universal client” that could be used for any server.
Hence the web browser.
This centralized architecture is drowning as edge data increases exponentially. Think of it this way: two decades ago the hot high-end phone was a BlackBerry texting over 2G. Today it’s an iPhone shooting 4K video. Meanwhile, robust PCs back then were running single-core processors. Today the base iPad runs 26 cores across its CPU, GPU, and Neural Engine.
It took Fujitsu and RIKEN seven years and more than a billion dollars to build Fugaku, named the world’s fastest supercomputer in June 2020. It takes Apple less than an hour to ship that much computing power in just iPhones.
Now add data uses like the wild array of separate, often contradictory laws and regulations coming from governments across the globe. How do you handle a user who lives in Victoria, British Columbia, and takes the ferry to work in Seattle? What about Dublin residents – Ireland is part of the EU – who work in Belfast? (Northern Ireland is part of the United Kingdom, which Brexited the EU.)
The answer is to make the data itself an independent smart object. Let’s dive into the weeds using a tool most everyone recognizes: Microsoft Excel. We’ll start by trying to answer a simple question: $6 million – good or bad?
Well, it’s good if you play the Lotto and find it in your checking account. It’s bad if it’s an unexpected charge on your credit card. And it’s a disaster if it is Microsoft’s market cap.
A simple budget spreadsheet illustrates the problem:
Simple, yes? Let’s turn it into a budget.
Now let’s add some formulas and see how we’re doing!
That $6 million is looking awfully good!
Excellent!
But remember, the question was whether $6 million was good or bad. If it’s Feola Factory, W00T!! We’re cornering the market on dinosaurs! (I have grandkids!)
But what if that $6 million is for Microsoft…
Yikes! That’s a disaster!
Note that the only actual data in this spreadsheet are the six entries in the yellow block. Everything else is a formula or a call to a formula on another spreadsheet.
Notice how everything updates when those data entries change. Let’s suppose that $6 million was a typo; it should have been $6 billion.
Yikes! Still a disaster. But a smaller disaster.
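If you want to see that propagation outside of Excel, here’s a toy sketch in Python; the figures and the expense line are made up:

```python
# A toy sketch of the spreadsheet's structure: a handful of raw entries,
# with everything else derived from them on the fly. All figures here
# are invented for illustration.

raw = {"revenue": 6_000_000}            # one of the yellow-block entries

def profit(data):
    """Stand-in for the sheet's formulas: derived, never stored."""
    expenses = 1_000_000                # hypothetical expense line
    return data["revenue"] - expenses

print(f"{profit(raw):,}")               # 5,000,000

raw["revenue"] = 6_000_000_000          # fix the typo: million -> billion
print(f"{profit(raw):,}")               # everything downstream updates
```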
This exercise illustrates that data only becomes information when it has context. The $6 million doesn’t change from sheet to sheet here; the context changes, and that changes the meaning.
The architectural problem is that context is largely in the Excel software package. Let’s take a look at the data by itself; we’ll export this file from Excel:
Well, that’s just sad, isn’t it? The formulas are all gone. Here’s the same spreadsheet in Excel; note the formula in the Formula Bar:
Excel shows the formula in the Formula Bar, and the result in the cell. Without Excel you lose the formula.
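Here’s a hypothetical recreation of what that export does; the cell addresses and numbers are invented, but the behavior is the real CSV story: values survive, formulas don’t.

```python
# A hypothetical recreation of the export. The cell addresses and numbers
# are invented, but the behavior is real: CSV keeps values, not formulas.
import csv
import io

# What Excel holds for two cells: a value *and* how it was produced.
cells = [
    {"ref": "B2", "value": 6_000_000, "formula": None},           # raw data
    {"ref": "B7", "value": 6_000_000, "formula": "=SUM(B2:B6)"},  # derived
]

buf = io.StringIO()
writer = csv.writer(buf)
for cell in cells:
    writer.writerow([cell["ref"], cell["value"]])  # the formula never leaves

print(buf.getvalue())
# B2,6000000
# B7,6000000   <- indistinguishable from raw data: the context is gone
```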
Now think of all the systems you’ve ever heard of — PC software like Excel and AutoCAD; enterprise packages like SAP and Oracle Financials; and cloud services like those available on AWS and Azure. And think of all those systems keeping the context to themselves, and you begin to understand how $11 billion is spent annually on Master Data Management trying to restore context…and every database in your life is still such a mess.
Keeping the context in the data doesn’t merely solve the problem; it obviates it, and frees data to power new types of solutions. That’s the philosophy behind the encapsulated, well-formed data objects of first VelocIT, and now PrivacyChain.
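What might such an encapsulated data object look like? Here’s a minimal sketch in Python; the class name, fields, and methods are my shorthand for the idea, not VelocIT’s or PrivacyChain’s actual object model:

```python
# A minimal sketch of a self-describing data object: the value travels
# with its own context. Names and fields are invented shorthand, not
# the actual VelocIT / PrivacyChain object model.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SmartDatum:
    value: float
    unit: str                                    # e.g. "USD"
    derivation: Optional[str] = None             # formula that produced it
    context: dict = field(default_factory=dict)  # entity, metric, jurisdiction...

    def describe(self) -> str:
        source = self.derivation or "raw entry"
        return f"{self.value:,.0f} {self.unit} ({source}; {self.context})"

# The same $6 million, but now it carries its meaning wherever it goes:
profit = SmartDatum(
    value=6_000_000,
    unit="USD",
    derivation="revenue - expenses",
    context={"entity": "Feola Factory", "metric": "annual profit"},
)
print(profit.describe())
```

Ship that object anywhere and the formula and context travel with it; there’s nothing left for Master Data Management to restore.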
Andrew not only survived, we became fast friends. These days he’s PrivacyChain’s head of business development. And we no longer have to explain self-negotiating object models; a few years ago a way to replace that hot mess with blockchain occurred to me. We’ll cover how blockchain and smart data objects are a perfect pairing in a future edition.