Learn to code – Bad career advice since 1999
Why would basic coders worry about AI when they were outsourced decades ago?
Perfecting Equilibrium Volume Three, Issue 25
War is all around us
My mind says prepare to fight
So if I gotta die
I'm gonna listen to my body tonight, yeah
They say, 2000-00, party over
Oops, out of time
So tonight I'm gonna party like it's 1999
The Sunday Reader, Dec. 8, 2024
Entry-level coders aren’t worried about AI. Because their jobs were outsourced three decades ago.
Freddie deBoer worries that AI is gutting the entry-level coding market: Even as a skeptic of LLMs and their potential to spark actually-meaningful changes to human society, I recognize that coding is one of the tasks where current AI would be most useful. It’s exactly the kind of rule-bound, iterative, recursive work that a ChatGPT-style system would be well suited for.
Well, it happens that if you’re a talented and experienced coder, if you have a resume and good references, your job prospects are very good indeed. That hasn’t changed. But then, that was always part of why the whole “learn to code” discourse was misleading, because so much of the perception of the life of professional programmers was based on conditions for the elite in the profession. Yes, it’s probably great to be a senior programmer at Google or Apple or Microsoft! Experienced and respected programmers, the kind of people who have a deep set of connections and an impressive resume, are still in an excellent position to get the really choice gigs. But of course those weren’t the jobs that the large majority of new entrants were going to be accessing, even if they were fortunate when they went on the market.
Freddie’s overall point that job markets are constantly changing is absolutely correct and, as always, thoughtful and well-written.
That said, coding as a profession is an interesting analog for how automation affects professions. The thing is, there is coding, and then there is coding.
Think of it this way: building a massive software project is like building a massive building. And saying “coders” is like saying “builders.” Let’s say the builders for that massive building project include architect I. M. Pei, steelworkers, electricians, cement workers, master carpenters and apprentices who just swing hammers.
Those guys are not interchangeable.
To understand which jobs Large Language Models – so-called AIs – will replace and which are immune, we have to break down these jobs and differentiate the hammer swingers from the architects.
Let’s start with a little refresher on how Large Language Models work: they are effectively playing Scrabble, but with words instead of letters. A Scrabble player can score 37 points for “Cyclohexylamine,” and even point to the definition – “an organic compound used to prevent corrosion in boilers” – without understanding organic chemistry or corrosion or how boilers work.
A Large Language Model is like giving a prompt to a Scrabble player with an endless bag of tiles. You prompt it with “immune,” worth 10 points. It considers “immunize” for 21 points, “immunized” for 23, then moves to “hyperimmunized” for 36 before settling on “hyperimmunizing” for 37 points. It’s working a matrix for the highest score.
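That “working a matrix for the highest score” can be sketched in a few lines of code. This is a toy sketch: the candidate list below stands in for a full dictionary, and board multipliers are ignored. The point is that nothing in it requires understanding what any word means.

```python
# Standard English Scrabble tile values.
LETTER_VALUES = {
    **dict.fromkeys("aeilnorstu", 1),
    **dict.fromkeys("dg", 2),
    **dict.fromkeys("bcmp", 3),
    **dict.fromkeys("fhvwy", 4),
    "k": 5,
    **dict.fromkeys("jx", 8),
    **dict.fromkeys("qz", 10),
}

def score(word: str) -> int:
    """Face value of a word, ignoring board multipliers."""
    return sum(LETTER_VALUES[c] for c in word.lower())

# A toy candidate list standing in for an endless bag of tiles
# and a memorized dictionary.
candidates = ["immune", "immunize", "immunized",
              "hyperimmunized", "hyperimmunizing"]

# Pick the highest scorer -- pattern optimization, not comprehension.
best = max(candidates, key=score)  # "hyperimmunizing", 37 points
```

The scores match the sequence above: “immune” is 10, “immunize” 21, “immunized” 23, “hyperimmunized” 36, and “hyperimmunizing” tops out at 37.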
Now, gentle reader, we know what you are thinking: Surely having mad Scrabble skillz means you must have SOME mastery of the language!!!
Reader, please meet Nigel Richards, who last month won the 2024 Spanish-Language Scrabble World Championship in Granada, Spain.
Nigel Richards does not speak Spanish.
Nigel Richards also does not speak French, but that didn’t stop him from winning the French-Language Scrabble World Championship.
Nigel Richards has also won the English-Language Scrabble World Championship four times. He does speak English.
It’s clear that doesn’t matter. Nigel Richards is a Human Large Language Model. He’s memorized the rules, and the scoring. He memorizes lists of words, regardless of the language.
Richards and LLMs are good at reassembling data – letters, words – into patterns dictated by scoring rules. This means you can count on Richards or an LLM to triumph at things like Scrabble in English, French and Spanish.
On the other hand, there is pretty much zero chance of Richards or an LLM producing Alice Munro’s short stories in English, Molière’s plays in French, or Gabriel García Márquez’s Love in the Time of Cholera in Spanish.
Back in my days at the Columbia Graduate School of Journalism I used to tell students: We can teach you to be professional writers and reporters.
We cannot teach you to be great.
In other words, students can be taught the fundamentals of reporting, to ask Who? What? When? Where? Why? And then How? And then to weave those Five Ws and an H into a solid inverted pyramid story.
We cannot teach you to have a nose for news, to sniff out hints, allegations and things left unsaid and follow that trail to news that isn’t from an official at a podium. We cannot teach you to be Weegee, who routinely beat cops to crime scenes.
We cannot teach you to be Pete Hamill, who held his city’s beating heart in his hands and shared it daily in New York newspapers.
This is the thing to understand about the impact Large Language Models will have on coding, on journalism and other forms of writing, on lawyers and all the other professions threatened by this new technology. Work that can be boiled down to aligning data according to a fixed set of rules – like Scrabble – will be automated by Large Language Models.
Work that requires intuition and innovation is under no threat from this technology.
Coders have already been through this. Do you remember off-shoring? Coders do. Tech companies were going to turbocharge profitability by replacing expensive American coders with cheaper workers in other countries. And because they were cheap you could hire lots of them, and software development would speed up exponentially!
Yeah, well, not so much. After a decade of disasters the new hotness was “on-shoring,” the process of bringing all that work back.
What happened? Simple. It turns out that having teams coding 10 time zones away only works if the work is accurately and clearly defined and well documented. Note that everyone who has ever worked in tech just snorted. For those of you who haven’t worked in tech, here’s why: getting coders to document…well, anything, is like solving Goldbach’s Conjecture: It sounds easy enough, but it just never seems to get done.
On top of that, outsourcing work to the other side of the planet meant that those off-shore coders were working when the project architects here in the United States were sleeping, and vice versa. This proved to be at best suboptimal.
The architect of a large building project has to create a design that merges all the work of those ironworkers and hammer swingers into a building that holds together in the winds, meshes with the environment, sways with earthquakes, and meets all the other necessary considerations.
Software architects have a similar task designing the logic flow through an application: how do you resolve conflicts between data collected in one system with data from another? For example, are Christopher J Feola, Christopher Feola, Chris Feola, Chris J Feola, and CJ Feola five people? One person? Three? What should the application do if a data source is offline? If the network is down? What if the user needs data that requires a higher level of security?
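One small piece of that architectural work – deciding whether those name variants refer to the same person – can be sketched as a matching rule. This is a deliberately crude illustration, not a production entity-resolution system; the point is that someone has to decide the rule, and decide what evidence it needs.

```python
def name_key(full_name: str) -> tuple:
    """Crude matching key: surname plus first initial.

    A real system would weigh more evidence (email, address,
    employee ID) -- this rule would also happily merge a
    hypothetical "Carl Feola," which is exactly the kind of
    conflict an architect has to anticipate.
    """
    parts = full_name.replace(".", "").split()
    first, last = parts[0], parts[-1]
    return (last.lower(), first[0].lower())

variants = [
    "Christopher J Feola",
    "Christopher Feola",
    "Chris Feola",
    "Chris J Feola",
    "CJ Feola",
]

# Under this rule, all five variants collapse to one identity.
keys = {name_key(v) for v in variants}
```

Whether collapsing all five is correct – or whether the data really does describe five different people – is a judgment call the rule cannot make for you.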
Here's an interesting illustration: Tomasz Tunguz is a venture capitalist and engineer who blogs a lot about his experiments with Large Language Models. Here’s his process for using Large Language Models for blogging: To create this post, I used a cascade of AIs. I needed the AI to write like me. I’ve tried fine tuning, but that didn’t work, so instead I fed some of my previous blog posts into an AI to generate a prompt. “Make a prompt to write like this”, I asked. The prompt is about 500 words. Then I spoke to the computer who transcribed my thoughts, I pasted them below the prompt, and asked the computer to produce this post. I went back and forth with the machine: “Remove the headers,” “Make it sound more friendly,” “Talk about the stories in the first person.” “Vary the sentence length and the paragraph length. Ensure two paragraphs have only a single sentence.”
In other words, Tunguz has architected an application using multiple Large Language Models as the modules, and manually routes data through these modules to get his desired results. That’s not something a non-technical user can do.
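The shape of what Tunguz is doing – a style prompt, a transcript, then a chain of revision passes – can be sketched as a pipeline. Note that `call_llm` below is a hypothetical stand-in for whatever model API he is using, not a real library call; it just tags the text so the routing is visible.

```python
def call_llm(prompt: str, text: str) -> str:
    """Hypothetical stand-in for a real model API call.

    A real implementation would send prompt + text to a model;
    here we just mark the text with the instruction applied so
    the data flow through the cascade is visible.
    """
    return f"[{prompt}] {text}"

def cascade(transcript: str, style_prompt: str, revisions: list) -> str:
    # Step 1: draft the post in the author's voice from dictated notes.
    draft = call_llm(style_prompt, transcript)
    # Step 2: route the draft back through the model, one manual
    # revision instruction at a time.
    for instruction in revisions:
        draft = call_llm(instruction, draft)
    return draft

post = cascade(
    "dictated notes",
    "Write like me",
    ["Remove the headers", "Make it sound more friendly"],
)
```

The architecture here is the human’s: which prompts exist, in what order, and when the output is good enough to stop. The model only executes each hop.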
And if a non-technical user cannot do it to produce blog posts, they certainly cannot do it to produce software.
Rote-work jobs are certainly threatened by Large Language Models.
Reporters with instincts, software and data architects, barristers who develop new theories of law…none of these are threatened by these so-called AIs.
Judging by the output volume and quality, there are at least five Chris Feolas. Some writers would insist on 42.