the madness of king altman: how ai became the god of silicon valley
However bizarre you think Silicon Valley's sudden obsession with AI, I assure you, its roots are way weirder than that, and so are the plans for an "AI-powered" future.
Let’s start at the beginning, since according to a certain musical, it’s a decent place to start. And the story of how and why various AI agents are suddenly being squeezed into every gadget, gizmo, and app, while tech executives sound like deranged cyberpunk villains, begins with an old joke about a university president who was once asked which department was the most expensive to keep running.
“Oh, definitely the computer scientists! They always want the newest, most powerful, and most sophisticated toys.”
“And which department is the cheapest?”
“Mathematicians. All they need is paper, pencils, and erasers. Or, actually, even better yet, the philosophers. They don’t even need the erasers!”
Which brings us to Nick Bostrom, the Oxford philosopher who has been cosplaying as one of the world’s top experts in everything from computer science, to economics, to sociology, to geopolitics on more pop sci documentaries than I can count. Until quite recently, he led something known as the Future of Humanity Institute, which is kind of like what would happen if academics took over the Joe Rogan Experience but the only change they made was to get rid of the regular “that’s crazy” interjections.
And yet, Bostrom has been incredibly influential in the Singularitarian movement that swallowed Silicon Valley whole over the past 20 years, popularizing several ideologies you may have heard of only in passing, if at all, but which are nevertheless behind the seemingly bizarre state of what was once a legitimate global innovation hub, and the out-of-control AI hype dominating the media and helping to fuel the recent layoff fever. Once you understand the basics behind these ideas, the seemingly absurd weirdness of today’s economy and social chaos suddenly starts to sort of make sense.
Let’s rewind for a moment to 2014, when, unfazed by the torrent of expert criticism he received, Bostrom published a dry and very academic book called Superintelligence: Paths, Dangers, Strategies. As the title suggests, it was about how humans could end up creating a thing smarter, more determined, and more motivated than them, as well as what kind of problems this could cause. Of course, there was one small problem. His idea of how we could end up with an AI smarter than humans didn’t make sense, nor did he ever successfully define the super part of that super-intelligence.
Well, to be fair, he thought he did as far back as 1998, when he wrote the paper which served as the starting point for his book. In it, he says that a human brain works at 100 trillion floating point operations per second, i.e. 100 teraflops — or TFLOPs, as we call them in the tech world’s gritty underground rap battle simulation scene. This metric is the standard way to gauge computing power, and we calculate it by making all of a computer’s processors churn through as many operations on decimal numbers as possible.
Just for reference, your smartphone can do between one and two TFLOPs, a computer or laptop will clock in between 5 and 18 TFLOPs depending on configuration, and the latest generation of top tier gaming consoles register between 10 and 12 TFLOPs. So, consumer products don’t have nearly enough to be on par with your noggin according to Bostrom.
But again, those are just consumer gadgets. We also have supercomputers which are now well past the 100 TFLOPs mark. Oak Ridge National Laboratory’s pride and joy Frontier is currently capable of 1.1 exaFLOPs, that is 1.1 quintillion floating point operations per second, with plans to double that capability in the very foreseeable future. If you were keeping track, this is 11,000 times faster than the human brain. Clearly, we should be living in fear that a super-intelligent system will awaken any minute now, right?
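The napkin math here is simple enough to check yourself. A quick sketch, using the rough figures quoted above rather than any real measurements:

```javascript
// Back-of-the-napkin comparison using the figures quoted in the text.
// These are rough published estimates, not benchmarks.
const BRAIN_FLOPS = 100e12;    // Bostrom's 1998 estimate: 100 TFLOPs
const FRONTIER_FLOPS = 1.1e18; // Frontier's peak: 1.1 exaFLOPs
const CONSOLE_FLOPS = 12e12;   // top-tier gaming console: ~12 TFLOPs

const frontierVsBrain = FRONTIER_FLOPS / BRAIN_FLOPS; // ≈ 11,000
const consoleVsBrain = CONSOLE_FLOPS / BRAIN_FLOPS;   // ≈ 0.12

console.log(`Frontier: ~${Math.round(frontierVsBrain).toLocaleString()}x the brain estimate`);
console.log(`A console: ~${Math.round(consoleVsBrain * 100)}% of it`);
```

Of course, that ratio is only as meaningful as the 100 TFLOPs figure it’s built on, which is where things fall apart.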
Obviously, the computer would have to run the right software, like a simulation of the human brain, as planned by Australia’s biggest computing project DeepSouth, but if given the entirety of the internet to scan and train on, surely something profound and super intelligent is on the horizon, right?
Actually, no. No one deeply involved with DeepSouth or any other major AI venture expects any system we build today to actually be intelligent. Instead, the goal is often to take advantage of our best models of biology to make more efficient and capable models. Supercomputers that can hit 100 TFLOPs need at least a million watts to do it. Humans, apparently, can do it with just 20 watts. And that gives the lie to many of Bostrom’s and his institute’s ideas.
at the mountains of ai madness
You see, while we often compare our brains to computers, the two are very different entities; we just have a habit of comparing our minds to whatever is the newest or most complicated technology we have at the time. The idea that the brain is capable of 100 TFLOPs is a back-of-the-napkin estimate based on how many neurons we have, how many synapses and dendrites they have on average, and how much data all those connections could represent if the brain worked like a computer.
In other words, we have no idea how many FLOP-equivalents our minds can actually do, and that 100 trillion figure is just the most commonly accepted one for the purposes of having a number that looks good in tech press releases. It’s based on nothing more than estimating the number of synapses in the brain and saying “if they worked like typical CPUs, this is how performant the brain would be.”
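For the curious, one common version of that napkin math looks something like this. Every constant is a rough, contested estimate — the neuron count is a frequently cited ballpark, and the one-operation-per-synapse-per-second figure is pure assumption, which is exactly the problem:

```javascript
// One common flavor of the "brain does ~100 TFLOPs" napkin math.
// Every number here is a rough estimate; the last one is a pure guess.
const NEURONS = 86e9;              // ~86 billion neurons, a commonly cited figure
const SYNAPSES_PER_NEURON = 1000;  // order-of-magnitude average
const OPS_PER_SYNAPSE_PER_SEC = 1; // assumption: one synapse ≈ one FLOP per second

const totalSynapses = NEURONS * SYNAPSES_PER_NEURON;        // ~8.6e13
const brainFlops = totalSynapses * OPS_PER_SYNAPSE_PER_SEC; // ~86 TFLOP-equivalents

console.log(`~${Math.round(brainFlops / 1e12)} TFLOP-equivalents`);
```

Nudge any of those constants up or down by a factor of ten — and the literature does — and the “brain benchmark” swings by orders of magnitude.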
Nevertheless, without pausing to consider that maybe, just maybe, the issue of AI is not just a matter of hitting the right number of FLOPs, be it tera or exa, Bostrom used some grants to fund a blog called Overcoming Bias, recruiting a fellow named Eliezer Yudkowsky to help flesh out more of these abstract a-black-box-AI-smarter-than-all-humanity-combined-could-X thought experiments that would eventually be summed up in the aforementioned Superintelligence book.
Yudkowsky’s qualifications were that he… Well, he writes on these topics passionately and voluminously. In fact, he has no formal education past middle school but he’s very confident when he writes and speaks, so a lot of people with sci-fi adjacent ideas do tend to gravitate to him, and he’s good for a quote in an article about AI. Obviously, he has to be introduced as having some expertise on the subject, and so he became an “AI safety researcher” according to the web.
Now, hold on a second, put down the torch and pitchfork, I’m not saying that you have to have a college degree to be an AI expert. Geniuses do exist, and many of them tend to teach themselves very advanced concepts very quickly. But when I perused essays and papers on his subsequent project, LessWrong, as a grad student, I found nothing more than random thought experiments and lofty abstractions which began with the assumption that Artificial General Intelligence, or AGI, is inevitable, and it will become smarter than humans, so we all need to know how to cope with it very soon.
Being the asshole that I am, I decided to question one of these random papers on ye olde blog back in 2011, which attracted some attention on LessWrong, and an avalanche of very upset responses when I wouldn’t stop asking questions. This would eventually include a defense from the authors, and belligerence from Yudkowsky.
Over the years that I’ve sparred with super-AGI adherents — many of whom believe that we’re all cruising to a future where we’d enter a sort of Nerd Rapture after which we’d be able to upload our minds to machines and live forever as digital gods — this was always the pattern. I ask for concrete definitions as if I’m building the system and get a whole lot of technobabble, thought experiments, and outright gibberish in reply.
At one point, I was told that a super-AGI would be able to handle something involving a supposedly complicated banking transaction. My reply was a JavaScript method in which the transaction was handled with all the relevant parameters. It did not go over well, especially the snarky line at the end asking if I had just created a super-AGI three decades ahead of schedule and whether we could all go home now.
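To give a flavor of what that reply looked like, here’s a hypothetical reconstruction along the same lines. The function name, parameters, and toy accounts are my own illustration, not any real banking API:

```javascript
// Hypothetical sketch: a "complicated banking transaction" as a plain function.
// The point is that ordinary parameters and ordinary control flow cover it —
// no superintelligence required.
function transferFunds({ fromAccount, toAccount, amount, currency, memo }) {
  if (amount <= 0) throw new Error("Amount must be positive");
  if (fromAccount.balance < amount) throw new Error("Insufficient funds");

  // Move the money and produce a receipt.
  fromAccount.balance -= amount;
  toAccount.balance += amount;

  return {
    status: "completed",
    from: fromAccount.id,
    to: toAccount.id,
    amount,
    currency,
    memo,
    timestamp: new Date().toISOString(),
  };
}

// Usage with toy in-memory accounts:
const alice = { id: "A-1001", balance: 500 };
const bob = { id: "B-2002", balance: 50 };
const receipt = transferFunds({
  fromAccount: alice,
  toAccount: bob,
  amount: 120,
  currency: "USD",
  memo: "rent split",
});
console.log(receipt.status, alice.balance, bob.balance); // completed 380 170
```

Real banking code adds validation, ledgers, and rollback, but none of that changes the nature of the problem: it’s parameters and rules, not magic.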
Over the next few years, a lot of these blogs would go away or just keep recycling the same old material with every new interesting announcement about machine learning, quantum computers, or genetic engineering. But with ChatGPT, there’s been a surge of activity, and it turned out that the belief that AGI was on the horizon, and that we must do everything we could to hasten its approach, had quietly become a full blown religion, with Sam Altman, the CEO of OpenAI, as its new high priest.
from nerdy dreamers to messianic prophets
For a casual consumer of tech news, OpenAI is just a successful company which has been working on the chatbots that are now all the rage, with investors demanding that every company find a way to stick them into everything to replace more and more workers, despite the fact that AI has much better uses as an assistant to skilled experts. But if you’ve been paying a little more attention, you’ll notice that Altman incessantly talks about AGI and his need to create it, so much so that he demanded $7 trillion — yes, that’s trillion with a t — to make it a reality.
How will anyone make a return on a sum equivalent to the economies of Germany and the UK combined? Well, that’s the most interesting part. He has no idea. He’ll just ask the AGI when it’s online. No, seriously. Those are almost exactly his words.
Look, if I walked into a room and asked for the GDP of California in cash to build some device that I can’t define, because a philosopher at Oxford, inspired by a blogger who wrote Harry Potter fan fiction in which the magic was actually Bayes’ Theorem, told me it was going to make the world a utopia of abundance after fixing every last problem on Earth, if done right, I’d be facing a very real risk of being sent on a long grippy sock vacay post haste. Altman? He got billions from tech VCs and Microsoft.
In fact, most of the pitches getting funded and the rhetoric popular in Silicon Valley today seem unmoored from reality. An FDA run by Yelp reviews instead of experts, custom-built virtual governments with whatever laws you want instead of real nations, tech workers wearing gray shirts in some sort of techno-Nazi cosplay, a universal basic income of $13,500 per year for all humans, no, wait, guaranteed compute time on ChatGPT-7, and colonizing space so trillions of us can spread across the solar system and beyond because something-something infinite markets and AI: these are all ideas casually thrown out on a weekly basis nowadays.
If that sounds utterly bizarre and borderline demented, it’s because to normies like us, it sort of is. And we’re no longer the intended audience. In fact, we are irrelevant. In their own minds, they are the saviors of humanity and the guardians of its future, and if we ask too many questions, we are simply enemies getting in the way of the Golden Path they’ve laid out for us. (They’re big on reading and watching sci-fi, not so big on actually finishing it or learning its lessons.)
Regardless, the message is as clear as in any cyberpunk dystopia. This is their world and we’re lucky to live in it and pay them rent for the privilege. And if we’re lucky and talented enough, we’ll be allowed to be minor cogs in their grand machines and bring their dreams to life. Which is, I guess, where people like me are supposed to fit in. Our job is to make billion-dimensional matrices of decimals infinitely intelligent and able to run their brains in the cloud by 2045, and saying that it may be impossible is heresy, plain and simple.
Even worse is that while the media allows them to dazzle reporters with confident and wildly unrealistic technobabble, their claims have no evidence behind them and many of their goals are downright dystopian. Rather than living up to their desired image of benevolent tech overlords whose days are spent inventing the future, they’re vying to become neo-feudal overlords of a far off technocracy.
They warn us that our world will become obsolete — which is true, although that’s just a function of time passing — but also that all our present and future woes will soon be solved by AGI, which will be nearly magical in its capabilities, and because they’re the ones who made it happen, and are the only ones capable of understanding it, it makes sense for them to become the high priests of our new digital gods, much like the high priests of Ancient Egypt, who served the pharaohs on paper but in reality used fear of the magical and the unknown to manipulate kings, nobles, and commoners alike.
To those who have studied machine learning or work with AI on a regular basis, and can see all the warts, embarrassing shortcomings, and poorly paid, tedious human labor being actively hidden from the public to exaggerate the abilities of today’s systems, the promises of an AGI stirring within our data centers are reminiscent of claims that the spirits of Ra, or Hathor, or Set lived in statues hidden behind veils in the temple complexes of Luxor.
Why don’t you hear more about it? It’s simple. Taken in by tech execs’ and VCs’ loud, confident pronouncements which guarantee a steady stream of fearful and anxious clicks, far too many reporters don’t even bother asking skeptical experts, or applying nearly enough critical thought in their articles. As a result, the technobabble is rarely, if ever, challenged no matter how outlandish it is. In fact, the more absurd, the more traffic to the article, the less incentive there is to fact check.
how to sleepwalk into a cyberpunk dystopia
But let’s say that maybe, just maybe, I’m not giving other comp sci specialists enough credit, and with $7 trillion they could, in fact, create a full blown AGI to give everyone in the world enough resources to thrive just by existing. But why? Why spend enough money to overhaul the planet to run exclusively on green energy just to build an app that tells us to do something we could have done decades ago and refused to? Allow me to explain.
Today, nearly 830 million people struggle with hunger even though we can meet the basic caloric requirements of 12 billion humans, way more than the peak projected population of Earth by 2100. If we wanted to eliminate hunger permanently by 2030, we would need to spend a maximum of $50 billion per year. That sounds like a lot of money, but that’s less than a third of what the world spends on pizza, and less than one percent of global food expenditures. Clearly, distribution and the lack of political will is the problem, not ability or capacity.
The same could be said for global poverty. Slightly over 700 million people around the world are living on less than $2.15 per day, while global wealth is estimated at around $455 trillion, nearly $85,000 for every adult on the planet. But just 59 million adults hold a tad shy of 46% of all assets, while 2.8 billion are effectively the reverse 1% — meaning that they own around one percent of the world’s wealth. This also seems far more like a distribution problem than anything else.
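If you want to check the arithmetic, it holds up. A quick sketch using only the figures quoted above:

```javascript
// Sanity-checking the distribution arithmetic with the figures quoted above.
const GLOBAL_WEALTH = 455e12;  // ~$455 trillion in estimated global wealth
const WEALTH_PER_ADULT = 85e3; // ~$85,000 per adult if split evenly

const impliedAdults = GLOBAL_WEALTH / WEALTH_PER_ADULT; // ~5.35 billion adults

const TOP_HOLDERS = 59e6; // 59 million adults...
const TOP_SHARE = 0.46;   // ...holding ~46% of all assets
const avgTopWealth = (GLOBAL_WEALTH * TOP_SHARE) / TOP_HOLDERS; // ~$3.55M each

const BOTTOM_HOLDERS = 2.8e9; // the "reverse 1%"...
const BOTTOM_SHARE = 0.01;    // ...holding ~1% of all assets
const avgBottomWealth = (GLOBAL_WEALTH * BOTTOM_SHARE) / BOTTOM_HOLDERS; // ~$1,625 each

console.log(
  `${(impliedAdults / 1e9).toFixed(2)}B adults; ` +
  `top avg ~$${Math.round(avgTopWealth).toLocaleString()}, ` +
  `bottom avg ~$${Math.round(avgBottomWealth).toLocaleString()}`
);
```

In other words, the average holdings at the top and the bottom differ by over three orders of magnitude, on the same planet, in the same year.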
All that said, both of the above are massive improvements on stats from 2000, when almost 13% of the world was dealing with hunger compared to around 9% today, and over 80% of the planet’s adults were in dire poverty vs. fewer than 53% now. While doom and gloom are very much in fashion right now, the world is starting to become fairer and more prosperous, despite facing numerous crises and the process itself being extremely uneven and full of hiccups. And it’s all happening without an AI super-mind to guide our hand.
Even more important is that crises like climate change threatening the homes of half the world’s population, a predicted decline in human population in the 22nd century, and the highly corrosive runaway greed that gave us decades of horrific pollution, and now, surging inflation, were all the results of deliberate policy choices made against the advice of experts.
We knew burning fossil fuels with wild abandon would be awful since the late 1800s. We knew making a literal mountain of plastic would end badly in 1970. We knew that leaded gasoline was poison by 1969, and that PFAS and smoking caused cancers by 1970. And we knew that our unsustainable lifestyles and turning our existence into endless toil would lead to incentives not to have large families, leading to an eventual population decline, since the dawn of industrialization.
We could’ve had an AI crunch the same numbers experts did and arrive at the same conclusions. If we ignored the experts for the sake of short term profits, why wouldn’t we just ignore the AI too? There’s an underlying assumption that because AI is based on computer algorithms, it won’t be biased or will be impossible to ignore. But the fact of the matter is that outside of heavily curated demos, today’s AI is highly error-prone and is used not in spite of that, but because it can be trained to justify whatever those in power want it to justify. That way they can blame it for unpopular policies they want to enact, claiming they had no choice. Math said so.
And this is ultimately where we are with AI today. We are sold the idea that it’s a magic black box that can solve all of our problems, despite many of said problems having been predicted over a century ago, or known for decades and ignored for the sake of cash, the warnings often buried by those benefiting from them.
We are then told that a group of tech billionaires and their handpicked underlings are the only ones who can talk to the magic box and make it turn the world into a utopia instead of destroying it, as we are warned it could by a random philosopher and his fanfiction author friend, who believe that we must sacrifice people suffering now for the prosperity of those who might be alive thousands of years in the future, so much so that they started an entire cult dedicated to the idea.
The aforementioned utopia is, of course, meant for the tech moguls rather than us, because in that utopia, they’ll run the galaxy as immortal digital gods while we live in company towns as indentured servants to their interplanetary oligopolies. Why else would Elon Musk be advertising loans to move to his Mars colony that can be paid off by working for SpaceX? If he cared about humanity so much, why would he glibly and nonchalantly laugh that “a bunch of people will die” trying to actually build his dream city in space, to a fellow tech tycoon and Cult of Nerd Rapture member?
If there’s no consensus that we’re expendable cannon fodder for their grand plans, why are Jeff Bezos and members of the PayPal Mafia who run Silicon Valley today, and the VC funds enabling them, putting forward very similar, if not even grander proposals in which ordinary mortals and their very practical and immediate needs aren’t even footnotes?
In short, scratch the surface and what’s being pitched as an AGI-powered future of luxury and plenty looks a lot less like The Jetsons, and a lot more like modern Dubai: an oasis full of opulent, futuristic grand projects lit up in neon, presented as a lavish, crime-free paradise. Look underneath, however, and you find a whole lot of human misery, because the secret ingredient is not advanced technology and efficiency but slavery the authorities go to great lengths to keep from the public eye.
If the promised AGI revolution never happens, we’re not going to get apologies and retractions from the Valley. We’re going to get more APIs — Actually People in India paid the lowest possible wages — helping these companies fake their way through more and more grandiose and elaborate demos and offerings designed to make it look as if the magic black box that takes problems in and spits utopian solutions out finally arrived, or is just about to.
Meanwhile, you’ll be paying off a predatory loan that took you to Mars to build not a gleaming city, but a cramped underground base because there were no other jobs available to you after your previous one was automated, and your promised universal basic income checks never arrived despite years of assurances that the magic AI box would take care of you.
At this point, it doesn’t matter if tech executives and VCs believe their own hype and wild claims about what’s next for AI — though I firmly believe that they genuinely buy into the things they’re preaching and are chugging their own Kool-Aid — because they’ve made the grand pronouncements, they’ve started their cults, and they’ve promised a future of AGI-powered wonder. They sold equity to investors who give them billions every year, and those investors now expect steady double-digit percentage returns.
Given their track records and scientific limitations, this is how they’re most likely to go about the execution of all those big promises. If we continue to indulge them instead of holding their feet to the fire in the press and by actual experts who can weigh in on what’s possible and what’s bullshit, the majority of humans are very likely to end up as digital serfs, with an app as their boss. And you can ask almost anyone who works for a gig economy platform full time just how pleasant and lucrative that is…