why social media can't stop hyping a.i.
Social media's aversion to expertise, hype from AI companies, and fear of automation have turned useful new tools into objects of dutiful worship.
When I started studying computer science, I thought it was only a matter of time and practice until there were no secrets or mysteries left. After spending more than half my life working in tech in one capacity or another, that now seems like a ridiculous dream. Comp sci is such a vast area of expertise that it’s impossible to be an expert in everything from AI to the intricacies of UX and embedded programming. All three involve code, yes, but each discipline requires a fundamentally different approach and focus, and comes with its own set of gotchas, limitations, and core principles.
Yet when you open social media, there is no shortage of “experts” who have seen the future of tech and know exactly how AI will ascend to godhood while writing every new app, fixing every bug, and effectively replacing every human who isn’t a VC or an executive with a text box. Actual programmers explaining why AI-generated code still needs a lot of work, and the hard mathematical limits of what LLMs can do, are met with dismissive prognostications that it’s only a matter of six to twelve to eighteen months before these limits are broken and the code is perfect in all respects.
Of course, to me, this raises the question of what kind of code an AI that writes every app ever would generate. Coding in a high-level language seems like a waste because that toolset is designed for humans to easily follow and express abstract concepts without worrying about memory usage and addresses. Computers have no such limits, so it seems like the perfect AI-generated code would be very low-level assembly, adjusted for each hardware configuration on which it runs.
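As a toy illustration of that gap, Python ships a disassembler that shows the stack-machine instructions hiding under a single readable line. It prints bytecode rather than CPU assembly, but the abstraction gulf it reveals is the same idea:

```python
import dis

def total_with_tax(prices):
    # One line a human can read at a glance...
    return sum(p * 1.08 for p in prices)

# ...which the interpreter actually executes as a list of low-level stack
# instructions. A machine-oriented “perfect” coder would presumably target
# a representation like this directly instead of the readable line above.
dis.dis(total_with_tax)
```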
But hold on a second. LLMs have to be trained on something, and assembly coding is getting rarer and rarer for general-purpose software. It’s usually limited to domains in which performance and precision are mission-critical, like cryptography, robotics, and specialized hardware.
This is a bit of a problem. To pull this off, the creators of these models would have to invent a brand new approach to assembly languages, one that both supports abstractions and optimizes performance, all while managing the errors these models are prone to as a consequence of their architecture: too much randomness and they vomit gibberish, too little and they spit out the exact same solutions like deterministic code, making them a lot less flexible and thus less useful.
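That randomness dial is real, not a metaphor: most LLMs pick each next token by sampling from a probability distribution reshaped by a temperature value. A minimal sketch, assuming a toy four-token vocabulary and plain numpy rather than any particular model’s implementation:

```python
import numpy as np

def sample_next_token(logits, temperature):
    """Sample one token from toy model scores at a given temperature."""
    # Dividing the logits by the temperature reshapes the distribution:
    # high temperatures flatten it (more randomness, more gibberish risk),
    # low temperatures sharpen it toward one token (near-deterministic).
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]                       # toy vocabulary scores
print(sample_next_token(logits, temperature=2.0))   # flat: any token likely
print(sample_next_token(logits, temperature=0.05))  # sharp: almost always 0
```

That single number is the whole “creativity” knob, which is why the failure modes at both extremes are baked into the architecture.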
I’m not saying this is impossible, but it doesn’t sound like a trivial task. It needs the creators of these ultimate coding LLMs to understand the theories behind computer languages, type systems, design patterns, and how abstractions leak, then effectively invent this approach and create tens of thousands of code samples as a training set.
Digesting a bunch of random code from GitHub written in 500 different languages for millions of different reasons seems counterproductive if your goal is to create the one code generator to rule them all. These LLMs will keep on doing what they do now: regurgitating blocks of code taken from whatever high-level languages they ingested, or a modern mix of some assembly instructions inlined in C and C++ snippets.
why everyone is suddenly an a.i. expert
Again, however, none of this matters in a world where someone can watch a 15-minute TED talk or a TikTok video, then take to the web to explain to us know-nothings how the world really works and where technology is headed. Everything I just said can be dismissed as the bias of someone trying to keep his job, and they can always find someone with comp sci credentials who sincerely believes machines can replace human intelligence at some point in the foreseeable future and dismisses any critics as dinosaurs. See Ilya Sutskever and Geoffrey Hinton, for example.
On top of that, there are two big incentives for unquestioningly parroting this hype: a psychological one and a professional one. The former comes from the same place as the desire to believe in psychics. People like the idea of an oracle who will give them clear, black-and-white answers to important questions without judgment, with perfect knowledge, and in a matter of seconds, and AI promises exactly that. Hell, it can even be your friend, or friend with digital benefits if you’re so inclined.
The latter reason is that people think jumping onto the AI bandwagon early could make them look like they’re ahead of the curve, and thus worth keeping employed. What the models can and cannot do almost doesn’t matter; all that matters is that all of us see them embracing the new technology and having very loud and public ideas on how to use it. And to be fair, these tools do help a lot of people do their jobs by automating relatively mindless or time-consuming busywork which, until very recently, couldn’t be efficiently handled by a computer.
Companies may talk nonstop about how they’re embracing AI or becoming an AI-first company, but studies and surveys find that it’s not making nearly the expected dent as businesses try to figure out exactly how to deploy it and how much they want to hand off to it. Still, people know that for some things, mediocre generative AI output is good enough and costs significantly less than thoughtful, precise work by a human, so their bosses will have no qualms about laying them off, pivoting to AI, and then reviewing the output for anything egregious before calling it a day.
And so, if your LinkedIn and social media accounts are full of posts about how much you love AI tools, how you’re using them to boost your productivity, and how you’re just full of ideas for “leveraging AI-first strategies for synergistic opportunities in the B2B and B2C SaaS space,” it will at least give the right people pause. Even better if the AI is actually helpful for prototyping or giving you a template to speed things along.
Meanwhile, influencers selling the dream of “passive income,” which is actually just a bunch of classes that boil down to asking paying customers if they’ve tried dropshipping or mining crypto, are adding tutorials on how to build AI spam bots to their repertoire, making the web a worse place in the process. To them, keeping the AI hype cycle going so they can pitch their get-rich-quick schemes as mastering a new tech that’s taking over the world is an excellent investment of time and energy.
So, yes, AI boosters on social media have every reason to want to believe that some sort of computer hyper-intelligence is imminent. They get nothing but hype from Microsoft, OpenAI, and Anthropic, while there’s no incentive for them to be even remotely skeptical. Neither the metrics nor the hustle influencers with their bosses’ ears and eyeballs pay off in any way, shape, or form if you’re trying to be realistic and skeptical of grand claims.
why predicting the future of a.i. is so difficult
Now, you can argue that a lot of computer scientists were skeptical that LLMs were going to be as effective as they turned out to be, and say that this lack of imagination was a clear and obvious failure on their part. But the reality is that LLMs and GANs are more or less brute force statistical calculators that require enormous computing power, so the real surprise wasn’t that the models would be pretty effective, it was how cheap and easily available computing power became so quickly, and how many researchers thought it was worth spending so much of it on real-world problems.
I also know that because generative AIs rely on brute force calculations, making them a lot more powerful and giving them more context to be useful at larger scales means increasing their number of input parameters by orders of magnitude. But it’s not a given that you will get a better model, because at some point the model is too big to form good statistical relationships between inputs and outputs. An enormous new model may actually perform worse, with more hallucinations or outright errors.
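To get a rough sense of what “orders of magnitude” costs, here’s a back-of-envelope sketch using the commonly cited approximation of roughly 6 FLOPs per parameter per training token. Both the constant and the token counts are illustrative rules of thumb, not anyone’s real training budget:

```python
# Back-of-envelope training cost: FLOPs ~= 6 * parameters * training tokens.
# The 6x constant and the 20-tokens-per-parameter ratio are common rules
# of thumb, used here as illustrative assumptions only.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for params in (1e9, 1e10, 1e11, 1e12):  # 1 billion to 1 trillion parameters
    tokens = 20 * params                 # token budgets tend to grow with size
    print(f"{params:.0e} params -> {training_flops(params, tokens):.1e} FLOPs")
# Every 10x jump in parameters costs ~100x the compute under these
# assumptions, while better output is not guaranteed.
```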
But that’s boring and undermines the hype cycle, according to which AI is already an amazing tool that can solve every problem and will only get better. This is why you’re already hearing about “agentic” AI, in which researchers try to chain separate models together in hopes they can come up with something more than the sums of their parts as they reach these aforementioned limits.
It’s not a new idea either. It’s actually borrowed from AI researchers working on how to get swarms of robots to cooperate with each other on complex tasks, or produce new and useful behaviors through real life experience. We know it works with robots, since robots are much simpler in terms of inputs and outputs. We don’t know how well it will work with large, abstract models in an environment full of externalities, which LLMs do not handle well as it is, being prone to jailbreaks and to leaking users’ confidential data if you ask the right questions.
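Mechanically, the chaining itself is almost trivially simple: one model’s output becomes the next model’s prompt, with some glue code in between. A minimal sketch, where call_model is a hypothetical placeholder for whatever real LLM client you’d use and the prompts are purely illustrative:

```python
# A minimal sketch of "agentic" chaining: one model plans, another executes,
# a third reviews. call_model() is a hypothetical placeholder, not a real
# API, and the prompts are illustrative only.
def call_model(role: str, prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM client here")

def agent_chain(task: str) -> str:
    plan = call_model("planner", f"Break this task into steps:\n{task}")
    draft = call_model("worker", f"Carry out these steps:\n{plan}")
    critique = call_model("reviewer", f"Find problems with this output:\n{draft}")
    # The chain is only as strong as its weakest link: a hallucination,
    # jailbreak, or data leak at any stage propagates downstream unchecked.
    return call_model("worker", f"Revise:\n{draft}\nFeedback:\n{critique}")
```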
The point of all of this isn’t to say that AI is dumb, or useless. It’s absolutely not. I use AI when coding as an auto-complete on steroids, generating the plumbing I need for unit tests with a few hotkeys and saving me hours when writing straightforward data manipulation logic. Now, this code can contain security vulnerabilities, incorrect assumptions, and over-complicated, memory- and processor-intensive approaches thanks to the code on which it was trained, along with the expected hallucinations, so I still spend a fair deal of time reviewing it before finally committing it to main.
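To show what I mean, here’s a made-up example of the kind of test plumbing these tools spit out in seconds. The normalize_price helper and its tests are hypothetical, and the last test shows exactly the sort of baked-in assumption you have to verify in review rather than trust:

```python
# Hypothetical example of AI-generated test plumbing for a made-up
# normalize_price() helper: the boilerplate is a genuine time-saver,
# but the generated assumptions still need human review.
import pytest

def normalize_price(raw: str) -> float:
    """Strip currency symbols and separators, then parse a price string."""
    return float(raw.replace("$", "").replace(",", "").strip())

@pytest.mark.parametrize("raw,expected", [
    ("$1,299.99", 1299.99),
    ("  42 ", 42.0),
    ("$0", 0.0),
])
def test_normalize_price(raw, expected):
    assert normalize_price(raw) == expected

def test_normalize_price_rejects_garbage():
    # A generated test like this often assumes the "right" failure mode.
    # Here ValueError happens to be correct, but that is precisely the
    # kind of claim you check against the actual code, not take on faith.
    with pytest.raises(ValueError):
        normalize_price("not a price")
```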
It’s an enormously useful set of tools that has unfortunately been vastly overhyped, and is now forever associated with chatbots and picture generators in the public’s mind. But generative AI is simply a small subset of machine learning in general, and it has to be used wisely instead of how we all too often use it now: to spam the web with low quality content and scams meant to trick ad networks and gullible people, or to speedrun all the busywork that we’re sick and tired of doing but may still be important.
I think that a lot of people very much realize this. But we know machines are coming for our jobs, because that has very much been the plan since the beginning of the industrial revolution, and our leaders do not understand this and are refusing to make the necessary changes. So people go out and give generative AI a public pat on the back to show that they’re very much getting with the program, that they aren’t dinosaurs who can’t adapt, and that they should definitely stay in their jobs because they’ll figure out how to leverage AI in ways that will make the shareholders happy. And the algorithm will reward them for using all the right buzzwords.