“Plus ça change, plus c’est la même chose”, French writer Jean-Baptiste Alphonse Karr famously quipped in 1849. Fittingly, the more tech changes, the more it stays the same – with the special exception that each hype cycle now seems to spin several orders of magnitude faster than the previous one… regardless of whether it’s based on anything true or not.
The latest cycle revolves around generative AI, a new type of artificial intelligence that generates content – text or images – based on a Large Language Model (LLM). The reason it’s all the rage right now: a company called OpenAI, which in late 2022 released ChatGPT, a chatbot built on its GPT-3.5 models that can, apparently, write like a human. Whence the excitement.
Look at the Google Trends curves for “ChatGPT”, the latest tech buzzword that has taken off twice as fast as the previous one (“crypto” – remember what we said about that one a year ago?) …and twice as high:

The hype manifests itself in a number of ways, all of which have been increasing exponentially of late: media coverage – including ‘hilarious’ journalists saying “that part of my report was written by ChatGPT haha” (i.e. “I’m laughing at my imminent obsolescence”); LinkedIn posts explaining how ChatGPT can revolutionise your content strategy (if you don’t mind using content not written by humans); and, of course, memes.
One amusing trend amongst them: crypto startups miraculously pivoting to AI, claiming that was their intention all along, obviously.

Oh and this one’s a beauty:

Indeed, at moments like these, it can seem like the entire tech sector’s raison d’être is to surf on hype cycles, rather than question whether the next big thing actually has true potential and purpose.
So let’s take a look at why this particular hype may be way too ‘good’ to be true.
1. The tech isn’t actually *that* clever
First of all, what’s the big deal? ChatGPT is an LLM which has essentially been fed billions of pages of text, so that when prompted to write a new one – by a question, or prompt – it produces what’s called a “reasonable continuation” of said text, i.e. what its training data suggests is the most statistically likely thing to write about subject X.
However, it does this based only on a sample of all the possible sources it could use; even billions of pages of text represent only a fraction of the entire internet, not to mention all human knowledge. This is why it’s useful, as this increasingly-referenced article suggests, to see an LLM as a JPEG-like reflection of reality:
“If a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.”
This essentially explains how LLMs can “hallucinate” – or lie – as, like a JPEG with missing pixels, they will fill in any gaps as best they see fit… and often get them wrong.
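To make the “reasonable continuation” idea concrete, here’s a deliberately tiny sketch – not from the article: the mini corpus and function names are invented for illustration, and real LLMs use neural networks over billions of pages rather than word counts. The point it illustrates is the same, though: the model always emits the statistically most likely next word, with no notion of whether the result is true.

```python
# Toy sketch of "reasonable continuation": a bigram model that greedily
# appends the most frequent next word. Corpus and names are invented.
from collections import Counter, defaultdict

corpus = (
    "the telescope took pictures of exoplanets . "
    "the telescope took pictures of galaxies . "
    "the telescope orbits the sun ."
).split()

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def continue_text(prompt: str, length: int = 5) -> str:
    """Greedily append the most likely next word, whether or not it's true."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the telescope"))
# -> "the telescope took pictures of exoplanets ." : plausible-sounding,
#    but the model has no idea whether any of it is actually the case.
```

Scale that same principle up by a dozen orders of magnitude and you get fluent, confident prose – and the same total indifference to whether it happens to be correct.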
So whilst ChatGPT may seem able to write impressive high school essays, it was already well known that LLMs aren’t built to care about things like telling the truth. As the MIT Technology Review puts it:
“Here’s the problem: the technology is simply not ready to be used like this at this scale. AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.”
So the recent rush by Microsoft and Google, fuelled by that spiking hype curve, to show off what their respective LLMs can do was precisely that: rushed.
Google rushed out its own ChatGPT competitor, Bard, which promptly made a factual mistake about the James Webb Space Telescope – sending parent company Alphabet’s shares down 8% and wiping around $100bn off its market value.
Microsoft, no doubt rubbing its hands with glee having had the foresight to invest $10bn in ChatGPT creator OpenAI, quickly released the LLM-boosted version of its search engine, Bing… which just as quickly broke, demonstrating self-doubt and other HAL-9000-type anomalies:
(Image credit: Reddit / u/yaosio, via TechRadar)
…which in turn led Microsoft to cap the number of requests each user could make, and to block existential questions directed at the chatbot, like “are you sentient?” And all of a sudden, the brand new miraculous shiny thing could only do a fraction of what its backers said it could.
2. It’s not *that* creative either
Given its core skill of guessing what text should come next, it makes sense that ChatGPT should nail that most repetitive of intellectual pursuits: the school essay. Indeed, so many students initially flocked to OpenAI’s app to ask it to do their homework that teachers worldwide grew instantly suspicious… and then discovered “cheating with AI” is such a new thing that they can’t reasonably sanction students for it.
OpenAI soon released a tool for detecting whether a text was written by AI or not; but, according to teachers quizzed by VICE, that shouldn’t be necessary, as “ChatGPT is so bad at essays that professors can spot it instantly.” Here’s one teacher’s take:
“It tends to produce essays that are filled with bland, common wisdom platitudes. It’s sort of the difference between ordering a good meal at a restaurant, and ordering the entire menu in a restaurant, sticking it in a blender, and then having it as soup. The latter is just not going to taste very good.”
Of course, there’s always one: Ethan Mollick, associate professor of innovation and entrepreneurship at the Wharton School of the University of Pennsylvania, has told students he expects them to use ChatGPT to do their assignments, and that this won’t count as cheating as long as they acknowledge where it assisted. Again, this is a bit like saying Tour de France cyclists can use performance-enhancing drugs, as long as they own up to it.
Still, chances are a lot of students and other writers are getting away with it. Indeed, shortly after it was revealed that tech media outlet CNET was relying quite heavily on AI-written articles, it laid off a bunch of journalists.
So could ChatGPT be taking us into a future where we don’t have to think for ourselves anymore? Think again. Tech that can only emulate what humans have previously created can’t be that creative per se. As Nick Cave brilliantly put it when presented with a song written in his style by ChatGPT:
“Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing.”
Let’s keep our fingers crossed that AI ‘artists’ won’t be replacing genuinely creative human ones anytime soon. Although if it could replace David Guetta – who thinks the tech is so amazing he used it to make a fake Eminem sample – that would be great!
3. It can be biased, and racist
Not only is AI unaware of whether it’s telling the truth; it also doesn’t recognise bias. It can go from the relatively tame – ChatGPT naming 10 white, western dudes as the most famous philosophers ever, then, once corrected, making the same mistake again – to recurring incidents of racism.
AI researcher Timnit Gebru was notoriously fired from Google after sending an internal email accusing her employer of “silencing marginalised voices” like hers, as a woman of colour in a white male-dominated company. Her silencing – Google asking her to retract a research paper it didn’t like – came even as the firm claimed to be working towards “responsible AI“.
What was the problem with the research paper in question? It criticised LLMs, notably for their environmental impact – more on that below – but also for their inability to detect racist discourse (more details on the paper here). Indeed, prior to this paper, Gebru was known for flagging the fact that facial recognition, which is also AI-powered, is far less effective at identifying people with darker skin – a shortcoming which opens up huge potential for racist abuse, for example by law enforcement.
Thanks to researchers like Gebru, experts have been aware of this particular shortcoming of AI for years. Indeed, in 2016, Microsoft – yes, the same ones who just threw billions at OpenAI – launched an AI bot on Twitter, called Tay, that also learned from its context. Not surprisingly, in less than 24 hours’ exposure to Twitter’s finest trolls, it was tweeting trash like this:
M$ promptly pulled the plug. Yes, today’s generative AI works differently. But what’s to stop it being racist, when even an LLM app’s creators can’t explain why it answers certain questions in certain ways?
As Gebru herself recently reminded us on the Tech Won’t Save Us podcast, OpenAI was originally founded as a non-profit, by Elon Musk and other tech leaders, precisely to stop AI getting out of hand. Now, it appears, Musk is creating an OpenAI rival to make sure AIs can express all views – including racist ones – because he considers ChatGPT to have been “trained to be woke“.
His concern? That current LLMs have been so pre-moderated that they can only produce inoffensive content. How did that happen? Well… OpenAI paid Kenyan workers less than $2/hour to label the most toxic content GPT-3 learned from, prior to ChatGPT’s launch, so that the bot would only write ‘safe’ copy once unleashed on the western world. Just like the moderators Facebook pays to tidy the most shocking content off its platform, TIME reports many of these Kenyan workers are now mentally scarred for life by the filth they had to read for our benefit.
Thanks to these valiant near-slaves, it’s now near-impossible to make ChatGPT say anything racist. As John Oliver points out, when the bot is asked what the religion of the first Jewish President of the US would be, it replies that a President should be elected based on his or her qualifications, not religion 🤷🏻♂️. Let’s wager Uncle Elon will soon put a stop to that sort of liberal nonsense!
4. It could be as bad for the planet as crypto… or worse
It’s now a fairly well-known fact that cryptocurrencies like Bitcoin can use as much electricity as a medium-sized country like Finland. It turns out AI could be similarly bad for the planet, just in a different way. Whilst it costs relatively little energy to query an AI app, the learning process, in which it ingests billions of different data points, is an environmental nightmare.
Training GPT-3, for example, used enough energy to power nearly 100k EU households for a day. Plus, it’s exponential: the computing resources used to train cutting-edge AI models have been doubling every 3.4 months. So GPT-4, which is known to be a work in progress, will hoover up countless more gigawatt-hours.
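For a sense of how quickly that compounds, here’s a minimal back-of-the-envelope sketch – the only input is the 3.4-month doubling period quoted above; the growth factors are plain arithmetic, not measured energy figures for any particular model:

```python
# Back-of-the-envelope sketch of what "doubling every 3.4 months" implies.
# The 3.4-month figure is the one quoted in the article; everything else
# is simple arithmetic, not data about any specific model.
DOUBLING_PERIOD_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """How much training compute multiplies over a given number of months."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(f"after 1 year : x{growth_factor(12):.0f}")   # ~x12
print(f"after 2 years: x{growth_factor(24):.0f}")   # ~x133
```

In other words, at that pace the appetite for compute – and the energy bill that comes with it – grows by more than an order of magnitude per year.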
What’s more, this impact has been known for a while, too; a 2019 paper (referenced by MIT Technology Review) asserted that training a large Transformer-based language model with a “neural architecture search” (NAS) step would produce the equivalent of the lifetime CO2 output of five average American cars – around 315 times the emissions of a flight from New York to San Francisco.
So where do we go from here? As ever, it’s essential that regulation move as fast as possible to stop potential abuses of such tech. The mere prospect of the internet drowning in AI-generated fake news should have lawmakers in the starting blocks right now. Indeed, US consumer protection regulator the FTC is already on the case, and has warned companies about making false claims about their AI offerings:
“Advertisers should (be careful)… not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.“
Such checks are all the more important given the blistering speed at which generative AI is catching on. Last October, ChatGPT didn’t even exist; it has since become the fastest-growing consumer app in history. We can only imagine what’s next. Oh wait, someone else has already got there…
Japanese researchers have just worked out a way to use AI to read your mind from MRI scans. Which means we could go quite quickly from robots doing our homework to us not having to do any work at all; we’d just have to think it, and the computer would produce it. Leaving us more and more time to achieve tech’s ultimate goal of us all becoming like WALL-E space cruiser passengers. Yay!
And then came the intellectual backlash. This Japanese example was just one of many cited by the Center for Humane Technology – Tristan Harris’ organisation, one of the first to raise the alarm about social media’s dark side, notably via Netflix’s The Social Dilemma – to argue, in a dedicated event, that AI could ultimately threaten our very existence. Indeed, according to a survey of over 700 AI experts by an organisation called AI Impacts, 48% of respondents saw at least a 10% chance of AI causing human extinction. How, exactly? As WIRED puts it:
“They’re not predicting sentient evil robots. Instead, they (the CHT) warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.”
This potential for chaos stems from exactly the shortcomings we flagged earlier in this article, namely that ChatGPT and its ilk have no awareness of whether what they’re saying is true or not. As eminent linguist and intellectual Noam Chomsky put it in the New York Times recently, an LLM “may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with).” From such apparently benign misunderstandings could spring the sort of chaos the CHT warns us about. Especially when, once again, the hype cycle moves so fast that we don’t question the true validity of our shiny new toy…
Featured images made by OpenAI’s DALL-E (of course), with the query “a robot writing a book, renaissance style” 🤓