What if the inevitable enshittification of big AI – these excessively massive and problematic chatbots – was just as rapid as their meteoric rise to fame? Might we soon all collectively realise there’s no such thing as a responsible big AI provider? It’s increasingly looking that way.
Most recently, Anthropic – *supposedly* the most responsible big AI lab since it *supposedly* refused to let its products be used for war by Trump & co – announced that it will soon be using Colossus, Elon Musk’s supercomputer housed in one of the world’s dirtiest data centres, in Memphis.
How responsible is it to rely on facilities that run on 35 mostly unauthorised methane generators, polluting their neighbourhood and exacerbating locals’ already-pronounced health problems? (more on that here)
That established, what can we do about it? Well actually, quite a few things:
- Understand why big AI is so bad for the planet, and society
- Ask yourself in what cases you really NEED (big) AI, and when you can do without it
- Once you’ve identified those needs, take your pick from a huge range of non-big AI alternatives…
…and this blogpost will help you do precisely that!
(N.B.: this post will focus more on everyday usage of AI, rather than on business applications. More on the latter soon!)
1. Why Big AI is bad for the planet
We’ve already covered this subject extensively, here (where you’ll find all the sources for the info below); but the TL;DR version is:
- When you ask a generative AI chatbot a question, it doesn’t fetch the answer from a pre-defined database (as a web search essentially does)
- It composes a custom-made answer to your question, by working out the most probable next word based on its training data, like this (source):

- Each sentence (or pixel, if it’s an image) is made up of tokens. Generating tokens consumes electricity. And water, to cool the servers. And it causes wear and tear on said hardware. All of these imply environmental impacts (and these are just three of a total of 16 impacts)
- The GPUs, or processors doing all that work, consume at least 4 times more electricity than standard processors, or CPUs. They also heat up 2.5 times more, which means they need more water to cool down
- This multiplied resource consumption largely explains why data centres are expected to gobble up three times more electricity by the end of this decade. Or in just one year (2025), if you consider Virginia, home to most of the US’ data centres…
- Why are these impacts growing so fast? Because big AI is convinced that the more data its models consume, the more intelligent they will be. This is not only increasingly false (GPT-5.4 can’t even tell the time properly), but increasingly damaging for the planet. More data requires more compute power, which means more GPUs, and hence more data centres. Hence Anthropic’s search for ever more data centres, including in Memphis.
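The next-word mechanism described above can be sketched with a toy example. This is a deliberately tiny stand-in for what a real LLM does, with an invented probability table instead of billions of learned parameters:

```python
import random

# Toy "model": for each word, the plausible next words and their probabilities,
# standing in for what a real LLM learns from its training data.
# (These words and numbers are invented purely for illustration.)
NEXT_WORD_PROBS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("weather", 0.2)],
    "cat": [("sat", 0.6), ("slept", 0.4)],
    "dog": [("barked", 0.7), ("ran", 0.3)],
    "sat": [("down", 1.0)],
}

def generate(prompt: str, max_tokens: int = 5, seed: int = 0) -> str:
    """Repeatedly pick the next word by sampling from its probability table."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_tokens):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Every word the loop appends costs a little compute; in a real model, each of those steps runs across thousands of power-hungry GPUs, which is where the impacts come from.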
2. Why you may not need Big AI
GPT-5.4, the aforementioned model behind ChatGPT, is approximately 200 times bigger than most people’s needs. Model sizes are measured in parameters: we think GPT-5.4 has about 2 trillion of those (no, big AI won’t even tell us that), whereas most needs can be covered by models with about 10 billion parameters.
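As a back-of-the-envelope check on that “200 times” figure (remember, both numbers are estimates, not official disclosures):

```python
# Rough ratio between an estimated frontier-model size and a "small" model,
# using the figures from the text.
frontier_params = 2_000_000_000_000  # ~2 trillion parameters (estimate)
small_params = 10_000_000_000        # ~10 billion parameters

print(frontier_params // small_params)  # → 200
```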
What are those needs, exactly? Let’s ask ChatGPT! Well, according to a study validated by its maker, OpenAI, in order of importance:
- Practical guidance (28.8%)
- Seeking information (24.4%)
- Writing (23.9%)
- Multimedia, i.e. creating or analysing an image/video (7.3%)
- Self-expression (5.3%)
- Technical help (5.1%)

That’s it. So essentially, what we used to ask Google, we now ask ChatGPT. Why is that important?
- Because in many of the above cases, we could simply carry on doing a standard web search, which is far less impactful (since a web search could easily cover usages 1 and 2, we could argue that 53.2% of ChatGPT activity is unnecessary…)
- The things that only generative AI can do, like rewriting texts (usage n°3) or creating images (usage n°4) do not require a bunch of “PhDs in your pocket” (how OpenAI CEO Sam Altman described the latest version of ChatGPT’s model). Small models are just fine.
TL;DR: Using big AI products like ChatGPT, Claude or Gemini is like using a bazooka to swat a fly.
Big AI should have given us fly-swatters, then selectively given a handful of bazookas to those who really needed them. Instead, they gave everyone bazookas.
More parameters give models more firepower to handle ever-bigger quantities of data. As a rule of thumb, the more parameters a model has, the more resources it needs to function, and therefore the more environmentally impactful it is. But it doesn’t have to be this way.
Numerous studies – e.g. here, and here – indicate that models 60-100 times smaller than the biggest LLMs (large language models) can do just as good a job. And often, when they can’t quite, they can be customised to reach comparable accuracy and speed levels.
So why don’t more people use smaller models? Because big AI marketing makes it seem like their products are the only ones. They are not.
But before moving from a big AI model to a smaller one, ask yourself: do I really need (generative) AI, when in most cases, a web search will probably do the trick? If not, off you go!
If so, please read on…
3. How to #QuitBigAI, by moving to a smaller model
Again, I’ll presume you’re using generative AI for one or several of the six simple tasks cited by the OpenAI report (above). In that case, some very straightforward alternatives spring to mind:

- GreenPT (above) – created by a small Dutch startup that consistently punches above its weight, GreenPT is my generative AI app of choice. Why? Its main chat function uses a small, open source model with just 24 billion parameters; it has been trained on minimal data; it runs on GPU servers based in France, which has some of the greenest electricity in the world; and after each reply, it shows you the energy consumed and emissions generated, encouraging you to prompt responsibly. It’s not free (from €4/month), but is well worth its small price tag. The rest of our picks, however, are free…
- Qwen 3.6-27B – China’s Alibaba Group’s Qwen range consistently offers the best bang for your buck, i.e. excellent performance with a comparatively low parameter count… and hence lower impacts. Just don’t expect help researching Tiananmen Square; Chinese models are usually government-censored
- Mistral le Chat – although not a small model strictly speaking, at 123 billion parameters, Le Chat is made in France and hosted in Europe, which means its data centres use significantly cleaner electricity than big AI models’. French electricity, for example, is about ten times less carbon-intensive than Virginia’s
- Gemma 4 – Google’s latest small model (31 billion parameters) is brand new, and already getting rave reviews
- HuggingChat – the chat interface of Hugging Face, the world’s biggest library of open source models. A super-nifty functionality called “Omni” picks the best model for your request. And if you’d rather choose manually (e.g. by the lowest number of parameters), you’re welcome to do so. My tip in that case: Microsoft’s Phi 4 (14 billion parameters), the latest in the tech giant’s range of small models… that they strangely never publicise 🤔 HuggingChat is also free. Hurrah!
These are the big AI chatbot alternatives I recommend most often. As they range from 14 to 123 billion parameters, their impacts are logically many times lower – over a hundred times lower, for the smallest – than those of big AI’s trillion-parameter LLMs.
Another bonus: some of these models are small enough to run on your phone or PC, without an internet connection. Thereby incurring none of the cloud/data centre-related impacts listed above 💡
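If you’d like to try this, here’s one possible route – a sketch assuming you install the free, open source Ollama runtime (the exact model names and sizes available in its library may differ from those cited above):

```shell
# Assumes Ollama (ollama.com) is installed; the model downloads once,
# then runs entirely on your own machine, offline.
ollama pull phi4                      # fetch a small open model (~14B parameters)
ollama run phi4 "Rewrite this more formally: see you later!"
```

Other local runtimes (LM Studio, llama.cpp) work along similar lines; the key point is that inference happens on hardware you already own.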
Do be aware, however, that AI impacts are complex. Running AI models locally avoids cloud impacts, but it could wear out your hardware more quickly. Similarly, parameter counts are just a rule of thumb. Smaller models are generally less impactful. But some might run on less clean energy, for example; and others might not be as impressive at all tasks as the bigger models.
It’s all about finding the right fit for your needs. And on that topic, if you’re still stuck, or would like to dig deeper, you should definitely try Bearing, an impressive new service that suggests the right models – from a selection of 30, big and small – once you’ve expressed what your needs are.
And a final word of reassurance: it’s not your fault that big AI is so impactful. The main onus should be on its providers to reduce those impacts. And that onus should come from legislation… which is painfully slow on these topics at the moment.
That’s all for now. Good luck; and let me know how you get on!
To dig deeper into moving away from big AI and towards smaller, more sustainable models, follow my “Frugal AI” training course 💡 More info here…
