Last year, at Viva Technology (aka VivaTech), I hosted a round table session for a select few attendees keen to find out more about AI. This year, not only were sustainable AI luminaries Sasha Luccioni (Climate & AI Lead, Hugging Face) and Kate Kallot (Founder & CEO, Amini) each invited to speak in three different sessions; I was lucky enough to moderate the first one.
The panel, called “AI & Impact: Catalyst for Change or Force of Disruption?” (above), kicked off VivaTech’s brand-new Sustainability Executive Summit, aimed at Chief Sustainability Officers (CSOs) looking to get a grip on climate tech challenges. And challenges, there are many!
I kicked off by reminding the audience that, by 2030, the biggest AI supercomputers are predicted to use 30x more energy than today (per Epoch AI); data centres will consume 3-4x more energy and water than today; and surging demand for AI services is delaying the energy transition, by keeping online coal- and gas-fuelled electricity generation that was due to be decommissioned, since nuclear power stations can’t be built quickly enough (per multiple sources).

So, what can we do? And what needs to change? Whilst I couldn’t quote them directly due to moderating (ha!), the messages were clear. For Luccioni, CSOs must demand more transparency from AI service providers when it comes to reporting the full impacts of their models (and no, Sam Altman, just sharing the energy and water impact of one ChatGPT query is not enough!). Indeed, Luccioni teased, her next white paper will illustrate how big tech’s transparency on AI’s impacts has steadily decreased over the past ten years.
Kallot, for her part, said cloud/AI clients should “challenge the status quo” and ask first what their real need is, before determining whether they really need big AI models or not.
That isn’t always easy – pressure to ‘go big’ often comes from the boardroom – and, according to Luccioni, Microsoft probably doesn’t push its highly capable smaller AI models (called “Phi”) because, using less compute, they make less money. And yet, as both of our speakers can testify, smaller models can do just as good a job.
Luccioni’s work has been instrumental in showing that models up to 60x smaller can perform as well as bigger ones; and Amini uses models thousands of times smaller than the usual LLM suspects to optimise farming and resource management in Africa. Successfully, too: Kallot has built a thriving company with Amini whilst using the very opposite of the massive LLMs that big tech is so keen for us to use…

Speaking of big tech, I couldn’t resist asking former NVIDIA employee Kallot if she thought the chipmaker was doing enough to limit AI’s impact on the planet. Lo and behold, she revealed that just the day before, NVIDIA had released its first ever PCF (Product Carbon Footprint) document for its ubiquitous H100 AI GPUs. It doesn’t say much, but it’s a start…
Fortunately, our two morning speakers returned for an afternoon session packed with mind-bending insights… that I was able to note down this time 🙂
They were joined by the IEA’s Siddharth Singh (right below), who kicked off with some sobering infrastructure observations:
“Data centres (DCs) tend to cluster, so energy demand [from DCs] can be as high as 20% in some places, and we don’t have the infrastructure to build that out. Today a DC could consume as much energy as 2 million households. So you’re adding an entire city to the grid in one go. We need to make sure DCs are in places that don’t strain the grid.”

Kallot agreed that distributing workloads across multiple data centres could be one way to get around this problem: “you could build more (smaller) DCs in places with less capacity, like Africa,” she said, “but also move [them] to places like Kenya, where 80% of power is renewable.”
As for models, Singh bemoaned the “absolute lack of data” provided by AI service vendors and data centre operators – so much so that the IEA has had to locate the world’s 11,000 major data centres itself, using satellite imagery for example, and even run its own experiments. This is how the IEA established, for instance, that generating an 8-second video with AI can consume as much energy as charging a laptop twice (this and more in the IEA’s much-cited “Energy and AI” report).
Kallot and Luccioni agreed that, where models are concerned, we’re now in a similar place to ten years ago, when everyone was looking for smaller, more specialised models to meet specific needs. “Why is everyone using the same models and expecting them to work for all these different uses?” asked Luccioni. “In my last paper, we established that the models that come second in terms of performance use 25 times less energy and are only 2% less accurate than the bigger ones.” Furthermore, she added, the AI Energy Score project she initiated has demonstrated a 60,000x difference in energy performance between the most and least efficient AI models. “So choosing more frugal models can make a huge difference, especially if they’re used at scale,” she said.
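To make that “at scale” point concrete, here’s a quick back-of-envelope sketch. The traffic volume and per-query energy figures are hypothetical placeholders of mine, not numbers from Luccioni’s paper; only the 25x multiplier comes from her remarks:

```python
# Back-of-envelope: what a 25x per-query energy gap means at scale.
# All input figures below are hypothetical placeholders for illustration.

queries_per_day = 10_000_000         # hypothetical service volume
big_model_wh = 2.5                   # hypothetical Wh per query, big model
frugal_model_wh = big_model_wh / 25  # the 25x gap cited by Luccioni

def annual_mwh(wh_per_query: float) -> float:
    """Convert a per-query figure in Wh into an annual total in MWh."""
    return wh_per_query * queries_per_day * 365 / 1_000_000

saved = annual_mwh(big_model_wh) - annual_mwh(frugal_model_wh)
print(f"Energy saved per year: {saved:,.0f} MWh")  # ~8,760 MWh with these inputs
```

Even with made-up inputs, the shape of the result is the point: once volumes are large, the multiplier dominates everything else.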
“It’s all about what problem do you have to solve?” said Kallot. “How much data do you really need? Some say ‘hey, 60 billion parameters’, but as we’ve noticed, the results are not relevant to [specific markets like] Africa. Many countries don’t understand what this implies, so they believe the fluff [that bigger is better].”
So, what’s next? Luccioni is “starting to see the tide turn. Developers are now saying ‘we don’t need a large language model, we need a small one’… I hope this will percolate to the edges. And companies will see that paying for generic AI is not the solution.”
Fingers crossed…

The Sustainability Executive Summit featured another panel focused on why we need frugal AI – for “sustainability, sovereignty and democracy”, said Orange‘s Jean-Benoît Besset, Head of Group Environment & Energy Transition. France’s biggest telco is a big supporter of frugal AI, and actively encourages its teams to use smaller alternatives to big LLMs like OpenAI’s whenever possible. Indeed, “the biggest challenge is convincing people this [frugality] is important”, said Besset; but Orange has done so, for example by including carbon in financial assessments before purchase decisions.
“We translate all costs into carbon,” said Besset. “We ask how much money can we make from one tonne of carbon emitted. If I can’t make that value back, it’s not good business for us. All of our data and AI projects are evaluated in this way.”
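In code, that screening rule might look something like the sketch below. To be clear, this is my own illustration of the logic Besset described, not Orange’s actual tooling; the threshold and example figures are made up:

```python
# Illustrative carbon-adjusted project screen, in the spirit of Besset's
# description. The threshold and example figures are hypothetical, not Orange's.

def value_per_tonne(expected_revenue_eur: float, emissions_tco2e: float) -> float:
    """Revenue earned per tonne of CO2e the project would emit."""
    return expected_revenue_eur / emissions_tco2e

MIN_EUR_PER_TONNE = 1_000.0  # hypothetical internal bar per tonne emitted

def passes_carbon_screen(expected_revenue_eur: float, emissions_tco2e: float) -> bool:
    """True only if each tonne emitted earns back at least the internal bar."""
    return value_per_tonne(expected_revenue_eur, emissions_tco2e) >= MIN_EUR_PER_TONNE

# Example: an AI project expected to earn 250k EUR while emitting 300 tCO2e
print(passes_carbon_screen(250_000, 300))  # False -> "not good business for us"
```

Orange’s real assessment is doubtless more sophisticated; the sketch just shows how “translating costs into carbon” can become a hard go/no-go gate on projects.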
This is also why Orange has joined the Coalition for Sustainable AI, a grouping of over 200 companies, led by Salesforce and coordinated by the French environment ministry, in order to find alternatives to the US-led “bigger is better” approach. The coalition aims to work towards better measurement, more transparency, and “design to impact”, i.e. asking whether AI is really needed at the start of a project and, if so, choosing the most frugal type. That’s another key tenet of Green IT – needs first, not technology – and one we were happy to hear repeated throughout the day (more on the Sustainable AI Coalition here).
Big up also to French AI startup Pruna.ai, present on the same panel: as Founder & President Bertrand Charpentier explained, his company’s tech can make models up to “eight times faster, and therefore greener, because less energy is used.”
To pre-conclude, hats off to the VivaTech organisers for making this topic so front and centre this year. A huge and welcome change for an event that had Elon Musk keynote its last two editions. No one’s perfect, of course – this year notably had a panel in which a staff member of TotalEnergies mused on how AI could be used to fight climate change (who needs AI to say “stop burning fossil fuels”?) – but the overall trend was remarkably positive. Cf. this lovely ‘green’ mural:

…and the fact that even advertising companies, some of the most active users of generative AI in business today, are now also using it to fight greenwashing.
Antigreenwashing AI, demonstrated on Publicis Groupe‘s stand by Publicis Sapient’s Clemence Knoebel, uses as its training data the guidelines of the ARPP, France’s advertising watchdog. This way, it can analyse advertising text or images to spot claims made without sufficient data behind them, for example.
Interestingly, the tool has integrated EcoLogits, the AI impact calculator made by the association GenAI Impact (of which yours truly is a member), to check that its own usage isn’t excessively impactful (despite the fact that it uses GPT-4o). No need to panic, insists Knoebel: if all of Publicis’ many agencies used this tool, its impact would only equal that of one flight to Helsinki.
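For the technically curious, instrumenting an app with EcoLogits looks roughly like this – a minimal sketch based on the library’s documented usage, with an illustrative prompt; the exact shape of the impact attributes can vary between versions:

```python
# Minimal EcoLogits sketch, based on the library's documented usage.
# The prompt is illustrative; impact values may be ranges in newer versions.
from ecologits import EcoLogits
from openai import OpenAI

EcoLogits.init()  # instrument installed provider clients (here, openai)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Is this ad claim backed by data?"}],
)

# Each response now carries estimated impacts for that single inference
print(f"Energy: {response.impacts.energy.value} kWh")
print(f"GHG emissions: {response.impacts.gwp.value} kgCO2eq")
```

The library estimates per-request energy and emissions from the model used and the response generated, which is how the tool can keep an eye on its own footprint.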
Next step, then: get them to use it!

Other sustainable highlights this year – albeit not AI-related – included Sweep CEO Rachel Delacour (above, in white) declaring, at the aforementioned Sustainability Executive Summit, that “CEOs don’t deserve their paycheck if they’re not thinking about how to make their company last forever” – hear hear! – and Barbara Martin Coppola, ex-CEO of sporting goods retailer Decathlon (above, in blue), who said that circular business models, such as selling repaired items or renting bikes, have been Decathlon’s biggest growth area in recent years. “People buying circular products come to stores 7x more often than average,” she affirmed. “It is possible to grow and decarbonise at the same time.”
That, plus panels such as “The Sustainable Edge: Why Doing Good is Good for Business”, featuring former Amazon whistleblower and now advisor to Workforclimate Maren Costa (more on her here), all suggested that, despite a challenging international context, tech can literally no longer afford to ignore sustainability. As Delacour put it, “this is the ROI year for sustainability.” Let’s hope she’s right…
No AI, frugal or otherwise, was used to write this article, and never will be!
Official VivaTech photos © Lewis JOLY – VivaTech 2025