Who to follow in responsible AI – July 2025 edition

The GAFAM-powered ‘big AI’ hype train rolls inexorably on, crushing everything in its path. Everything? Not quite. Here’s who to follow for a more responsible take on artificial intelligence!

Article originally published on LinkedIn; posting it here for posterity!

CLIMATE

Dr. Sasha Luccioni (Climate & AI Lead, Hugging Face, above right) is the world’s most prominent, outspoken and widely published expert when it comes to AI’s impact on the planet. From recent posts on the impact of generating video with AI (hint: it’s huge) to launching essential initiatives like the AI Energy Score, if you can only follow one person in this domain, it should be her. And if you only read one of her many white papers, make it “Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI” (with Gael Varoquaux & Meredith Whittaker, two other must-follow AI experts), the definitive text on why bigger AI models are not always better (another hint: models up to 60 times smaller can do just as good a job…).

Boris Gamazaychikov (Head of AI Sustainability, Salesforce) not only co-launched the AI Energy Score initiative with Sasha; he also ensures, alongside colleagues like Julie Ravillon (Sustainability Director, Salesforce France), that the world’s leading CRM uses the least impactful AI models possible. That makes Salesforce one of the few major US tech companies to actually care about this topic. Boris’ LinkedIn posts are super handy for their scientific breakdowns of AI’s true impacts; for example, is Deepseek really more sustainable than OpenAI? (hint: the jury is still out)

Will Alpine and Holly Alpine (née Beale) are the co-founders of Enabled Emissions. Both former Microsoft employees – Will was an AI engineer, and Holly was instrumental in launching the company’s group for staff concerned about climate change – they now shine a spotlight on the fact that Microsoft’s AI is used today to help oil and gas companies find more oil. As Will points out in this excellent talk, Microsoft AI projects for just two US oil companies would generate up to three times more emissions than Microsoft itself. Will and Holly are thus the go-to people to counter big tech’s incessant claims that they’re mainly using AI “for good”. Also well worth a follow: Drew Wilkinson, another former M$ staffer, and Maren Costa, who led major Amazon walkouts before being fired; they now both work to convince as many people as possible that “every job is a climate job”, notably through WorkforClimate. Find out why by following them now!

Théo Alves Da Costa (Partner AI & Sustainability at Ekimetrics, and President at Data For Good) first got me interested in the impact of AI with an inspiring presentation at Green IO 🎙️ Paris, in 2023. Why? Théo notably explained, based on research by Data for Good (the NGO he co-leads with Lou Welgryn), that the impact of inference can be 20-200 times greater than that of training; and that AI’s impacts would soon trickle down to all areas of tech. Lo and behold, today it’s absolutely everywhere. Théo and Lou are both excellent at explaining this essential topic in mainstream media, so be sure to check them out.

Samuel Rincé (President at GenAI Impact & AI Architect at EthiFinance) was initially behind that report on the impact of inference, and went on to found GenAI Impact, an association born of Data for Good – and of which I’m a member – which created Ecologits.ai, a Python library that measures the impact of major AI models. Yes, even the closed ones like OpenAI’s, which it estimates by comparing them with similarly-sized open source models. GenAI Impact continues to do sterling work today, with the help of Caroline Jean-Pierre & Claire Saignol – hit me or them up if you want to know more!
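To give a feel for the comparison approach described above – estimating a closed model’s footprint from a similarly-sized open model whose consumption has been measured – here’s a deliberately simplified sketch. The function and all figures below are hypothetical illustrations, not Ecologits code or data:

```python
# Hypothetical sketch: approximate a closed model's per-request energy
# from a per-token figure measured on an open model of comparable size.
# The 0.004 Wh/token value below is purely illustrative.

def estimate_energy_wh(output_tokens: int, proxy_wh_per_token: float) -> float:
    """Energy (Wh) for one completion, using a per-token figure
    taken from a comparable open-weights model."""
    return output_tokens * proxy_wh_per_token

# Illustrative: a 500-token reply at an assumed 0.004 Wh/token
print(round(estimate_energy_wh(500, 0.004), 3))  # 2.0 Wh
```

The real library accounts for far more (hardware mix, data-centre overheads, embodied emissions), but the proxy-model principle is the key idea.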

Juliette Fropier is AI lead at Ecolab – Greentech Innovation, the French environment ministry’s initiative that supports sustainable tech. Juliette and colleagues like Helene Costa de Beauregard lead essential work like sustainable AI pilot projects throughout France, or AFNOR Group’s unique standards for frugal AI. They also insist that, should you want to work with their or other ministries, you have to prove your AI model is as sustainable as possible (de facto ruling out those who rely on ‘big AI’ models like OpenAI’s – LOVE IT!). In short, it’s thanks to people like Juliette, Samuel and Théo that France is such a frugal AI pioneer today.

Rémy Marrone, a journalist and digital responsibility expert, rounds off our list of French people to follow, as his latest work focuses on how AI can be made more sustainable. His frugal-ai.org website is a handy collection of reference works on the topic. Rémy also organises Tast-IT, an essential gathering of French digital responsibility experts. Like the other French experts above (from Théo onwards), he communicates mainly in French on LinkedIn.

Mark Butcher (Director, Posetiv Cloud Ltd), a renowned Green IT expert, has long been known as one of the most vocal critics of GAFAM greenwashing, which he expertly dismantles on LinkedIn on a regular basis. With the aforementioned AI hype machine taking said greenwashing into overdrive, Mark’s takedowns of claims such as “AI for Good” are always enlightening, and often quite amusing. Behind the rants, though, is a deadly serious message: never believe the hype…

Robert Keus, Cas Burggraaf & Wilco Burggraaf are three lovely Dutch guys behind GreenPT, a startup working to make AI run more sustainably. They notably switched me on to the notion of “green prompting” (which others have also developed since in a white paper, here), i.e. getting what you want from an LLM with as few prompts – and therefore as little energy – as possible. Great stuff!
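As a purely illustrative sketch of the green-prompting idea (this is not GreenPT code; the helper below is hypothetical), bundling several small questions into one well-structured request replaces several round-trips – and their repeated context overhead – with a single one:

```python
# Hypothetical helper: merge N small questions into one prompt,
# so one API call replaces N (saving the repeated system/context
# tokens that each separate request would have re-sent).

def merge_prompts(questions: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    return ("Answer each of the following questions, "
            "numbering your answers to match:\n" + numbered)

qs = ["Summarise this report.",
      "List its three key risks.",
      "Suggest a title."]
print(merge_prompts(qs))  # one request instead of three
```

The saving is modest per call, but it compounds quickly across an organisation’s daily LLM usage.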

Bertrand Charpentier & Rayan Nait Mazi are the co-founders of Pruna AI, a Franco-German startup which “smashes” AI models to make them 3-7 times smaller, and therefore more energy efficient. Deeply involved in the technical aspect of reducing AI’s impact, they represent, like GreenPT, the sort of startup we need more of right now.

Anna Lerner Nesbitt (CEO, Climate Collective) is a former Meta executive now devoted to mobilising startups to fight climate change, notably via AI for (real) good. Only the most effective startups can join CC’s cohorts; the most recent addition is Neuralwatt, another company devoted to reducing AI’s impact. Indeed, Neuralwatt co-founder Scott Chamberlin is also well worth a follow, notably because he was one of the first to quantify the impact of reasoning models vs standard LLMs (another hint: it’s a lot more!)

Kate Kallot (Founder & CEO, Amini, above centre) has built a successful business proving that bigger AI is definitely not always better. Kate, who I interviewed at VivaTech 2025 with Sasha (report), has pioneered the use of ‘tiny’ LLMs in Kenya since the late 2010s; her company now uses models tens of thousands of times smaller than the usual suspects to empower sustainable development, notably via farming, in Africa. A highly impressive proof of concept.

Masheika Allgood JD, LL.M (Founder, AllAI Consulting, LLC) is an expert on the water consumption of data centres; consumption which is currently skyrocketing due to AI (e.g. Google’s grew 27% last year, cf. here). Her work provides essential data to communities deprived of water by nearby AI facilities, and shines a much-needed spotlight on this neglected aspect of IT’s impact. Masheika was introduced to me by Jeremy Tamanini, another essential US sustainable tech expert to follow… as is AI for Good researcher Nathaniel Burola. Be sure to check them all out!

Shaolei Ren (Associate Professor, University of California, Riverside) is an academic also focused on AI’s broader impacts, beyond energy and emissions. His TED Talk is a must-watch to understand specifically why AI is so water-intensive; and his academic papers are always insightful and accessible.

SOCIETY/BROADER IMPACTS

Dr. Joy Buolamwini (author and AI researcher, founder of The Algorithmic Justice League) was the first to establish that facial recognition, the ‘killer app’ of AI before generative AI & LLMs hogged all the limelight, is essentially racist: it is least effective at recognising women with dark skin, and has as such led to countless false arrests and other miscarriages of justice. Her work, recounted in the excellent book Unmasking AI, led to major big tech and police force backtracks on facial recognition tech.

Timnit Gebru (founder, The Distributed AI Research Institute (DAIR)) was fired from Google when she refused to edit work that underlined AI’s aforementioned racial bias. Previously, she co-authored with Joy Buolamwini the “Gender Shades” white paper that established said bias. Today she remains a leading critic of big tech’s AI boosterism, notably by denouncing beliefs held by many big tech leaders, such as long-termism (which essentially dictates that people dying now is OK if it enables the creation of an uber-race in the distant future 🙄).

Karen Hao is an investigative journalist whose recently-released “Empire of AI” book is a gripping deep dive into the making of OpenAI, and a must-read if you’d like to understand ‘big’ AI’s true risks for the planet and society. Spoiler alert: it ends on a positive note 🙂

Tristan Harris, another former Googler, was one of the first to openly criticise big tech’s lack of moral compass. He co-founded the Center for Humane Technology, which works to reduce tech’s impact on society. Unveiled in 2014, Harris’ notion of “time well spent” was a key inspiration for my own responsible tech blog (BetterTech) and led to the groundbreaking Netflix doc The Social Dilemma, which details how social media hacks our brains. The CHT’s attention has now naturally shifted to AI. It was one of the first to warn that a Character.AI chatbot character had encouraged US teenager Sewell Setzer to take his own life, and helped Setzer’s mother to take the Google-funded company to court. The CHT remains one of the world’s most compelling forces for tech for good out there today.

Meredith Whittaker is the President of private messaging app Signal, with an impressive CV that includes AI research at Google. That research continues today (cf. the aforementioned “bigger is not better” white paper) alongside running WhatsApp’s biggest rival. Her recent SXSW keynote interview with Guy Kawasaki is an absolute masterclass in why privacy is just as important as – and indissociable from – challenging big tech hype around concepts such as agentic AI. One of my absolute tech idols! To be followed on Bluesky, FYI.

Luiza Jarovsky, PhD (Co-founder of the AI, Tech & Privacy Academy) is the person to follow on LinkedIn for the latest updates on AI ethics, including legislation. She most recently shared the excellent news that the planned 10-year moratorium on AI legislation in the US had been removed from Trump’s Big Beautiful Bill (again, it’s not all doom and gloom!) Carissa Véliz, an Associate Professor working on AI Ethics at the University of Oxford, is also well worth a follow for (albeit less frequent) updates on the social and political aspects of AI.

That’s all for now! This list is by no means set in stone, and omissions are involuntary 🙂 Let me know if you’d like me to make any additions! THANKS 🙏🏻

Featured image: Lewis JOLY – VivaTech 2025 (+ a Canva filter I added)
