Why more Responsible AI is both necessary, and possible

The generative AI revolution began with the fastest-adopted technology product in history: ChatGPT. Three years later, this revolution continues at a pace that leaves little room to step back and reflect on whether we’re headed in the right direction.

How did we get here? What have we sacrificed in the name of the productivity that AI is supposed to bring? Is it too late to put this revolution back on a responsible track? 

Maybe not, if that track takes the form of the pillars of CSR (corporate and social responsibility): People, Planet, Profit.

People

All products must ensure that they do not harm the people who use them. AI has already failed in this responsibility. That is why social impacts must be a priority when it comes to responsible AI.

Despite numerous recent cases of suicides assisted by large language models (LLMs), the only change to date has been the decision by Character.AI, a Google-linked LLM companion platform, to ban its services to those under the age of 18

OpenAI even argued in court that the death of Adam Raine was solely the responsibility of the 16-year-old, even though ChatGPT actively facilitated his suicide. Admittedly, the company has since implemented parental control systems and strengthened its team dedicated to these issues. However, vigilance remains essential. 

Above all, the question remains open: if an AI system developed or deployed by a company puts its users at risk, should it not be the full responsibility of that company?

In the workplace, the debate remains wide open. Are Amazon and other companies really laying off tens of thousands of employees with a view to replacing them with AI? 

Nothing could be less certain. Why replace humans with AI whose supposed superior productivity has not yet been proven?

On the one hand, GitHub Copilot (owned by Microsoft), a programming assistance tool, claims to speed up developers’ work by around 55%. On the other, MIT researchers estimate a productivity gain of 0.7% thanks to AI… over ten years. So who is right? 

In any case, for now, 80% of C-level executives see no revenue gains or savings from AI, according to 6000 of them, surveyed recently by the National Bureau of Economic Research (US). But they continue to invest in it anyway.

This stubbornness is all the more surprising given that modern AI remains highly unreliable. For example: 

  • Google’s AI Overviews provide answers to searches powered by generative AI that are often false, including those related to health
  • ChatGPT, by OpenAI’s own admission, hallucinates (lies) more as it becomes more powerful (undermining the AI industry’s deeply-held conviction that ‘bigger is better’)
  • AgentForce, Salesforce’s agentic solution. Heralded as a ‘revolution’, it is considered too unreliable by its users.

The transparency of AI also needs to be improved, and not just that of generative AI. The CNAF – the French authority responsible for housing benefits – is currently the object of an official complaint from 25 NGOs, because the algorithm it uses to determine who to audit first too often selects the most vulnerable beneficiaries. An example which demonstrates that predictive AI, based on the relatively old principle of machine learning, can also accentuate bias.

Finally, intellectual property is also being undermined by generative AI. Since OpenAI knowingly violated YouTube’s regulations in 2021 by scraping millions of hours of video to train GPT-4, AI leaders have been helping themselves to anything that might satisfy their LLMs’ enormous thirst for learning. This is often done in violation of copyright laws. Anthropic, for example, wants to “destructively scan” all the books in the world, as they are considered the best as-yet-untapped source of training data, now that the entire internet has already been scraped.

How can we limit these risks ?

Here are a few levers for more responsible AI : 

  • always demand more transparency from AI providers
  • learn to detect deepfake videos, and other fake AI content
  • carry out algorithmic impact and bias assessments
  • re-train models with proven bias
  • protect your IP
  • choose an AI provider (or configuration) whereby your data does not leave Europe. As such, only EU law (e.g. GDPR) will apply.

We’ll develop these levers in future blogposts…

Planet

After its social impacts, it is no doubt the environmental impacts of AI that should concern us the most.

We have known since late 2024 that AI will triple the electricity consumption of data centres in the United States by 2028. In Virginia, the US state that hosts the most data centres, energy demand is said to have tripled in just one year. And this peak demand is currently being met mainly by fossil fuels. Well, partially met: according to Elon Musk himself, many GPUs – the processors needed for generative AI – will run out of power by the end of the year.

But electricity is just the tip of the iceberg. If we take all 16 environmental impacts of products (PEF) into account, these will increase sevenfold by 2030, according to the Green IT Association. The main impact is indeed emissions, but let’s not forget the rare metals needed to manufacture all these GPUs; the water needed to cool data centres (as well as to manufacture GPUs); air pollution, and so on.

Added to the environmental impacts are destructive risks, or CBRNE (Chemical, Biological, Radiological, Nuclear, Explosive). We know, for example, that LLMs could increase the risk of genocide by biological weapon fivefold, according to a study by 46 renowned experts. Or that the Israeli army used AI to increase the impact of its attacks in Gaza following the attacks of 7 October 2023. AI is notably widely used by Palantir, a company founded by Peter Thiel, to maximise the lethal effectiveness of its ‘war solutions’…

How can we limit these risks ?

Here are a few levers for more responsible AI : 

  • question the AI sector’s “bigger is better” philosophy. Most users’ needs can be met by models that are hundreds of times smaller – and therefore less impactful – than those currently imposed on us
  • encourage the use of more small models, on less hardware resources
  • use cloud providers with Europe-based data centres, where electricity’s carbon intensity can be 10x lower (France vs. Virginia)
  • lobby companies and – above all – governments to ensure dangerous usage of AI is strictly regulated.

We’ll develop these levers in future blogposts…

Profit

The last pillar of CSR is the one that most closely aligns with corporate interests.

Security and confidentiality are among the main risks associated with AI in business, according to numerous studies.

Security, because LLMs are particularly easy to hack, thanks to a technique called prompt injection, or the submission of text designed to circumvent the safeguards inherent in most generative AI tools. This is how, in some cases, a poem can unlock a recipe for creating a nuclear weapon. What’s more, prompt injections are by definition impossible to resolve, according to the leading AI solution providers.

Agentic AI also represents considerable security issues. OpenClaw, the ‘first social network for AI agents’, may have been described as “genius” by Sam Altman; it left 1.5 million API keys exposed when it was created. In other words, anyone could have taken control of it.

Confidentiality, because, as with intellectual property, the thirst for data of ‘big AI’ is such that it sucks up our prompts just as readily as it does protected works. Each of the largest LLM creators collects all the prompts they receive by default in order to train future models. It is possible to opt out in some cases, but most users will not do so. As a result, most of their data ends up in the United States, where it is happily exploited.

This is problematic for two main reasons. Firstly, US companies are obliged to provide all prompts to US authorities, for example in the event of a search or investigation. Secondly, as prompts will increasingly be used for commercial purposes, for example following the integration of advertising into LLMs like ChatGPT.

These are all issues that require a high level of accountability within every company involved in AI in any way. This calls for defining in advance who is responsible for managing which aspect of any potential AI-related incident. A RACI will naturally be required here. But how do you know who to include in the loop? This is the first step in establishing a Responsible AI Charter for your organisation.

Why bother creating such a document? Amongst other things, it could prevent employees from disclosing confidential content to an LLM. This is why Samsung banned ChatGPT in 2023, for example.

However, 59% of companies still do not have an AI policy, according to KPMG, even though shadow AI – the use of personal LLM accounts for professional purposes – is widespread in nearly half of them. So three years on, the ‘Samsung risk’ remains just as prevalent as then.

How can we limit these risks ?

Here are a few levers for more responsible AI: 

  • train developers on how to anticipate prompt injection and similar attacks
  • train all staff on the importance of respecting the confidentiality of company and client data (starting by cutting out shadow AI…)
  • favour European cloud providers, and as such data sovereignty (yes, this lever applies to all three pillars of responsible AI!)
  • develop a Responsible AI Charter, adapted to the specific risks, values and audiences of your organisation.

We’ll develop these levers in future blogposts…

Conclusion

The journey towards more responsible AI begins with a full understanding of its risks and impacts, and how they could endanger your company’s CSR commitments… not to mention compromising the data of your clients or staff. The next step of the journey involves adopting tactics and strategies to mitigate these risks, within the boundaries of laws, standards and best practices around AI. Finally, the journey can be completed by formalising your company’s AI policy, for example in the shape of a Responsible AI Charter.

Find out more in our next blogpost… and by following GreenIT.fr’s training course, “Responsible AI – State of the art”. More information here

I’d also recommend All Tech is Human’s excellent work on Responsible AI; a key inspiration and essential starting point for me 👌🏻

Featured image by Steve Johnson via Pexels

Leave a comment