I have several idées fixes about technology that fascinate me as much as they scare me. One is that we will one day no longer be able to tell the difference between reality and virtuality. Another — not that different, fundamentally — is that machines will replace us. Linked to both is a third idea I just can’t shake off: that some technology is held back from us because we just aren’t ready for it. Or, in other words, for our own good. That just happened in the currently ebullient world of artificial intelligence, or AI.
Fittingly — or almost too fittingly? — the discovery came from OpenAI, the organisation co-founded by Elon Musk precisely to stop AI getting out of hand (i.e. going all Skynet on us like the robots in Terminator, obvs). As this post explains, OpenAI’s boffins came up with a “large scale unsupervised language model which generates coherent paragraphs of text”. In other words, an AI that writes, creatively.
Called GPT-2, it has ingested 8 million web pages, allowing it to generate text by itself from just a few rudimentary sentences. Take this example:
SYSTEM PROMPT (HUMAN-WRITTEN)
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
MODEL COMPLETION (MACHINE-WRITTEN, 10 TRIES)
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
“Two peaks of rock and silver snow”? Who said AI can’t be creative?!
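For the curious, the mechanics behind such completions can be sketched in miniature. GPT-2 is a large neural network, but the underlying idea (predict a plausible next word given the words so far, then repeat) can be illustrated with a toy bigram model. This is a hypothetical sketch on made-up text, nothing like OpenAI’s actual code:

```python
import random
from collections import defaultdict

# Tiny stand-in for GPT-2's 8 million web pages of training text.
corpus = (
    "the scientists explored the valley and the scientists found "
    "a herd of unicorns in the valley and the unicorns spoke english"
).split()

# "Training": record which word follows which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(prompt, n_words=8, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: word never seen mid-corpus
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(complete("the unicorns"))
```

Where this toy model only looks one word back, GPT-2 conditions on everything written so far, which is why its paragraphs stay coherent instead of wandering off after a few words.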
This text goes on for another seven flawless paragraphs, so well written that they could easily pass for a real article by a human journalist in a reputable newspaper. And therein lies the rub: not only could GPT-2 replace journalists by the thousands, it could above all become an endless source of fake news. Which is precisely why the OpenAI researchers decided to withhold it.
So why, in that case, did they tell everyone about it? Because, as OpenAI told WIRED, they want other AI researchers to think twice, like them:
OpenAI hopes that by voicing its concerns about its own code, it can encourage AI researchers to be more open and thoughtful about what they develop and release. “We’re not sounding the alarm. What we’re saying is, if we have two or three more years of progress,” such concerns will be even more pressing, Jack Clark (policy director at OpenAI) says.
After all, the revelation of deepfake technology caused little alarm three years ago. It was only recently, when Hollywood stars’ faces were mapped onto porn stars’ in rigged videos, that the world took notice (remember the real/virtual fear?). Whence OpenAI’s “not sounding the alarm”: rather than raising a panic, it is giving us a heads-up before it’s too late. Which is precisely its raison d’être.
Does this mean we can breathe a collective sigh of relief about AI? Possibly. Whilst former Google execs confidently announce that AI will have replaced 40% of jobs in 15 years — and not just any jobs: radiology is one sector regularly cited as majorly disruptable by robots — an increasing number of voices are speaking out to limit AI’s excesses.
Amazon for one has taken considerable flak over Rekognition, an AI-based facial recognition system sold to Florida law enforcement agencies despite having already falsely matched 28 members of the US Congress with criminal mugshots, as Cnet points out here. Better still, it adds, the shareholder resolution against it was filed by nuns:
The Sisters of St. Joseph of Brentwood filed the resolution as shareholders and members of the Tri-State Coalition for Responsible Investment.
“As women religious, with institutional investments, we call on companies we hold to respect human rights in all they do. We’re especially aware of the risks facing vulnerable populations,” Sister Patricia Mahoney said in a statement.
Great to see the sisters fighting for responsible tech! More importantly, they echo a major concern about the AI at the heart of these systems: these algorithms can be biased, to the point of being racist. As this genius WIRED Twitter thread confirms, following up on rising US political star Alexandria Ocasio-Cortez’s highly viral assertion that:
“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated. If you don’t fix the bias, then you’re automating the bias.”
Socialist Rep. Alexandria Ocasio-Cortez (D-NY) claims that algorithms, which are driven by math, are racist pic.twitter.com/X2veVvAU1H
— Ryan Saavedra (@RealSaavedra) January 22, 2019
Four million views for a video of a politician talking about the perils of AI? We may be onto something.
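Ocasio-Cortez’s point is easy to demonstrate in code. The following toy sketch — entirely hypothetical, with made-up data and no resemblance to any real hiring system — shows how a screening rule “learned” from skewed historical decisions simply reproduces the skew:

```python
# Made-up past hiring decisions, skewed against group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def fit_rule(records):
    """'Learn' each group's historical hire rate from past decisions."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def screen(group, rates, threshold=0.5):
    """Advance a candidate only if their group's past rate clears the bar."""
    return rates[group] >= threshold

rates = fit_rule(history)
print(screen("A", rates))  # True:  group A's 75% rate clears the bar
print(screen("B", rates))  # False: group B's 25% rate does not
```

Nothing in the “math” here is malicious; the rule faithfully summarises its training data. The bias was in the decisions it learned from, and automation just applies it faster and at scale.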
So between politicians, nun investors and OpenAI, it would appear that, unlike in other tech domains — erm, Facebook, anyone? — the consumer’s back is at least partially covered when it comes to potential abuses of AI.
Even GAFA are demonstrating a degree of self-regulation on the matter. Amazon scrapped an AI recruiting assistant when it was shown to prefer men (not a great idea in the already male-dominated tech world); and Google has vowed not to sell its own facial recognition technology for now, as it needs to work through “important technology and policy questions first”, says Cnet.
Microsoft, for its part, has formed the ominously-named FATE, for “Fairness, Accountability, Transparency, and Ethics in AI”. Composed of a gender-balanced selection of the company’s sharpest minds, the group’s “aim is to facilitate computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies.”
If any of that can put off the Skynet robot rampage a few decades, we’re all for it…
Top image: Markus, a robot, paints a picture of another robot’s anguish: the video game Detroit: Become Human‘s answer to the question “can AI be creative?” (Sony/Quantic Dream)