Can tech companies be encouraged to put social impact before financial profit? Such is the mission of EthicalOS, which uses highly plausible, Black Mirror-style scenarios of where we could be in the near future if startups and big tech don’t act responsibly. Here’s why it matters.
Why is technological responsibility such a big deal right now?
Tech companies today are “under huge pressure to become more ethical”, says Jane McGonigal, Director of Games Research and Development at US think tank Institute for the Future (top photo). This is precisely why, in late 2018, Facebook, Google and Apple all announced new features to help moderate use of their products. But this is just the beginning. Looking ahead, EthicalOS, a programme devised by the Institute for the Future and the Tech & Society Solutions Lab of the Omidyar Network (the ethical investment fund created by eBay founder Pierre Omidyar), aims to help the tech sector avoid what McGonigal calls future “bad surprises”, such as screen addiction and fake news.
What is EthicalOS?
Not an operating system in the computer sense, EthicalOS is rather a toolkit comprising a range of elements designed to help tech companies work more ethically.
Firstly, it presents fourteen nightmare scenarios, many evocative of the dystopian TV series Black Mirror. While some are relatively benign – screen time limits imposed by mobile operators, or a blockchain-based system to confirm consent before a sexual act – others are downright chilling. What if Facebook became a bank that refused you a loan because, based on your posts, it considered you too depressed? Or if artificial intelligence (AI) meant you couldn’t tell whether you were talking online to a real person or not? Or, worse yet, if AI destroyed 73 million US jobs – especially those of the least well-off – by 2030?
These scenarios are deliberately dark, explains McGonigal, “not because we think technology is inherently bad for society, but because tech sector workers focus on the problems they’re trying to solve, rather than the problems they could potentially cause.” Hence this first warning.
How does it work?
Once these scenarios have been read and understood, startup founders (amongst others) are encouraged to answer an EthicalOS questionnaire which seeks to establish, for example, whether they are gathering more of their users’ data than necessary; whether their app could warn users after an hour’s activity; or whether their algorithms or AIs might be excluding certain groups of people.
Finally, the toolkit concludes by proposing six strategies – for example, the equivalent of the doctors’ Hippocratic oath for data analysts – thereby providing concrete next steps to help turn EthicalOS’ theory into practice.
Of course, in such a complex domain, there’s no silver bullet. As useful as the document may be, its real impact will depend on whether its advice is followed. But its very existence confirms the need for it.
Why should everyone care about EthicalOS?
“We all influence the future,” says McGonigal, “simply by choosing to use one technology over another. Do you have a digital assistant in your home? Will you give your family the latest connected fitness app? Do you use facial recognition to validate your posts on social media? It’s down to all of us to think about the consequences of our technological choices… and thereby create a better future.”
Download EthicalOS: https://ethicalos.org
Adapted from an article originally published in A Nous Paris magazine, September 2018: https://www.anousparis.fr/edition/