The Invention of “Ethical AI”


Source: The Intercept

The irony of the ethical scandal enveloping Joichi Ito, the former director of the MIT Media Lab, is that he used to lead academic initiatives on ethics. After the revelation of his financial ties to Jeffrey Epstein, the financier charged with sex trafficking of underage girls as young as 14, Ito resigned from multiple roles at MIT, from a visiting professorship at Harvard Law School, and from the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company.

Many spectators are puzzled by Ito’s influential role as an ethicist of artificial intelligence. Indeed, his initiatives were crucial in establishing the discourse of “ethical AI” that is now ubiquitous in academia and in the mainstream press. In 2016, then-President Barack Obama described him as an “expert” on AI and ethics. Since 2017, Ito had financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University. What was all the talk of “ethics” really about?

For 14 months, I worked as a graduate student researcher in Ito’s group on AI ethics at the Media Lab. I quit on August 15, immediately after Ito published his initial “apology” regarding his ties to Epstein, in which he acknowledged accepting money from the financier both for the Media Lab and for Ito’s outside venture funds. The apology did not disclose that Epstein had, at the time this money changed hands, already pleaded guilty to a child prostitution charge in Florida, or that Ito had taken numerous steps to hide Epstein’s name from official records, as The New Yorker later revealed.

Inspired by whistleblower Signe Swenson and others who have spoken out, I have decided to report what I came to learn regarding Ito’s role in shaping the field of AI ethics, since this is a matter of public concern. The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics. A former Media Lab colleague recalls that Marvin Minsky, the late AI pioneer at MIT, used to say that “an ethicist is someone who has a problem with whatever you have in your mind.” (In recently unsealed court filings, victim Virginia Roberts Giuffre testified that Epstein directed her to have sex with Minsky.) Why, then, did AI researchers suddenly start talking about ethics?

At the Media Lab, I learned that the discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies. A key group behind this effort, of which the lab was a member, made policy recommendations in California that contradicted the conclusions of research I conducted with several lab colleagues, research that led us to oppose the use of computer algorithms in deciding whether to jail people pending trial. Ito himself would eventually complain, in private meetings with financial and tech executives, that the group’s recommendations amounted to “whitewashing” a thorny ethical issue. “They water down stuff we try to say to prevent the use of algorithms that don’t seem to work well” in detention decisions, he confided to one billionaire.

I also watched MIT help the U.S. military brush aside the moral complexities of drone warfare, hosting a superficial talk on AI and ethics by Henry Kissinger, the former secretary of state and notorious war criminal, and giving input on the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”

Ito did not respond to requests for comment.

MIT lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation. In 2018 alone, there were several controversies: Facebook’s breach of the private data of more than 50 million users to a political marketing firm hired by Donald Trump’s presidential campaign, revealed in March; Google’s contract with the Pentagon for computer vision software to be used in combat zones, revealed that same month; Amazon’s sale of facial recognition technology to police departments, revealed in May; Microsoft’s contract with U.S. Immigration and Customs Enforcement, revealed in June; and IBM’s secret collaboration with the New York Police Department on facial recognition and racial classification in video surveillance footage, revealed in September. Under the slogan #TechWontBuildIt, thousands of workers at these firms organized protests and circulated petitions against such contracts. From #NoTechForICE to #Data4BlackLives, several grassroots campaigns demanded legal restrictions on some uses of computational technologies (e.g., forbidding the use of facial recognition by police).

Meanwhile, corporations tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition). In January 2018, Microsoft published its “ethical principles” for AI, starting with “fairness.” In May, Facebook announced its “commitment to the ethical development and deployment of AI” and a tool to “search for bias” called “Fairness Flow.” In June, Google published its “responsible practices” for AI research and development. In September, IBM announced a tool called “AI Fairness 360,” designed to “check for unwanted bias in datasets and machine learning models.” In January 2019, Facebook granted $7.5 million for the creation of an AI ethics center in Munich, Germany. In March, Amazon co-sponsored a $20 million program on “fairness in AI” with the U.S. National Science Foundation. In April, Google canceled its AI ethics council after backlash over the selection of Kay Coles James, the vocally anti-trans president of the right-wing Heritage Foundation. These corporate initiatives frequently cited academic research that Ito had supported, at least partially, through the MIT-Harvard fund.
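To make concrete what tools like Facebook’s “Fairness Flow” and IBM’s “AI Fairness 360” claim to measure, here is a minimal sketch of one standard metric from the fairness literature, disparate impact: the ratio of favorable-outcome rates between a disadvantaged group and an advantaged one. The data, group labels, and “four-fifths” threshold below are illustrative conventions, not the internals of either tool.

```python
# A minimal sketch of the kind of “bias” metric such tools report.
# All data here is invented for illustration; outcome 1 = favorable
# (e.g., a face correctly matched, a loan approved).

def group_rate(outcomes, groups, group):
    """Fraction of favorable outcomes within one group."""
    selected = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(selected) / len(selected)

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of favorable-outcome rates between groups; the informal
    'four-fifths rule' treats values below 0.8 as evidence of
    adverse impact."""
    return (group_rate(outcomes, groups, unprivileged)
            / group_rate(outcomes, groups, privileged))

# Hypothetical outcomes for two demographic groups, “A” and “B”.
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups, privileged="A", unprivileged="B")
print(f"disparate impact: {di:.2f}")  # ≈0.67 here, below the 0.8 threshold
```

Note that a system can pass such a check and still be objectionable to deploy at all; that gap is precisely what separates the second and third regulatory options described below.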

To characterize the corporate agenda, it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position. Consider the case of facial recognition. This year, the municipal legislatures of San Francisco, Oakland, and Berkeley — all in California — plus Somerville, Massachusetts, have passed strict bans on facial recognition technology. Meanwhile, Microsoft has lobbied in favor of less restrictive legislation, requiring technical adjustments such as tests for “bias,” most notably in Washington state. Some big firms may even prefer this kind of mild legal regulation over a complete lack thereof, since larger firms can more easily invest in specialized teams to develop systems that comply with regulatory requirements.

Thus, Silicon Valley’s vigorous promotion of “ethical AI” has constituted a strategic lobbying effort, one that has enrolled academia to legitimize itself. Ito played a key role in this corporate-academic fraternizing, meeting regularly with tech executives. The MIT-Harvard fund’s initial director was the former “global public policy lead” for AI at Google. Through the fund, Ito and his associates sponsored many projects, including the creation of a prominent conference on “Fairness, Accountability, and Transparency” in computer science; other sponsors of the conference included Google, Facebook, and Microsoft.
