Over the past decade, Google and Facebook have built globe-spanning digital platforms that impact almost every facet of our lives, and often in harmful ways. Apart from their use of “surveillance capitalism” on a massive scale, or their distribution of disinformation during the 2016 election, the algorithms Google uses at YouTube have been implicated in the rise of far-right conspiracy movements like QAnon. Facebook’s private groups and WhatsApp messaging service have been cited by the United Nations as helping to perpetuate a genocide against the Rohingya people in Myanmar. And yet traditional antitrust legislation, or at least the way it’s been interpreted for the past couple of decades, makes it difficult to regulate these two giant platforms—as does Section 230 of the Communications Decency Act, which absolves them of liability for anything that is posted by their users, and gives them wide latitude to moderate content as they wish.
Is there another path we could take that might allow us to harness the benefits of these huge services, while also blunting their negative effects? Dipayan Ghosh thinks there is. He’s the director of the Digital Platforms and Democracy Project at Harvard’s Shorenstein Center, a former policy adviser to the Obama administration, and a former adviser at Facebook. He’s also the co-author of a recent paper with Joshua Simons, a fellow at the Edmond J. Safra Center for Ethics at Harvard and a former adviser to the UK Labour Party, as well as a former policy adviser at Facebook. Their paper is titled “Utilities for Democracy: Why and How the Algorithmic Infrastructure of Facebook and Google Must Be Regulated.” CJR used its Galley discussion platform to speak with both men about their proposals, and their belief that the algorithms used by both companies have become part of the infrastructure of our public sphere, and should be regulated as public utilities.
“These companies control the social infrastructure we all use for communication and organization, political expression, and collective decision-making,” said Simons. “Their control over this infrastructure concentrates not only economic power, but social and political power too.” In effect, he and Ghosh argue, the kind of oligopoly that Google and Facebook have created isn’t that different from the massive “trusts” of previous generations, which controlled railways or oil production. Innovation is a good thing, Simons says, but “it creates new concentrations of power — railroads, oil trusts, telecommunications companies — and those concentrations of power matter for democracy in different kinds of ways.” The strength of the public utility concept that was developed in the Progressive era, he says, was that it offered a way to think about how and why different kinds of corporations might pose a threat to democracy.
The first question we should ask, Ghosh argues, is whether Facebook is powerful enough to be considered a monopoly. “I think it is,” he says. “In fact, in several important markets including social media and web-based text messaging, Facebook is a dominant monopoly,” with more than 50 percent of the relevant market, and in some cases as much as 90 percent. The next question, Ghosh says, is whether the company has used this market power to cause broad social harm. The answer to this is also yes, he says. “I think we can make the case that Facebook has indeed caused harm in the three traditional areas where competition regulators look — namely, in market innovation; quality of service; and consumer pricing (i.e., the amount of data-and-attention monetized by the firm).” In each of these areas, says Ghosh, you could argue that Facebook has caused real harm not just to consumers but to society as a whole.
If both of those statements are true, Ghosh argues, then the only proper course of action is to regulate these companies in a variety of ways that reflect the different functions they serve, and to “treat them like the utilities they are.” There is a case to be made, he says, that the two companies may actually be what are called “natural monopolies,” in the sense that the market barriers that come from the network effects they rely on can be insurmountable for smaller companies. And both have then reinforced those monopolies by acquiring firms like Instagram and DoubleClick, which make the barriers higher. “This is not innovation any longer,” Ghosh says. “It is a pair of behemoths getting ever fatter at the expense of everyone else.”
Here’s more on Google, Facebook and democracy:
- Catch 22: In a recent discussion on Galley, author and freedom-of-information activist Cory Doctorow, whose latest book is called “How to Destroy Surveillance Capitalism,” said the problem with much of the technology regulation currently taking place, including laws against hate speech and other phenomena in a number of European countries, is that these regulations require massive moderation and oversight—and the cost of those solutions means that only huge platforms with dominant market positions can participate. “It’s not that I’m opposed to regulating Big Tech—quite the contrary!” he says. “It’s just that I think that regulations that have high compliance costs primarily work to benefit monopolies, who can afford the costs, and who can treat those costs as a moat that prevents new firms from entering the market.”
- Collective goods: Olivier Sylvain, a professor of law at Fordham University and director of the McGannon Center for Information Research, said during a recent Galley discussion that much of the danger in online networks is unseen by users directly, and therefore regulation is needed. “Regulators and legislators are better positioned to intervene when consumers cannot easily see the deep or long-term harms and costs,” he said. Jennifer King, director of privacy at Stanford Law School’s Center for Internet and Society, said that privacy is a collective good. “I often analogize this to pollution and recycling; we are all harmed by the net effects of the individual negative actions we take, whether it is throwing away another piece of plastic, or sharing or disclosing more personal information online,” she says. “Both problems require systemic solutions.”
- Too little, too late: Facebook recently made some changes to its rules aimed at clamping down on disinformation, including a ban on political ads containing misinformation. But as Steve Kovach pointed out, the changes don’t really do anything to stop anyone, including Donald Trump and his campaign, from posting misinformation on their personal or campaign pages, so long as the posts aren’t ads. The company added another new rule on Wednesday, saying it won’t allow any ads on the network that seek to delegitimize the outcome of an election. The new policy will prohibit any ads that call specific methods of voting inherently fraudulent or corrupt. The rule comes after repeated false claims by Donald Trump that voting by mail leads to election fraud.