It took a long time to convince Americans to wear seat belts. Though a 1968 federal law required that the safety devices be installed in the front seats of all passenger vehicles, only around 15 percent of drivers were using them in 1983. It took a mixture of regulation, via a raft of state laws criminalizing the lack of seat belt use, and technology, via that annoying beep that harangues any unbuckled passenger in modern vehicles, to push seat belt use up to 90.1 percent by 2016.
Some researchers and lawmakers believe the world’s largest internet companies must undergo a similar transformation. The ways in which platforms like Google, Facebook, and Twitter can rapidly disseminate misinformation and allow bad actors to manipulate users are coming to be recognized as a comparable, albeit digital, danger. “We’re kind of like in Detroit 1963 here, where people are like, ‘Seat belts? What seat belts?’” says Michael Caulfield, the director of blended and networked learning at Washington State University Vancouver.
We’ve been anxious for years over the way social media influences how we think, but evidence that this power could be used for nefarious political purposes has suddenly galvanized lawmakers to do something about it. The newly heightened scrutiny kicked off when Facebook revealed that Russian entities had been buying up Facebook ads as a means of influencing political opinion in the United States since 2015. The ads reached 10 million people before and after the 2016 presidential election and constituted $100,000 in advertising spending. Twitter found that more than 200 accounts from Russian actors (mostly bots) were inundating users with political messages. Russian agents also bought ads on Google, the search giant disclosed this week.
The blowback from these revelations has been particularly acute because for so long, tech giants have tried to glide blamelessly above the muck of politics. If the internet had an influence on political discourse, it was long portrayed only as a democratizing force for good. In 2012, Facebook coauthored a study showing that its Election Day reminders boosted voter turnout and took credit for populist global uprisings that were partially spurred via social media. But in the 2016 election cycle, as researchers and journalists questioned how misinformation might have influenced voters, the company was much less eager to talk politics. Facebook CEO Mark Zuckerberg’s quip two days after the election that it was “pretty crazy” to think that his company played a role in Donald Trump’s victory now reads as either poorly executed damage control, an inability to conceptualize the power of the platform he created, or an indifference to the societal ramifications Facebook has wrought (he apologized for the comment last month).
The dissonance between the power that tech giants wield and the neutral way they frame their missions has created a vacuum of leadership that political players on both sides of the aisle are now eager to fill with regulations. Democratic lawmakers in Washington are pushing a bill that would force Facebook to disclose more information about political ads bought on the platform, much as TV networks are already required to do. And Steve Bannon reportedly proposed regulating Facebook and Google like utilities during his tenure as Trump’s chief strategist. This is to say nothing of the climate in Europe and Asia, where government officials have been attempting to rein in the power of Silicon Valley for years.
Exactly what form tech regulation would take is anyone’s guess. Facebook is trying to preempt legal action by voluntarily sharing the Russian ads it discovered with Congress, but more detailed knowledge of how the company’s systems can be gamed may, in fact, strengthen calls for government oversight. According to The New York Times, a company with ties to the Russian government enacted a carefully coordinated misinformation campaign by creating Facebook pages tied to specific hot-button issues, from gun rights to Black Lives Matter, then used them to seed misleading info and discontent. The ploy played on Facebook’s tendency to elevate polarizing, identity-driven content above somber facts. “This is a problem that goes well beyond Russia,” U.S. Representative Adam Schiff (D.-Calif.) said in a September interview. “It’s far broader, and we have to ask, is this in our society’s interest to create these informational silos?”
Tech companies could be regulated in a variety of ways. The Federal Trade Commission, in an attempt to lessen the influence of a single tech giant, could bring an antitrust suit against a company that had concentrated too much market power in a way that harms consumers (in 2012, the agency investigated Google for anticompetitive behavior but did not pursue a case, though staffers recommended doing so). Congress could vote to regulate the internet giants like utilities, creating oversight boards that would decide content disputes or compel companies to disclose more of their intellectual property. In Europe, citizens will next year be granted a “right to explanation,” which stipulates that a company must explain to a user how it arrived at an algorithmic decision involving their data.
All of these approaches come with risks. The companies argue that opening up the black boxes of their secretive algorithms would make them more vulnerable to competitors and even more susceptible to gaming by both foreign powers and run-of-the-mill spammers. Granting the government more power to view and make decisions about citizens’ private personal data, especially when the use of that data could be changed at the whims of a new executive regime, could also further blur the lines between private and public life. “If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad,” Facebook chief security officer Alex Stamos wrote in a widely circulated Twitter thread over the weekend. “Likewise if your call for data to be protected from governments is based upon who the person being protected is.”
An intermediate step between the current light regulatory touch and utility-like control would be opening up more industry data to social science researchers to get a deeper understanding of how these platforms operate. Citing competitive or privacy concerns, tech companies often hoard the large-scale datasets needed to draw conclusions about human behavior online, opting instead to release their own self-conducted research or to commission studies to advance specific agendas. “You can’t have a public policy debate where the company we’d be thinking about regulating or influencing controls all the data about what’s going on with their product,” Caulfield says. “That’s like letting the tobacco companies have all the health data of smokers. It doesn’t make any sense.”
Tech companies may have their first Big Tobacco moment very soon. Executives from Facebook, Google, and Twitter are expected to testify at a House Intelligence Committee meeting later this month and a Senate Intelligence Committee meeting in November. They’ll arrive not as emblems of a hopeful future, but as businesses under a cloud of suspicion and fear about how their growing power will manifest itself next. After years of growing in size and influence at breakneck speed, it may finally be time for Silicon Valley to buckle up.