Mark Zuckerberg Is Very Sorry. And So Are We. But Can We Trust Him?

The CEO’s congressional testimony showed how Facebook’s problems may be too sprawling for legislators to fully comprehend. But Zuckerberg’s performance highlighted an aw-shucks approach that is wearing thin.


For 14 years, Facebook has been playing a dangerous game, building a communication platform, and eventually an advertising empire, entirely on trust. The value exchange—users’ personal data given to Facebook to pay for a popular, stable social network—was always implied. But it was spelled out clearly only in privacy menus buried far below the website’s cheery blue exterior. This has always made Facebook seem ambiently suspicious on its best day and deeply duplicitous on its worst. Verizon might charge a monthly fee to reach your loved ones, and NBC might insert annoying advertisements in the middle of your entertainment, but those companies never claimed to be your friends.

The disconnect between Facebook’s relentless idealism and the reality that up to 87 million users’ data was improperly shared with the political analytics firm Cambridge Analytica helps explain, in part, how CEO Mark Zuckerberg wound up testifying before a joint hearing of the U.S. Senate Judiciary and Commerce committees on Tuesday. This was a confrontation nearly a year and a half in the making, ever since Zuckerberg claimed the idea that fake news on Facebook influenced the 2016 presidential election was “pretty crazy.” Since then the company has divulged its data malpractice in fits and starts—not just the Cambridge Analytica lapse, but also the 126 million Americans exposed to content from Russian propagandists. Zuckerberg was so wrong in his assessment of Facebook’s political impact—a point he’s hammered home in apology after apology—that Congress decided he was in need of a nice public flogging.

But if Tuesday’s hearing—the first of two days of congressional testimony for the CEO—proved anything, it’s that Facebook’s scope and influence are beyond the grasp of many of the people best empowered to regulate it. Democratic Senator Maria Cantwell pursued a confused line of questioning about the secretive data-mining platform Palantir, never quite linking Facebook with the company. Fellow Democrat Brian Schatz insisted that Facebook must be secretly serving people Black Panther ads by surveilling “emailing in WhatsApp,” even though Zuckerberg said the app has end-to-end encryption. Senator Deb Fischer, a Republican, asked how many data categories Facebook stores, a question Zuckerberg couldn’t quite process. Though each senator was given five minutes to grill the Facebook CEO, much of the time was spent by the lawmakers struggling to articulate cogent questions, and Zuckerberg patiently repeating basic facts about Facebook’s core functionality and business model.

These questions were a preview of the challenges Congress will face in trying to draft legislation that makes Facebook a fundamentally healthier platform. Facebook has become embroiled in a seemingly endless list of controversies, many of which received airtime during Zuckerberg’s five-hour testimony. Cambridge Analytica and Russia might be the most prominent, but senators also raised Facebook’s discriminatory housing ads, its hoovering up of users’ call histories on Android phones, and its questionable ethics launching Messenger Kids. Facebook is involved in too many data-harvesting activities on too many platforms in too many countries for the government to keep track of.

The sheer scope of Facebook’s transgressions may paradoxically protect the company from a true reckoning. Zuckerberg, for the most part, slithered around questions of whether laws should grant users more control over their data. He said he was open to regulation, and offered to help draft it. And he pointed out that newly announced Facebook initiatives, like expanded data access for independent researchers and a new verification system for prominent pages, would address many of the issues that plagued the company in the 2016 election. The lingering problems should be solved by artificial intelligence, sooner or later.

Zuckerberg’s pitch is the same as it’s always been: trust me. As CEO, board chairman, and controlling shareholder of Facebook, he answers to no one, but he works hard to ensure that his unlimited power appears nonthreatening. If Zuckerberg were like the arrogant Bill Gates of ’90s Microsoft, he’d invite vengeful punishment; if he were like the passionate Steve Jobs of resurgent Apple, he’d invite mockery. But he effuses nondescript neutrality so effectively that it’s hard to get offended by anything he says, even when it’s “I don’t know if we need a law” ensuring children’s privacy. Zuckerberg has bored us into complacency for a generation now, and as lawmakers filtered out of the chamber during his marathon testimony, it was clear he’d effectively done it again.

The big hypocrisy of Facebook, though—and the reason it will always be a powerful but vulnerable company—is that Zuckerberg has never afforded his users the same trust. A Facebook that trusted its users would let them control which posts appeared in their News Feed, rather than decreeing rankings via algorithm. A Facebook that trusted its users would transparently explain how their web browsing activity was being monetized, rather than insisting that relevant ads are their own reward. And a Facebook that trusted its users would proactively tell them when their data had been compromised, rather than begging for forgiveness later.

Democratic Senator Kamala Harris keyed in on that last point during her pointed grilling of Zuckerberg. Facebook found out about the Cambridge Analytica trove of private data in 2015, and Zuckerberg has repeatedly said his company should have done more to ensure that the firm deleted the Facebook data it had acquired and was banned from the platform. But he’s said less about why Facebook didn’t inform users back then about the shady company that had violated their privacy. “This relates to the issue of transparency and the relationship of trust—informing the user about what you know in terms of how their personal information has been misused,” Harris said.

Zuckerberg apologized, again. “In retrospect, I think we clearly view it as a mistake that we didn’t inform people,” he said, and he acknowledged that Facebook had in fact made a strategic decision not to inform users of the breach. Under the threat of user revolt, government regulation, and perjury charges, citizens could be confident that Zuckerberg was telling the truth. For a few hours, the relationship between the lowly user and the vast, unknowable Facebook machine felt equitable. It’s a shame more legislators weren’t able to take advantage of the opportunity.