
Don’t Be Mad at Facebook for Being Facebook

Third-party data collection is a feature, not a bug


On Friday, Facebook announced that it had suspended the accounts of a company called Strategic Communication Laboratories, including one of its businesses, Cambridge Analytica. It seemed like a Friday news dump, but one that was noticeably less vague than most of the prepared statements Facebook makes. In the post, the network acknowledged that in 2015, researcher Aleksandr Kogan used a Facebook app he made to pull information about users, which he then passed on to Cambridge Analytica, a voter-profiling technology company that was paid by the Trump campaign in the 2016 election.

Kogan sharing the information with Cambridge Analytica was against the platform’s rules; Facebook told them both to delete the data. It now appears they did not, and Facebook is investigating whether, and when, they ever did. Since Kogan’s app was first allowed to connect to Facebook, the platform has amended its policies, and today that app would not be approved, Facebook says. But the data collection itself is not the problem; Facebook intended for third-party developers to be able to collect this information. The issue is that Facebook didn’t build a platform that accounts for how third parties might use the data they access.

Facebook provided a more detailed explanation of the situation than we’ve come to expect from the social network, but the implications of its failure are not exactly laid out. Cambridge Analytica was described by The New York Times as being able to “influence [voter] behavior,” and Facebook has yet to say what it will do going forward, only that this was enabled by its old app-review system and that investigations are ongoing.

This tale of Facebook’s complicity in malfeasance is a familiar story, but this one has a new dimension: it potentially swayed at least one world event. Yet even this incident might not fundamentally change how Facebook operates.

The news has struck a chord with the media and tech press. Notorious Facebook defenders are acknowledging this failure, and there are suggestions this incident could finally affect the company’s bottom line, as its stock dipped on Monday morning. Maybe this time there will be real legal ramifications: politicians including Senators Ron Wyden and Amy Klobuchar have made statements calling for further investigation into the affair. Facebook executive Andrew Bosworth addressed the news in a lengthy post defending the company’s intentions but admitting it had screwed up. After years of muddied information that underscored the complications of Facebook’s vast network, we finally have clarity on a specific problem, with names, dates, and timestamps; we know who originally pulled user data, and how.

So, who is at fault? Kogan is, for sharing the data with a third party outside of his own app; but he’s not at fault for creating an app that capitalized on a faulty system. Facebook says Cambridge Analytica is in the wrong for not deleting the data as the firm said it would. Facebook also blames its users, as it asserts in its post about the incident, for giving consent to Kogan’s app. The app itself was not duplicitous; it was just a regular app, with regular app permissions. But security and privacy advocates point to Facebook for building a system that allows apps and app makers to access so much information. And that access is something we’ve known about all along and cared little about.

Characterizing what happened with Kogan and Cambridge Analytica as a hack or a data breach is incorrect. As Bosworth put it in his post, it is “a breach of trust,” and the way Facebook’s system was originally set up is what allowed this to happen. For years, we’ve been warned about the amount of information that third-party apps can access and what that access could result in. The fact that Facebook changed how app permissions work suggests that it was previously not rigorous enough—but Facebook will never be rigorous enough. That is how the system works: Facebook offers developers access for a price. Facebook built a machine that can be incredibly valuable for businesses capable of mining it, and data brokers will always find ways to leverage the platform.

To companies like Cambridge Analytica and others, the social network is a database for profiling, and whatever it allows them to do with that data, they will. The social network will continue to be reactive, not proactive, in responding to incidents like these, and it’s a broken process. Strengthening security after data is wrongfully accessed, or accessed for unsavory purposes, doesn’t mean we get that data back. Strengthened security measures can’t retroactively change an election or defuse a burgeoning conflict. It’s in Facebook’s interest to give developers and companies the access they crave until there’s resistance. Walking that line is a savvy strategy.

Now that the pushback has begun, Facebook is trying to downplay its role in swaying political elections, something it used to advertise. But it’s too late to walk this one back: the platform was built to give third-party developers data, and the realization that this can have serious offline consequences is coming too late and taking too long. A problem of this scale demands drastic, monumental platform changes: a wholesale reimagining of how Facebook works. But motivating the majority of users to care and demand that change remains hugely challenging. It’s starting to seem impossible.