On Wednesday, Facebook CEO Mark Zuckerberg conducted a rare Q&A with journalists over the phone. Zuckerberg usually leaves mere minutes at the end of earnings calls to take questions, but in the wake of the Cambridge Analytica scandal and a sudden consumer concern over Facebook privacy and social networks’ access to users’ lives, he was pressed to speak. The CEO has been on a press tour since the news of the privacy failure broke, reiterating his company’s commitment to increasing transparency.
Despite that effort, the phone session failed to address how Facebook plans to fix many of the larger issues plaguing the social network. While the company weathers the Cambridge Analytica storm, it remains ill equipped to answer bigger questions, like whether Facebook is good for us, or to ease the fatigue of waiting for answers. The Q&A left us with a few lingering questions that Zuckerberg and Facebook will be forced to answer in the near future. —Molly McHugh
How much Facebook data did Cambridge Analytica actually use?
Victor Luckerson: No one seems willing to pin an exact number on how many users’ data was harvested by Cambridge Analytica. The news stories that kicked off Facebook’s current scandal said it was 50 million. On Wednesday, Facebook acknowledged the number could be as high as 87 million. But in a tweet during Zuckerberg’s press call, Cambridge Analytica said it licensed data from about 30 million users. So who’s telling the truth?
Facebook says the 87 million figure is a high-end estimate, based on the assumption that Cambridge Analytica got information on every single person who accessed the quiz app used for the data harvesting, as well as all of their friends. But Facebook’s knowledge is limited to theoretical figures. “We don’t actually know how many people’s information Kogan actually got,” Zuckerberg said, referencing the researcher who developed the quiz app. “We don’t know what he sold to Cambridge Analytica. And we don’t know today what they have in their system.”
Cambridge Analytica has claimed multiple times that the data it acquired was not used in the 2016 U.S. presidential election and that it no longer has any of it. Facebook plans to conduct a forensic audit to see if the company is telling the truth, after an investigation by the British government is completed. And while the U.S. government is already investigating Facebook, there have been calls for the Justice Department and Federal Election Commission to investigate the political firm as well.
How, exactly, will Facebook determine when a user or group of users is a bad actor?
Kate Knibbs: Facebook announced advances in its initiative to rid the platform of content from the Internet Research Agency, a Russia-based propaganda unit. “The IRA has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people who use Facebook, including before, during and after the 2016 US presidential elections. It’s why we don’t want them on Facebook,” chief security officer Alex Stamos wrote in a blog post detailing the company’s push to remove IRA content, classifying the IRA as a “bad actor.”
Zuckerberg fielded questions about the company’s larger struggle to protect users, both from bad actors like the IRA and from data-scraping efforts like the one undertaken by Cambridge Analytica. On the call, Zuckerberg mentioned that Facebook had shuttered a phone-number look-up feature after other bad actors had taken advantage of it. When Washington Post journalist Tony Romm asked for more information on who those other bad actors were, Zuckerberg did not specify.
This raises an important question: Does Facebook have criteria in place to identify a “bad actor”? And will it ever be transparent about how it classifies people and organizations as malicious? In the case of the Internet Research Agency, investigative journalism efforts like Adrian Chen’s 2015 New York Times Magazine exposé laid out the IRA’s malicious intent quite clearly. What isn’t clear is whether Facebook plans to rely on third parties, like journalists, to pinpoint foreign propaganda efforts, or whether it will employ an in-house team to do so. It’s also not clear whether the focus will be solely on actors outside of the United States, or whether Facebook will also monitor domestic organizations set on spreading false information and exacerbating political polarization.
Will Facebook’s response to European privacy laws change privacy elsewhere?
McHugh: One confusing moment during the call came when Zuckerberg said that Facebook would make privacy controls the same across the platform for everyone everywhere in the world. That statement came one day after a Reuters story suggested otherwise. Facebook will soon deploy stricter privacy controls in Europe to comply with the General Data Protection Regulation (GDPR), and according to Reuters, Zuckerberg said Facebook would apply some version of those protections globally, but not an identical one.
“I think regulations like the GDPR are very positive, and I was somewhat surprised by yesterday’s Reuters story that ran on this, because the reporter asked me if I was planning on, if we were planning on running the controls for GDPR across the world and my answer was yes,” Zuckerberg said in today’s call. He went on to say it likely wouldn’t exist in “the same format,” though. So which is it? The likely outcome is that users outside of Europe will receive a less stringent version of the GDPR-mandated controls rather than the controls themselves. This is one question we still need definitively answered.
What impact is this controversy having on Facebook usage?
Luckerson: Have several weeks of negative Facebook headlines and a #DeleteFacebook hashtag actually caused people to abandon the social network? “I don’t think there’s been any meaningful impact that we’ve observed,” Zuckerberg said.
That’s not a huge shock. According to the social media analytics firm Keyhole, #DeleteFacebook was tweeted about 364,000 times in the month of March, when the current controversy was cresting. #DeleteUber racked up 412,000 tweets in early 2017 when that company was going through its own PR nightmare, even though Uber has a much smaller user base. For now, the threat to leave Facebook seems to be a hollow one for most people. We’ll know for sure what impact Cambridge Analytica has had on Facebook’s bottom line when the company releases its quarterly earnings at the end of April. But even if users aren’t yet jumping ship, Zuckerberg’s repeated attempts to quell this crisis—and the fact that he still has to testify before Congress—indicate that Facebook’s headaches may only be getting started.
Will Facebook start regulating or limiting political ad targeting?
McHugh: Zuckerberg also discussed the role apps have played in data scraping and how Cambridge Analytica exploited Facebook’s system. And while the CEO talked about increased limitations on app developers, he did not discuss whether Facebook will proactively limit or regulate political ad targeting.
Facebook’s underlying problem is that it facilitates ad targeting and then decides who has access to those tools. The platform has pulled back on touting itself as a tool for political campaigns, but we still don’t know how it’ll protect elections from manipulation or even how Facebook’s political advertising business operates. One central question remains: How did malicious actors use this data politically, and what precisely is Facebook going to do to stop it from happening again? A first step would be to regulate political advertising far more heavily.