Facebook’s F8 Developer Conference ended much like it began: mildly. During the Day 1 opening keynote, CEO Mark Zuckerberg devoted almost as much time to the company’s recent privacy failures as he did to announcing new features and reaffirming why Facebook will continue to build. Usually Zuckerberg and Co. deliver a few updates about user growth and activity, and then move along to what’s new, but this year they needed to remind us about Facebook’s good side—not dissimilar from the message in the company’s new TV spot. “From now on, Facebook will do more to keep you safe and protect your privacy, so we can all get back to what made Facebook good in the first place: friends,” the commercial—which was played for the audience just before the first day’s keynote began—promises. “Because when this place does what it was built for, we all get a little closer.”
The decision to emphasize Facebook’s purpose, as opposed to focusing on its bold plans to change the digital and analog worlds, is valid. Ignoring Cambridge Analytica or the sudden, increased friction between the company and its users wasn’t an option, and thus much of the F8 conference was full of feel-good stories and reassurances. The about-face was especially obvious during Wednesday morning’s keynote. The Day 2 talk is typically when Facebook presents its moonshot projects. Last year the company unveiled futuristic plans to connect brains to computers and shared updates on the Aquila drone project. This year, instead, we reviewed Facebook’s contributions to the world of open-source artificial intelligence research and how its language-translation technology is improving, among other important, yet decidedly un-flashy, updates.
One of those improvements discussed on Day 2 was Facebook’s AI ethics work. Facebook data scientist Isabel Kloumann took the stage Wednesday morning, joking that she wanted to keep it quick since she was seven months pregnant. She then spoke about the responsibilities of raising children: You have to give them a strong set of values—just like you have to with AI systems, she said. “We have to decide: ‘How should this AI treat people?’” Kloumann explained. “And we need to support people without compromising privacy.”
In the past year, Facebook created a tool called Fairness Flow, which is meant to help eliminate unconscious bias that might slip into different algorithms Facebook engineers are building. It was first used for Facebook’s own internal job board, Kloumann said. “We wanted to ensure job recommendations weren’t biased against some groups over others.” From there, the tool expanded to the rest of Facebook. “Now we’re working to scale the Fairness Flow to evaluate the personal and societal implications of every product that we build. As a step in that direction, we’ve integrated the Fairness Flow into our internal machine-learning platform,” she said. “Any engineer can plug in to this technology and evaluate their algorithms for bias.”
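Facebook hasn’t published Fairness Flow’s internals, but the idea Kloumann describes, checking an algorithm’s outcomes across groups, can be sketched in a few lines. The function names, the demographic-parity metric, and the four-fifths threshold below are illustrative assumptions for a job-recommendation scenario like the one she mentions, not the actual tool:

```python
# Illustrative sketch of a bias check on a model's outputs: compare the rate
# of positive outcomes (e.g., a job being recommended) across demographic
# groups. This is a generic demographic-parity check, not Facebook's
# Fairness Flow API.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions (1s) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Ratio of the lowest to the highest selection rate; 1.0 means parity."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Example: whether a job was recommended (1) or not (0) for users in two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)  # {"a": 0.75, "b": 0.25}
gap = parity_gap(rates)                 # 0.25 / 0.75 ≈ 0.33
flagged = gap < 0.8                     # "four-fifths" heuristic: flag the model
```

In this toy data, group "a" is recommended jobs three times as often as group "b", so the check flags the model for a human review. Real fairness audits use many such metrics, since no single number captures every notion of bias.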
Minutes later, during Facebook’s VR improvement demo, it became clear how badly the social network will need a strong code of AI ethics. Audience members watched two first-person videos of someone exploring a living room; one was real footage, we were told, and one was VR. Could we tell the difference? No, not really; both rooms looked incredibly real. Eventually everyone seemed to pick out the real one, and we were correct. But it was difficult to tell, amazingly so. We also saw improvements to avatars, which transformed from cartoonish to far more human. These aren’t Bitmoji impressions of people; these are lifelike re-creations that map faces so well it’s easy to imagine a future in which VR simulations are confused for actual people.
Facebook said its goal is to make the VR world “indistinguishable” from the real one. It looks like the company will be able to do just that, and those worlds will be built in part thanks to AI and neural networks. How Facebook constructs the new world is more important than ever. Algorithms have always been subject to implicit bias and prejudice, and if the social network is going to build these virtual worlds, they’ll end up reflecting the ideals of the people building them (or rather, the people building the AI networks and algorithms that build them).
It’s possible, even likely, that this was a topic Facebook planned to present at F8 all along. But ethics have certainly become more relevant to the company than it could have imagined a month ago, and recent news has had a sobering effect on this year’s conference. Usually, ethics would have been a footnote at F8; maybe it’s for the best that this year they were a focus.