Step 1: participate in a meme. Step 2: worry about how your identity might be sold or stolen. It’s the new American pastime!

On Tuesday night, my phone lit up with images of the elderly. A friend in one of my group texts had downloaded a viral photo-editing app called FaceApp and run each member’s photo through its “Old” filter. For a brief moment, we convened in horror and fascination over our future selves. “I look grumpy,” one member of the chat observed. “Age has been unkind to me,” another concluded. I spent at least 10 minutes examining the lines of my face that the app’s artificial intelligence had generated, obsessing over the loose, leathery skin that awaited me at 80.

The next morning, we all woke up with infosec hangovers. “I’m now in the ‘paranoid that it’s part of surveillance/training neural nets’ phase of interest,” said the same friend who had innocently noted his graying beard the evening before. The rest of the internet seemed to be coming to a similar realization. By Wednesday, everyone from noted old-person cosplayer Heidi Klum to Drake had dutifully run their images through the software and posted them to social media, doing their part to feed the voracious content mill. But that’s around the time that people began to wonder where this digital crystal ball came from, and what it wanted in exchange for this brief peek into the future. 

Scholars of gimmicky photo-editing software know that this is not our first FaceApp rodeo. The app was released in 2017 amid a rush of face-altering services like the Chinese-owned Meitu and drew attention for its ability to quickly alter the gender, hairstyle, and age of people’s portraits. Not long after its debut, FaceApp faced backlash for a “Hot” filter that lightened users’ skin tones. Soon after, it came under fire for a tool that turned people into racist caricatures. Nevertheless, it has amassed more than 80 million users.

With each scandal, the app has set off suspicious side conversations along the lines of: Who are these people, and what are they doing with all our dumb selfies, anyway? FaceApp is owned by a company called Wireless Lab, which is run by a handful of developers in St. Petersburg, Russia. Those origins are of little comfort to digital security hawks, who warned on Twitter that, as with many other apps, the company’s broad terms of service give it license to use your photos, name, username, and likeness for any purpose, including advertisements. Nor did they reassure the understandably jittery Democratic National Committee, which sent a security alert to 2020 presidential campaigns on Wednesday afternoon, warning them not to use FaceApp. “This app allows users to perform different transformations on photos of people, such as aging the person in the picture,” DNC chief security officer Bob Lord wrote in an email. “Unfortunately, this novelty is not without risk: FaceApp was developed by Russians. … It’s not clear at this point what the privacy risks are, but what is clear is that the benefits of avoiding the app outweigh the risks.” (A DNC press representative confirmed the message.)

As it turns out, concerns about FaceApp’s predatory nature have been largely exaggerated. The company’s CEO, Yaroslav Goncharov, told The Washington Post that, though its research and development team is based in Russia, user data is not transferred into the country and “most images” are removed from FaceApp’s servers within 48 hours. Security researcher Baptiste Robert, who goes by the pseudonym Elliot Alderson, took it upon himself to confirm these details for Forbes. He found that the app transferred only submitted user photos (not entire photo rolls) to company servers, and that those servers were mostly hosted by Amazon and Google in the U.S. “While Russian intelligence or police agencies could demand FaceApp hand over data if they believed it was lawful, they’d have a considerably harder time getting that information from Amazon in the U.S.,” concluded security reporter Thomas Brewster.

It’s a relief to know that the online gag we all turned to for temporary entertainment does not appear to be part of some larger scheme to undermine our democracy. (Though some have argued that late capitalism is a concerning enough scheme on its own.) But the ease with which these face filters go viral, paired with the excessive permissions they seek, marks a new era of meme-propelled information harvesting. The subsequent panic over their terms of service agreements shows that we, as a society, have become riddled with anxiety over predatory online schemes. Soon after the Chinese-owned app Meitu went viral for its anime face filters in 2017, people realized that it sought an uncomfortable amount of tracking information, including access to the GPS on their phones, and deleted it en masse. The following year, when Google released an app that matched people’s selfies to works of art, some privacy experts surmised that the company may have also been using those photos to better train its facial recognition AI. In January 2019, Facebook users posted side-by-side photos of themselves from 2008 and 2018 for something called the #10YearChallenge. But it was only after users had tallied their likes that we considered how the massive social network might be able to leverage the information we willingly provided to it for its own opaque purposes. Selfie-centric, App Store–sanctioned data phishing is now a stressful fact of our online existence.

It’s the same old story, now filtered through advanced software. “Persuading people to do things under the auspices of thinking they’re doing something else has been around forever,” said Crane Hassold, a senior director of threat research at the email security company Agari. “It’s all social engineering. You look back, and the best example from thousands of years ago is the Trojan horse.” Hassold spent 11 years in the FBI analyzing the motivations of criminals both online and off, and now researches the habits of email phishing groups. He traces the same sort of flattery the Greeks used to infiltrate the independent city of Troy to modern-day information breaches like Facebook’s Cambridge Analytica scandal, in which BuzzFeed-style quizzes lured users into sharing personal information with the company. “It’s all about basic human curiosity, and it’s very hard to override that,” Hassold said. “There’s a reason why ‘curiosity killed the cat’ is a very overused saying. Because as human beings, we want to know about ourselves, and if someone is offering information about how we can better understand who we are, we’ll usually take it.”

Traditionally, phishing is an explicitly illegal form of fraud that attempts to goad people into offering personal data via email correspondence. But as third-party companies have boldly extended their reach into users’ personal phone data, a more sanctioned version of this technique has emerged. Apps have found a healthy business in selling user information to third parties. The New York Times reported last year that at least 75 companies receive anonymous, precise location data from apps, and several of those companies claim to track up to 200 million mobile devices in the United States alone. As Facebook has demonstrated, social networks tend to downplay their involvement in such practices or straight-up lie about them.

Though we don’t know for sure whether the FaceApps or Meitus of the world have used our images for those purposes, we do know that they have developed extremely effective schemes for amassing a giant database of our photos and, depending on the permissions they request, much more. In many ways, these startups are not unlike the Nigerian cybercriminal organization that Hassold recently profiled in a report. Over the span of 10 years, the group grew from a single sole proprietorship focused on Craigslist scams into an operation of more than 35 people whose work has expanded from romance scams to enterprise-focused and then government-focused phishing attacks. Though its activities are far more explicitly criminal, its structure and general business goals have much in common with the startups that bait us into downloading their apps to participate in a single, fleeting trend.

By meeting us in a digital comfort zone, somewhere in between a pop culture movement and a platform-sanctioned exchange, face-filtering apps enjoy a level of trust that moves far beyond email or phone calls. “If you’re on a third-party website, like a third-party Chinese website and you see some apps on there, there’s probably going to be a little bit of hesitation to trust what those are,” Hassold said. “But when you’re on something like Facebook, or when you download an app from Google Play or iTunes or the App Store, you trust that what’s on there is going to be legitimate.”

Hassold notes that the apparent approval of celebrities makes these apps all the more dangerous. As a New York Giants fan, he even happened upon the FaceApp filter applied to Saquon Barkley on the team’s official feed. “It’s easy for me to say, if you don’t need it, don’t download it, but that’s not really going to happen,” he said. Like so many times before, predatory capitalism and our own insatiable egos have led us into yet another digital trap. If we’ve learned anything from the past four years, it’s that what’s group-chat fodder today might be a democracy-undermining scheme tomorrow.
