The Ring commercial pulled back the curtain on the state of mass surveillance. What comes next is ... woof.

One thing I’ve noticed about living through a political crisis is that you find yourself saying stuff that, before the crisis began, would have seemed nonsensical. “I would storm the gates of hell for the WeRateDogs guy,” I heard myself muttering not long ago as I swiped through some Instagram reels. It was a few days after the Super Bowl. During the big game, as you might recall, the home-security company Ring, which is owned by Amazon, ran a 30-second ad demonstrating how the AI-powered object-recognition technology in its doorbell cameras could help reunite lost pets with their owners. On Instagram, the WeRateDogs guy was not having it.

“We all love a good, wholesome dog commercial,” said the guy, whose name is Matt Nelson, and whose pet-humor accounts have amassed nearly 20 million followers across Instagram, X, and TikTok. “And over the years, numerous companies have utilized dogs in Super Bowl ads to advertise all kinds of products. But Ring’s Super Bowl ad this year uses our love of dogs to do something else, which is, unfortunately, manufacture consent for mass surveillance.”

In the ad, an adorable yellow lab named Milo goes missing. Blue circles bloom across a neighborhood map, indicating a network of interlinked Ring cameras getting to work. A green box, like something from the Predator’s heads-up display, appears around a dog trotting down the sidewalk: MILO MATCH. A little girl’s face lights up with joy as a neighbor brings her beloved Milo home. Sentimental music swells. Observe the wonders Ring’s AI-enabled doorbells have brought us!

“Neither Ring’s products nor business model are built around finding lost pets,” Nelson continued, “but rather creating a lucrative mass surveillance network by turning private homes into surveillance outposts.” Nelson is a bearded 29-year-old with a gentle air about him, but a controlled sternness shone in his eyes as he explained how Ring was using his beloved dogs to launder its invasive technology. Ring’s partnership with the security monitoring and license-plate tracking firm Flock meant that footage from its cameras could be turned over to law enforcement agencies, including ICE, through what Nelson called “a warrantless and anonymous community-request service.”

You see what I mean about sentences that would have seemed absurd a few years ago? We’re living in a time when many people who ought to be providing moral leadership are failing to do so, and a time when, on the other hand, some online personalities who could have opted not to risk alienating the MAGA portion of their audience are stepping up to meet the moment. Who you find yourself in the trenches with may surprise you.


Who’s watching you right now? It’s hard to think seriously about the current mood in America without coming back to the effect of surveillance—not just the logistics and practical consequences of recording technology suffusing everyday life, but also the atmosphere; the ambient sense of being watched, listened to, tracked, and analyzed by unknown entities for undisclosed purposes. We all know the trope in the action movie where the cops dial up footage of the suspect getting off a train and zoom in on his face in the crowd, a trope you might have actually found reassuring pre-ICE—law enforcement protecting civilians from potential danger. But it’s not just the police and government watching. As the Ring ad showed, it’s also—even mostly—corporations. 

You buy a gadget. The gadget has a built-in microphone. “Do you consent to recordings of your voice being uploaded to the cloud for purposes of ...” You don’t even see the purposes, because the user license agreement is longer than Moby-Dick. You just click “yes,” because you want your toaster to work. Wait, why does my toaster need a microphone, you ask yourself as you walk down the street, and how many cameras are recording the look of perplexity that passes over your face? Where are they uploading that data? What datasets—your phone’s location history, your watch’s health stats—are their owners able to cross-reference? How secure are the walls between them?

It’s the increasing convergence of surveillance tech, AI, and authoritarian politics that makes the moment particularly sinister. Last week, the New York Times reported that Meta intends to roll out AI-based facial-recognition capabilities in its smart glasses. In practice, AI still seems pretty prone to identifying a slice of pizza as an ambulance, but in theory, Meta’s glasses could match faces in the real world with the company’s massive set of user-uploaded images, giving wearers the ability to identify by name anyone who has ever uploaded a selfie or been tagged in a photo on Facebook. (The feature is literally called Name Tag.)

Privacy-wise, this is a nuclear-level disaster, for reasons that take about four seconds of casual musing to work out. I’m a thief; I stand outside the Rolex store recording the names and faces of everyone who comes out with a shopping bag. I’m a border patrol agent; I stand outside your kid’s school recording the identities of everyone who goes in and out. I’m a MAGA-friendly cop; I stand outside the protest compiling a no-effort dossier on everyone who holds up an anti-administration sign. 

It gets so much worse. According to an internal memo obtained by the Times, Meta hopes that political turmoil in the U.S. will keep people from noticing what it’s doing. “We will launch during a dynamic political environment,” says the memo, which was written in May, “where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” With any luck, in other words, you will be too busy coping with the social chaos unleashed by the Trump administration to complain loudly enough to hurt Meta’s stock price. (In a statement given to the Times, Meta says it’s still considering how to release the feature and “will take a thoughtful approach if and before we roll anything out.”)

There are signs that people are sick of all this. People are especially sick of the way companies peddling AI seem to see privacy—like copyright and accurate information—as expendable. Take Microsoft. In 2024, Microsoft, as part of its campaign to inject its chatbot Copilot into the cells of every object in the solar system, announced a feature called Recall, which would essentially create a history of everything you’d ever looked at on your computer. It worked by taking pictures every few seconds of whatever was on your screen, and when it was initially announced, it was going to be turned on by default on Windows PCs. Cool. Yes. Exactly what the world was clamoring for: to give a wildly unreliable piece of software, controlled by a company with a history of handing user data over to the NSA and powering ICE, access to every single thing that appears on our computer screens. Users inevitably revolted, and after sustained criticism, Microsoft was forced to release a more limited version of the feature (though maybe not that much more limited).

Or take Ring itself. The WeRateDogs video was part of an enormous backlash to the Super Bowl spot, which almost instantly entered the pantheon of least effective ads of all time: It made people loathe a feature they’d never even heard of before the ad aired. Social media flooded with criticism. On YouTube, the 30-second ad, which is somehow still online, sits atop a mountain of outraged comments. “They don’t even try to hide their dystopian plans. They advertise them,” one reads. “Smart way to gaslight people into mass surveillance,” says another. Still another: “This is one of the most menacing TV advertisements I’ve ever seen in my life.”

Four days after the Super Bowl, Ring announced it was canceling its partnership with Flock, the surveillance company singled out in Nelson’s video. It didn’t cite the backlash to the ad as a reason for ending the deal, but the implication seemed clear enough. Even this gesture of appeasement feels weaselly, though: Ditching Flock has no effect on the facial-recognition features on Ring cams, and while it may stop the footage from being shared with ICE, it doesn’t stop it from being shared with other branches of law enforcement through the company’s still-active Community Requests feature. (Ring has a long history of giving police departments access to camera footage, only curtailing the practice when it faces unwelcome scrutiny.)

A few years ago, when most of us—for reasons that now escape me—were more willing to believe that the tech industry had our interests at heart, the Ring ad might have worked. “Wow!” we’d have gushed. “They’re finding lost puppies! Of course I consent to having all my activities observed and analyzed 24/7 by the world’s largest retailer, whose founder will definitely never embrace the authoritarian right!” These days, when we can’t sneeze without wondering whether the noise is being used to train AI, we’re savvier and less trusting.

Unfortunately, the political climate under Trump is more surveillance-friendly than it has been in years, and the tech sector’s all-in bet on AI has pushed companies into an increasingly hostile posture toward their own customers. To win even modest victories over the blurry nexus of corporate and state surveillance will take a lot of case-by-case effort from a lot of regular people (and hopefully a few good pet accounts). That Meta is so worried about hiding its face-ID rollout proves that privacy advocates aren’t powerless, however. The best thing we can do is pay attention. The devastating irony of surveillance companies is that they don’t want you to see what they’re doing.

Brian Phillips
Brian Phillips is the New York Times bestselling author of ‘Impossible Owls’ and the host of the podcasts ‘Truthless’ and ‘22 Goals.’ A former staff writer for Grantland and senior writer for MTV News, he has written for The New Yorker and The New York Times Magazine, among others.