

Don’t Blame the Poop Roomba

On the “poopocalypse,” self-driving cars, and taking responsibility in an automated world

Ringer illustration

Before you get too disappointed, right off the bat, I’d like to be clear that Poop Roomba is not the name of a scatologically fixated hardcore band. (Although it could be!) It’s an Arkansas-based dad’s Facebook post about how his Roomba scooted puppy poop all over his home. Kinda like Chewbacca Mom, except Poop Roomba Dad. His post stirred up more stories of “poopocalypses” wrought by the combo of vacuum robot + dog droppings. “Quite honestly, we see this a lot,” a spokesperson from iRobot, the company that makes Roomba, told The Guardian. As a follow-up, The Guardian asked iRobot if it planned to introduce any poop-detection technology to its automated vacuum product.

I find this question ridiculous, even more ridiculous than waking up in a house decorated with pet feces — even more ridiculous than actual scatologically fixated hardcore band names. (To be fair to Poop Roomba Dad, he wasn’t blaming iRobot for his troubles.) There are other horror stories about Roombas smearing cat vomit all over — should Roomba develop cat-vomit-detection technology as well? Should Roomba develop a chili-spill detector to avoid a “pintopocalypse”?

In this case, there was no design failure on iRobot’s part. There are many cases of gadget disasters that are the manufacturer’s fault. For instance, your iPhone explodes. Or your Samsung Galaxy explodes. Or your hoverboard explodes. When this happens, the company making it likely screwed up somewhere along the way. But this time, the little round vacuum worked as advertised. A wayward asshole — in this case, a dog’s asshole — messed things up. The lesson of Poop Roomba: When technology fails us, it’s often because some asshole messed things up.

But the relationship between automated consumer technology and responsibility is often way more convoluted than the Poop Roomba incident makes it seem, and the question of who is responsible for the monumental failures that can happen when fallible humans use fallible devices is often not as funny as “Who can we blame for a Poop Roomba?”

Here’s the setup to a joke: What do a Roomba and a self-driving car have in common?

The appeal of a Roomba is that you don’t have to vacuum. The robot takes care of the chore. That’s also the appeal of self-driving cars — the monotony of paying attention to the road is lifted. You can be a laid-back passenger.

And the joke’s on you, dear reader, because I want to stop talking about HILARIOUS things (like Poop Roombas) and talk about something more serious: the murky question of responsibility and self-driving vehicles.

Google releases monthly reports about its self-driving cars, and, for a long time, none of its reported crashes were caused by its fleet. It was always a human driver befouling the self-driving vehicle with a human error, and this was a point of pride. This changed last February, when Google admitted that one of its self-driving cars had, in fact, caused a crash: while attempting to avoid sandbags in the road, it sideswiped a bus. The algorithm was not fail-safe.

“This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision,” Google’s crash report reads. But it goes on to note that the test driver in the car made the same bad assumption as the machine. “Our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.”

If Google only bears some of the responsibility, who bears the rest? Is it the bus driver’s fault? The test driver? Google’s report noted that, in the future, its cars would “more deeply understand” how buses operate compared to other vehicles. But this crash suggests something that should be fairly obvious: There will probably never be a self-driving car that is 100 percent impervious to mistakes.

More recently, Tesla’s Model S, which uses semiautomated driving technology, was involved in its first fatal crash in Florida this June, prompting an investigation. Tesla has admitted that its Autopilot system didn’t register the truck that the Model S struck as an obstacle, but company officials insisted that the system is safe. In their eyes, the driver hadn’t been properly aware of the risks, and that’s why the crash happened. Later, federal investigators found that the driver had been speeding.

With self-driving cars, the question of responsibility — the question of who is the asshole here? — has much more weight than a pile of poop. Is it the driver’s responsibility to carefully assess the risks of driving something like the Tesla Model S, released in imperfect beta? Or is it the company’s responsibility to ensure that it is properly educating its customers about the risks, and to assume liability for damages?

Some car manufacturers have decided that it’s best to take on the responsibility. Volvo, for instance, has accepted “full liability” for crashes that involve its self-driving cars. This means even if a Volvo driver is jacking off to Harry Potter and doesn’t bother taking any corrective action when there’s an unexpected obstacle in the road, it’s still Volvo’s fault if it crashes. But that’s an expensive proposition, and it’s unlikely that all manufacturers will embrace it. The Atlantic suggested that, rather than have companies or owners take responsibility, it’d make more sense to give robots personhood so that the individual machines can take the blame.

A little self-driving robot mows into poop, and it’s a funny anecdote on Facebook. But when the stakes are higher and the question of responsibility is murkier, we’ll get darker documents: police reports, lawsuits, hospital bills. Poop Roomba is one of the funniest phrases I’ve heard in a while, and also a wry little mascot for our flawed, messy relationship with technology.