The new feature is fun, but what if it steers you wrong?

Amid a handful of announcements at the I/O developers’ conference Wednesday, Google introduced a feature that allows you to Shazam the world. The artificial-intelligence-powered software is integrated into Siri’s much sharper competitor, Google Assistant, and is straightforward: Point your phone’s camera at a storefront, and it’ll surface its cumulative online rating. Point it at a concert poster, and your screen will pull up tickets. Point it at a flower, and it will school you on the species.

The feature was a crowd favorite. At no other point in the company’s two-hour keynote did the audience cheer so loudly as when engineering VP Scott Huffman aimed the camera at a router’s network information and automatically logged onto the Wi-Fi. Sure, Google knows its audience. But those developers were excited for a bigger reason than making it easier to connect to the internet: If Google’s camera-based phone searches work, they have the potential to change how we interact with the world. The tool could very well streamline our curiosities the same way Google’s search tool did temperature conversions and celebrity heights. We will never look at a city skyline and struggle to remember the name of a building again. We will be able to identify the difference between pappardelle and tagliatelle immediately. We can stop and smell the flowers and learn their complicated Latin names, too.

With every cheery demonstration of powerful technology there also comes a foreboding question of its real-life applications. Since it became a verb, Google has played an often controversial role in reinforcing the perceptions of both society and its own woefully uniform staff. A year ago BuzzFeed compiled a comprehensive list of recent offenses built into the company’s software, many of which were surfaced by black users. When one woman did an image search for “beautiful dreadlocks,” the query brought up mostly photos of white people with the hairstyle. Another woman discovered that the search “unprofessional hairstyles for work” yielded images of black women while “professional hairstyles for work” brought up images of white women. In other cases, Google’s algorithm has unintentionally adopted the blatantly racist beliefs of its vast customer base. In 2015, users discovered that searching for “n*gga house” in Google Maps directed users to the White House. That same year, a tool that automatically categorizes images in the Google Photos app tagged a black user and his friend as gorillas, a particularly egregious error considering that comparison is often used by white supremacists as a deliberately racist insult. Even though the company quickly updated the app and profusely apologized, it was clear that the developer team that tested out the app’s beta version was not particularly diverse.

Technology companies exhibited racial bias in their products long before Google came around. Camera companies like Kodak sold film that photographed white skin better than black skin, and companies like Nikon have also shown racial bias toward Caucasian features in their facial-recognition technology. But as AI continues to infiltrate nearly every aspect of our digital lives, the sexist and racist tidbits tucked into machine learning add up to something more harmful: widely used systems that don’t have the same ability as humans to correct or learn from their cultural insensitivities. As Joanna Bryson, a computer scientist and coauthor of a recent study on the gender and racial biases of AI systems, put it: “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas.” We have already seen that danger play out in even simpler “smart” software systems. A 2016 investigation from ProPublica found that significant racial disparities were embedded within the risk-assessment software used in courts across the country, resulting in more severe sentencing for black offenders than for white ones — real, life-ruining consequences.

Google Assistant’s point-and-search feature has no role in determining prison sentences, but it is another step in the increasingly intimate relationship between people and their computers. If the feature catches on, it could very well be the way thousands, maybe millions, of people instantly contextualize the people, places, cultures, and foods around them. The immediacy of those results may be particularly influential, encouraging our brains to absorb whatever tidbits of information Google’s AI deems relevant without the contextual nuance that the real world often demands.

The potential to gloss over these details was hinted at during Wednesday’s demo. During his presentation, Huffman used a photo of a restaurant sign written in Japanese to demonstrate the feature’s impressive translation abilities. Within seconds, the screen rendered the characters into English that read “Octopus dumplings 6 pieces 130 yen,” and, once prompted, brought up photos of the dish.

It was a translation that seemed weird to Dami Lee, a social media manager at The Verge. The image that appeared was not one she would ever describe as an “octopus dumpling,” but rather, takoyaki — a doughy ball of minced octopus that’s often sold as street food in Japan.

“It’s like calling a croissant ‘rolled up bread,’” she told me via Twitter DM. “Maybe it’s like a pet peeve for me, but the only way to understand a new food is to call it by its name. Like the difference between seeing ‘bibimbap’ on the menu or having it translated to like ‘mixed rice with vegetables.’”

Huffman’s presentation was an innocent way to show off the app’s helpful machine-learning capabilities on the go — something that would be valuable to any clueless English-speaking tourist. But, even for something as simple as identifying a delicious snack, the AI managed to project a clunky “otherness” onto a non-American food. My bet is that it will be far from Google Assistant’s worst offense.
