In the debut ad for Amazon Go, the e-tailer’s new physical store concept, hip but environmentally conscious urbanites shovel perishable goods into their tote bags, then stroll out of the store without paying a cashier. How can such sorcery be achieved? “We used computer vision, deep-learning algorithms, and sensor fusion, much like you’d find in self-driving cars,” a narrator says in the ad. “We call it ‘just-walk-out technology.’”
OK, got it. Actually, wait … what? Tech companies have gotten so scarily adept at breezily dropping technical jargon over sedating indie rock guitar riffs that it’s easy to miss that the Amazon Go store is quite possibly a camera-and-microphone-laden surveillance box that will follow your every move and potentially gather demographic data about your shopping habits based on your skin color.
Nascent technologies are sometimes pitched with the M-word: magic. Amazon considered calling its new two-hour delivery service “Amazon Magic,” but opted for the less hubristic Prime Now. Apple, whose marketing and branding are still grounded in Steve Jobs’s reality-distortion field, at least hedged a bit by calling the iPhone 7 “practically magic.” The one firm bold enough to go all in on the descriptor is the augmented-reality company Magic Leap, and — surprise! — the startup’s headset may not be as magical as originally pitched. To use the word “magic” to describe a technology in 2016 is to ask for your comeuppance. That’s why Silicon Valley has traded in “magic” for a host of technical buzzwords that ultimately mean the same thing: Look in awe at the things I am able to achieve, and don’t dare ask me to reveal my secrets.
There are now plenty of technical words that do more to obscure a company’s goals and processes than define them — and we in the press often make this process easier by applying too many definitions to a single concept (see: the current debate over fake news, a propaganda war about a propaganda war). These are just a few of the terms that were trotted out too many times in 2016 and should be replaced or recontextualized in the year to come.
Algorithm. Definition: “A step-by-step procedure for solving a problem or accomplishing some end especially by a computer.”
Perhaps the most pernicious term in consumer technology today, the algorithm is granted an undue level of scientific reverence, though it typically influences user activity to maximize corporate aims. The most famous example is Facebook’s News Feed. The company states that the goal of its algorithm is to show users the “stories that are most relevant to them” in an unbiased manner. (“We don’t favor specific kinds of sources,” says Facebook.) But Facebook’s other, unstated goals include maximizing engagement and prodding users toward specific ad-ready Facebook products (get ready for more video). Sometimes these goals are aligned, but not always — hence, a false headline about the pope endorsing Donald Trump can go viral because it is tailored to perform well in the Facebook ecosystem, whether the story is factual or not. Though we don’t know the exact rules that govern tech companies’ algorithms, it’s easy to observe how their games are played. “Blame it on the algorithm” can no longer be a valid excuse when problems arise.
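The point about unstated goals can be made concrete. Below is a toy sketch of a feed-ranking algorithm — the function name, the stories, and the scoring weights are all invented for illustration, not drawn from Facebook’s actual system. Notice that nothing in the score asks whether a story is true; “relevance” is simply whatever the weights say it is.

```python
def rank_feed(stories, engagement_weight=2.0, recency_weight=1.0):
    """Return stories sorted by a made-up 'relevance' score.

    Each story is a dict with 'clicks' and 'hours_old' keys. The weights
    encode the platform's goals -- here, engagement above all else.
    """
    def score(story):
        return (engagement_weight * story["clicks"]
                - recency_weight * story["hours_old"])
    return sorted(stories, key=score, reverse=True)

stories = [
    {"title": "Sober policy analysis", "clicks": 10, "hours_old": 1},
    {"title": "False viral headline", "clicks": 500, "hours_old": 3},
]
ranked = rank_feed(stories)
# The false-but-viral story outranks the accurate one, because the
# scoring function rewards performance, not factuality.
```

Change the weights and the “most relevant” stories change with them — which is why “blame it on the algorithm” just relocates the responsibility to whoever chose the weights.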
Artificial intelligence. Definition: “The capability of a machine to imitate intelligent human behavior.”
Remember chatbots? When Facebook rolled out its own bot platform at F8 this year, there was a lot of talk about how programs powered by artificial intelligence could revolutionize the way brands and customers interact. Instead, they’re often either glorified keyword searches (the CNN bot) or largely scripted interactions that are about as entertaining as sparring with SmarterChild back in the AIM days. Artificial intelligence is an extremely old research field, and ongoing advances are being applied to subtly improve the products we use every day. But if a startup is using AI as the main pitch for its new idea, chances are it’s overhyped.
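A “glorified keyword search” of the kind described above fits in a dozen lines. This is a deliberately minimal sketch — the keyword table and replies are invented — but structurally it is how many scripted bots work: scan for a known word, return a canned response, no intelligence involved.

```python
# Hypothetical keyword table; real bots are built the same way,
# just with longer tables and more canned replies.
RESPONSES = {
    "headlines": "Here are today's top stories: ...",
    "weather": "Partly cloudy, with a chance of hype.",
}
FALLBACK = "Sorry, I didn't understand that."

def reply(message):
    """Return the canned response for the first known keyword found."""
    for keyword, response in RESPONSES.items():
        if keyword in message.lower():
            return response
    return FALLBACK
```

Anything outside the table gets the fallback — the conversational equivalent of sparring with SmarterChild.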
Augmented reality. Definition: “An enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device (as a smartphone camera).”
Virtual reality continues to carry a stigma thanks to things like Nintendo’s failed ’90s headset and that one scene in First Kid. But augmented reality — now that sounds like a winner! 2016 brought a nonstop stream of AR hype, from the breakout success of Pokémon Go, which overlays virtual Pikachus in real-world locales, to the first in-depth look at the secretive Magic Leap, which has been teasing a fantastical immersive headset for more than a year. But Pokémon, while still popular, is no longer a world-beating phenomenon. And Magic Leap’s tech is still far from being usable in a reasonably priced portable headset, according to a recent report by The Information. Perhaps it’s Microsoft that will spark the AR revolution — but not while its HoloLens costs $3,000. This is one trend that still isn’t advanced or affordable enough to reach its full potential.
Computer vision. If teaching computers to “hear” has been a driving goal of AI research in the last five years, teaching them to “see” will be a major focus in the coming five. Computer vision will be an important tool for making driverless cars safe, as the vehicles must be able to see and assess obstacles in their path. It’s also key to the Amazon Go concept, which could use high-definition cameras to assess and log changes to product inventory when a customer grabs a bag of chips. But the fancy lingo obscures the fact that this tech works by filling our world with more omniscient cameras. A security camera at a regular convenience store captures footage that is useless until a person (let’s say a cop) chooses to access it (let’s say after obtaining a subpoena). A camera at the Amazon store, using computer vision, could automatically transform a customer’s real-world activities into actionable data without human intervention, creating marketing profiles — or criminal ones. The companies that tout “computer vision” as a selling point are really talking about turning our physical movements into data points — so be careful whose camera lens you fall under.
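The camera-to-data pipeline described above can be sketched at toy scale: compare two frames and log which shelf region changed, no human in the loop. Everything here is an assumption for illustration — real systems use learned models on full video, not two-by-two grids of grayscale values — but the shape of the pipeline is the point: pixels in, “actionable data” out.

```python
def changed_regions(before, after, regions, threshold=30):
    """Return names of shelf regions whose pixels changed noticeably.

    `before`/`after` are 2D lists of grayscale values (0-255);
    `regions` maps a region name to a list of (row, col) coordinates.
    A region is flagged when its average per-pixel change exceeds
    `threshold` -- i.e., the event is logged with no human reviewing it.
    """
    flagged = []
    for name, pixels in regions.items():
        diff = sum(abs(after[r][c] - before[r][c]) for r, c in pixels)
        if diff / len(pixels) > threshold:
            flagged.append(name)
    return flagged

before = [[200, 200], [50, 50]]   # bright top row: chips on the shelf
after  = [[40, 45], [50, 50]]     # top row goes dark: the bag is gone
regions = {"chips_shelf": [(0, 0), (0, 1)], "soda_shelf": [(1, 0), (1, 1)]}
# changed_regions(before, after, regions) flags only "chips_shelf"
```

A conventional security camera stops at `before` and `after`; the automated version runs the comparison on every frame and files the result away.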
Machine learning. Definition: “The subfield of computer science that ‘gives computers the ability to learn without being explicitly programmed.’”
When a company wants to sound like it’s got something more cutting-edge than mere artificial intelligence, it’ll trot out machine learning. The concept of feeding computers mountains of data in hopes that the machines can learn from them independently (i.e., intuiting what a cat looks like) has actually been around for decades, but only in recent years has processing power advanced enough to make the technique practical. Machine learning is now used to power everything from Facebook’s News Feed to Google’s search results to the Uber app. But as with the word “algorithm,” it’s important not to conflate “machine learning” with “objectively correct,” because there are still human programmers choosing what data to feed into these systems. Last year, Google faced embarrassment when its object-recognition software mistakenly labeled black people as gorillas. We have a ways to go.
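The “learn from examples, not rules” idea can be shown with one of the oldest techniques in the field, a nearest-neighbor classifier. The coordinates and labels below are invented, and this is a sketch rather than how any production system works — but it makes the text’s caveat concrete: no rule for “cat” versus “dog” is ever written down, so whoever picks the training examples picks what the system can know.

```python
def nearest_neighbor(train, point):
    """Label `point` with the label of the closest training example.

    `train` is a list of ((x, y), label) pairs. The 'knowledge' lives
    entirely in the examples; biased examples mean biased answers.
    """
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda example: dist(example[0], point))[1]

train = [((0, 0), "cat"), ((0, 1), "cat"), ((5, 5), "dog")]
nearest_neighbor(train, (1, 0))   # -> "cat": its closest neighbor is a cat
```

Feed it only cats and it will call everything a cat — which is the small-scale version of how a system trained on unrepresentative photos ends up mislabeling people.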