
My name, Molly, evokes several cultural references. It’s code for a certain party drug which became widely known in part because of Miley Cyrus lyrics (strangers give me the wink-wink-nudge-nudge about it upon introduction). It’s also a favorite eponym for big, dumb golden retrievers, everyone’s fourth-favorite American Girl doll, and not much else—certainly nothing in the technology world. So when Chris Messina—formerly a developer advocate and UX designer at Google, part of Uber’s developer experience team, and the man who helped Twitter craft the hashtag—DM’d me about a new venture called Molly, yes, I initially clicked because of the name. What I got was a mini existential and ethical crisis, as well as a tiny window into what life is now like for anyone named “Alexa.”
Molly is essentially a question-and-answer bot. The app (also available at molly.com) allows users to ask questions of anyone via Molly. For instance, if I had a question for Mark Zuckerberg and he were a Molly user, I could find him on the app and pose whatever query I wanted. If it’s something he’d already answered before—either via Molly, which prompts users to answer random questions throughout the day, or via Product Hunt AMAs, which are fed into Molly—I would see the answer.

Eventually, though, it will use machine learning to provide answers that are not already in its system. By drawing on all the information the app can gather about you (upon creating a profile, you have the option to add social accounts from which Molly can learn), it will be able to predict an answer to a question about you. Molly would confirm its formulated answer with you—to affirm that yes, that’s what you would say—and then send the answer to whoever asked. For instance, if someone asked me via Molly whether I’d ever been to the Philippines—information I know I haven’t given to Molly—the app could gather from my connected social media profiles that yes, I had. It would craft the affirmative answer, send me a notification to confirm it was correct, and then send that response to whoever had asked the question. For now, the app simply poses more questions to users who receive a high volume of inbound questions—a volume that makes answering everything manually impossible.
At the moment, Messina says, Molly hasn’t moved into this answer-creating territory. And more crucially, he stresses that the idea is not to automate a person. The app is still building a user base and learning about its users in order to eventually respond on their behalf. He wants Molly to become a smarter bot that connects people instead of replacing them. Still, the idea of uploading the contents of our brains and our personalities into a digital database—even in an effort to save time and allow our actual selves to be offline more—shouldn’t go unexamined.
Do we want to turn ourselves into bots? What if a bot created to speak for you offered offensive responses? Or what if it delivered an answer that was wrong, or bad, or phrased strangely? This fear has justifiably slowed bot adoption, but it has also stifled the progress necessary to make these tools tenable.
So far, the mass introduction of bots (mainly via Facebook) has been disappointing. The ones that exist have been half-hearted and underwhelming, largely failing to save time or surface answers. About two and a half years after launching its AI assistant M, Facebook shut it down when it did not deliver on its promise of streamlining certain elements of the Facebook experience.
The New York Times recently wrote about the fraught existence of “conversational chatbots.” The Times explained that the best way to make these systems more sophisticated is to increase the number of people who use them, but that carries its own risk. Remember Microsoft’s Tay bot, the Holocaust denier? It was intended as a fun conversational bot on Twitter, learning from its dialogues with other users. Within 24 hours, the hateful rhetoric that circulates on the platform had turned Tay into a racist, misogynistic account. Tay was shuttered, and any hope that it would grow smarter and pave the way for better bots was dashed.
One of the biggest philosophical queries of the digital age is, if you upload yourself and your mind to a computer, are you still you? This is not lost on Molly, the app; she asked me this question. The loss of self with the rise of technology is an oft-discussed topic, particularly with regard to the singularity, and while Molly doesn’t directly address it, the app contributes to the existential crisis. I asked Irina Raicu, director of the Internet Ethics Program at Santa Clara University, what she thought.

“I don’t think that what this app purports to do constitutes ‘uploading yourself’ into a computer. You would still be you; the chatbot would be a simulacrum of you—a stagnant version of you that doesn’t change, doesn’t learn from interactions with others, doesn’t have spontaneity—or worse, might have the ability to ‘learn’ and adjust but in ways that would be different than the changes you would make yourself,” she told me via email.
So the plot of Transcendence coming to life isn’t Raicu’s concern with Molly; instead, she’s worried about how well the app will educate the people asking questions through it. Will Molly inform users that some of its answer text is crafted by an algorithm and merely confirmed by the person being asked? Will some users think they’re actually having an interaction with another living Molly member, or will they know it’s a Molly-assisted version? And will these interactions with computer-generated responses be as meaningful as those with a person on the other end?
Messina thinks all interactions on his platform can be worthwhile. Part of the inspiration for Molly, he explains, came from OkCupid. Messina says that the dating app’s rich algorithms pull information that users provide to find matches, but the success of the app is also its downfall. “If it succeeds, then you don’t need the product anymore,” he says. You find your match, the one that finally takes you off OkCupid, and that’s the end of your membership. Molly is different because it can function as a discovery engine for platonic relationships between people who otherwise wouldn’t be asking and answering each other’s questions.
He likens it to “an AI answering machine for the web,” wherein users can ask everything from “What’s the best restaurant in the Haight?” to “What do you think about the Fermi paradox?” to “Who wants to see Annihilation Friday?” Pose these queries on Facebook and you would likely get radio silence, responses from people you didn’t want to hear from, or simply answers you didn’t want. The idea is that Molly would know whom to pose these questions to and gather answers even if the people you wanted them from weren’t logging into the app to seek out your questions. In that sense, the app is more akin to a cross between an AIM away message and an answering machine—except instead of delivering a static message, it would be able to respond for us.
Messina envisions Molly as a time-saver and a way to curb some of the detrimental online behavior we’re starting to develop—something everyone is becoming increasingly wary of. “[We need to] give people back some of their time and attention,” he says. One way to do that is through conversational tools: Products like Google Home and Amazon’s Echo with Alexa are chief examples Messina points to. He’s right: As of December 2017, the Pew Research Center found that 46 percent of U.S. adults used voice assistant technology (most of them via smartphones), indicating increasing comfort with allowing certain types of AI into our daily lives. And Google’s auto-suggested email replies have proved to be a fairly intuitive tool. Messina identified a shift: We’re going from making technology easier for us to understand to making it easier for technology to understand us.
Over the past week, as I played around with Molly, answering questions and helping the app create a repository of my responses, the coincidence of sharing a name with it was not lost on me. It seems more likely than not that all of us will someday, to some degree, have to reckon with the amount of information we’ve handed over to internet platforms. But will we find ways to harness the information we’ve turned over? Perhaps. Messina hopes his tool will aid in allowing us to take back more of the real world for our authentic selves, offloading some quotidian work to these digitized databases of ourselves—in my case, to the other Molly. If the current rate of social web development is any indication, I’ll find out sooner rather than later.