Below the Fold

When artificial intelligence becomes artificial intimacy
May 1, 2020

The Woebot chatbot - official illustration

With the world in self-isolating lockdown, people are relying more than ever on technology to maintain a range of relationships and social bonds. Some in the technology sector want to take this dependence even further. They envision a world where we don’t just use their products to talk to each other, but also to talk to technology as if it were another human—through advancements in artificial intelligence (AI). 

While interactive AI can provide real benefits in some circumstances—customer service chatbots are continually improving, and robots such as Paro the seal provide therapeutic companionship to isolated patients—the use of AI to replace human-to-human conversation in personal situations raises a host of ethical issues. 

For example, AI-based chatbots for mental health, such as Woebot (pictured) or Wysa, attempt to “converse” with users and guide them through cognitive behavioural therapy techniques. Google Duplex can make reservations and appointments on your behalf, in a hyper-realistic human voice, by phone. The messaging app Mei will advise you how to continue a conversation after purportedly analyzing the other person’s emotional state based on incoming texts. 

Even more intimately, the unreleased Assist app supplies users with fully composed comments, questions, invitations and “opening lines” based on their personal dating preferences, with the goal of making the user appear “curious,” “cute,” “witty” or “funny.” Assist users can set the app to send out messages in large automated batches to potential dating matches, who are none the wiser.

Not every ethical, legal or sociological concern about AI-assisted communication applies in every case. However, developers, users and regulators must keep four critical considerations in mind when assessing whether a particular service will ultimately benefit or harm human relationships and well-being in the long run.

First, lack of transparency and disclosure risks manipulation or exploitation. A business might use Google Duplex to cultivate a false human connection with individual customers and engender higher spending. In a dating context, as podcast host Laurie Segall noted in an interview with Assist founder Shane Mac, “When the machine [as secretly deployed by the other person]…gets some kind of vulnerability out of you, it feels like you’ve been violated in some way.”

Second, AI has shown notorious bias problems to date, for instance, in racist and sexist online search results, facial recognition technologies, and hiring and criminal justice algorithms. If AI is used to analyze and suggest text for social conversations, it may similarly perpetuate the biases of its developers and its users, or exclude the perspectives of entire groups outright. “We don’t really have it [Assist] for a woman,” Mac told Segall in the same interview.

In fact, the very existence of Assist and similar dating bots reflects a broader gender bias in the AI-based tech world: the devaluation of what is generally known as emotional labour, which is disproportionately performed by women in society. Consider the “problem” these apps set out to “solve” with AI: the need to spend time putting thought into communication, parsing your tone and that of others, engaging with another’s interests and words, assessing their emotional state and considering the impact of your own words. 

Women are expected to do all this “naturally,” or often learn to do so to avoid negative repercussions; for men, there’s an app for that. (To be clear, some individuals with autism or speech disabilities may benefit from similar technology, in which cases the tool is closer to a logistical aid to the act of expression rather than the source of the words or sentiment being expressed.)

Third, apps that analyze or store users’ sensitive personal information, like their health conditions, sexual interests and conversational patterns, automatically raise privacy, security and consent issues. The Mei app could disclose to a co-worker that you’re depressed, for example. 

Fourth, and perhaps most importantly, we must ask whether the human communication that AI would replace serves as an end in itself or as a means to some other objective.

Customers of banks or retail stores do not usually talk to the bot, or the human, for the sake of talking to them. Therapy chatbots are more complicated due to the vulnerability of those seeking help, but ultimately they too can be seen as a means to an end—that of improved mental health. 

But in the context of dating and relationships, meaningful human connection achieved through conversation is often an end in itself. Relying on software like the Assist app sets up those involved for failure or disappointment at best, and for manipulative deceit at worst. 

In short, using AI to replace human conversation where connecting through conversation is the end, not the means, lays the groundwork for friends, partners and others in intimate or significant relationships to outsource empathy, sensitivity, understanding and general emotional intelligence to a bot. In this transaction, users risk losing the ability—and the responsibility—to genuinely develop those skills in themselves. 

That is the point at which we risk becoming the most isolated of all. 


Cynthia Khoo is a technology and human rights lawyer and founder of Tekhnos Law, a sole practice digital rights law firm. 
