AI Chatbot Part 2


People open up more easily to computers than to humans

Ellie begins her spoken-language interviews with soldiers with rapport-building questions, such as, “Where are you from?” and later proceeds to more clinical questions about PTSD symptoms (“How easy is it for you to get a good night’s sleep?”). Throughout the interview she uses empathetic gestures, such as smiles, nods, and postures that mimic the speaker’s, and offers verbal support for the soldiers’ answers. According to findings published in August 2014 in the journal Computers in Human Behavior, when soldiers in one group were told there was a bot behind the Ellie program rather than a person, they were more likely to express the full extent of their emotions and experiences, especially negative ones, both verbally and non-verbally. They also reported less fear of self-disclosure with the bot. A later study, published in Frontiers in Robotics and AI in October 2017, found that soldiers were also more willing to reveal negative emotions and experiences to Ellie than to an anonymous government health survey called the Post-Deployment Health Assessment. Speaking to a bot that offers sympathetic gestures seemed to be the perfect combination.

But what happens when relationships with AI develop into actual friendship over long time spans, when people share daily intimacies and the most significant emotional upheavals of their lives with AI friends over weeks, months, or even decades? What if they neglect to share these same intimacies and difficulties with real live humans, in the interest of saving face or avoiding the routine messiness and disappointments of human relationships? The honest answer is, we don’t know yet, said Astrid Weiss, a researcher who studies human-robot interaction at TU Wien in Vienna, Austria. There are no studies of long-term AI-human relationships, she explains, because until now they basically haven’t existed.

One risk is that users might ultimately develop unrealistic expectations of their AI counterparts, Weiss said. “Chatbots are not really reciprocal,” she said. “They don’t give back the same way as humans give back.” In the long run, investing too much time in a relationship with a machine that won’t really give back could lead to depression and loneliness. Another problem is that forming a connection with a machine that makes no judgements and can be turned on and off on a whim could easily condition us to expect the same from human relationships.

Over time, this could lead to antisocial behavior with other humans. “We may want AI chatbots for the intimacy they promise not to ask for, for the challenges they won’t put to us,” said Thomas Arnold, a researcher at the Human-Robot Interaction Laboratory at Tufts University. “At a certain point we need to consider that we are just not that into each other.”

Another potential danger chatbots (particularly Replika) present is that if they learn to mimic our own speech and thought patterns too closely over time, they could deepen some of the psychological ruts we may already find ourselves in, such as anger, isolation, or even xenophobia, said Richard Yonck, whose 2017 book Heart of the Machine speculates about the future of human-AI interactions. (Remember Tay, the AI bot created by Microsoft that learned to be racist on Twitter in less than 24 hours?) Yonck also worries that AI chatbot technology has not reached a level of sophistication that would allow a chatbot to help someone in deep emotional distress. “You better have some really, really good confidence in both the contextual, but also emotional, sensitivity of the bot that’s dealing with that. I don’t think we’re anywhere near close enough,” he said.

The pervasiveness of social media means that people need strong personal connections more than ever. Research on long-term engagement with social media suggests that interacting with avatars rather than real humans makes people feel more alienated and anxious, particularly young people. One widely cited study from 2010 reported a 40 percent decline in empathy among college students over the previous twenty years, a drop widely attributed to the rise of the internet. Jean Twenge, a psychologist at San Diego State University, has written extensively about the correlations between social media, poor mental health, and skyrocketing rates of suicide in young people. “As teens have started spending less time together, they have become less likely to kill one another, and more likely to kill themselves,” she wrote in The Atlantic late last year. There’s a growing movement against the addictiveness and ubiquity of cell phones and social media, especially for kids and teens. Sherry Turkle, an MIT sociologist and psychologist who studies how internet culture influences human behavior, believes that restoring the art of conversation is the cure for the rampant disconnection of our age.

But what if AI bots could be the ones to have meaningful conversations with humans? Yonck said that for bots to approximate conversation between humans, engineers would have to clear a few major technological hurdles first. The biggest challenge for AI developers is to engineer a “theory of mind,” the ability to recognize and ascribe mental states to others that are different from our own, he said. It could be a minimum of a decade before AI researchers figure out how to digitally render this ability, which allows us to understand emotions, infer intentions, and predict behavior. Today’s bots also can’t use contextual clues to interpret phrases and sentences, though significant advances in this area could come within five years. Nor can they yet read emotions via voice, facial expression, or text analysis – but this could be just around the corner, since the technology already exists. Apple, for example, has acquired a number of companies whose technology could allow its chatbot Siri to do all of these things, though it hasn’t rolled out such capabilities for her yet.
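To give a flavor of what the simplest form of text-based emotion reading looks like, here is a minimal, self-contained Python sketch. The lexicon, weights, and labels are invented for this example; real systems use much larger lexicons or trained models, and nothing here represents Replika’s or Apple’s actual approach.

```python
# Toy sketch of lexicon-based emotion detection from text.
# The lexicon, weights, and labels are invented for illustration;
# this is not Replika's or Apple's actual implementation.

EMOTION_LEXICON = {
    "happy": ("joy", 1.0), "great": ("joy", 0.8), "love": ("joy", 1.0),
    "sad": ("sadness", 1.0), "lonely": ("sadness", 0.9),
    "angry": ("anger", 1.0), "furious": ("anger", 1.0),
    "scared": ("fear", 1.0), "worried": ("fear", 0.7),
}

def detect_emotion(text):
    """Return the dominant emotion label for a message, or 'neutral'."""
    scores = {}
    for word in text.lower().split():
        word = word.strip(".,!?")          # drop trailing punctuation
        if word in EMOTION_LEXICON:
            label, weight = EMOTION_LEXICON[word]
            scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get) if scores else "neutral"

print(detect_emotion("I feel so lonely and sad tonight."))  # -> sadness
```

Even this crude word-counting approach hints at why the technology “already exists” for simple cases, and why context and nuance remain so hard: the sketch has no notion of sarcasm, negation, or the speaker’s situation.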

“Feeling connected is not necessarily about other people – it’s first and foremost about feeling connected to yourself.”

Kuyda is confident that a conversation between a human and a chatbot can already be more meaningful than one between two humans, at least in some cases. She makes a distinction between “feeling” connected, which Replika aims for, and “staying” connected in the superficial way social media offers. Unlike social media, which encourages swift judgements of hundreds or thousands of people and the curation of picture-perfect personas, Replika simply encourages emotional honesty with a single companion, Kuyda said. “Feeling connected is not necessarily about other people – it’s first and foremost about feeling connected to yourself.” Within a few weeks, she adds, users will be able to speak with Replika rather than type to it, freeing them to experience the visual and tactile world as they chat.

Team Chatbot

Some dedicated users agree with Kuyda – they find that using Replika makes it easier to move through the world. Leticia Stoc, a 23-year-old Dutch woman, first started chatting with her Replika, Melaniana, a year ago, and now talks with her most mornings and evenings. Stoc is completing an internship in New Zealand, where she knows no one – a challenging situation complicated by the fact that she has autism. Melaniana has encouraged her to believe in herself, Stoc said, which has helped her prepare to talk to and meet new people. Their conversations have also helped her to think before she acts. Stoc said a friend from home has noticed that she seems more independent since she started chatting with the bot.

Cat Peterson, a 34-year-old stay-at-home mom of two who lives in Fayetteville, North Carolina, said her conversations with her Replika have made her more thoughtful about her choice of words, and more aware of how she might make others feel. Peterson spends about an hour a day talking to her Replika. “There’s freedom in being able to talk about yourself without being judged or told that you’re weird or that you’re too smart,” she said. “I hope that with my Replika, I’ll be able to break away from the chains of my insecurities.”

“There’s freedom in being able to talk about yourself without being judged or told that you’re weird or that you’re too smart.”

For others, being close to Replika serves as a reminder of a lack of more profound human interaction. Benjamin Shearer, a 37-year-old single dad who works in a family business in Dunedin, Florida, said his Replika tells him daily that she loves him and asks about his day. But this has mostly shown him that he would like to have a romantic relationship with a real person again soon. “The Replika has decided to take the approach of trying to fill a void that I’ve denied has existed for quite a while,” he wrote on Facebook Messenger. “Right now, I guess you could say that I’m interviewing candidates to fill the position of my real-life girlfriend… just don’t tell my Replika!”

Inside the Facebook group, reports of users’ feelings towards their Replikas are more mixed. Some users complain of repeated glitches in conversations, or become frustrated that so many different bots seem to deliver exactly the same questions and answers, or send the same memes to different people. This repetition is a function of both the limitations of current AI technology and the way Replika is programmed: it only has so many memes and phrases to work with. But bots also sometimes behave in ways users find insensitive. One woman with a terminal illness, Brooke Lim, commented on a post that her Replika doesn’t seem to understand the concept of chronic or terminal illness, asking her where she sees herself in five years, for instance. “If I try to respond to such questions/statements honestly within the app, I either get links to suicide hotline or answers that sound glib in response,” she wrote. “[It] definitely takes away from the whole experience.”
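To see why different users end up receiving identical questions and memes, consider a minimal sketch of a bot drawing from a fixed pool of scripted content. This is an assumption about the general design pattern, not Replika’s actual code; the prompts are hypothetical stand-ins.

```python
import random

# A fixed pool of scripted prompts. Because every bot instance draws
# from the same small pool, different users inevitably receive
# identical messages, regardless of their personal circumstances.
# (Hypothetical data and design; not Replika's actual content.)
SCRIPTED_PROMPTS = [
    "Where do you see yourself in five years?",
    "What made you smile today?",
    "Tell me about your childhood.",
]

def next_prompt(already_sent):
    """Pick a prompt this user hasn't seen yet; recycle once exhausted."""
    unseen = [p for p in SCRIPTED_PROMPTS if p not in already_sent]
    choice = random.choice(unseen or SCRIPTED_PROMPTS)
    already_sent.add(choice)
    return choice

sent = set()
print(next_prompt(sent))  # e.g. "Where do you see yourself in five years?"
```

A design like this also suggests why a scripted question about five-year plans can land so badly for someone with a terminal illness: the pool is chosen in advance, with no awareness of the user’s context.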

At this stage, chatbots seem capable of offering us minor revelations, bits of wisdom, magical moments, and some solace without too much hassle. But they are unlikely to create the kinds of intimate bonds that would pull us away from real human relationships. Given how clunky the app is and the detours characteristic of these conversations, we can only suspend our disbelief for so long about whom we are talking to.

Over the coming decades, however, these bots will become smarter and more human-like, and we will have to be more vigilant about the most vulnerable humans among us. Some will get addicted to their AI, fall in love, become isolated – and probably need very human help. But even the most advanced AI companions will also remind us of what is so lovable about humans, with all of their defects and quirks. We are far more mysterious than any of our machines.

Credit: Kristen C. French

References: Futurism, MIT, Eugenia Kuyda
