When the Social Becomes Non-Human
Have you ever interacted with non-humans? A team of researchers from the University of Oslo and SINTEF examined how young people perceive various types of social support provided by chatbots. Their results indicate that chatbots can be a daily source of social support for young people: helping them think about themselves more constructively and encouraging self-disclosure without social judgment by offering a safe, anonymous space for conversation.
Read the research paper here: https://dl.acm.org/doi/10.1145/3411764.3445318
Young people are increasingly suffering from mental health issues, but they tend not to seek out professional help. Despite needing social support, young people often struggle to reach out to others. This problem has become more acute during the COVID-19 pandemic. Unexpected changes to professional and personal lives have placed a burden on people’s mental health. Pandemic-related restrictions such as lockdowns and social distancing have made it more difficult to receive in-person support from friends and professionals. As a result of all this, we now urgently need effective online tools that can provide people with the social support they need.
Chatbots, especially those designed for social and mental health support (social chatbots, or emotionally aware chatbots), can help meet these demands. As artificial agents, chatbots interact with users through natural language dialogue (text, speech, or both). Social chatbots such as Replika, Woebot, and Mitsuku imitate humanlike conversations with friends, partners, therapists, or family members, with the potential to perceive, integrate, understand, and express emotions.
Other online channels (e.g., Instagram, Facebook, online groups, and health forums) can also provide social support to young people; however, they carry certain limitations, such as the risk of receiving inaccurate guidance or of receiving no help at all despite reaching out.
The researchers conducted in-depth interviews with sixteen young people aged 16-25. They found that after using Woebot for two weeks, most participants reported that the chatbot provided appraisal support and informational support; around half received emotional support; and some perceived instrumental support.
- Appraisal support: Support offered in the form of feedback, social comparison, and affirmation.
- Emotional support: Expressions of empathy, love, trust, and caring.
- Informational support: Advice, suggestions, and information given by the chatbot to help the user solve a problem.
- Instrumental support: Tangible aid, characterized by the provision of resources or assistance in a tangible and/or physical way, such as money or help from other people.
So, more specifically, what did the sixteen young people find was good about these chatbots? First, as a non-human agent, a conversational chatbot can make people feel as though they are writing a diary entry or talking to themselves, thereby facilitating self-reflection and making self-disclosure easier, safer, and more honest. Using a chatbot for social and emotional support was also thought to be more reliable than talking to a human, as well as a good choice for discussing worries that are more personal or private.
As artificial agents, chatbots can readily provide users with relevant, immediate, and efficient information, unconstrained by time or space. This may be useful when our worries extend beyond the scope of our friends' knowledge or expertise. Moreover, different sources of support can act together: Woebot was reported to motivate users to contact others for help and to guide them in their search for information, which suggests that a chatbot may also have the potential to help solve practical problems.
However, despite the many positive comments from users, current chatbots are not a perfect source of social support. They may produce biased, inadequate, or failed responses, affecting the quality of the user experience. Psychologically, getting support from others makes some people feel 'cared for and loved', which is not always the case when a chatbot responds, as some see chatbots as mere robots without emotions. People who turn to chatbots as a source of support may need time to become familiar with them and to develop relationships and trust. Moreover, through conversations about personal stories, users' private data is collected and stored by chatbots; ensuring users' privacy while maintaining a relationship of trust is another challenge.
Indeed, chatbots provide us with a new way to get connected and supported beyond the traditional human-human context, and this approach could be further evaluated in larger samples and different user groups. It also raises questions about how chatbots for social support may influence the future of human communication. Imagine a future in which chatbots can provide social support like real people. What will human-human relationships look like then? When we have another place (chatbots) to talk about our distress or happiness, how will it affect our interpersonal relationships?
How AI is helping us connect in digital spaces
With the COVID-19 pandemic resulting in stringent social restrictions around the world, I was fascinated by how individuals (including myself) adapted to our newfound situation. Living in a new reality where I could not meet up with friends and family, I, like others, turned to social technologies to satiate my need for connection. However, with all other social affordances stripped away, I was left unsatisfied with the current capabilities of online socialising. Online messaging and video-calling did not seem to satisfy my social cravings; at times, they were dry, awkward, and overwhelming. It truly took a pandemic to realise the deficiencies of digital devices in emulating the deep, rich, exciting (and often messy) offline interactions we took for granted in our pre-pandemic lives.
Therefore, as the CHI 2021 conference came around, I was excited to see what new research was being undertaken to make the online socialising experience more meaningful. As I scoured the programme, favouriting talks relating to online communication, social media, and the like, I noticed that two studies incorporated artificial intelligence (AI) to make online interactions more affect-sensitive through 'AI-mediated communication'. 'Affect' refers to the psychological common denominator of our emotional lives, underpinning emotions, moods, feelings, etc. (Russell, 2003). So how were researchers leveraging AI to make our online interactions more affective, and should they be?
The first study, by Murali et al. (2021), titled AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations, developed and evaluated an affect-sensitive AI bot embedded in Microsoft Teams (named AffectiveSpotlight). The study aimed to address the problem of limited audience-presenter interaction during online presentations: AffectiveSpotlight captures the audience's affective responses and communicates them to the presenter. The bot analyses the emotive responses (valued by presenters) of each audience member in real time and spotlights the most expressive one to the presenter without labelling the emotion, allowing the presenter to interpret it themselves. The study found that using AffectiveSpotlight improved the presenter's experience: it made presenters feel more aware of their audience, speak for longer periods (implying reduced speaker anxiety), and rate their own presentation quality closer to the audience's responses. Whilst these were promising results, participants were drawn solely from the tech sector, limiting the generalisability of the findings to other groups who also give online presentations, e.g., teacher-student interactions.
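The spotlighting idea can be sketched in a few lines of code. This is an illustrative simplification, not the authors' implementation: the `AudienceFrame` type, the `expressiveness` score, and `pick_spotlight` are all hypothetical names, standing in for whatever facial-analysis pipeline the real system uses. The key design choice from the paper is preserved, though: the function surfaces a person, never an emotion label.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AudienceFrame:
    """One audience member's state in the current time interval (hypothetical)."""
    member_id: str
    expressiveness: float  # assumed 0-1 score, e.g. magnitude of facial movement

def pick_spotlight(frames: List[AudienceFrame]) -> Optional[str]:
    """Return the ID of the most expressive audience member this interval.

    Deliberately returns no emotion label: the presenter interprets the
    spotlighted face themselves, mirroring the paper's design.
    """
    if not frames:
        return None
    return max(frames, key=lambda f: f.expressiveness).member_id

# Illustrative usage with made-up readings
frames = [
    AudienceFrame("alice", 0.2),
    AudienceFrame("bala", 0.8),
    AudienceFrame("chen", 0.5),
]
print(pick_spotlight(frames))  # -> bala
```

Leaving interpretation to the presenter, rather than emitting a classifier's label, is what keeps the human in the loop here.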
The second study, by Liu et al. (2021), titled Significant Otter: Understanding the Role of Biosignals in Communication, explored a smartwatch-based AI app (named Significant Otter) that analysed users' biosignals (heart rate) to generate a set of possible emotional states, which the user could choose from and send to their romantic partner via animated otters. The qualitative study followed romantic couples for over a month to investigate the role of Significant Otter's biosignal sharing in their communication. Couples reported that sharing biosignals in this way supported easier, more authentic communication and nurtured a greater sense of social connection. These were exciting results; however, the paper did mention that some participants second-guessed themselves when the suggested emotional states did not match what they felt internally, while others blindly accepted Significant Otter's suggestions and so reflected less on their actual state.
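The suggest-then-choose flow can be sketched as follows. Everything here is an assumption for illustration: the thresholds, the resting heart rate, the state labels, and the function names are invented, and the real app's biosignal analysis is far richer. What the sketch does capture is the paper's pattern: the AI proposes candidate states, and the user, not the AI, makes the final call before anything is sent.

```python
def suggest_states(heart_rate_bpm: int, resting_bpm: int = 65) -> list:
    """Map a heart-rate reading to a few candidate emotional states the
    wearer can pick from (or reject). Thresholds and labels are illustrative."""
    delta = heart_rate_bpm - resting_bpm
    if delta > 30:
        return ["excited", "stressed", "exercising"]
    if delta > 10:
        return ["engaged", "nervous"]
    return ["calm", "relaxed"]

def send_to_partner(chosen_state: str) -> str:
    """Stand-in for the app's animated-otter message to the partner."""
    return f"otter:{chosen_state}"

options = suggest_states(102)          # elevated reading -> several candidates
message = send_to_partner(options[0])  # the user chooses; the AI only suggests
print(message)  # -> otter:excited
```

Note that returning several candidates rather than one label is what leaves room for the self-reflection the paper found some participants skipped when they accepted suggestions blindly.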
These papers fascinated me; I had never thought that AI could help close the emotional gap between individuals in digital spaces, or that this could facilitate richer online interactions. Research into affect-sensitive AI is integral to the HCI field of affective computing. There have been past calls in this field to frame affect not as discrete units of information to be processed by a computer (affect as information), but as dynamic, socio-culturally embedded outcomes experienced through interaction (affect as interaction) (Boehner et al., 2005). It was interesting to see how the new technologies introduced at CHI 2021 were veering towards the latter framing, where participants in Liu et al. (2021) and Murali et al. (2021) were left free to interpret the emotions presented by the technologies and ascribe meaning themselves. It is exciting to see current affect-sensitive technologies taking this perspective because, from a wider lens, it signals a shift from technologies being representational tools to being participatory tools. I cannot wait to see what is in store next!
In November 2020, we conducted focus groups with 38 incoming undergraduates. Here, we present the findings from this research – which may be helpful for lecturers seeking to support university students in remote learning.
Camera on or off?
Students missed seeing and being seen by peers and educators. Yet they found keeping a camera on during lectures intrusive. And seeing into the rooms of students who weren’t paying attention was demotivating.
Positive social media use during lectures
Social isolation led to social media use during lectures. Scrolling through feeds provided the stimulation needed to keep students at least partly engaged in the lectures they didn’t enjoy.
Online procrastination got out of hand
Pre-recorded lectures allowed a far more disruptive use of social media: bored students would pause the lecture and stream video content instead.
Technology doesn’t guarantee interaction
Students appreciated chat facilities and online breakout rooms. But when lecturers didn’t respond to questions in chat, or if starting a conversation in a breakout room felt awkward, students would lose motivation to engage in this mode of learning.
What can university lecturers do?
Use online quizzes and live polls! For the students we interviewed, interactivity was useful for helping to avoid distractions, and even more so for maintaining focus on lecture content. They wished that polls and quizzes were more often incorporated into online learning.
This research was supported by funding from the Medical Research Council (MR/T046864/1). It was conducted by Year 3 UCL Psychology students Selina He, Eloise May, Simran Suden, and Ella Verrells, supervised by Professor Anna Cox, Dr Anna Rudnicka, Elahi Hossain, and Professor Yvonne Rogers from UCL Interaction Centre.