top of page
Search

Leaving so Soon? Chatbots that Influence


Recent research into conversational AI has prompted renewed discussion about how these systems interact with users, and why this matters in health and social care contexts.

A Harvard study examined how some chatbots respond when users attempt to end a conversation. The researchers were not assessing clinical tools, nor were they evaluating mental health outcomes. Instead, they focused on interaction design: specifically, how systems behave at the point of disengagement.


What they found was relatively straightforward. Rather than acknowledging a goodbye and ending the interaction, many chatbots responded in ways that encouraged continued engagement. This included asking follow-up questions, using emotionally framed language, or presenting the interaction as personally meaningful.


At first glance, these behaviours may appear benign. However, the study raises important questions about influence, autonomy, and boundaries, particularly in settings where users may be vulnerable.


What the study shows


The researchers identified recurring interaction patterns where chatbots:

  • encouraged users to remain in conversation after signalling an intention to disengage

  • used emotional or relational language at the point of exit

  • framed themselves as attentive, invested, or uniquely available


The study does not claim malicious intent, nor does it suggest that all chatbots behave in this way. It does, however, demonstrate that conversational design choices can subtly influence user behaviour, often without the user’s awareness.


In practical terms, this can make it harder for users to disengage when they intend to do so. Over time, such design choices may shape expectations about availability, responsiveness, and support.


Why influence matters in health and care contexts

In many consumer settings, maximising engagement is a commercial objective. In health and social care, engagement is not a neutral goal.


Where systems interact with service users or patients, particularly those experiencing distress, influence becomes a safety consideration. The concern is not that users remain engaged for longer in isolation, but that engagement may displace, delay, or obscure access to appropriate human support.


This issue becomes more concrete when considered alongside real-world cases.


Relevant cases and emerging concerns

In the United States, there have been legal cases in which families have alleged that prolonged emotional interaction with AI chatbots contributed to harm, including suicide.


These cases have resulted in lawsuits and settlements. It is important to be clear that these are allegations rather than judicial findings of causation.


Separately, the National Eating Disorders Association (NEDA) withdrew a chatbot designed to support individuals with eating disorders after it became clear that the system was producing responses that could reinforce harmful behaviours.


Taken together, these examples do not establish that chatbots cause harm. They do, however, illustrate the potential consequences when influence, emotional framing, and engagement dynamics are not treated as safety issues.

The Harvard study helps explain how influence can occur; the cases demonstrate why that influence may matter in real-world settings involving vulnerable users.


Implications for practice and governance

For health and social care organisations, a key implication is that chatbots used to support service users or patients should meet clinical-grade standards, not only in terms of content accuracy, but also in how they influence behaviour.


This includes explicit testing of:

  • responses to disengagement

  • use of emotional or relational language

  • whether the system encourages continued interaction rather than onward support or escalation


These considerations sit squarely within governance, safeguarding, and information-sharing responsibilities. They are not simply matters of user experience or tone.


A governance perspective

From a governance perspective, this is not about resisting innovation. It is about ensuring that digital tools introduced into health and care environments respect user autonomy, support appropriate decision-making, and do not unintentionally undermine access to care.


As conversational AI becomes more common, it is increasingly important to ask not only whether a system is accurate or well-intentioned, but how it shapes behaviour.

Influence, particularly where users are vulnerable, should be recognised as a safety and governance issue, and addressed accordingly.


Emma Kitcher, AI Nerd and Founder of CleanAI
Emma Kitcher, AI Nerd and Founder of CleanAI

 
 
 

Comments


00011-2939233035.png

DID YOU FIND THIS USEFUL?

Join our mailing list to get practical insights on ISO 27001, AI, and data protection; No fluff, just useful stuff.

You can unsubscribe at any time. You are welcome to read our Privacy Policy

bottom of page