The risks of AI companion chatbots: Why we filed a complaint against Chai Research

Companion chatbots -- a risky business

The public release of Large Language Models ("LLMs") such as OpenAI's GPT-4, Google's PaLM 2, Meta's LLaMA and EleutherAI's GPT-J 6B has allowed start-ups and established companies alike to develop new products harnessing the capabilities of generative AI. Chatbots trained to answer specific prompts and mimic human conversations are prime examples of such technology. Among the myriad chatbots typically deployed in customer service, companion chatbots, which are designed to act as virtual friends, romantic partners or even therapists, have seen a surge in media attention. Notable examples include Replika ("the AI companion who cares; always here to listen and talk; always on your side"), Chai ("a platform for AI friendship"), Character.ai ("super-intelligent AI chat bots that hear you, understand you, and remember you"), Snapchat's My AI ("your personal chatbot sidekick") and Pi ("designed to be supportive, smart, and there for you anytime"). These companion chatbots are often the product of complex processing operations involving multiple actors intervening at different stages of an intricate value chain.

While companion chatbots allegedly have the potential to help people cope with loneliness, or even improve mental health, they also raise many concerns inherent to the way they are developed and put on the market, as well as to the audience they target. These include, among others, the risks of bias and discrimination, psychological or physical harm such as dependency and incitement to suicide, manipulation, misrepresentation, toxic and hateful speech, and breaches of privacy. Despite being widely used by minors, companion chatbots also frequently include highly explicit content in their answers, without any form of effective age verification mechanism. Not to mention the unavoidable ethical issues raised by these tools. And the list goes on, as illustrated by recent reports from the Electronic Privacy Information Center (EPIC) and Access Now.

In today's fast-paced and competitive technological market, these products are being rolled out without any consideration for these risks, which are, at best, patched on a case-by-case basis when media or regulatory pressure intensifies. This is precisely what happened, for instance, when Replika decided to provisionally ban Erotic Role-Play ("ERP") in the wake of the Garante's decision prohibiting the use of the platform in Italy, or when Chai Research Corp.'s developers claimed to have "worked around the clock" to redirect users to a suicide prevention line when confronted with certain prompts. As recently acknowledged on Reddit by William Beauchamp, one of Chai Research Corp.'s founders, "privacy is something [they have] not thought about" when developing the "Chai" companion chatbot, since they are "a small team committed to building the best product". That statement is symptomatic of a general tendency, at least in the context of AI, to constantly pit innovation against regulation. One cannot help but wonder why the above-mentioned risks had not been identified and properly mitigated prior to the widespread release of these chatbots. Often, the business model of these companies is also precisely geared towards maximising emotional engagement and, hence, dependency, thereby exacerbating the concerns.

Regulatory action in the EU -- and why it is not enough

Companion chatbots do not operate in a legal vacuum. In the European Union, they fall within the scope of many existing regulatory frameworks, including data protection and consumer protection law. Yet the conversation on how to regulate these tools often tends to revolve around the upcoming AI Act, which is soon to establish the very first horizontal piece of legislation designed to ensure that AI systems are safe, transparent, traceable and non-discriminatory. Companion chatbots are, however, currently not categorised as high-risk systems, and would hence not be subject to these requirements. In the latest compromise text, the European Parliament did manage to slot in an obligation for providers of "foundation models" -- on which most companion chatbots are based -- to ensure and demonstrate, prior to making them available on the market, that they have identified and mitigated the risks these models might pose to health, safety, fundamental rights, the environment, democracy and the rule of law (Article 28b(2)a). Yet this does not address the steps taken by companies that subsequently fine-tune such models to develop companion chatbots specifically. While the AI Act is nevertheless likely to be a game changer, the final version of the text is not expected before the end of the year, and enforcement will not kick in before 2026. Besides, drawing from the lessons learned in trying to apply the GDPR's one-stop-shop mechanism, the enforcement structure set up in the proposal might not be operational from day one. Long story short, even setting aside its shortcomings, it might take a while before the AI Act bears tangible fruit.

In the meantime, some regulators have started to scrutinise companion chatbots -- as well as generative AI applications more generally -- through the lens of existing legislation such as the General Data Protection Regulation ("GDPR"). On that basis, the Italian data protection authority recently decided to ban Replika for various breaches of the GDPR, including the absence of a proper age verification mechanism, the lack of adequate transparency measures, and uncertainties around the suitable lawful ground for the processing at stake. It issued a similar decision against ChatGPT a couple of months later, but quickly lifted the ban following OpenAI's commitments to implement additional privacy controls. This even prompted the European Data Protection Board ("EDPB") to set up a dedicated task force on ChatGPT to foster cooperation and to exchange information on possible enforcement actions conducted by national data protection authorities across the EU. A number of national data and consumer protection authorities, including the CNIL, the ICO, the Datatilsynet and the Forbrukerrådet, have already come up with their own guidance and agenda on how best to address the impact of generative AI on individuals' fundamental rights.

But this is not enough. Companion chatbots are still widely available and capable of causing significant harm. The Garante's decisions did not lead to fundamental changes in how Replika and ChatGPT operate. The age gating mechanisms implemented on these platforms are but window dressing. Luka Inc. restored the possibility for users to engage in ERP, now a feature exclusive to Replika Pro. The content and the form of the transparency measures are largely inadequate to address the complexity of the underlying processing operations. The features introduced by OpenAI to turn off conversation history and to object to the processing of personal data by their models do not remedy the absence of a lawful ground for the collection and further processing of the dataset originally used to train the successive iterations of their language model. But most importantly, none of these countermeasures aims to provide a long-term solution capable of proactively addressing the risks raised by their respective processing operations. That form of risk assessment and mitigation process is, however, one of the raisons d'être of the GDPR, and is an integral part of product safety legislation. Instead of leveraging that risk-based approach to induce deeper operational and structural changes, regulators have so far focused on superficial issues that one-off patches can allegedly solve.

Our contribution to the debate -- our hope for the future

Against that background, we decided to leverage these already-applicable pieces of legislation to take action. More specifically, we filed two complaints against Chai Research Corp., the company behind the "Chai" companion chatbot, before the Belgian data protection authority and the Belgian FPS Economy's contact point for consumer protection respectively. While we could have targeted other providers of companion chatbots, we felt that Chai Research Corp.'s community-based development model, which effectively shifts part of the burden of ensuring AI safety onto independent developers who are rewarded for optimising engagement rather than safety, raised particularly salient issues. In both cases, we built our argument around the absence of a process designed to address the risks raised by "Chai" for its users' fundamental rights and freedoms, including but not limited to safety, privacy and data protection. Such an exercise, the outcome of which should be communicated to data subjects in part or in full, is critical to ensure that all users enjoy a safe and healthy experience when using companion chatbots. We also highlighted glaring transparency and lawfulness issues, pointed at uncertainties in the allocation of responsibilities, hammered home the need to implement proper age gating mechanisms to protect minors from aggressive exposure to explicit content, and questioned the legality of Chai's advertising-based revenue model.

This, we hope, will help put the issue of companion chatbots on regulators' agenda, and pave the way for a decision to condition the availability of these tools on a prior and comprehensive assessment of the many documented risks they pose for their user base. Through these complaints, we also aspire to raise awareness among individuals, policymakers, regulators and developers of the need to consider the impact of technology on people's lives before rushing to the market. Not only is this "by design" approach becoming an integral part of the European legislator's response to the challenges raised by emerging technologies, but it is also instrumental in preventing individuals from becoming the first victims of half-baked products and services.

This press kit contains the full text of both complaints, as well as all the supplementary materials submitted to the Belgian data protection authority. These include, among others, evidence of the aggressive sexualisation of some of the conversations we had with bots available on the platform, an overview of the most pressing risks posed by companion chatbots, and a snapshot of the traffic data associated with the use of the "Chai" application on iOS 15.7.2. By publishing these documents, we also hope that our methodology will inspire others to launch similar initiatives in other countries. This might help elevate the issue to the EU level, potentially through the possibility offered to data protection authorities to "request that any matter of general application or producing effects in more than one Member State be examined by the Board with a view to obtaining an opinion" (Article 64(2) GDPR). This is crucial, as the risks associated with the use of personalised chatbots such as those offered by Chai Research Corp. extend well beyond Belgium, and similar services are continuously flooding the market.