Safe AI Companion Collective
Promoting the safe and accountable deployment of AI companion chatbots
What we work on
We raise awareness among individuals, policymakers, regulators, and AI companion providers about the risks of shaping intimate relationships through insufficiently tested and controlled AI.
Pushing for safety
We advocate for proper enforcement of privacy, consumer protection, content moderation, and intellectual property rules.
We share the wealth of resources that inform our views, to help you navigate the technical, social, legal, and ethical aspects of AI companion chatbots.
AI companion chatbots mimic human conversations for intimacy
AI companion chatbots are designed to act as virtual friends, romantic partners, or even therapists. Notable examples include Replika (“the AI companion who cares; always here to listen and talk; always on your side”), Chai (“a platform for AI friendship”), Character.ai (“super-intelligent AI chat bots that hear you, understand you, and remember you”), My AI by Snapchat (“your personal chatbot sidekick”), and Pi (“designed to be supportive, smart, and there for you anytime”).
Providers of AI Companion chatbots need to play by the rules
AI companion chatbots undoubtedly have the potential to entertain, fight loneliness, or boost confidence. But they also present unprecedented risks, such as bias and discrimination, psychological or physical harm, manipulation, misrepresentation, toxic and hateful speech, anthropomorphism, and breaches of privacy. These risks must be addressed properly to comply with data protection, consumer protection, and content moderation regulations.
We file complaints and raise awareness
Our members have lodged one complaint with the Belgian data protection authority and another with the Belgian FPS Economy's contact point for consumer protection. We share our approach to inspire other citizens and organisations to launch similar actions.