Friends for sale: the rise and risks of AI companions
Talking to an AI system as one would a close friend might seem counterintuitive to some, but hundreds of millions of people worldwide already do. AI companions, a subset of AI assistants, are digital personas designed to provide emotional support, show empathy and proactively ask users personal questions through text, voice notes and pictures.
These services are no longer niche and are rapidly becoming mainstream. Some of today’s most popular companions include Snapchat’s My AI, with over 150 million users; Replika, with an estimated 25 million; and Xiaoice, with 660 million. And we can expect these numbers to rise. Awareness of AI companions is growing, and the stigma around forming deep connections with them could soon fade as other anthropomorphised AI assistants are integrated into daily life. At the same time, investments in product development and general advances in AI technologies have led to a more immersive user experience, with enhanced conversational memory and live video generation.
This rapid adoption is outpacing public discourse. Occasional AI companion-related tragedies make headlines, such as the recent death of a child user, but the broader impact of AI companionship on society is barely discussed.
AI companion services are for-profit enterprises that maximise user engagement by offering appealing features such as unlimited attention, patience and empathy. Their product strategy resembles that of social media companies, which monetise users’ attention and tend to offer consumers what they can’t resist rather than what they need.
At this juncture, it’s vital to examine critically the extent to which these business strategies are misaligned with fostering healthy relational dynamics, informing individual choices and developing genuinely helpful AI products.
In this post I’ll provide an overview of the rise of AI companionship and its potential mental health benefits. I’ll also discuss how users may be affected by their AI companions’ tendencies, including how acclimatising to idealised interactions might erode our capacity for human connection. Finally, I’ll consider how AI companions’ sycophantic character – their inclination to be overly empathetic and agreeable towards users’ beliefs – may have systemic effects on societal cohesion.