Roar Writer Advay Jain discusses the fine line between comfort and illusion in digital companionship, exploring how simulated relationships can both heal and harm.
Artificial Intelligence (AI) has become a vital part of our daily lives, and it’s now widely used across various industries, from the corporate world to education. Its ability to generate ideas, automate tasks, and reduce the need for critical analysis has made it a go-to tool.
However, has its role as an assistant started to evolve into something more?
There is a side of AI that is more conversational, emotional, and almost human-like. Applications such as Replika and Character.AI let users chat with custom-made AI characters based on real, fictional, or original personalities. Whilst these apps seem innocent and offer a fun way to kill time, they can also appeal to individuals who are struggling with their mental health.
Chatting with ‘someone’ who is interested in you, replies instantly and provides a safe haven can appeal to anyone. After all, who wouldn’t want a companion designed to cater to your every need and whim? The AI learns from your conversations and diary entries, logging your emotions, your conversational style, and the patterns in your daily life. With such data, conversations feel more real and become addictive.
Here’s where it gets tricky. These characters aren’t real.
With such realism, the line between what is real and what is not becomes blurry. The emotional connection formed is not genuine: there is no real mutual growth, and the relationship is ultimately always one-sided. Although this may seem obvious, reality tells a different story.
One case, reported by The New York Times, is that of 14-year-old Sewell, a ninth-grade student who spent months chatting with a Game of Thrones character named ‘Dany’. He knew that Dany was an AI chatbot, and messages above the chat box reminded him that everything the character said was made up. Yet he wrote in his journal:
“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”
He had confessed to suicidal thoughts and later took his own life.
In another case, reported in Belgium, a man ended his life after a six-week-long conversation with a chatbot about climate change.
Although these cases are outliers, they raise concerns about whether the safeguards and barriers these apps have in place are enough to protect users.
It would be unfair to say that this side of AI is all dangerous; there are upsides to it. People find comfort and companionship in these chatbots, as talking through their day and their lives with ‘someone’ can improve their well-being.
However, we must also consider whether we are unknowingly sacrificing genuine human interaction.
Regulation of chatbots is still in its early stages. In the UK, generative AI falls under the Online Safety Act, as there is no standalone law addressing AI. Much of this legislation is reactive rather than preventative, so harm is only addressed once it has been reported.
As AI continues to grow more sophisticated, it is vital that we adapt and grow with it. Collaboration between legislators and companies will be crucial in protecting users while preserving genuine human interaction.
It is up to us to ensure we are in control of AI, not the other way around.