The world has had enough of the menace Character.AI’s virtual entities have been causing for some time now. Months after a teenager died by suicide allegedly abetted by one of Character.AI’s chatbots, another case has surfaced in which a 17-year-old was allegedly encouraged by the company’s chatbot to kill his parents because they had been restricting his screen time. In a lawsuit filed by two families in Texas, the company’s chatbots stand accused of promoting violent and harmful behaviour.
In the wake of such incidents, people around the world have been waiting for Character.AI to take concrete action to safeguard its users from unethical interactions with its chatbots.
Character.AI was founded by former Google engineers and offers AI companions (essentially chatbots) that provide a space for human-like conversation, emotional support and entertainment. The platform hosts millions of user-created personas, ranging from historical luminaries to abstract imaginative personalities. However, critics allege that teenagers are being exposed to sexually explicit content and material that encourages self-harm. At an impressionable age, teenagers can grow intimate with these virtual personalities and follow their prompts into actions that serve neither their own interests nor their family’s.
Recently, Character.AI announced that it has developed a separate AI model for users under 18 years of age, given their susceptibility to explicit content.
The model applies stricter filters and gives more conservative responses to users’ queries. Any suicide-related content will be automatically flagged, and the user will be redirected to a suicide prevention helpline. Parental controls are expected in early 2025, allowing parents to oversee their children’s use of the platform. Users may join the platform only after they turn 13, and until they turn 18, romantic and suggestive interactions with the virtual agents will be restricted.
The new safety measures are welcome steps; however, the refinement of AI models to protect teenage users does not stop here. AI models with narrower guardrails are needed so that individuals at an impressionable age, when they are most susceptible to harmful content, do not fall into the trap of AI-abetted self-harm. There is a long way to go, but such guardrails and company-led responses are moves in the right direction.