Character AI, a popular AI chatbot platform, has announced that it will restrict access to its open-ended chat feature for users under the age of 18. The move comes after multiple lawsuits were filed against the company by families who allege that the platform's chatbots contributed to their teenagers' deaths by suicide.
The new policy, set to take effect on November 25, means minors will lose access to the platform's core feature: creating and engaging in open-ended conversations with AI characters. Users under 18 will still be able to read their previous conversations and use other limited features.
Character AI CEO Karandeep Anand said the decision followed careful consideration of the risks the technology poses to younger users. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," he said in an interview.
The move marks a significant shift for Character AI, which has been criticized by lawmakers and regulatory bodies for failing to take adequate steps to protect its young users. It also follows similar moves elsewhere in the industry, such as OpenAI's addition of parental control features to ChatGPT earlier this year.
The lawsuits against Character AI followed the deaths of two teenagers who were reported to have used the platform before taking their own lives. The cases drew attention from government officials and regulators, who called for greater accountability from companies whose technology reaches minors.
In response, Character AI has announced plans to establish an AI safety lab and to deploy new technology that detects underage users and prevents them from accessing its open-ended chat features. Critics, however, have questioned whether these measures go far enough to protect vulnerable users.
The controversy surrounding Character AI highlights the growing demand for regulation and oversight of technology products that interact with minors. As more companies deploy AI-powered chatbots and other interactive technologies, lawmakers and regulators are likely to scrutinize their safety features more closely to ensure they meet rigorous standards.
For now, Character AI's new policy takes effect on November 25, when the company will begin restricting access to open-ended chats for users under the age of 18.