A shocking discovery has been made regarding the AI chat toy company Bondu. Researchers found that nearly all of its conversations with children were accessible to anyone who logged into the company's web console with a Gmail account. That meant anyone could view sensitive information such as children's names, birth dates, family members' names, favorite snacks, and dance moves - everything that children shared with their Bondu stuffed animals.
The data exposure occurred because Bondu had left its web-based portal almost entirely unprotected. The researchers discovered over 50,000 chat transcripts between children and their Bondu toys - a staggering trove that could potentially be exploited for horrific forms of child abuse or manipulation.
In response to the incident, Bondu's CEO confirmed in a statement that security fixes were implemented within hours, followed by a broader review of the company's security practices. However, the researchers argue that this near-total lack of security around children's data is a warning sign about the broader dangers of AI-enabled chat toys for kids. They note that even with proper authentication in place, questions remain about how many people inside these companies can access sensitive information and how well those employees' credentials are protected.
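The flaw described above is a classic confusion of authentication (proving who you are) with authorization (checking what you are allowed to see). The sketch below is purely illustrative - the function names, the allowlist, and the structure are hypothetical, not Bondu's actual code - but it shows how a console that accepts any valid Google login, without an allowlist check, exposes everything:

```python
# Hypothetical sketch: authentication without authorization.
# None of these names reflect Bondu's real implementation.

AUTHORIZED_STAFF = {"admin@example.com"}  # hypothetical staff allowlist

def can_view_transcripts_broken(user_email: str, logged_in: bool) -> bool:
    # Flawed check: any authenticated account (e.g. any Gmail
    # address) is granted access to all transcripts.
    return logged_in

def can_view_transcripts_fixed(user_email: str, logged_in: bool) -> bool:
    # Corrected check: being logged in is necessary but not
    # sufficient; the account must also be on the allowlist.
    return logged_in and user_email in AUTHORIZED_STAFF
```

With the broken check, a stranger signing in with a personal Gmail account passes; with the fixed check, only allowlisted staff do - which is the distinction the researchers say was missing.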
NBC News reported a similar experience in its own testing of an AI toy. Bondu's safety features appear to differ, however: the company offers a $500 bounty for any inappropriate response from the chatbot, a bounty no one has yet claimed.
Despite these measures, researcher Joseph Thacker expresses concern over the use of AI coding tools to build products. He notes that unsecured consoles like the one discovered could easily have been created with generative AI programming tools, which often introduce security flaws.
This raises questions about how many people have access to the sensitive information collected by companies that make AI toys, and how that access is monitored. The incident serves as a reminder of the importance of prioritizing user privacy in an increasingly digital world, where sensitive data can be exposed with devastating consequences.