Anthropic now allows minors to use its AI technology, with restrictions.

Anthropic, the AI startup, recently updated its policies to allow minors to use its generative AI systems in specific circumstances.

In a recent blog post, Anthropic shared its decision to allow teenagers and preteens to access third-party apps that utilise its AI models. However, developers must implement specific safety measures and inform users about the Anthropic technologies in use.

Anthropic provides a support article that outlines important safety measures for developers creating AI-powered apps for minors. These measures include implementing age verification systems, content moderation and filtering, and providing educational resources on the “safe and responsible” use of AI for minors. The company also mentions the possibility of providing “technical measures” to customise AI product experiences for minors. This could include a “child safety system prompt” that developers catering to minors would need to incorporate.
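As a rough sketch only: a safety-oriented system prompt of this kind could be supplied through the `system` parameter of Anthropic's Messages API via the official Python SDK, alongside the developer's own age-gating logic. The prompt wording and the `age_verified` flag below are hypothetical placeholders, not the actual “child safety system prompt”, which Anthropic has not published in the post.

```python
# Hypothetical sketch: gate requests behind the developer's own age check and
# prepend a safety-oriented system prompt via Anthropic's Messages API.
# The prompt text and the age_verified flag are placeholders, not Anthropic's
# actual "child safety system prompt".
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CHILD_SAFETY_SYSTEM_PROMPT = (
    "You are assisting a minor. Keep responses age-appropriate, refuse unsafe "
    "requests, and encourage the user to involve a trusted adult when needed."
)

def answer_minor(question: str, age_verified: bool) -> str:
    if not age_verified:
        # Age verification itself happens in the developer's own application logic.
        raise PermissionError("User has not passed age verification.")
    message = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=512,
        system=CHILD_SAFETY_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": question}],
    )
    return message.content[0].text
```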

Developers using Anthropic’s AI models are required to comply with pertinent child safety and data privacy regulations, such as the Children’s Online Privacy Protection Act (COPPA), a federal law in the United States that protects the online privacy of children under 13. Anthropic intends to conduct regular audits of apps to ensure compliance. Developers may face suspension or termination of their accounts if they consistently fail to meet the compliance requirements. Additionally, Anthropic will require developers to clearly indicate on public-facing sites or documentation that they are in compliance. 


“In certain scenarios, AI tools can provide substantial advantages to younger users, such as aiding in test preparation or offering tutoring support,” Anthropic states in its post. Given this, the company says, its revised policy now allows organisations to integrate its API into their products for minors.

Anthropic’s policy shift coincides with a growing trend of children and teenagers turning to generative AI tools for both academic and personal matters. The change also aligns with the efforts of other generative AI vendors, such as Google and OpenAI, which are actively exploring applications aimed at younger users. OpenAI established a new team this year to study child safety and announced a partnership with Common Sense Media to jointly develop guidelines for AI suitable for children. Google, for its part, has made its chatbot Bard, now known as Gemini, available to teenagers in English in certain regions.

A significant percentage of children have used generative AI tools, like OpenAI’s ChatGPT, to address issues related to anxiety, mental health, friendship problems, and family conflicts, according to a survey by the Center for Democracy and Technology.

Last summer, educational institutions swiftly implemented bans on generative AI applications, specifically ChatGPT, over concerns about plagiarism and the spread of misinformation. Since then, a few have lifted their bans. However, some sceptics question the positive impact of generative AI. Surveys by the U.K. Safer Internet Centre found that a significant number of children (53%) have seen peers use generative AI in harmful ways, such as creating convincing false information or images intended to cause distress, including pornographic deepfakes.

There is an increasing demand for guidelines regarding children’s use of generative AI.

Last year, UNESCO called for governments to establish regulations for the use of generative AI in education. These regulations would include setting user age limits and implementing measures to protect data and user privacy. “Generative AI has the potential to greatly benefit human development, but it also carries the risk of causing harm and perpetuating prejudice,” stated Audrey Azoulay, the director-general of UNESCO, in a press release. “Public engagement and government regulations are crucial for the successful integration of this technology into education.”
