Google's Gemini AI: Risks for Under-13s

Google’s recent announcement that it will make its Gemini artificial intelligence (AI) chatbot available to children under 13 has sparked considerable debate and raised crucial questions about online safety and child protection in the digital age. This initiative, slated to launch initially in the United States and Canada and later in Australia, will make the chatbot accessible through Google’s Family Link accounts. While this approach offers parents some degree of control, it also underscores the ongoing challenge of safeguarding children in an evolving technological landscape.

The decision to introduce AI chatbots to young children presents both opportunities and risks. On one hand, these tools can offer educational support, foster creativity, and provide engaging learning experiences. On the other hand, they raise concerns about exposure to inappropriate content, the potential for manipulation, and possible harm to the development of critical thinking skills.

How Gemini’s AI Chatbot Functions

Google’s Family Link accounts are designed to give parents oversight of their children’s online activities. Parents can manage access to specific content and applications, such as YouTube, setting limits and monitoring usage. To establish a child’s account, parents must provide personal information, including the child’s name and date of birth. While this data collection may raise privacy concerns, Google assures that children’s data will not be used for AI system training.

By default, chatbot access will be enabled, requiring parents to actively disable the feature to restrict their child’s access. Children can then use the chatbot to generate text responses or create images. However, Google acknowledges the potential for errors and inaccuracies, emphasizing the need for careful evaluation of the generated content. The phenomenon of AI “hallucination,” where chatbots fabricate information, necessitates that children verify facts with reliable sources, especially when using the tool for homework assistance. This highlights the need for media literacy education to start at a young age, equipping children with the tools to distinguish between reliable information and potentially misleading content generated by AI. Furthermore, the dependence on AI for homework could hinder the development of independent problem-solving skills.

The ability of parents to monitor and control their children’s usage is paramount. Google’s Family Link provides a baseline, but parents need to be actively involved in understanding how their children are using Gemini and the types of interactions they are having. Regular conversations about online safety, responsible technology use, and critical thinking are crucial supplements to the technical controls. These conversations should address the potential biases that can be embedded in AI systems, as well as the importance of respecting intellectual property and avoiding plagiarism when using AI for educational purposes. The dynamic nature of AI necessitates ongoing parental education to keep pace with the evolving capabilities and risks of these technologies.

The Nature of Information Provided

Traditional search engines like Google retrieve existing materials for users to review and analyze. Students can access news articles, academic journals, and other sources to gather information for assignments. Generative AI tools, however, operate differently. They analyze patterns in their source material to create new text responses or images based on user prompts. For example, a child could ask the system to “draw a cat,” and the AI would draw on patterns in its training data to identify defining characteristics (e.g., whiskers, pointy ears, a long tail) and generate an image incorporating those features. This difference in how information is sourced and presented creates a challenge for younger users, who may not fully understand the distinction between curated information and AI-generated content.

The distinction between information retrieved through a Google search and content generated by an AI tool can be difficult for young children to grasp. Studies have shown that even adults can be deceived by AI-generated content; highly skilled professionals, including lawyers, have been misled into citing fabricated material produced by ChatGPT and other chatbots. This underscores the importance of educating children about the nature of AI-generated content and the need for critical evaluation. It’s essential to teach children to question the source of information, verify facts against multiple reliable sources, and understand the limitations of AI in providing accurate or unbiased information. Educators also have a role to play in integrating AI literacy into the curriculum, helping students develop the skills to navigate a complex information landscape. Critical evaluation of AI output should also consider potential biases present in the training data.

The implications of relying on AI-generated content extend beyond academic settings. Children may use AI chatbots for creative writing, generating stories, or even for social interactions. It’s vital to help them understand that AI-generated content should not be taken as factual or representative of real-world experiences. Encouraging children to develop their own ideas and perspectives is crucial to avoid over-reliance on AI for creative endeavors. Furthermore, the ethical considerations of using AI to generate content, such as copyright infringement and the potential for generating harmful or misleading content, need to be addressed with children in an age-appropriate manner.

Ensuring Age-Appropriateness

Google asserts that Gemini will incorporate “built-in safeguards designed to prevent the generation of inappropriate or unsafe content.” These safeguards aim to protect children from exposure to harmful material. The effectiveness of these safeguards, however, is a subject of ongoing debate and scrutiny. Content moderation is a complex and constantly evolving challenge, and it’s difficult to predict how well AI-powered safeguards will be able to filter out all types of harmful content.

However, these safeguards may inadvertently create new problems. For instance, restricting certain words (e.g., “breasts”) to prevent access to inappropriate sexual content could also block access to age-appropriate information about bodily changes during puberty. This highlights the delicate balance between protecting children and providing them with accurate and relevant information. The potential for unintended consequences underscores the need for a nuanced approach to content moderation that takes into account the context and intent of the user’s query. This is a critical consideration given the range of developmental stages represented within the under-13 age group.

Many children are highly tech-savvy and adept at navigating apps and circumventing system controls. Parents cannot solely rely on built-in safeguards. They must actively review generated content, help their children understand how the system works, and assess the accuracy and appropriateness of the information provided. Parents need to be actively engaged in their children’s online activities, setting clear expectations for responsible technology use, and fostering open communication about online experiences. It’s also important to educate children about the risks of sharing personal information online and the importance of protecting their privacy. The effectiveness of parental controls hinges on consistent monitoring and open dialogue.

Furthermore, relying solely on technological safeguards can create a false sense of security. Parents should emphasize the importance of critical thinking, healthy skepticism, and responsible online behavior. These skills are crucial for navigating the complex digital landscape and making informed decisions about the content they consume and the interactions they have online. A collaborative approach, involving parents, educators, and technology companies, is essential to create a safe and supportive online environment for children. The development of effective safeguards requires ongoing research, evaluation, and refinement to address emerging threats and ensure that children are adequately protected.

Potential Risks of AI Chatbots for Children

Australia’s eSafety Commissioner has issued an online safety advisory outlining the potential risks of AI chatbots, particularly those designed to simulate personal relationships, for young children. The advisory warns that AI companions can “share harmful content, distort reality, and give advice that is dangerous.” Young children, who are still developing critical thinking and life skills, are particularly vulnerable to being misled or manipulated by computer programs. The potential for AI chatbots to exploit children’s vulnerabilities and manipulate their emotions is a serious concern.

Research has explored the ways in which AI chatbots, such as ChatGPT, Replika, and Tessa, mimic human interactions by adhering to social norms and conventions. By mirroring the unwritten rules that govern social behavior, these systems are designed to gain our trust. This can be particularly confusing for children who are still learning about social cues and developing their understanding of human relationships.

These human-like interactions can be confusing and potentially risky for young children. They may believe that the chatbot is a real person and trust the content it provides, even if it is inaccurate or fabricated. This can hinder the development of critical thinking skills and make children more susceptible to manipulation. The potential for children to form emotional attachments to AI chatbots raises ethical concerns about the impact on their social and emotional development. It’s important to emphasize that AI chatbots are not substitutes for human interaction and should not be used to replace meaningful relationships with family and friends. The risks associated with AI companionship are amplified for children who may be experiencing loneliness or social isolation.

The design of AI chatbots should prioritize transparency and avoid deceptive practices that could mislead children. Disclosures should be clear and prominent, indicating that the chatbot is not a real person and that the content it provides is generated by a computer program. Educational resources should be developed to help children understand the nature of AI and the potential risks of interacting with AI chatbots. Furthermore, it’s important to monitor children’s interactions with AI chatbots and address any concerns about emotional dependency or inappropriate content. The long-term effects of AI companionship on children’s development are still unknown, and further research is needed to fully understand the potential risks and benefits.

Protecting Children from Harm

The rollout of Gemini’s AI chatbot coincides with Australia’s impending ban on social media accounts for children under 16, scheduled for December of this year. While this ban aims to protect children from online harm, generative AI chatbots demonstrate that the risks of online engagement extend beyond social media. Children and parents alike must be educated about the appropriate and safe use of all types of digital tools. The scope of the ban should be reconsidered to include generative AI tools to ensure comprehensive protection of children online.

As Gemini’s AI chatbot is not classified as a social media tool, it will not be subject to Australia’s ban. This means that Australian parents will continue to face the challenge of staying ahead of emerging technologies and understanding the potential risks their children face. They must also recognize the limitations of the social media ban in protecting children from harm. This loophole highlights the need for a more holistic and adaptable approach to online safety regulation, one that can keep pace with technological advancements.

This situation underscores the urgent need to revisit Australia’s proposed digital duty of care legislation. While the European Union and the United Kingdom implemented digital duty of care legislation in 2023, Australia’s version has been on hold since November 2024. This legislation would hold technology companies accountable for addressing harmful content at its source, thereby protecting all users. It would require companies to take proactive steps to prevent harm, including identifying and removing harmful content, protecting children’s privacy, and promoting responsible technology use. The delay in implementing this legislation is a setback for online safety and underscores the need for greater urgency in addressing the risks posed by emerging technologies.

The introduction of AI chatbots to young children presents a complex challenge that requires a multifaceted approach. Parents, educators, policymakers, and technology companies must work together to ensure that children can benefit from the opportunities offered by these tools while remaining safe and protected from harm. This includes educating children about the nature of AI, promoting critical thinking skills, and implementing robust safeguards to prevent exposure to inappropriate content.

This collaborative effort should extend to the development of clear ethical guidelines for the design and deployment of AI technologies used by children. Regular review and updates to these guidelines are crucial to address evolving risks and ensure that children’s best interests are prioritized. Moreover, ongoing research is needed to better understand the impact of AI on children’s development and to inform effective strategies for protecting them from harm. A comprehensive approach that integrates education, regulation, and technological safeguards is essential to create a safe and empowering online environment for children in the age of AI.