In a controversial decision, Google announced that it will make its Gemini AI chatbot available to kids under 13 starting next week. The plan drew criticism from the moment the company made it public, but Google maintains that the chatbot is equipped with guardrails and is safe for children.
Access will be limited to children whose Google accounts are managed by a parent or guardian through Family Link, a platform that lets parents control screen time and privacy settings. According to Google, Gemini will help kids with homework, engage them through storytelling, and answer age-appropriate questions.
At the same time, critics of the new feature warn that the rollout carries unknown risks and may come with too little oversight. They argue that children's interaction with AI chatbots should be kept to a minimum.
A Chatbot for Children in a Post-COPPA World
Google has told families that the information children provide while using Gemini will not be used to train its AI models, and that the chatbot is equipped with strict content safety filters. The company will also notify parents the first time their child uses the chatbot, and parents will then be able to switch off access.
The change is consistent with new rules under the Children’s Online Privacy Protection Act (COPPA), which take effect later in 2023. Under the updated rules, tech companies can no longer send push notifications, collect behavioral data, or build engagement traps designed to keep kids online longer.
However, given Gemini’s track record of occasionally producing bizarre answers (past examples include suggesting that users glue toppings to pizza, or claiming that dogs have played in the NBA), some have questioned whether the AI is ready for the kind of logical, curious exchanges a child is likely to have with it.
AI in the Classroom… Or the Living Room?
Gemini’s rollout to children comes at a time when AI is fast becoming part of education. A number of schools have begun piloting AI tutoring tools, and several major ed-tech companies are building curriculum-aligned bots.
Unlike those purpose-built tools, however, Gemini is a general-purpose chatbot, closer to a search engine with a personality than to a tutoring assistant.
“Gemini doesn’t just give a math lesson or tell a funny story; it can actually do the work when a teacher assigns it,” a Google spokesperson wrote in an email to Family Link members. “However, it’s not a tool intended to replace teachers or the guidance of adults.”
That caveat is exactly the point some child development experts are seizing on.
“We know that children tend to treat digital tools as sources of fact,” said a psychologist who specializes in technology and kids. “If the AI gives a wrong answer, will a 10-year-old know? Will the parent even realize?”
Big Tech’s History With Kids Isn’t Exactly Reassuring
Google’s new plan arrives after a string of earlier missteps in tech products for children. Meta shelved Instagram Kids last year after strong opposition from parents and regulators. YouTube, too, has drawn parental complaints over repeated incidents of inappropriate content surfacing in its YouTube Kids app.
Even Google’s earlier Google Kids offering was not spared; it was criticized for hosting ads inappropriate for young users.
Gemini is not a social app, but viewed more broadly it is part of a push to market AI-based tools to kids even as the ethical and legal questions remain unsettled, in an area the law has barely begun to address.
What Parents Can Expect — and What They Should Watch For
Gemini will start rolling out next week, but only children with Family Link-managed accounts will be able to use it. Google has not publicly said whether young users will get access through Android and iOS apps or only through the browser.
Google promises that parents hold the reins, but experts say active supervision remains essential. “The biggest danger isn’t the bot’s conversation itself; it’s parents’ misconception of what it is,” said one cybersecurity analyst.
For example, even when Gemini serves up simple science or history facts, it may deliver wrong or outdated information, and that misinformation could go unchecked.
To Google, Gemini is a child-friendly, reliable, and warm helper. Skeptics counter that it marks the start of a far-reaching experiment, one that puts kids at the forefront of an untested technology.
The question is no longer whether kids can use AI; AI is headed into households and classrooms either way. The real question is: should they be the ones to test it first?