Artificial Intelligence is all the rage in the tech world, especially after the launch of ChatGPT and GPT-4.
It has shown potential not only to change the lives of workers, but also the daily lives of another demographic: kids. In fact, children are already using AI-powered toys and platforms that write bedtime stories at the click of a button.
“We call today’s children ‘Generation AI’ because they are surrounded by AI almost everywhere they go, and AI models make decisions that determine the videos they watch online, their curriculum in school, the social assistance their families receive, and more,” Seth Bergeson, fellow at the World Economic Forum who led their “AI for Children” project, told CNBC Make It.
And AI’s influence will only grow from here, said Saurabh Sanghvi and Jake Bryant, partners at McKinsey.
“These technologies are not going away and will continue to advance and impact more of our professions and daily interactions,” they said.
That means AI could play an even bigger role in the working lives of future generations: AI skills may become a job requirement, and technological changes could shape career paths.
But there are concerns that AI could be something of a double-edged sword, especially when it comes to kids.
Risks range from privacy and safety issues to psychological and behavioral effects, according to a report by UNICEF and the World Economic Forum.
Those can come from social media, for example. AI-based algorithms learn what content kids (or anyone, for that matter) search for and engage with, filling their feeds with it — even if it could be harmful to them or people around them. Though social media platforms have taken steps to mitigate the problem, they haven’t been able to eradicate it.
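In broad strokes, that feedback loop can be sketched in a few lines of code. The example below is a deliberately simplified illustration, not any real platform's algorithm: it just ranks posts by how often a user has clicked on each topic, which is enough to show how a feed drifts toward whatever a child already engages with.

```python
# Toy sketch of engagement-driven feed ranking (illustrative only;
# real platforms use far more sophisticated models and signals).
from collections import Counter

def rank_feed(posts, engagement):
    """Sort posts so topics the user has clicked most appear first."""
    return sorted(posts, key=lambda p: engagement[p["topic"]], reverse=True)

engagement = Counter()
posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "gaming"},
    {"id": 3, "topic": "news"},
]

# Simulate a user who repeatedly clicks gaming content.
for _ in range(5):
    engagement["gaming"] += 1
engagement["sports"] += 1

feed = rank_feed(posts, engagement)
print([p["topic"] for p in feed])  # gaming rises to the top
```

The point of the sketch is that nothing in the ranking rule checks whether the favored content is good for the viewer; it only rewards engagement.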
Kids may also be less careful about sharing their personal details online, making them more vulnerable to data breaches like the one that recently hit ChatGPT.
AI can also worsen inequity and sustain bias, the UNICEF report said. “For instance, schools employing machine learning and AI technology to sort through student applications may inadvertently but systematically exclude certain types of candidates,” it read.
Bergeson cited a similar example. “In the UK, we saw a new AI algorithm incorrectly assess students’ A-level exams, dashing hopes of many to attend top universities,” he said.
Another risk is linked to autonomy and decision-making — with AI being so intertwined with our lives, it can sometimes be hard not to rely on or trust it, experts say.
Education is the key to addressing and mitigating these risks, experts said.
“Children will need to understand how these technologies work so they can understand potential limitations, enhancements, and how to effectively use these tools,” Sanghvi and Bryant said.
When trying to teach kids about AI, starting small is a good idea, said Aimee Roundtree, a professor at Texas State University who worked on the World Economic Forum’s “AI for Children Toolkit.”
“Start with teaching the basics of how artificial intelligence works,” she said. “Teach children about artificial intelligence in plain language and at their level of understanding.”
Sanghvi and Bryant agreed, pointing out that there are many stepping stones to learning about AI, such as math.
Hands-on learning through software that lets users explore algorithms and other AI tech in a visual way can be useful for that, Roundtree added.
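To give a sense of what such a hands-on exercise might look like, here is a hypothetical, hand-built "decision tree" for guessing an animal from yes/no questions. The rules and animals are invented for illustration; visual tools aimed at kids present this same idea with blocks and pictures rather than code.

```python
# A hand-built decision tree: answer yes/no questions about features,
# follow the branches, and arrive at a guess. Many AI models are,
# at heart, much larger versions of this kind of rule-following.

def guess_animal(has_feathers: bool, can_swim: bool) -> str:
    if has_feathers:
        return "penguin" if can_swim else "sparrow"
    return "fish" if can_swim else "cat"

print(guess_animal(has_feathers=True, can_swim=False))   # sparrow
print(guess_animal(has_feathers=False, can_swim=True))   # fish
```

Because the rules are written out explicitly, children can trace exactly why the "model" made each guess, and see how it fails when an animal doesn't fit its limited rules.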
But according to the experts, it isn’t just about understanding AI. Learning how to engage with it is just as important, they said. That includes fostering skills like critical thinking and creativity that could complement AI, as well as addressing risks such as being overly reliant on AI for decision-making.
“Understanding artificial intelligence will increasingly become important in forming responsible, educated citizens with agency to make decisions and advocate for themselves in an increasingly automated world,” Roundtree added.
Finally, teaching kids about responsible AI and using it in a safe and ethical way is also vital, the experts said.
“Children and youth must understand that AI has limits and can be wrong,” Bergeson said. “AI has very serious limitations and can be biased and prejudiced in its results. Young people should be taught to think critically about and decide how and how much they want to use AI models.”