Latest UNICEF case studies on AI for children

Policy guidance on AI for children

Should children be asked to report cyberbullying to an AI tool? What do adolescents with poor mental health think about being directed towards chatbots? Would it work if families were automatically assessed by an AI tool to see whether they should receive child protection visits? Should young children learn about fairness from social robots? Over the last year, UNICEF has piloted its policy guidance on AI for children, adapting it to local contexts. To do this it has worked with governments, companies and academia (including the Alan Turing Institute, of which UCL is a founding member). You can click on the links below to learn more about child-centred AI, and on the key insights link to learn about four important themes that emerged from all the case studies. I would be interested in reading any comments on the different pilot projects – just leave a reply in the box at the end of this post.

Responsible AI Framework: H&M Group
H&M Group's Responsible AI Team uses a Responsible AI framework with the aim of designing and deploying internal AI applications in an ethical and sustainable way. The team is currently reviewing the framework through a child rights lens, recognizing that the uniqueness of children has not been made explicit in its current structure and accompanying tools. Key to the evolution of the framework is providing transparency in the company's use of AI, data and analytics, and using child-friendly language in cases where products have been designed for children.

Imìsí 3D: AutismVR
AutismVR is a virtual reality game developed by the Nigerian-based start-up, Imìsí 3D, alongside a team of interdisciplinary experts, to help young users and adults simulate interactions with children affected by autism spectrum disorder (ASD). The game, which utilizes AI techniques, is designed for non-autistic young users and adults, notably siblings and caregivers, to better engage with children with ASD. The goal is for end users to gain an understanding of the range of behavioural capacities and challenges that characterize autistic children, and subsequently, improve ways to support their needs and development.

CrimeDetector: SomeBuddy
The CrimeDetector system, developed by the Finnish start-up SomeBuddy, helps support children in Finland and Sweden aged 7–18 who have potentially experienced online harassment. When children report incidents, such as cyberbullying, the system automatically analyzes the case using natural language processing and provides tailored legal and psychological guidance for the affected child, with the aid of a human-in-the-loop. The digital service has been conceived with the insights of social media experts and psychologists, child-rights experts and lawyers, and was also built through active co-creation with children.
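The two-step flow described above – automatic analysis of a child's report followed by human confirmation before guidance is sent – can be sketched in a few lines. This is purely illustrative: the keyword cues, function names and outcomes are invented, and SomeBuddy's actual classifier and workflow are not public.

```python
# Illustrative sketch of an automated-analysis + human-in-the-loop flow.
# All names, cues and outcomes are hypothetical.
from dataclasses import dataclass

# Keyword cues standing in for a real NLP classifier.
HARASSMENT_CUES = {"threaten", "mock", "humiliate", "spread"}

@dataclass
class Report:
    text: str
    child_age: int

def auto_analyze(report: Report) -> dict:
    """Step 1: automatic analysis flags likely harassment."""
    words = set(report.text.lower().split())
    return {"flagged": bool(words & HARASSMENT_CUES), "report": report}

def human_review(analysis: dict, reviewer_approves: bool) -> str:
    """Step 2: a human expert confirms before tailored guidance goes out."""
    if analysis["flagged"] and reviewer_approves:
        return "send tailored legal and psychological guidance"
    return "route to generic support resources"
```

The point of the human-in-the-loop step is that the model's flag alone never triggers the tailored response; an expert's approval is a precondition.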

Milli chatbot: Helsinki University Hospital
The psychiatry department at Helsinki University Hospital has developed Milli, an AI-powered chatbot which uses natural language processing to connect users, including adolescents, in Finland with helpful mental health information and medical providers. Milli was created through the multi-year work of interdisciplinary experts and practitioners, including psychologists, mental health experts, nurses, AI and design engineers, and adolescents. For instance, a design course was held at Aalto University where students played the role of ‘experience specialists’. As a result of this consultation, Milli’s avatar was redesigned to appear as an unmistakably virtual character, which made the chatbot more believable and increased users’ trust when engaging with it.

Policy for child-centric AI for the cities of Lund, Malmö and Helsingborg: AI Sweden
AI Sweden, Lund University, aiRikr Innovation and Mobile Heights worked with the Swedish municipalities of Helsingborg, Lund and Malmö to evaluate UNICEF’s policy guidance against AI-related projects in these three cities. These projects included applying child-centred AI to an AI chatbot companion for preschoolers, translating child-centred AI requirements into fundamental legal and policy principles, and assessing social impact through AI and data. The results of this work also shaped a pre-study to define the initial components required to lay the foundation for a supportive national AI framework.

Understanding AI ethics and safety – A guide for the public sector: The Alan Turing Institute
The Alan Turing Institute is expanding its public policy guide Understanding artificial intelligence ethics and safety, to provide public sector employees with a better practical understanding of how to design responsible AI for children. The Institute consulted with public sector organizations about the impact of strategic policy and legal initiatives such as UNICEF’s policy guidance and the European Union’s General Data Protection Regulation. The aim was to formulate ethical considerations to support the development of AI policies that are non-discriminatory and inclusive of and for children.

Hello Baby: Allegheny County Department of Human Services
The Allegheny County Department of Human Services developed Hello Baby, an AI-driven early-childhood maltreatment prevention initiative, with the aim of more efficiently addressing families’ complex needs, improving children’s outcomes, and maximizing child and family well-being, safety and security. Hello Baby uses an algorithmic model, based on universal data held in existing administrative systems, to identify the needs of families and stratify them into appropriate tiers, each associated with health and social support programmes. Several safeguards protect children’s data and privacy in the use of, storage of and access to the model score. The initiative was built on years of cross-disciplinary research involving child welfare and clinical experts, judges and community leaders, ethicists and data scientists.
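The core idea of tier stratification – mapping a model score to a level of offered support – can be shown in a minimal sketch. The thresholds, tier names and score scale below are invented for illustration; Allegheny County's actual model and cut-points are not reproduced here.

```python
# Hypothetical sketch of score-to-tier stratification.
# Thresholds and tier names are invented, not Hello Baby's real ones.
def stratify(score: float) -> str:
    """Map a model score in [0, 1] to a support tier.

    Higher scores indicate greater estimated family need, so they
    map to more intensive (still voluntary) support programmes.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.8:
        return "priority support"
    if score >= 0.4:
        return "family support"
    return "universal services"
```

A design point worth noting: in a tiered-prevention model like this, every family falls into some tier (the lowest tier is universal services offered to all), so the score determines the intensity of the offer rather than inclusion or exclusion.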

Honda Research Institute Japan & European Commission, Joint Research Centre
Haru is a prototype robot that aims to stimulate the cognitive development, creativity, problem-solving and collaborative skills of children aged 6 to 18. Researchers from the Honda Research Institute Japan and European Commission, Joint Research Centre worked with a global consortium of experts with knowledge in the fields of AI, robotics, ethics, social sciences and psychology to better tailor the robot to the needs and rights of its young users. Haru’s design process involved school children in Japan and Uganda to gauge their understanding of the concepts of fairness and non-discrimination.  
