Investigating artificial intelligence technologies through the lens of children’s rights

Researchers, policy makers and industry should involve children and their caregivers in designing new policies and initiatives related to artificial intelligence (AI) based technologies. This is the main recommendation of the recently published JRC report Artificial intelligence and the rights of the child.

Artificial intelligence-based internet and digital technologies offer many opportunities to children, but if not designed and used properly, they can also negatively affect some of their rights, such as their rights to protection, participation, education and privacy.

Parents, teachers and children are rarely directly involved in policy making or research that aims to reduce the risks and increase the benefits of AI. Yet the more these stakeholders interact in policy design, the better the perspectives of all parties can be taken into account.

The JRC report on AI and the rights of the child tries to shed light on these aspects.

It also identifies key requirements on which trustworthy AI should be built, methods to enable more effective engagement between key stakeholders, and knowledge gaps to be addressed to ensure that children’s fundamental rights are respected when they come into contact with AI technology in their daily lives.

The JRC report includes reflections from invited experts in the field who participated in and contributed to the study. It also contributes to the science-for-policy research conducted at the JRC on trustworthy AI for children.

The findings will be used to support the implementation of several EU policy initiatives, such as the EU Strategy on the Rights of the Child, the EU strategy for a Better Internet for Kids (BIK+) and the proposed EU AI Act.

What makes AI trustworthy?

According to the JRC report, developing trustworthy AI-based tools to be used by children requires:

  1. Making strategic and systemic choices when developing AI-based services and products intended for children, to ensure their sustainability, since these tools consume considerable natural and energy resources.
  2. Empowering children and their caregivers to control how their personal data is used by AI technology.
  3. Explaining AI systems transparently and in child-friendly language, and holding AI actors accountable for the proper functioning of the systems they develop, operate or deploy.
  4. Ensuring the absence of discriminatory bias in the data and algorithms these systems rely on.

Methods for effective engagement

The report also proposes some concrete methods for researchers and policy makers to facilitate the participation of children and other relevant stakeholders in implementing the above recommendations.

Participatory multi-stakeholder approaches should be applied, involving children, researchers, policy makers, industry, parents and teachers to define common goals and to build child-friendly AI by design.

These approaches become more effective when they are based on communication and collaboration, and when they address conflicting priorities between parties. Including underrepresented populations would reduce discrimination and promote fairness among children growing up in diverse cultural contexts.

Also, the creation of frameworks and toolkits, incorporating aspects such as personal data protection and risk assessment, would guide the design and evaluation of child-friendly AI systems in the short and long term.

Knowledge gaps to be addressed

As there is limited scientific evidence on the impact of AI on children, the JRC authors have identified some knowledge gaps that need to be addressed in research and policy agendas.

For example, more research is needed on the impact of AI technology use on children’s cognitive and socio-emotional capacities; schools should prepare children for a world transformed by AI technology by developing their competences and literacy; and AI-based systems targeting children need to be designed to fit their cognitive stage.

A mix of research approaches

To reach these conclusions, JRC researchers used a mix of approaches.

They selected three AI applications for children and examined them through the lens of children’s rights, identifying risks such as insufficient respect for children’s privacy, possible algorithmic discrimination and a lack of fairness.

They organized two workshops with children and young people, and three workshops with policy makers and researchers in AI and children’s rights, which showed that each group had different concerns.

In addition, the current AI and children’s rights policy initiatives of eight major international organizations were examined. These were found to be aligned to some extent in the AI risks and opportunities they identify for children, although they differ in their goals and priorities.
