AI for Children Project by UNICEF


UNICEF created the AI for Children project to promote children’s rights in government and commercial-sector AI policies and practices, and to raise understanding of how AI systems can support or undermine these rights. Fairness and non-discrimination for children should be prioritized.

Governments and the commercial sector are increasingly relying on AI systems to improve education, healthcare, and social services, for example. While AI is a driving force for innovation, it also poses threats to children’s rights, such as privacy, safety, and security. However, most AI policies, plans, and recommendations make only passing reference to children.

To fill this void, UNICEF has worked with the Finnish government to investigate ways to defend and uphold children’s rights in a developing AI future.

Many life-changing choices, such as who qualifies for a loan and whether someone is freed from prison, are already made using machine learning technologies. A new model is required to govern how those creating and implementing machine learning handle the human rights implications of their products.

In the context of the Fourth Industrial Revolution, the World Economic Forum’s Global Future Council on Human Rights strives to promote practical, industry-wide solutions to human rights concerns. It offers a paradigm for recognizing the possible hazards of discriminatory outcomes in machine learning applications, as well as a path for avoiding them.

While diverse uses of machine learning will necessitate different steps to combat discrimination and promote dignity, a set of transferable guiding principles is particularly applicable to the field.

These principles are based on the rights established in the Universal Declaration of Human Rights, as well as a dozen other binding international treaties that offer substantive legal criteria for the protection and respect of human rights and the prevention of discrimination. Highlighting these hazards is not meant to minimize the benefits of machine learning or to discourage its adoption.

Concerns about discriminatory outcomes in machine learning are not only about defending human rights but also about maintaining trust and protecting the social contract, which rests on the assumption that the technology people use, or that is used on them, serves their best interests. Without such faith, the potential to employ machine learning to benefit humanity is undermined.

Microsoft, Google, and DeepMind (part of Alphabet) are among the firms that have begun to investigate the concepts of justice, inclusivity, accountability, and transparency in machine learning.

There are widespread and justified fears that initiatives to increase openness and accountability may jeopardize these companies’ intellectual property rights and trade secrets, as well as their security and, in some situations, their right to privacy.
