This press release was published by the World Economic Forum on 13 March 2018. Contact: Oliver Cann, Public Engagement, Tel.: +41 79 799 3405; oca@weforum.org.

Geneva, 13 March 2018 – Strong standards are urgently needed to prevent discrimination against and marginalization of people by artificial intelligence systems. This is the finding of a new white paper, How to Prevent Discriminatory Outcomes in Machine Learning, published today by the World Economic Forum’s Global Future Council on Human Rights.

The paper follows an extensive consultation period and is based on research and interviews with industry experts, academics, human rights professionals and others working at the intersection of machine learning and human rights. The key recommendation for developers and all businesses looking to use machine learning is to prioritize non-discrimination by adopting a framework based on four guiding principles: active inclusion; fairness; right to understanding; and access to redress.

Recent examples of how machine learning can produce discriminatory outcomes include:

  • Loan services – applicants from rural backgrounds, who have less digital infrastructure, could be unfairly excluded by algorithms trained on data points captured from more urban populations (see the illustrative sketch after this list).
  • Criminal justice – the underlying data used to train an algorithm may be biased, reflecting a history of discrimination.
  • Recruitment – screening algorithms might filter out applicants from lower-income backgrounds or those who attended less prestigious schools, relying on proxies such as educational attainment.
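
To make the loan-services example concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of the failure mode described above; the feature names, group sizes and repayment rates are all invented for illustration and do not come from the white paper. A model trained mostly on urban applicants learns to lean on a "digital footprint" score that carries no signal for rural applicants, so creditworthy rural applicants are disproportionately rejected.

```python
# Hypothetical illustration of sampling bias in loan approval.
# All numbers and feature names are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def urban_applicants(n):
    # For the urban majority, the digital-footprint score happens
    # to track repayment, so it looks like a useful feature.
    footprint = rng.normal(6.0, 2.0, n)
    repaid = (footprint + rng.normal(0.0, 1.0, n) > 5.5).astype(int)
    return footprint.reshape(-1, 1), repaid

def rural_applicants(n):
    # Rural applicants repay at a similar rate, but thinner digital
    # infrastructure means their footprint scores are uniformly low
    # and carry no signal about repayment.
    footprint = rng.normal(1.0, 0.5, n)
    repaid = (rng.random(n) < 0.6).astype(int)
    return footprint.reshape(-1, 1), repaid

# Training set dominated by the urban majority: sampling bias.
X_urban, y_urban = urban_applicants(9000)
X_rural, y_rural = rural_applicants(500)
model = LogisticRegression().fit(
    np.vstack([X_urban, X_rural]), np.concatenate([y_urban, y_rural])
)

# False-rejection rate: creditworthy applicants the model turns down.
for name, sample in [("urban", urban_applicants), ("rural", rural_applicants)]:
    X, y = sample(5000)
    rejected_good = (model.predict(X) == 0) & (y == 1)
    print(f"{name} false-rejection rate: {rejected_good.sum() / (y == 1).sum():.0%}")
```

On this synthetic data the model rejects nearly all creditworthy rural applicants while performing reasonably for urban ones – not because it was instructed to discriminate, but because one group was under-represented and poorly described by the training data.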

“We encourage companies working with machine learning to prioritize non-discrimination along with accuracy and efficiency to comply with human rights standards and uphold the social contract,” said Erica Kochi, Co-Chair of the Global Future Council on Human Rights and Co-Founder of UNICEF Innovation.

“One of the most important challenges we face today is ensuring we design positive values into systems that use machine learning. This means deeply understanding how and where we bias systems and creating innovative ways to protect people from being discriminated against,” said Nicholas Davis, Head of Society and Innovation, Member of the Executive Committee, World Economic Forum.

The white paper is part of a broader workstream within the Global Future Council looking at the social impact of machine learning, such as the way it amplifies longstanding problems related to unequal access.

A closer look:

Machine learning applications are already being used to make many life-changing decisions – such as who qualifies for a loan, and whether someone is released from prison. A new model is needed to govern how those developing and deploying machine learning can address the human rights implications of their products. This paper offers comprehensive recommendations on ways to integrate principles of non-discrimination and empathy into machine learning systems.

This white paper was written as part of the ongoing work of the Global Future Council on Human Rights, a group of leading academic, civil society and industry experts providing thought leadership on the most critical issues shaping the future of human rights. Its members include Erica Kochi, UNICEF Innovation’s Futures Lead and Co-Chair of the Council.

Explore the full report at https://www.weforum.org/whitepapers/how-to-prevent-discriminatory-outcomes-in-machine-learning


Read the Forum Agenda at http://wef.ch/agenda
Become a fan of the Forum on Facebook at http://wef.ch/facebook
Watch our videos at http://wef.ch/video