UNESCO’s 10 recommended ethics on Artificial Intelligence (AI) use
Thursday, September 28, 2023
Audace Nakeshimana, a Rwandan scientist who graduated from the Massachusetts Institute of Technology (MIT) and is the founder of Insightiv. File photo

The United Nations Educational, Scientific and Cultural Organization (UNESCO) is working with nations, including Rwanda, to help them comply with a set of recommended ethics on Artificial Intelligence (AI) use.

Artificial Intelligence, commonly known as AI, is any technology that enables machines to emulate human capabilities to sense, comprehend, and act.

Rwanda looks to leverage AI to power economic growth, improve quality of life, and position the country as a global innovator of responsible and inclusive AI.

The 10 recommended ethics on AI use were presented on September 27, 2023, in Kigali, during discussions with Rwanda’s Ministry of ICT and Innovation and other concerned institutions.

Right to privacy and data protection

AI actors should be accountable for the design and implementation of AI systems so that privacy and personal information are protected, UNESCO recommended.

It is important that data for AI systems be collected, used, shared, archived, and deleted in ways that are consistent with international law.

Responsibility and accountability

The ethical responsibility and liability for the decisions and actions based in any way on an AI system should always be attributable to AI actors corresponding to their role in the life cycle of the AI system.

“Appropriate oversight, impact assessment, audit and due diligence mechanisms, including whistle-blowers’ protection, should be developed to ensure accountability for AI systems and their impact throughout their life cycle.”

Ensuring diversity and inclusiveness

According to UNESCO, respect, protection, and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems.

This, it says, may be done by promoting the active participation of all individuals or groups, including people with disabilities.

Human rights and protection of vulnerable people

“No human being or human community should be harmed, whether physically, economically, socially, politically, culturally or mentally during any phase of the life cycle of AI systems,” reads part of the recommendations.

“Persons may interact with AI systems throughout their life cycle and receive assistance from them, such as care for vulnerable people or people in vulnerable situations, including but not limited to children, older persons, persons with disabilities, or the ill.

“Within such interactions, persons should never be objectified, nor should their dignity be otherwise undermined, or human rights and fundamental freedoms violated or abused.”

Environment protection

UNESCO says all actors involved in the life cycle of AI systems must comply with laws designed for environmental protection, restoration, and sustainable development.

AI actors should reduce the environmental impact of AI systems, including their carbon emissions, to minimise climate change and the degradation of ecosystems.

“Do no harm”

The principle says that none of the processes related to the AI system life cycle shall exceed what is necessary to achieve legitimate aims.

Where harm could occur, measures should be adopted to prevent it.

Safety and security

Harms should be avoided and eliminated throughout the life cycle of AI systems to ensure human, environmental, and ecosystem safety and security.

Safe and secure AI will be enabled by the development of sustainable, privacy-protective data access frameworks that foster better training and validation of AI models utilising quality data.

Fairness and non-discrimination

AI actors should promote social justice, and safeguard fairness and non-discrimination of any kind. This implies an inclusive approach to ensuring that the benefits of AI technologies are available and accessible to all, taking into consideration the specific needs of different age groups, cultural systems, different language groups, persons with disabilities, girls and women, and disadvantaged, marginalised and vulnerable people or people in vulnerable situations.

Transparency and explainability

The transparency and explainability of AI systems are often essential preconditions to ensure the respect, protection, and promotion of human rights.

A lack of transparency could infringe on the right to a fair trial and effective remedy.

Transparency relates closely to adequate responsibility and accountability measures, as well as to the trustworthiness of AI systems.

Awareness and literacy

Public awareness and understanding of AI technologies should be promoted through open and accessible education, civic engagement, digital skills and AI ethics training, and media and information literacy.

Rwanda commended for bold steps

According to Ngandeu Ngatta Hugue, a social and human sciences specialist at the UNESCO Regional Office for Eastern Africa, the recommendations should be implemented in policy areas such as governance, data policy, development and international cooperation, environment and ecosystems, gender, culture, education and research, judiciary, communication and information, economy and labour, health, and social well-being.

“Rwanda is not lagging behind. On the contrary, the country has already taken bold steps in terms of implementing some of the recommendations we are looking at. Rwanda has developed an AI policy, ethical guidelines on artificial intelligence to be published soon, a data protection policy, and other important bold steps. We want to share the experience of Rwanda with other countries who are struggling to build this step,” he said.

Victor Muvunyi, the Emerging Technologies Senior Technologist at the Ministry of ICT and Innovation, said Rwanda has taken action to ensure ethical compliance as it adopts AI use.

The national Artificial Intelligence policy, approved recently by the cabinet, will require a total investment of $76.5 million over the next five years.

Out of the total investment, however, at least $1.2 million is already funded or budgeted.

The government predicts that Rwanda’s AI ecosystem will be worth $589 million within the next five years.

“AI technology should not be discriminative. It should be used to benefit all groups of people. This technology is also being used in Rwanda in some sectors. The AI policy will guide the implementation while complying with recommended ethics,” Muvunyi said, adding that the policy will ensure people get value-added jobs based on AI impact.

Albert Mutesa, the Secretary-General of the Rwanda National Commission for UNESCO (CNRU), said ethics are being closely monitored to ensure AI does not negatively impact people, adding that there is a need to map existing AI initiatives in Rwanda as well as the specific ethical challenges and opportunities AI presents.