The first edition of the Global Index on Responsible AI shows that Nigeria lacks comprehensive safeguards and initiatives to protect and promote human rights within the context of artificial intelligence (AI).
The Index, compiled by the Global Center on AI Governance, also shows that the country’s current frameworks fail to uphold AI ethics principles across all phases of the AI life cycle and value chain.
Nigeria scored 7.21 out of 100 in the Global Index on Responsible AI.
This deficiency raises significant concerns about the ethical deployment and regulation of AI technologies, underlining an urgent need for robust, enforceable measures to address these critical gaps.
Responsible AI is “the design, development, and governance of AI in a way that respects and protects all human rights and upholds the principles of AI ethics through every stage of the AI lifecycle and value chain.”
To measure the degree of responsibility of various countries and territories, the Index aims to “generate insight into the performance and competencies of the responsible AI ecosystem within countries across 19 thematic areas and 3 dimensions.
“Each thematic area assesses the performance of 3 different pillars of the responsible AI ecosystem: Government frameworks, government actions and non-state actors’ initiatives.
“Each thematic area was scored on each pillar, scaled to a 0–100 range, and averaged to compute the pillar score” (0 = lowest, 100 = highest).
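For readers who want to see the arithmetic, the sketch below illustrates that averaging step in Python. The thematic areas shown and all of the figures are made up purely for illustration; they are not the Index’s actual data, and the Index may weight or adjust scores in ways not captured here.

```python
# Illustrative sketch of the pillar-score arithmetic described above:
# each thematic area is scored 0-100 on each pillar, and a pillar score is
# the average of its thematic-area scores. All numbers are hypothetical.

thematic_scores = {
    "Data Protection and Privacy": {"government_frameworks": 10, "government_actions": 8, "non_state_actors": 25},
    "National AI Policy":          {"government_frameworks": 5,  "government_actions": 12, "non_state_actors": 20},
    "Public Procurement":          {"government_frameworks": 0,  "government_actions": 4,  "non_state_actors": 15},
}

def pillar_score(scores: dict, pillar: str) -> float:
    """Average a pillar's 0-100 thematic-area scores into a single pillar score."""
    values = [area[pillar] for area in scores.values()]
    return sum(values) / len(values)

for pillar in ("government_frameworks", "government_actions", "non_state_actors"):
    print(pillar, round(pillar_score(thematic_scores, pillar), 2))
```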
According to the Index, Nigeria is ranked 80th out of 138 globally with a score of 7.21.
The highest-ranked globally is the Netherlands with an index score of 86.16 and the lowest-ranked is South Sudan with a score of 0.47.
In Africa, Nigeria is ranked 10th out of 40 countries.
The highest-ranked in Africa is South Africa with an index score of 27.61, and the lowest-ranked in Africa is also the lowest-ranked in the world, South Sudan.
Although Nigeria’s score is low, it surpasses Africa’s average score of 5.8.
That average, however, shows that the continent as a whole is performing poorly on responsible AI.
The Pillars
Nigeria scored a total of 31.92 out of 300 across the three pillars.
This indicates that the stakeholders (government and non-state actors) responsible for establishing and implementing frameworks in the AI ecosystem, and for protecting and promoting human rights in the context of AI, are performing poorly in Nigeria.
The highest pillar score is for Non-State Actors, at 21.05 out of 100. This indicates that non-state actors are more engaged with AI issues than the government.
The poorest performance is in Government Frameworks, with a score of 3.9 out of 100. This indicates that there are few or no state or federal laws, regulations, policies, strategies, or guidelines that address the implications of AI for Nigerian society.
The score on Government Actions is 6.97 out of 100, implying that state and federal governments are taking minimal initiatives to develop or implement frameworks on the use and procurement of AI.
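As a quick back-of-the-envelope check (assuming the pillar total is simply the sum of the three pillar scores, which the reported figures suggest), the numbers above add up to the 31.92 out of 300 cited earlier:

```python
# Quick check: the three reported pillar scores sum to the 31.92/300 total
# cited above (assuming the total is a simple, unweighted sum).
pillar_scores = {
    "Non-State Actors": 21.05,
    "Government Frameworks": 3.90,
    "Government Actions": 6.97,
}
print(round(sum(pillar_scores.values()), 2))  # 31.92, out of a possible 300
```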
The Dimensions
Nigeria’s overall poor performance also reflects shortcomings across the three dimensions of Responsible AI: Responsible AI Capacities, Human Rights and AI, and Responsible AI Governance.
The country’s total Dimension score is 28.8 out of 300.
Responsible AI Governance has the highest score with 12.89 out of 100.
This measures the thematic areas: National AI policy, Impact Assessment, Human Oversight and Determination, Responsibility and Accountability, Proportionality and Do Not Harm, Public Procurement, Transparency and Explainability, Access to Remedy and Redress, and Safety, Accuracy and Reliability.
The low score implies that Nigeria performs poorly at creating strategies to promote AI development, use, and governance, and that its frameworks lack enforceable responsible AI principles.
In the Human Rights and AI dimension, Nigeria scored 8.9 out of 100.
This dimension measures Gender Equality, Data Protection and Privacy, Public Participation and Awareness, Bias and Unfair Discrimination, Children’s Rights, Labour Protection and Right to Work, and Cultural and Linguistic Diversity.
The poor score indicates that the country has minimal or no laws protecting human rights at risk from AI, especially concerning the use of AI in the delivery of socio-economic rights and services.
The lowest score is in Responsible AI Capacities, where Nigeria scored 7.32 out of 100.
This dimension measures competition authorities, public sector skills development, and international cooperation.
The score shows that the public sector is doing little to develop skills in the use, implementation, and development of AI.
5 Recommendations to Strengthen Nigeria’s Scores
The research offers 5 recommendations that could improve Nigeria’s score in using, developing, and implementing responsible AI:
- Prioritize the adoption or update of data protection and privacy laws
- Ensure the adoption of AI impact assessments
- Develop programs for public sector skills development in Responsible AI
- Encourage activities from non-state actors in Responsible AI
- Develop standards for responsible procurement of AI
The Global Index on Responsible AI reveals that Nigeria is critically under-prepared in safeguarding human rights within the AI landscape.
The absence of robust legal frameworks, strategic policies, and ethical guidelines not only threatens the ethical deployment of AI technologies but also underscores an urgent need for the country to fortify its AI governance structures to prevent potential human rights infringements and foster responsible AI innovation.
Lucy Okonkwo is a research analyst at Dataphyte with a background in Economics. She loves to write data-driven stories on socio-economic issues that help change narratives and inspire growth and development.