Journal of Public Health International

Volume 2, Issue 4

Opinion Article | Open Access | Peer Reviewed

How Africa Should Engage Ubuntu Ethics and Artificial Intelligence

Abstract

Automation of human tasks has been taking place for a long time. In earlier periods, humans dreamed of a world in which machines capable of mimicking decision making would be created, with some works of fiction describing, in caricature, how machines would take over the human space in the world. Artificial intelligence has come to fruition in the last few decades following the development of fast computing capability and vast chip memory. Discussions of how the human space will look and feel when artificial intelligence takes hold have taken place at various levels of global organization, geared towards ensuring that the new “thinking machines” do not rock human society in ways that render it obsolete.

This article looks at the ethics of AI, considering the issues that have been outlined by others in the light of communitarian ethics as seen in Africa. It describes the possible impact of thinking machines on society and on how individuals would relate to each other and to AI systems.

Received 11 Jun 2020; Accepted 15 Jun 2020; Published 26 Dec 2020

Academic Editor: Rahul Hajare, Indian Council of Medical Research, New Delhi

Checked for plagiarism: Yes

Review by: Single-blind

Copyright ©  2020 Simon K. Langat, et al.

License
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Competing interests

The authors have declared that no competing interests exist.

Citation:

Simon K. Langat, Pascal M. Mwakio, David Ayuku (2020) How Africa Should Engage Ubuntu Ethics and Artificial Intelligence. Journal of Public Health International - 2(4):20-25. https://doi.org/10.14302/issn.2641-4538.jphi-20-3427


DOI 10.14302/issn.2641-4538.jphi-20-3427

Introduction

Artificial Intelligence (AI) as we know it has been around for about seven decades. Soon after the first electronic computers were assembled, and probably owing to older predictions, developers began thinking of machines that could work more independently of human intervention. Artificial intelligence became a study discipline attracting many university students from fields ranging from computer science to management, psychology and engineering. This gave rise to two categories of AI study: theoretical and pragmatic 11. Theoretical AI uses AI concepts and models to answer questions about human beings, such as what is meant by intelligence and how artificial intelligence differs from natural intelligence. The theoretical is broadly scientific, while the pragmatic is technological, focusing on engineering work in machine learning, deep learning and automated reasoning 8. Pragmatic AI combines Information and Communication Technology (ICT) with vast quantities of data, now known as big data. Using this combination of deep learning and reasoning, machines can currently operate independently in areas such as medicine, transport and science.

AI accomplishes decision making without the awareness that humans have; it computes rather than thinks to arrive at a decision. Robots performing different human chores rely on either strong or weak AI. Strong AI would exhibit general human-like intelligence, whereas weak AI mimics human intelligence only for the specific task for which it was developed. Distinctly human characteristics such as free will and ethical decision-making have yet to be achieved in AI. Alan Turing predicted in a 1950 proposal that machines would learn until they became indistinguishable from human beings, possibly achieving consciousness. This prediction has not become a reality (depending on what we mean by consciousness), but work continues and we might yet see it happen 1.

There are many publications on AI around the world but few on AI and Africa, a developing region with distinct communities, experiences and ethics. There are publications on AI and human rights and on AI and libertarian ethics, but none on communitarian ethics (Ubuntu) and AI. This paper therefore examines the ethics of AI from a communitarian approach. A recent publication has noted that Africa has not contributed to the development of the regulations that will inform the future growth of AI 3.

Artificial Intelligence Development

The first computer was switched on in 1946. It is said to have dimmed the lights of its host city because of the colossal (for the time) amount of energy it required, yet it had perhaps as much memory as a small calculator has today. Development moved fast, and soon industry leaders were thinking about major improvements. In 1955, John McCarthy, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon coined the term ‘artificial intelligence’ 7. Its meaning, however, has changed slightly over time; today it is the ability of a computer program or a machine to ‘think and learn’ more or less like humans. AI is also a field of study that tries to make computers "smart", that is, able to work on their own without being given extra commands. Over the years, public understanding of AI has sat somewhere between the science-fiction narrative and the more practical use of computers for various tasks and their incremental improvement. This has been the basis upon which people have embraced, used and loathed computer systems and their application in different communities worldwide.

Our objective is to highlight the ethical and bioethical implications arising from artificial intelligence, showing how it would be perceived and applied in a developing country with a more or less communitarian grounding. At the same time, we intend to create awareness of some of the challenging issues amid the positive effects of artificial intelligence. This paper also aims to show that AI is anthropocentric, with humans at the center. We begin with a brief background, briefly describe the various forms of AI, then discuss the bioethical and legal implications, and end with our recommendations and conclusion.

Ancient Predictions and Evolution of the Idea

Since antiquity, there have been myths or rumors of humans making artificial ‘beings’ possessing intelligence. These myths were followed by science-fiction depictions of intelligent machines performing all sorts of tasks. Some ancient philosophers considered human thinking to be the mechanical manipulation of symbols. Aristotle (384-322 BC) developed a system of reasoning that follows simple steps leading to decisions. Later, Hobbes would state that inanimate machines would be able to follow simple rules and attain reasoning, because the process was more or less like computation 8. Programmers have since used natural language to enable computers to mimic human thinking, with varying degrees of success in artificial decision making.

It is most suitable to define intelligence here as “the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem solving.” Pairing the two words of the coined term, artificial and intelligence, sounds contradictory. Something artificial is a product of human craft. ‘Artificial’ usually means something insincere, not original, therefore a copy: fake, and inferior to the real. When people talk of artificial rice, eggs or fish, they mean something negative.

AI is interdisciplinary and cross-disciplinary, much like many other areas of engagement today. It involves computer and cognitive science, psychology, philosophy, logic and mathematics (Report of COMEST on Robotics Ethics, no. 38). In more popular language, AI is “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages” (Wikipedia). As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. Artificial intelligence is not limited to the IT or technology industry; it is found extensively in other areas such as medicine, business, education, law and manufacturing (https://www.iqvis.com/blog/9-powerful-examples-of-artificial-intelligence-in-use-today/).

Learning, Intelligence and Ethical Dimensions

Humans learn experientially from situations, and AI learns experientially from data.

The performance of AI-based machines improves as they receive more training data, much as a person learns through education and experience. The concern with AI is that humans appear to be surrendering to a paradigm of forced reductionism that places them within a purely mechanistic, utilitarian model of technology. As AI becomes ever more powerful and invasive, it may inevitably change the world, aligning it with the design principles upon which it rests. The consequence might be a world full of indistinct societies. Other worries include non-benign actors; unconscious and conscious bias informing algorithms; an inevitably widening digital divide; manipulation and even coercion; and the threat of a new surveillance society in which humans turn into super-optimized machines, perhaps the least in a continuum of super-intelligence. AI has the potential to dominate humans or eventually render the species, as we know it, obsolete.
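
To make the claim about training data concrete, the following minimal Python sketch (our illustration, not part of the article; the synthetic data, classifier and sample sizes are invented stand-ins) trains a simple model on progressively larger samples and prints how its test accuracy typically rises as more data is supplied:

    # Minimal sketch (illustrative only): model performance tends to improve
    # as the amount of training data grows. Synthetic data and logistic
    # regression stand in for any AI system and its training corpus.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    for n in (50, 200, 1000, 4000):
        # Fit on the first n training examples only.
        model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"trained on {n:>4} examples -> test accuracy {acc:.3f}")

The exact numbers are immaterial; the point is the general upward trend in accuracy with more training examples, which is what "learning from data" means in practice.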

One of the trends that came into sharp focus in 2019 was the lack of clarity around AI ethics. Harvard University’s Berkman Klein Center sought to extract consensus on AI ethics in a report entitled “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI”. The authors, led by Jessica Fjeld (2020), looked at thirty-six AI documents from global sources and identified eight common themes and value dimensions 3:

1. Privacy - your data should not be used without informed consent, and you should have the right to rectify, amend or modify information held by a data controller.

2. Accountability - on face value, the term “artificial intelligence” suggests equivalence with human intelligence. Depending on whom you ask, the age of autonomous AI is either upon us or coming soon. Concerns about who will be accountable for decisions made by AI rather than by humans are now taking shape. There is a need to create an ethically aligned monitoring group to ensure that AI systems do not infringe upon human rights and the right of appeal.

3. Safety and Security - the principle of safety requires that an AI system be reliable and that it does what it is supposed to do without harming living beings or the environment. Security concerns an AI system’s ability to resist external threats such as cyber attacks and to protect the privacy, integrity and confidentiality of personal data.

4. Transparency and Explainability - the greatest challenge that AI poses from a governance perspective is the complexity and opacity of the technology. It is often unclear when an AI system has been implemented in a given context and for what task. The principle of transparency is the assertion that an AI system should be designed and implemented in such a way that oversight of its operations is possible and accessible. Explainability is the requirement that you be notified when you are interacting with an AI or are subject to an automated decision not involving humans. People should retain a right to information that lets individuals know about various aspects of the use of, and interaction with, AI systems, including the use of personal data in the decision-making process.

5. Fairness and Non-discrimination - algorithmic bias, the systematic under- or over-prediction of probabilities for a specific population, creeps into AI systems in myriad ways. A system might be trained on unrepresentative, flawed, or biased data. Alternatively, the predicted outcome may be an imperfect proxy for the true outcome of interest, or the outcome of interest may be influenced by earlier decisions that are themselves biased. As AI systems increasingly inform or dictate decisions, particularly in sensitive contexts where bias long pre-dates their introduction, such as lending, healthcare, and criminal justice, ensuring fairness and non-discrimination is imperative (see the short illustration after this list).

6. Human Control of Technology - UNI Global Union asserts that AI systems must maintain the legal status of tools, and that legal persons must retain control over, and responsibility for, these machines at all times. The Public Voice coalition’s principle of human control extends perhaps the farthest, explicitly stating that an institution has an obligation to terminate an AI system it is no longer able to control.

7. Professional Responsibility - the theme of professional responsibility brings together principles targeted at the individuals and teams responsible for designing, developing, or deploying AI-based products or systems. These principles reflect an understanding that the behavior of such professionals, perhaps independently of the organizations, systems, and policies within which they operate, may have a direct influence on the ethics and human rights impacts of AI. The theme consists of five principles: accuracy, responsible design, consideration of long-term effects, multi-stakeholder collaboration, and scientific integrity.

8. Promotion of Human Values - the promotion of human values is a key element of ethical and rights-respecting AI. The ends to which AI is put, and the means by which it is implemented, should correspond with and be strongly influenced by social norms. As AI becomes more prevalent and the power of the technology increases, particularly if we begin to approach Artificial General Intelligence (AGI), imposing human priorities and judgment on AI becomes especially crucial. AI must invariably be directed towards the promotion of human values, human flourishing, access to technology, and leverage for the benefit of society. What began as a mapping of human meaning now defines human meaning, and has begun to control rather than simply catalog or index human thinking.
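
As a concrete illustration of the fairness concern in theme 5, the short Python sketch below (our own example with invented group labels and decisions, not drawn from the article or from Fjeld et al.) compares the rate at which a hypothetical automated system approves members of two demographic groups, one simple signal of the systematic under- or over-prediction described above:

    # Minimal fairness check (illustrative only): compare positive-decision
    # ("selection") rates between two hypothetical demographic groups.
    # A large gap flags possible algorithmic bias for human review; it does
    # not by itself prove discrimination.
    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])  # invented group labels
    # Invented decisions from a hypothetical model that happens to favour group A.
    approved = np.where(group == "A",
                        rng.random(1000) < 0.60,
                        rng.random(1000) < 0.35)

    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    print(f"selection rate A: {rate_a:.2f}  B: {rate_b:.2f}  gap: {rate_a - rate_b:.2f}")

Checks of this kind are one way developers and auditors can surface, and then interrogate, the biases a system has ‘learnt’ from its data.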

Ubuntu Treatment of AI

Value-based ethics is invariably anthropocentric: humans are central, as moral agents and as subjects of any act of man or machine that is morally noteworthy or significant. Communitarian ethics stresses the connection between the individual and the community. The local variant of communitarian ethics is Ubuntu. It focuses on reciprocity, the common good, tolerance, consensus, mutual respect and the value of human life, and it clearly defines the relationship between man and the other life forms on earth. The advent of AI calls for a similar treatment and placement within the earthly collective. Communitarianism asks: what is the social meaning of an act, and what are its implications and contexts 2? These questions about moral acts should extend to the ethics of technologies and automated systems. How would AI affect persons and communities in terms of its development, use and consequences? It is imperative that we consider how it will affect the relationships between communities and individuals, within a country and globally, both for the automated systems of today and the AGIs of tomorrow. Aristotle and Hegel have indicated to us that intimate communities share ends, where people faithfully fulfill social roles and everyone benefits. Ubuntu proposes a specific method of approaching ethics, namely placing social rights at the top of the order of priority, followed by justification of those rights. We now consider Ubuntu ethics for widespread AI in communities.

Hoesle (1992) states that using computerized information systems requires people to act and think in prescribed ways that privilege Western cultural traditions, because computers originated in those cultures; this contributes to marginalizing the cultural traditions of others 5. Information ethics models for Africa should be founded on African values while remaining alive to the diversity within African culture, to individual countries’ needs and to international sensitivities 6. This approach is equally useful for AI, the aim being to develop a suitable method of applying ethics that suits Africa and has applicability and validity beyond national, regional and continental boundaries.

Concerns arising from AI include the widening gap between countries of the economic South and North, colossal job losses in developing countries, increased poverty, and the loss of rights by minority groups in those countries. Ubuntu raises distinct issues in this situation: poverty dehumanizes and disables the poor from contributing to the shared life. It lowers the quality of life and increases discontent, making some people recipients rather than participants in their communities. This creation of class in new ways will have destabilizing effects on society.

It is possible that the information the public receives will be manipulated to produce specific social outcomes. This may happen not only at election time, as witnessed recently in the US 10 and in Kenya during the 2017 elections 4, but also at other moments of significant public decision-making. Besides this, plenty of misleading information is made to look real by other persons operating privately. Skewed information, needed or not, is cunningly supplied to shape the opinions and actions of a gullible public in ways that may not be in their best interest.

Looking at the ethics of AI within an Ubuntu framework provides an opportunity to re-examine, with renewed vigor, the biases that existed prior to AI. AI can, however, contain biases innocently acquired during programming and “learning” that may produce unintended discriminatory effects. Teaching ethics to robots and AI is complicated and has no clear answers, much like teaching ethics to children 9. Ubuntu weighs social and economic rights on the one hand against individual or personal rights on the other in deciding which action is preferable.

Looking at the eight themes above in the light of Ubuntu as an ethical framework, privacy and accountability are key because everyone in a community has to be capable of being a useful member of that community; each person is therefore accorded respect and autonomy. Safety and security constitute the reason a community exists in the first place, and Ubuntu therefore shares with other systems, such as liberalism, the importance it attaches to these two thematic areas. Since safety and security are expected at all times and are attributed to the source of an act, whether human or not, similarly high standards are expected of both the robot or AI system and its human operators or developers. Under Ubuntu, responsibility for any breach lies with the persons, not with the robot or AI system.

Transparency and explainability are important for oversight of operations and, beyond that, for regulatory audits. Value-based ethics, which includes communitarianism, places great importance on the character of the subject. To be virtuous in matters of AI means to be transparent and, we may add, beyond reproach. Developers, implementers and auditors of an AI system must all be able to explain in detail the system’s decision-making and its predictability. This extends to fairness and non-discrimination, in the sense that AI exhibits fairness based on what it has ‘learnt’. It is indeed in healthcare, banking and criminal justice that sensitivity to fairness and non-discrimination is heightened, and Ubuntu demands extreme caution in this regard.

Human control and professional responsibility are two sides of the same coin; leaving AI to its own devices would be abdicating a responsibility spanning many millennia. Ubuntu would reject this abdication on the grounds that it harms the community.

Conclusion

The arrival of AI in Africa has brought new issues that challenge current ways of relating, of policymaking and of practice. We are raising the alarm that it will mean changes, some of which nobody is thinking about at the moment, but none of which can be wished away or ignored. Africa will need to play her role in the global context and must prepare to engage in ways that may be discomforting, though urgent and important. AI will arrive in its true form, for better or for worse, depending on the level of preparedness it finds in countries of the region. It is a useful tool here, just as it is elsewhere, and will find application in these nascent economies. The greatest contribution to make is to humanize the technology and to be ready to mitigate any apparent negative effects. African countries should first learn about its impact and then be part of the development of AI going forward, as the most effective way of ensuring that the ethics the machines will learn, and the data they will learn from, are inclusive of humankind’s geographical variations.

Now that AI learns, we ought to teach it the correct way to ‘think’ through ethical issues so as to retain harmony in the kind of societies we have described above. It is the responsibility of those working in these areas in Africa to carry out this important training by being part of development, data collection and testing.

References

  1. Introduction to Artificial Intelligence. (n.d.).
  2. Callahan D. (2003). Principlism and communitarianism. Journal of Medical Ethics, 29(5), 287-291.
  3. Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3518482
  4. How Cambridge Analytica poisoned Kenya’s democracy. The Washington Post. (n.d.).
  5. (2020). Information Technology Ethics: Cultural Perspectives. Google Books.
  6. Mutula S M. (2013). Dimensions of the Information Society: Implications for Africa.
  7. (2020). Preliminary study on the Ethics of Artificial Intelligence. UNESCO Digital Library. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000367823
  8. Russell S J, Norvig P. (2003). Artificial Intelligence: A Modern Approach. Pearson Education.
  9. Schmiljun A. (2019). Moral Competence and Moral Orientation in Robots. Ethics in Progress, 10(2), 98-111.
  10. Senate Report Affirms Russia Interfered in 2016 Election. Time. (n.d.).
  11. UNESCO Digital Library. (n.d.).
