Artificial intelligence (AI) was once the stuff of science fiction. Now it is spreading fast: it is used in mobile phone technology and motor vehicles, and it powers tools for agriculture and healthcare.
But concerns have emerged about the accountability of AI and related technologies such as machine learning. In December 2020, the computer scientist Timnit Gebru was fired from Google’s ethical AI team. She had previously sounded the alarm about the social effects of bias in AI technologies. For example, in a 2018 article, Gebru and fellow researcher Joy Buolamwini showed that facial recognition software is less accurate at identifying women and people of color than white men. Biases in training data can have large and unintended effects.
There is already a large body of research on ethics in AI. It highlights the importance of principles for ensuring that technologies do not entrench existing prejudices or introduce new social harms. As the UNESCO draft recommendation on the ethics of AI puts it:
We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.
In recent years, many frameworks and guidelines have been created to identify the goals and priorities of ethical AI.
These are certainly steps in the right direction. But it is also essential to look beyond technical solutions when addressing issues of bias or inclusiveness. Bias can also enter through who frames the objectives and balances the priorities.
In a recent article, we argue that inclusiveness and diversity also need to operate at the level of identifying values and defining the frameworks of what counts as ethical AI in the first place. This is particularly relevant when considering the growth of AI and machine learning research on the African continent.
Research and development of AI and machine learning technologies are growing in African countries. Programs such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its IndabaX satellite events, which have so far been held in 27 different African countries, illustrate the interest and human investment in these fields.
The potential of AI and related technologies to promote opportunities for growth, development and democratization in Africa is a key driver of this research.
Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide this research. This might not matter if the principles and values of these frameworks had universal application. But it is not clear that they do.
For example, the European framework AI4People offers a synthesis of six other ethical frameworks and identifies respect for autonomy as one of its key principles. This principle has been criticized in the applied ethical field of bioethics, where it is seen as failing to do justice to the communal values common across Africa. These values focus less on the individual and more on the community, and may even require exceptions to be made to such a principle to allow effective interventions.
Challenges like these – or even the recognition that there might be such challenges – are largely absent from discussions and frameworks for ethical AI.
Just as training data can entrench existing inequalities and injustices, so too can a failure to recognize that sets of values may vary across social, cultural and political contexts.
Moreover, failure to take into account social, cultural and political contexts can mean that even a seemingly perfect ethical technical solution can be ineffective or misguided once implemented.
For machine learning to make useful predictions, a learning system must have access to training data: samples of the data of interest, with inputs in the form of features (characteristics or measurements) and outputs in the form of the labels scientists want to predict. In most cases, choosing these features and labels requires human knowledge of the problem, and a failure to properly account for the local context can lead to underperforming systems.
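The point can be illustrated with a minimal sketch. Everything here is hypothetical: the data are synthetic, the two "groups" simply stand in for contexts with different underlying patterns (for instance, regions with different building materials), and the "model" is a fixed rule standing in for one fitted to the majority of the training data.

```python
# Hypothetical illustration: a model shaped by data from one context
# can fail on another context it barely saw during training.
import random

random.seed(0)

def make_samples(n, group):
    # Each sample: two numeric features plus a binary label.
    # Group "b" follows a different pattern from group "a", standing in
    # for a local context the training data under-represents.
    samples = []
    for _ in range(n):
        if group == "a":
            x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
            label = 1 if x1 + x2 > 0 else 0
        else:
            x1, x2 = random.gauss(3, 1), random.gauss(3, 1)
            label = 1 if x1 - x2 > 0 else 0
        samples.append(((x1, x2), label))
    return samples

# Training data drawn almost entirely from group "a".
train = make_samples(500, "a") + make_samples(10, "b")

def predict(x):
    # A trivial stand-in for a model fitted to the majority pattern
    # (no real training step here; the rule matches group "a" exactly).
    x1, x2 = x
    return 1 if x1 + x2 > 0 else 0

def accuracy(samples):
    return sum(predict(x) == y for x, y in samples) / len(samples)

test_a = make_samples(200, "a")
test_b = make_samples(200, "b")
print(f"accuracy on group a: {accuracy(test_a):.2f}")
print(f"accuracy on group b: {accuracy(test_b):.2f}")
```

On the majority group the rule is near-perfect; on the under-represented group it does little better than chance, even though nothing in the code is "wrong" in a narrow technical sense.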
For example, cell phone call recordings have been used to estimate the size of populations before and after disasters. However, vulnerable populations are less likely to have access to mobile devices. So this kind of approach might yield results that are not helpful.
Likewise, computer vision technologies to identify different types of structures in an area are likely to perform less well when different building materials are used. In both of these cases, as we and other colleagues discuss in another recent article, ignoring regional differences can have profound effects on everything from the delivery of disaster relief to the performance of autonomous systems.
AI technologies should not simply entrench or aggravate the problematic aspects of today’s human societies.
Sensitivity and inclusiveness across different contexts are essential to designing effective technical solutions. Equally important is not assuming that values are universal. Those developing AI need to start including people from different backgrounds: not only in the technical work of designing datasets and the like, but also in defining the values that frame and set objectives and priorities.
Mary Carman, Lecturer in Philosophy, University of the Witwatersrand and Benjamin Rosman, Associate Professor, School of Computer Science and Applied Mathematics, University of the Witwatersrand.
This article is republished from The Conversation under a Creative Commons license. Read the original article.