Can regulators mitigate the human rights risks stemming from AI?
by Capucine May
Artificial intelligence (AI) systems are rapidly growing in complexity and capability. The sheer computing power at the heart of these systems is such that, if harnessed positively, they have the potential to drive radical advances in fields such as medicine, education and green technology. But if they are irresponsibly directed, they could undermine human rights, both upstream (in the development of the tools) and downstream (in their use).
Risks to human rights from AI include some that are familiar, such as inequality, discrimination, and labour rights violations. But the rise of AI also raises a spectrum of new rights challenges, such as threats to the right to privacy through identity theft and deep fake technologies.
These rights issues are complicated further by the disproportionate control of these technologies by a handful of large and powerful companies, whose motivations are primarily commercial. Coupled with the relative lack of expertise within regulatory bodies, this raises questions as to who will benefit from the rise of AI, and who will lose out.
New technologies, new human rights problems?
While some of the social issues associated with AI are relatively well known, such as when ingrained biases within data make their way into AI models, others are flying under the radar.
For example, these technologies raise fundamental questions around the right to privacy, and what violations of that right might look like. In some areas these violations are clear cut, such as the use of AI facial recognition systems to surveil the Uyghur minority in China. Such violations will tend to be more prevalent in jurisdictions with weak democratic credentials and, accordingly, lower privacy standards (see figure 1 below).
Already, we have seen the rapid rollout of facial recognition and biometric identity AI systems across Africa, particularly as part of the China-backed Digital Silk Road, under which smart cities have mushroomed across the continent. In Zambia and Uganda, AI has been used to monitor political opposition and dissenting individuals, illustrating the risk such systems pose to privacy and freedom of expression.
However, more subtle uses of AI, such as emotion recognition to manipulate consumer behaviour, deepfake technology, and sophisticated identity theft, will further challenge our traditional approach to the right to privacy.
Figure 1: Countries in Asia and Africa are already most at risk of privacy violations, a disparity that AI could exacerbate further
At the same time, while AI is often praised for reducing the need for human input in repetitive tasks, models still rely on humans for data labelling, a process frequently outsourced to countries such as India, Kenya, the Philippines, and Uganda. All of these countries are rated extreme or high risk on our labour rights indices (see figure 2 below).
Data labellers often work in poor conditions for low wages and are exposed to traumatising content with limited access to mental health services. This neatly illustrates the negative impact AI can have on labour rights in upstream supply chains.
Regulators playing catch-up
With so many facets of human rights at risk, the big question is: where are the regulators? While the Chinese government has recently called for greater state control of AI to counter emerging ‘national security threats’, this is less an attempt at balanced regulation and more a demand to concentrate the power of AI in the hands of the state, raising yet more serious human rights concerns.
The US is being urged by some AI experts to consider regulation, but currently seems a long way from doing so. The EU is the only major regulator thinking seriously about this issue. Its AI Act – the first draft of which includes a requirement for deployers of AI systems to carry out a ‘fundamental rights impact assessment’ and mitigate identified risks – entered the final stages of EU negotiations in June, with a view to enactment by the end of 2023.
But as AI tools are rolled out around the world, piecemeal or regional regulations are unlikely to be effective; these technologies require concerted global action. Yet regulators are struggling to keep up with AI innovation. Authorities often have inadequate knowledge of the systems they are seeking to regulate, and assume AI can be regulated in a traditional, linear fashion. This lack of deep-rooted subject-matter expertise could thwart forward-looking – rather than reactive – regulation, leaving policymakers playing a perpetual game of catch-up.
Without regulation, companies could embed technological barriers and reinforce existing uneven power relations with civil society, undermining the inclusivity of progress and the social circular economy. In this case, grassroots organisations and communities could struggle to remain part of the conversation – increasing the risk of human rights being side-lined.
Human rights key to AI’s progress
We have barely begun to understand the possibilities for progress that these technologies could confer on us, which is why smart regulation is so important. If the world fails to control and direct the future development of AI, there are myriad ways it could threaten hard-won freedoms.
The conversation around AI’s role in society is clearly one whose time has come, and about which we will be hearing far more frequently in the coming years. Human rights must sit at the heart of that discussion.