Are we teaching technology to be racist?

Artificial Intelligence systems are learning to replicate and codify systemic biases, posing problems for the fight against racial injustice.

Artificial intelligence, or AI, is becoming an increasingly prominent part of our everyday lives. Programmes that integrate AI into areas like healthcare, education, and criminal justice are cropping up everywhere, promising to streamline and simplify different elements of daily life.

However, organisations and digital activists like the Algorithmic Justice League are drawing attention to how AI systems can amplify racism, sexism, and other forms of discrimination.

AI is infamous for its failure to grasp the complexities of race. It is often unable to detect non-White faces, to differentiate one Black face from another, or to detect issues like skin cancer in darker skin. However, AI has also been shown to have “learned” racial biases, whether or not this was the intention of the programmer. For example, AI has been shown to perpetuate racial inequalities in the American healthcare sector by assigning lower risk scores to Black people than to their equally sick White counterparts, restricting their access to specialised care.

Unsurprisingly, the biggest area of concern for the potentially racist effects of AI is policing and the criminal justice system.

The failures of AI to differentiate between different Black people, and the feedback loop that can occur when algorithms are based on already racially biased police records, mean that AI can exacerbate pre-existing racial disparities in the criminal justice system. For example, research in the United States has demonstrated the problems with AI programmes used to help judges by predicting the likelihood of arrested individuals reoffending, with “risk factor” programmes flagging Black defendants at almost twice the rate of White defendants (45% compared to 24%). Similarly, programmes designed to detect likely “crime hotspots” disproportionately identify neighbourhoods mostly inhabited by BAME communities.

Facial recognition AI is the biggest worry. Aside from the worrying implications that the use of facial recognition for police and government surveillance has for human rights more generally, facial recognition technology has been shown to exacerbate systemic racism by disproportionately misidentifying BAME communities — particularly Black people — as suspects of crime, in some cases with inaccuracy rates as high as 95%. If you’re Black, you’re more likely to be subjected to the technology — and the technology is more likely to be wrong.

AI therefore risks entrenching the racial bias that already exists within the police system, or even making it worse, especially if facial recognition is integrated into body camera technology, which could allow police officers to arrest individuals, or even commit acts of violence, without the level of scrutiny usually involved in an arrest. This presents a major hurdle in the fight for racial justice in law enforcement. Efforts to tackle systemic biases and preconceptions amongst police are made more difficult when those same biases have become codified into the technology that police use; at the same time, false arrests and acts of police brutality can shift blame away from the humans and onto the algorithm. As organisations like Amnesty International have been quick to point out, AI risks being weaponised by police against already marginalised communities, presenting a new challenge for activists fighting police racism.

So, what is the cause of this robot racism? Obviously, AI technology is not inherently racist — computers cannot become biased on their own. AI is created by humans, with human input, and as a result cannot be truly neutral or free from the influence of racist attitudes and biases.

Rather, racism in AI reveals the “coded gaze”: the potentially discriminatory preferences of programmers. This is compounded by the underrepresentation of BAME communities in the technology industry — for example, less than 2% of employees in technical roles at Google are Black — which means that the data sets that AI software “learns” from are generally not diverse. AI is unable to detect and differentiate Black faces, or privileges White ones, because it simply doesn’t “know” enough about people of colour.

At the same time, AI absorbs and uses data that is itself grounded in racism. For instance, programmes that “learn” from previous crime reports and police records will automatically “learn” to associate crime hotspots with already overpoliced BAME communities, locking law enforcement into a feedback loop when police then act on this data.
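This feedback loop can be illustrated with a deliberately simplified toy simulation (the neighbourhood names, numbers, and allocation rule below are hypothetical, not drawn from any real system): two areas have the same underlying crime rate, but one starts with more recorded crime because it was historically policed more heavily. If an algorithm then directs patrols to wherever the records are highest, every new detection lands in the same place, and the disparity in the records grows regardless of where crime actually happens.

```python
# Toy sketch of a predictive-policing feedback loop.
# ASSUMPTIONS: both areas have the SAME true crime rate (10%);
# area "B" merely starts with more *recorded* crime due to past
# over-policing. All figures are illustrative, not real data.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.1            # identical in both neighbourhoods
recorded = {"A": 10, "B": 30}    # B is over-represented in historical records

for year in range(20):
    # The algorithm flags the area with the most recorded crime as the
    # "hotspot" and concentrates all patrols there.
    hotspot = max(recorded, key=recorded.get)
    # More patrols mean more crimes *detected*, not more crimes committed:
    # 100 patrol encounters, each detecting a crime with the true rate.
    detected = sum(random.random() < TRUE_CRIME_RATE for _ in range(100))
    recorded[hotspot] += detected

# Area A's record never grows (no patrols ever return there), while B's
# record balloons, "confirming" the original bias in the training data.
print(recorded)
```

The point of the sketch is that the divergence is produced entirely by where detection effort is sent, not by any difference in offending, which is why acting on historical records can manufacture the very pattern the algorithm claims to predict.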

As a result, activists have stressed the need for “equitable and accountable” AI in order to break the cycle of humans and technology teaching each other to be racist. As AI becomes a more central feature of our lives, it is vital that our anti-racist efforts extend to the technology that we create — to quote the Algorithmic Justice League, “racial justice requires algorithmic justice”.

At JAN Trust, we recognise the systemic nature of racism in our society, and the need for anti-racist efforts that undermine racial biases in order to challenge racial disparities.