Facebook’s Oversight Board: long-awaited accountability or smokescreen for culpability?

How trusting or sceptical should we be of Facebook’s Oversight Board and its decisions?

In response to years of criticism of its algorithms and its decisions on what content should be allowed, Facebook established its own Oversight Board: a sort of independent appeals body that would review controversial past decisions and publish fully transparent rulings.

Though still a fairly new organisation — or perhaps because it is still so young — the Oversight Board has already been beset by the same controversy to which Facebook itself has long been subject.

Recently, the Board, whilst upholding Trump’s ban from Instagram and Facebook for promoting violence in January, lambasted Facebook for applying a “vague, standardless penalty and then referring this case to the Board to resolve… [thereby seeking] to avoid its responsibilities”. Having been provided with a recommended set of criteria for future decisions concerning leading political figures, Facebook is now required to review and amend its decision to impose an indefinite ban on the former President’s social media activity.

Donald Trump remains banned, and Facebook must re-examine the indefinite ban. It would be easy to assume that most sides are satisfied and that the decision has been widely hailed as a turning point in Facebook’s contentious relationship with the right to free speech.

Not quite.

Trump reacted viciously to news of his continued ban, and sceptics of the organisation suggest that its focus on widely reported, case-specific issues distracts from the systemic problems with Facebook, such as the infamous algorithm and the sheer power wielded by the social media giant. Critics have noted that, whilst the Trump decision went into detail on the ban itself, exactly how Facebook’s algorithm functions, and how it may promote extremist content, remains opaque.

The independence of the Oversight Board has also been called into question: members are paid by Facebook and, whilst the panel includes experts in their respective fields, Facebook’s strongest critics are conspicuous by their absence.

An examination of the Board’s other decisions reveals understandable reasons for the lukewarm reaction.

One of the Board’s first rulings overturned Facebook’s removal of a post targeting Muslims under the hate speech policy, on the grounds that the content was offensive but did not amount to hate speech.

The policy itself defines hate speech as:

a direct attack against people on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.

An attack is defined as:

violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation

So, what was the issue?

The Board found that a post suggesting there was “something wrong with Muslims psychologically” for reacting strongly to Islamophobia in France but not in China was offensive but, “taken in context”, not hate speech, reasoning that “it did not advocate hatred or intentionally incite any form of imminent harm”. Whilst the decision is understandable, it also reveals a slew of inconsistencies and potential blind spots.

The Community Standard quoted above does not actually require “threats against identifiable individuals”, nor does it require the language used to constitute “a significant part of… rhetoric” against a protected characteristic in the relevant country.

Similarly, a distinction is made between “intolerance” (acceptable) and content that is “derogatory or violent”. Intolerance has rarely, if ever, been used in a positive context, and any expression of intolerance almost by definition carries some sense of derogation.

Indeed, referring to an entire religious group’s mental state in negative terms could quite easily be considered dehumanising, a statement of inferiority, or an expression of contempt, disgust, or dismissal. The point about inconsistent reactions could easily have been made, sarcastic or not, without reference to religion. And if we follow the Board’s own approach of considering the context of Myanmar, the extreme vulnerability of Muslims there to persecution should heighten sensitivity to derogatory speech, particularly on Facebook, rather than narrow the definition to only its most common forms.

As an organisation in its infancy with the unenviable job of reviewing controversial freedom of expression decisions, Facebook’s Oversight Board deserves the benefit of the doubt, but there are nevertheless areas where it could clearly improve.

It has been noted, including by the Board itself in the Trump decision, that the Board can easily be used as an excuse for Facebook to avoid making decisions itself. The same criticism could extend to the Board’s own docket: despite the significant prevalence of disinformation and hate in the run-up to the 2020 US elections, “no cases were reviewed before the US elections in November”.

Similarly, the Oversight Board has much less power than first appeared; whilst Facebook is bound to take account of Board decisions, it is not bound to obey them, and there do not appear to be any sanctions available to enforce accountability.

Only time will tell whether Facebook’s Oversight Board develops into a true medium for accountability or whether it becomes a conduit through which Facebook avoids taking ownership of its own culpability. Steps towards the former could include recruiting to the Board experts from specific minority communities, as well as those who have been more critical of Facebook.

Regardless, JAN Trust will remain committed to our work countering extremism and disinformation, particularly on the internet and social media, spearheaded by our CEO, Sajda Mughal OBE, who has extensive expertise in this area and on Islamophobia. We know that online extremism and radicalisation present a major danger to our young people, which is why our pioneering Another Way Forward™ programme galvanises a whole generation of young women against extremism by educating them on hate and radicalisation, and empowering them to speak out and campaign for a better, more tolerant world, both online and offline.