Tackling abuse on social media

Reports of social media abuse have increased dramatically during the pandemic. With social media giants under more scrutiny than ever before, they have begun to tackle abuse. This blog explores the ways they aim to keep us, and our children, safe from abuse online.

Covid-19: a rise in social media abuse, leaving women and children at risk of harm

The pandemic and its subsequent lockdowns have increased online consumption as well as harmful behaviour online, from trolling to hate speech and sexual exploitation. Since the start of the pandemic, “almost 1 in 2 (46%) women and non-binary people reported experiencing online abuse”, with 29% of those who had already experienced online abuse reporting that it had worsened; 84% of respondents experienced online abuse from strangers, and most of the abuse took place on mainstream social media platforms (Twitter 65%, Facebook 29%, Instagram 18%), leaving women at higher risk of online abuse. Furthermore, the popularity of sites such as OnlyFans, and their promotion on platforms like Instagram and Twitter during the pandemic, has had an impact on child sexual exploitation. The Internet Watch Foundation’s 2020 report showed a 44% increase in the number of self-generated indecent images produced by children, especially amongst 11 to 13-year-olds, while “the NSPCC found that during the first three months of the UK’s 2020 lockdown, Instagram was used in 37% of recorded cases of sexual communication with a child.”

Are social media giants finally tackling online abuse?

Social media companies are operating under intense scrutiny: instances of abuse towards minorities are rising, white-supremacist ideals are spreading, and, according to Amnesty, half of the women “that reported harassment on Facebook [did not receive] a response or [had] their claim fall short”. Media giants have thus “discussed creating a single, unified method of reporting abuse in a bid to reduce the onslaught of gender-based violence on social media”. Facebook has also changed its algorithms, after a “2016 internal company report cited by The Wall Street Journal found that a majority of users who joined extremist groups did so because they had been recommended by Facebook”.

Instagram has clamped down on the risk of child sexual abuse, announcing in a tweet that it will forbid adults from DMing underage users who do not follow them, with 13 to 18-year-olds receiving “messages advising them to be careful sharing photos, videos, or information with someone you don’t know” and getting notifications whenever they interact with an adult who “has been exhibiting potentially suspicious behavior”. Instagram has also partnered with ConnectSafely and The Child Mind Institute to deliver a new parental guide that “includes tools, tips, resources, [and] conversation starters for parents and teens.”

An Online Abuse Bill that leaves too much leeway

As welcome as these updates are, are they sufficient? The government recently tackled online abuse in its white paper; however, many deemed it too vague, leaving too much leeway for companies. In it, the government announced that it will fine companies that fail to comply up to “£18 million or ten per cent of annual global turnover” and block non-compliant services from being accessed in the UK. It placed social media companies, such as Facebook, TikTok, Instagram and Twitter, in Category 1, meaning they will need to set and enforce clear terms and conditions regarding legal but harmful content, that is, content that can cause physical or psychological harm, and publish transparency reports about the steps they are taking to tackle online harms.

Without properly defining the terms of this paper, the government fails to address the support needed for victims of online hate. “Charities Against Hate (CAH), a coalition of more than 40 leading charities, has urged social media platforms” to deliver stricter penalties and to tackle online hate, misinformation and dangerous conspiracies. Over the past year, then, we have seen efforts to tackle abuse as the numbers have risen, targeting women and children in particular, and especially those of colour, but more needs to be done.

At JAN Trust, we consider the issue of online harm very important to our community and recognise how disproportionately it affects women and children. In 2012, we published a report, ‘Internet Extremism: Working Towards a Community Solution’, which highlighted the need for education about the internet and its dangers, and what can be done to protect young people online. You can also find out more through our Web Guardians™ programme or Another Way Forward™ programme.

Contact us at [email protected]

Toolkit to report abuse to your MP

Report child exploitation online: https://www.ceop.police.uk/Safety-Centre/