Facebook announces latest enforcement figures on content removals, fake news and hate speech
Social media platform Facebook has issued its latest Community Standards Enforcement Report, which details the content removals and enforcement actions the platform carried out through the second quarter of 2021.
The report includes some notable facts about key trends and the progress of Facebook's detection systems. Firstly, Facebook says that it removed more than 20 million pieces of content from across Facebook and Instagram for violating its rules against COVID-19-related misinformation.
"We have removed over 3,000 accounts, pages, and groups for repeatedly violating our rules against spreading COVID-19 and vaccine misinformation. We also displayed warnings on more than 190 million pieces of COVID-related content on Facebook that our third-party fact-checking partners rated as false, partly false, altered or missing context," Facebook noted.
Countering such misinformation is critical to maximizing vaccine take-up as the roll-out continues, and given Facebook's vast reach, this is a crucial issue for the company to focus on. Of course, Facebook has been chastised for facilitating the spread of health misinformation in the first place, but the figures show that the company is attempting to combat these issues.
In terms of other major developments, Facebook claims that its efforts to combat hate speech are yielding positive results:
"Prevalence of hate speech on Facebook continued to decrease for the third quarter in a row. In Q2, it was 0.05%, or 5 views per 10,000 views, down from 0.05-0.06%, or 5 to 6 views per 10,000 views in Q1."
So, even if it can't improve its exposure rates on all fronts, Facebook is reducing visibility for violating, offensive content. In practice, it's hard to say what that implies because, for the most part, Facebook's raw enforcement numbers are largely unchanged in most areas, even as its usage counts steadily increase.
You'd expect total enforcement metrics to grow as a result, yet aside from these notable exceptions, most have remained stable despite the reported improvements.
Is this a sign that Facebook is getting better, getting worse, or staying the same at spotting violations? It's tough to say, but Facebook is taking action on a large volume of content and catching many infractions before they reach the public eye.