Facebook reveals rise in proactive removal of harmful content

The social network has published its latest Community Standards Enforcement Report.

Facebook has said it is now proactively removing more harmful and abusive content than ever.

The social network made the announcement as it published its latest Community Standards Enforcement Report.

The company said between July and September this year, it removed around 11.6 million pieces of content related to child exploitation and abuse, up from around 5.8 million in the first three months of the year.

Facebook revealed that 99% of this content had been proactively detected by its technology, and reported similar levels of detection for content related to terrorism, suicide and self-harm.

The social networking giant also published figures on the enforcement of its rules on Instagram for the first time.

The photo-sharing platform, which is owned by Facebook, said in the three months between July and September it removed more than 2.4 million pieces of content related to child exploitation, regulated goods and terrorist propaganda, all of which had been proactively detected at a rate above 90%.

Facebook said it had also removed 845,000 pieces of content related to self-harm and suicide from Instagram during that period; however, the proactive detection rate for this type of content was lower, at 79.1%.

The social network’s vice-president for integrity, Guy Rosen, said a variety of factors meant detection rates differed across the firm’s apps.

“While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services,” he said.

“There are many reasons for this, including the differences in the apps’ functionalities and how they’re used – for example, Instagram doesn’t have links, reshares in feed, pages or groups; the differing sizes of our communities; where people in the world use one app more than another, and where we’ve had greater ability to use our proactive detection technology to date.

“When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.”

Instagram, in particular, has been heavily criticised over the presence of self-harm content on its platform in the past, notably in the wake of the death of teenager Molly Russell, who took her own life after viewing graphic content on the platform.

Her father, Ian, who has since set up a foundation in his daughter’s name and become a campaigner for online safety, has said he believes Instagram was partly responsible for her death.

Instagram has since introduced a number of new restrictions around self-harm content.

Mr Rosen added that Facebook would continue to invest in artificial intelligence (AI) to further improve its detection programmes.

“Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work,” he said.

“In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress.

“The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues.

“In fact, recent advancements in this technology have helped with rate of detection and removal of violating content.”