Big audience? Big responsibility
With Facebook being so popular worldwide, it is not surprising that its parent company Meta removes millions of pieces of content every quarter for violating its community standards. In the second quarter of 2023, 18 million pieces of hate speech were actioned on Facebook, while the corresponding quarter of 2021 saw a record 31.5 million pieces of hate speech actioned on the platform. Additionally, between April and June 2023, 1.8 million pieces of violent and graphic content were removed, as well as 1.5 million pieces of bullying and harassment-related content.

Generation Z favorite TikTok also removes millions of pieces of content every quarter, which is particularly concerning given its young audience base. Between January and March 2023, 30.6 percent of removed TikTok videos were taken down for minor safety reasons, while 27.2 percent were removed for illegal activities and regulated goods. As for Snapchat, another app beloved by Gen Z, the largest number of content and account reports involved violations for sexually explicit content.
Problematic content appears on all types of social media
It is not just the social media giants that struggle with content moderation, as fake engagement and misinformation find their way across the online environment. LinkedIn, a social networking platform for business professionals, removed over 204 thousand pieces of content containing harassment or abuse in the second half of 2022. In addition, 137 thousand posts containing misinformation were taken down. The employment-oriented site received 23 government requests worldwide for content removal between July and December 2022, of which 87 percent were actioned.

Social news aggregation and discussion site Reddit reported that the most common ground for subreddit removal was spam, with over 780 thousand subreddits removed for this reason in 2022. Additionally, 551 thousand communities were taken down by Reddit administrators due to a lack of active moderation.
Can social media be a safe space?
Content moderation and removal are necessary to make online environments safer, but with so many users and posts, not all problematic content is addressed, and little oversight exists to monitor this. The Digital Services Act (DSA), an EU regulation covering all major social networks with over 45 million users, aims to enforce stricter rules against illegal and harmful content in the EU and to impose fines if companies fail to comply. The act proposes that online services must delete hate speech and false information within 24 hours and remove accounts that regularly post such content.

Additionally, the UK's Online Safety Bill seeks to set tougher standards for social networks. Akin to the Digital Services Act, it will make social media companies responsible for preventing harmful content from being published, or removing it as quickly as possible, as well as for ensuring that age limit regulations on social media services are enforced.