A smartphone with Facebook's logo is seen in front of a display of the company's new rebrand logo, Meta.

In the complex task of curbing misinformation, hate speech, and harmful content, Facebook often faces backlash from users, governments, and advocacy groups alike.

Drawing on a presentation by the Meta Oversight Board’s Afia Asantewaa Asare-Kyei and Abigail Bridgman during the 11th edition of the Forum on Internet Freedom in Africa (FIFAfrica) in Dakar, Senegal, this article examines why content moderation on Facebook is so controversial, exploring the multifaceted challenges the platform encounters and the reasons it seems perpetually under scrutiny.

Social media platforms have revolutionized how people communicate, access information, and shape public opinion, with Facebook still standing out as a global giant connecting billions of users across the world. However, this visibility and influence make it a frequent target of criticism over its content moderation policies.

Facebook’s sheer scale is its first major challenge. With over 2.9 billion monthly active users sharing photos, videos, posts, and comments, the volume of content generated every minute is staggering. This massive flow makes identifying and moderating harmful content a daunting logistical challenge.

Despite advanced algorithms and a large team of human moderators, it’s challenging to filter harmful content consistently across languages and cultural contexts.

Moreover, while Artificial Intelligence (AI) can help flag offensive language or images, it’s far from perfect. Nuances such as sarcasm, regional dialects, and cultural symbols can easily be misinterpreted by machines. Consequently, Facebook often relies on human moderators, who, even with extensive training, cannot realistically monitor every piece of content.
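
To see the kind of misinterpretation described above, consider a minimal, hypothetical keyword filter, sketched here in Python. Facebook’s real systems are far more sophisticated, but the toy rules below (all invented for illustration) show why surface-level matching trips over sarcasm, tone, and creative spelling:

```python
# A deliberately naive keyword filter -- illustrative only,
# not Facebook's actual moderation logic.
BLOCKED_TERMS = {"idiot", "trash"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocked term."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKED_TERMS)

# A genuine insult and an affectionate, sarcastic jab both match...
print(naive_flag("You absolute idiot."))          # True (intended catch)
print(naive_flag("Haha you idiot, I miss you!"))  # True (false positive)
# ...while a creatively spelled insult slips through entirely.
print(naive_flag("u r tr4sh"))                    # False (false negative)
```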

Free speech vs safety

One of the most significant sources of contention is Facebook’s role in balancing free speech with user safety. Facebook’s content policies often reflect a commitment to keeping the platform safe by restricting violent, graphic, and hate-filled content.

However, defining the boundaries of what is acceptable can be highly subjective and varies between different regions and cultures. What might be considered satire or political dissent in one country can be perceived as hate speech or subversion in another.

This ambiguity often leaves Facebook open to criticism from all sides. Users may accuse the platform of censoring free speech when posts are taken down, while advocates for safety argue Facebook doesn’t do enough to prevent the spread of harmful content. Governments often add further complications by demanding that Facebook adhere to national standards, sometimes to limit dissenting voices, leading to accusations of censorship.

Facebook’s reach has also made it a powerful tool for political campaigns, sometimes with negative consequences. Election interference, fake news, and the manipulation of voters through targeted misinformation campaigns are increasingly common problems. While Facebook has implemented policies to address these issues, such as flagging misinformation and requiring transparency in political ads, these measures have not completely stemmed the spread of propaganda.

Certain governments apply pressure on Facebook to either restrict or promote specific narratives. In countries with strict censorship policies, governments often demand that Facebook take down content critical of the state, threatening to ban the platform if it doesn’t comply. In more democratic nations, lawmakers may criticize Facebook for failing to curb disinformation that could sway elections. The platform is thus stuck between appeasing regulatory authorities and defending freedom of speech.

Content moderation as a mental health crisis

Another often-overlooked issue with Facebook’s content moderation is its toll on moderators themselves. Human moderators are responsible for reviewing flagged content, which often includes graphic violence, hate speech, and disturbing imagery.

This repeated exposure has been linked to severe mental health impacts among moderators, some of whom develop post-traumatic stress disorder (PTSD) as a result of their work.

To address this, Facebook has increased mental health support for its moderation teams and introduced stricter safety protocols. Yet concerns remain, as these moderators must still work through sensitive and traumatic material to keep the platform safe for other users.

Role of AI and its limitations

Facebook has invested heavily in AI to improve its content moderation efficiency. Algorithms can quickly sift through millions of posts, images, and videos, flagging potentially harmful content for human review. While these AI systems have become increasingly sophisticated, they are still far from perfect.

AI can struggle to interpret the nuances of human communication, often mistaking innocent content for something harmful, or failing to catch subtle forms of hate speech and misinformation.
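
In practice, platforms often mitigate this uncertainty with confidence-based triage: the model acts automatically only when it is very sure, and routes borderline cases to the human review queue described above. The Python sketch below shows that general pattern; the thresholds, scores, and names are assumptions for illustration, not details of Facebook’s actual pipeline:

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumed values, not Facebook's real settings.
AUTO_REMOVE_THRESHOLD = 0.95   # very confident: act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: escalate to a person

@dataclass
class Post:
    text: str
    harm_score: float  # classifier's estimated probability the post is harmful

def triage(post: Post) -> str:
    """Route a post according to the model's confidence."""
    if post.harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"
    if post.harm_score >= HUMAN_REVIEW_THRESHOLD:
        return "human-review"  # lands in the moderators' queue
    return "leave-up"

for p in [Post("...", 0.97), Post("...", 0.70), Post("...", 0.10)]:
    print(triage(p))  # auto-remove, human-review, leave-up
```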

Additionally, AI’s effectiveness varies greatly between languages. While Facebook’s AI handles content in widely spoken languages reasonably well, it often performs poorly in less common ones, leaving some communities vulnerable to harmful content. This language barrier fuels criticism that Facebook’s moderation practices are inconsistently applied and may disadvantage certain regions.

Cultural bias and global misunderstandings

Content moderation on a global platform like Facebook must consider diverse cultural perspectives, yet even with a multinational moderation team, cultural misunderstandings are inevitable. A post that appears harmless in one cultural context might be deeply offensive in another.

Facebook has attempted to address this by hiring moderators from various backgrounds and regions, but cultural nuances are hard to standardize.

This lack of cultural sensitivity has led to accusations that Facebook’s content moderation policies are Western-centric. Users from other regions sometimes feel that the platform doesn’t take their cultural and social norms into account, leading to instances where content deemed appropriate by local standards is removed, sparking further backlash.

Struggles with misinformation and extremism

The platform has been criticized for failing to curb misinformation and extremism adequately, particularly during sensitive events such as elections and public health crises.

For example, during the COVID-19 pandemic, Facebook came under fire for allowing anti-vaccine misinformation to spread widely. Similarly, extremist groups have used the platform to recruit followers and spread dangerous ideologies.

Facebook has implemented fact-checking programs, partnered with independent organizations, and introduced tools to flag potential misinformation. However, these measures are only partially successful, as misinformation can still slip through the cracks. Additionally, the speed at which misinformation spreads online often outpaces the response time of moderators or fact-checkers, meaning that harmful content can reach large audiences before it’s flagged or removed.

Why Facebook remains under attack

Facebook is constantly navigating the fine line between openness and restriction. This delicate balancing act means it is often under attack from multiple sides: free speech advocates, user safety groups, regulatory authorities, and everyday users.

Its global reach amplifies every misstep, often making Facebook’s content moderation efforts appear inconsistent, culturally biased, or inadequate.

Moreover, Facebook’s approach to content moderation has come to symbolize a much larger debate on the role of tech giants in managing free expression online. As a private company, Facebook has the right to set its own community standards, but as a global platform with immense influence, it also has a responsibility to keep its users safe. This dual role adds complexity to every decision the platform makes.

Future of content moderation on Facebook

As Facebook continues to evolve, it will likely need to innovate further in content moderation technology and policy to meet the increasing demands of a diverse user base.

This might include refining AI tools to better understand cultural nuances, expanding human moderation teams, and enhancing transparency around moderation decisions.

While it’s unlikely that Facebook will ever satisfy all users fully, a commitment to continuous improvement, clear policies, and accountability could help mitigate some of the criticisms it faces. Content moderation on a global platform is inherently challenging, and the issues Facebook faces reflect the broader challenges of moderating a digital public square in an increasingly polarized world.

Facebook’s struggles with content moderation highlight the complex and often controversial role that social media platforms play in modern society. Balancing free speech with the need to protect users from harmful content is a delicate task that requires ongoing adaptation and sensitivity to cultural diversity.

While Facebook is frequently under attack for its perceived failures, its efforts also represent the unprecedented challenge of moderating content on a global scale.