Meta's Changes: More Speech, Fewer Mistakes (2025)

As of 2025, Meta has 3.07 billion monthly active users. With such vast reach, Meta’s newly proposed changes to enhance free speech online carry significant global implications. In this legal update, we review some of Meta’s proposed changes and highlight practical implications for policymakers.

The Rationale for the Proposed Changes

Essentially, Meta is admitting that its current content moderation systems (AI and human-led) for protecting users online have led to the suppression of content that should not have been suppressed. In Mr. Zuckerberg’s own words, “we have reached a point where it’s just too many mistakes and too much censorship.” For this reason, Meta is introducing changes which it believes will allow for more freedom of speech online, allowing more people to share their ideas and beliefs.

Commentary

Whilst Meta’s proposed changes raise some concerns regarding online safety, there are a number of reasons why Meta’s pro-speech stance warrants further consideration. For one, both the Online Safety Act and the Digital Services Act impose clear obligations on platforms to safeguard users’ right to freedom of expression.[1]

Additionally, the Online Safety Act requires platforms providing category 1 services to ensure the free expression of “content of democratic importance”.[2] “Content of democratic importance” is defined as content which “is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom”.

In light of these legal requirements, it appears that platforms like Meta carry a legal responsibility to uphold free speech that is just as significant as their responsibility to protect users’ safety online. These legal requirements also lend some credibility to Meta’s decision to adopt a more speech-friendly approach.

Additionally, the legal obligations highlighted above suggest that the suppression of speech that should otherwise be allowed may itself constitute a form of online harm. While this form of censorship is not typically considered a form of online harm in practice[3], it can nonetheless be harmful, especially when viewed from the broader perspective of harm to society, democracy and individual rights.


However, while the rationale for prioritizing free speech is clear, it is equally important to scrutinize the potential implications of Meta’s proposed changes, as they may also introduce specific risks that could undermine user safety. The following sections explore these concerns in greater detail.

Replacing Fact Checkers with Community Notes

Facebook intends to replace third-party fact-checkers with a community notes feature, which allows users to fact-check online content by adding notes to correct potentially misleading posts. While we expect that Meta will deploy the community notes feature only in connection with online misinformation, a few points are worth noting about a content moderation strategy that intends to rely wholly on a community notes feature to address misinformation generally.

  1. Good Cop vs. Bad Cop: The community notes feature relies on the assumption that there are just as many users motivated by truth to counter misinformation as there are users who spread disinformation or misinformation. This assumption may not hold, as users may lack the practical motivation to counter misinformation. Other than an individual sense of purpose, there are no tangible rewards or incentives for making a community note, making it less likely for users to participate.
  2. Limited Reach: It appears that community notes are visible only on posts with active contributions, meaning that many other misleading posts may go unchecked.
  3. Barriers to Community Noting: The current design of the community notes feature introduces a preliminary hurdle that may discourage participation. For instance, a user who wishes to counter misinformation on X must first rate existing notes and achieve a rating impact of 5, before they can write their own notes. While the rationale behind this design appears to be the need to ensure the credibility of contributors, this preliminary requirement can diminish the effectiveness of the community notes feature as a tool for quickly and efficiently combating misinformation.
  4. Demonstrating Compliance: There is some doubt as to whether replacing its third-party fact-checking program with the community notes feature will be adequate to combat misinformation which results in illegal content or even co-ordinated disinformation campaigns. On the other hand, retaining professional fact-checkers could provide platforms with an additional layer of protection and help platforms demonstrate compliance with legal obligations to counter illegal content proactively.

There is probably a case for reviewing and improving Meta’s third-party fact-checking program. However, the combination of points 1 to 4 above suggests that: (i) the community notes feature may not be ideal for addressing all types of online harm; (ii) it may be prudent to introduce the community notes feature as an addition to Meta’s online safety toolkit, and not as an alternative; and (iii) users are more likely to use the community notes feature when there is a tangible incentive for accurate contributions, as rewards can motivate participation and enhance the quality and timeliness of contributions.

Policy Enforcement

Mr. Zuckerberg is a software expert, so his claim that Meta’s content moderation system is making too many mistakes carries some weight.[4] However, policymakers may find it challenging to objectively assess Meta’s claims in the absence of supporting data from Meta.

From a policy standpoint, a critical question arises: does the potential harm to online users posed by harmful content outweigh the risks associated with restricting content suspected to be harmful? In other words, can the justification for some level of censorship or content restrictions be considered sufficient when weighed against the principle of free speech? Alternatively, should the right to free speech take precedence, even if it allows for the possibility of harmful content being encountered?


The tension between protecting users online and protecting free speech online is not easily resolved. One thing is clear – each jurisdiction will have to decide where to draw the line, balancing these competing values in ways that reflect their legal, cultural, and societal priorities.

Final Comments

Overall, Meta’s proposed changes to its content moderation system raise important questions about the future of online speech and user protection. While tools like community notes offer a novel approach, they also come with limitations that may hinder a platform’s ability to meet legal requirements and to comprehensively protect users. Inevitably, the future of online platforms will depend on finding the balance between user protection and the free exchange of ideas.

[1] See generally, section 22 of the Online Safety Act and Article 34 of the Digital Services Act. There are fairly similar provisions under Nigeria’s regulations covering online safety and platform liability.

[2] See Section 17(2) of the Online Safety Act. 

[3] The traditional categories of online harm include CSAM, online frauds and scams, incitement to violence, hate speech & hate crime, cyberbullying, radicalization and terrorist content, to mention a few.

[4] Based on Meta’s claim that its current content moderation system is making “too many mistakes”, Meta is also getting rid of restrictions on topics like immigration and gender, to allow for more speech online. Meta’s updated filtering strategy will now focus on illegal and high severity violations, with lower-severity issues handled on a case-by-case basis, based on user reports.
