Algorithmic Bias on Meta Platforms: Impact on Left-Leaning Content
September 2025
Introduction
Debates about political bias in social media often focus on claims of “anti-conservative” bias, but emerging evidence suggests the algorithms of Meta’s platforms – Facebook, Instagram, and the newer Threads – may in fact disfavor certain left-leaning content. Algorithmic bias can manifest in two ways: through feed algorithms that amplify some political content over others, and through content moderation systems that unevenly remove or suppress posts. This report examines credible findings from recent years, primarily 2020–2025 – including academic studies, whistleblower revelations, internal documents, and watchdog analyses – to see how Meta’s algorithms and policies affect left- versus right-leaning content. Key findings indicate that right-wing content often enjoys greater visibility and engagement on these platforms, while left-wing or progressive content (e.g. social justice advocacy) experiences higher rates of suppression. Below we detail these trends, with comparisons where data allows, and highlight recent developments in Meta’s approach.
Amplification and Engagement: Do Algorithms Favor the Right?
On Facebook, which uses a highly engagement-driven News Feed algorithm, right-leaning pages consistently outperform left-leaning ones in visibility and interactions. For example, a 2021 analysis by Media Matters tracked millions of Facebook posts and found that right-leaning news pages received far more user engagement than left-leaning pages (mediamatters.org). Over the first three quarters of 2021, right-leaning Facebook pages produced only about 26% of posts in a set of political pages but garnered roughly 47% of all engagements (likes, shares, comments). In contrast, left-leaning pages contributed 14% of posts and received only about 22% of engagements (mediamatters.org). This disparity (illustrated below) suggests that Facebook’s feed algorithm – which rewards content that sparks sharing and discussion – tends to amplify conservative content to a greater degree (mediamatters.org). The engagement-based ranking system favors sensational or emotionally charged posts, a style more often adopted by right-wing outlets, leading to their disproportionate reach. By comparison, left-leaning pages not only post less frequently on Facebook, but their posts also receive relatively fewer interactions – hinting at an algorithmic ecosystem where reaction-inducing content (frequently from the right) is boosted (mediamatters.org).
Figure: A 2021 Media Matters study of 1,773 political Facebook pages found that right-leaning pages garner a far larger share of total interactions relative to their share of posts – an advantage left-leaning pages do not come close to matching. Engagement-driven algorithms amplify content that triggers strong reactions – often benefiting sensational right-wing posts (mediamatters.org).
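To make the scale of the disparity concrete, the short Python sketch below computes an over-representation ratio (share of engagements divided by share of posts) from the Media Matters percentages quoted above; the figures come from that study, and everything else is simple illustrative arithmetic.

```python
# Engagement share vs. post share, using the approximate 2021 Media Matters
# percentages quoted above (the remaining share belongs to pages the study
# classified as nonaligned).
shares = {
    "right-leaning": {"posts": 26, "engagements": 47},
    "left-leaning": {"posts": 14, "engagements": 22},
}

for lean, s in shares.items():
    ratio = s["engagements"] / s["posts"]
    print(f"{lean}: {ratio:.2f}x engagement share relative to post share")

# Output:
#   right-leaning: 1.81x engagement share relative to post share
#   left-leaning: 1.57x engagement share relative to post share
# Right-leaning pages also earned more than twice the left's total engagement
# share (47% vs. 22%) from fewer than twice as many posts (26% vs. 14%).
```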
Multiple studies and audits affirm that there is little evidence of an “anti-conservative” bias in Facebook’s algorithm; if anything, the advantage runs the other way. A report by NYU’s Stern Center for Business and Human Rights found no systematic bias against conservatives and noted that conservative media and politicians often generate more interactions than liberal ones on social platforms (itif.org). Similarly, an independent 2021 study published in PNAS analyzed millions of Twitter (X) feed results and found right-wing politicians’ tweets were algorithmically amplified more than left-wing politicians’ in six of the seven countries studied (itif.org). And during the 2020 U.S. election cycle, a peer-reviewed study observed that right-leaning news content was shared “substantially more” than left-leaning content on social media, giving conservatives an engagement edge online (itif.org). These findings undermine the narrative of social platforms censoring the right – instead suggesting that engagement-centric algorithms (like Facebook’s) inherently boost attention-grabbing conservative posts, often outpacing left-leaning content in reach (mediamatters.org). Facebook itself has argued that this is user-driven, claiming that right-wing populist content simply resonates more by tapping emotions like anger and fear (itif.org). However, critics contend that, intentionally or not, Facebook “promotes conservative sources by design,” as evidenced by certain algorithm tweaks that disproportionately impacted left-leaning outlets (itif.org).
Internal Algorithm Changes: Throttling Progressive News
Perhaps the most striking example of intentional algorithmic bias against left-leaning content was an internal change Facebook made in late 2017. According to a Wall Street Journal investigation, Facebook planned to adjust its News Feed to reduce the prominence of political news, but executives worried the initial change would reduce traffic to right-wing sites more than left-wing sites. In response, CEO Mark Zuckerberg personally approved an alternate algorithm tweak that “affected left-leaning sites more than previously planned,” thereby sparing conservative outlets (businessinsider.com). One casualty was Mother Jones (a progressive magazine), which saw its Facebook referral traffic plummet. Mother Jones later calculated the change cost it $400,000–$600,000 in lost yearly revenue and said Facebook officials had assured them at the time that no targeting was happening (businessinsider.com). In hindsight, however, it emerged that engineers did adjust the feed ranking in a way that throttled progressive news pages more heavily, apparently to avoid angering conservative partners (businessinsider.com; itif.org). Facebook has denied any intent to single out publishers and maintains the change was a broad effort to show fewer politics stories (businessinsider.com). Still, the episode, first revealed in 2020, suggests that Meta’s leadership has been sensitive to accusations of anti-conservative bias – to the point of skewing algorithmic outcomes against left-leaning content.
Further supporting this, Facebook whistleblower Frances Haugen’s 2021 disclosures showed that the company’s algorithms knowingly amplified outrage and misinformation. After Facebook’s 2018 shift to promote “meaningful social interactions” (heavy weighting of comments and shares), internal researchers found the new algorithm was driving up “misinformation, toxicity, and violent content” via re-shared posts (mediamatters.org). They observed that publishers learned to exploit outrage to game the algorithm (mediamatters.org). Importantly, Haugen revealed that Facebook resisted fixes that would reduce viral misinformation if those fixes also risked cutting user engagement (mediamatters.org). In other words, attempts to dial back sensationalism (which often meant curbing hyper-partisan, frequently right-wing content) were often shelved because they might decrease usage metrics. This aligns with an internal culture of caution: Haugen and other insiders alleged that Facebook put profits and growth above neutral enforcement, enabling conservative or extremist content to flourish if it drove engagement (mediamatters.org). Such internal evidence reinforces that while Meta’s official stance is that its platforms are neutral, in practice the algorithms and policy choices have given conservatives a louder megaphone relative to the left. As one Meta official reportedly explained, curbing outrage-bait content disproportionately hit right-wing publishers – a result the company was reluctant to fully embrace (mediamatters.org).
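As a rough illustration of the mechanism described above, here is a minimal Python sketch of an engagement-weighted, “meaningful social interactions”-style ranking score. The weights, field names, and example numbers are hypothetical stand-ins chosen for illustration – not Meta’s actual formula – but they show why reaction-heavy posts outrank calmer ones.

```python
from dataclasses import dataclass


@dataclass
class Post:
    likes: int
    comments: int
    reshares: int


# Hypothetical weights for illustration only (not Meta's real values).
# The key property: comments and reshares count far more than likes,
# so posts that provoke argument and re-sharing rank higher.
WEIGHTS = {"likes": 1.0, "comments": 15.0, "reshares": 30.0}


def engagement_score(post: Post) -> float:
    """Toy engagement-weighted ranking score."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["reshares"] * post.reshares)


calm_report = Post(likes=500, comments=20, reshares=10)
outrage_bait = Post(likes=200, comments=180, reshares=120)

print(engagement_score(calm_report))   # 1100.0
print(engagement_score(outrage_bait))  # 6500.0 -- the angrier post wins the feed slot
```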
Content Moderation and Suppression of Left-Leaning Speech
Beyond feed ranking, bias can also occur in how content is moderated – i.e. what posts are removed or downranked by Meta’s algorithms and human reviewers. Here too, recent cases show left-leaning and social justice content getting caught in the dragnet at higher rates.
One dramatic example is the removal of content related to Palestine. During the Israel–Hamas war in October–November 2023, Meta’s platforms engaged in what observers called systemic censorship of pro-Palestinian content. A Human Rights Watch investigation documented over 1,050 instances of posts, stories, and accounts – almost entirely expressing support for Palestine or documenting human rights abuses against Palestinians – being taken down or suppressed on Facebook and Instagram (hrw.org). Of the 1,050 cases HRW verified, 1,049 involved censorship of peaceful pro-Palestine content, versus just 1 case involving pro-Israel content (hrw.org). Users reported their posts and even their entire accounts being shadowbanned or suspended for sharing news or personal testimonies from the Palestinian perspective. The suppression took many forms: posts and comments removed, accounts temporarily blocked from posting, limits on content reach (e.g. not appearing in feeds or search), and other “shadow” measures reducing visibility (hrw.org). According to HRW, this pattern was global and systemic in scale, indicating a bias in enforcement rather than isolated errors (hrw.org).
Crucially, HRW identified Meta’s own policies and algorithms as root causes. In particular, Facebook’s policy on “Dangerous Organizations and Individuals” (which bans praise or support of groups deemed terrorist organizations) was flagged as overly broad (hrw.org). For instance, Meta relies heavily on the U.S. government’s terrorist list – which includes Hamas – to automatically filter content. This meant that even neutral or journalistic discussion of Hamas or support for Palestinian political causes could be interpreted as “praise” and auto-removed (hrw.org). By design, the policy swept up vast amounts of Palestinian advocacy, effectively quelling one side of the political discourse. HRW also noted Meta’s inconsistent application of exceptions (like allowing newsworthy content) and an “apparent deference to government requests” – e.g. taking down posts flagged by Israeli authorities – as factors in the bias (hrw.org). While Meta eventually acknowledged some mistakes, the incident showed how automated moderation at scale can mirror geopolitical biases, disproportionately silencing voices aligned with left-leaning human rights and anti-war positions. Notably, this was not the first time – activists have long accused Facebook and Instagram of over-censoring content about Palestinian struggles, with Meta repeatedly apologizing for wrongful takedowns in the past (hrw.org).
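To see why this kind of list-based enforcement over-removes, consider the following minimal, purely hypothetical Python sketch. The list contents, function name, and example posts are illustrative assumptions – not Meta’s actual system – but they capture the core problem HRW describes: matching on an entity name alone cannot tell praise apart from reporting or criticism.

```python
# Hypothetical designated-entity filter -- illustrative only.
DESIGNATED_ENTITIES = {"hamas"}  # stand-in for a government designation list


def should_auto_remove(post_text: str) -> bool:
    """Flag a post for removal if it mentions a designated entity.

    Because the check keys on the name alone, it cannot distinguish
    praise from journalism, criticism, or documentation of abuses.
    """
    text = post_text.lower()
    return any(entity in text for entity in DESIGNATED_ENTITIES)


posts = [
    "Reporters confirm Hamas rejected the latest ceasefire proposal.",            # news
    "Rights groups document abuses committed by Hamas and by Israeli forces.",    # documentation
    "A post that genuinely praises Hamas.",                                       # actual policy violation
]
for post in posts:
    print(should_auto_remove(post), "-", post)
# All three return True: the neutral and critical posts are removed
# alongside the praise, which is the over-breadth described above.
```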
Another realm of concern is how moderation algorithms treat content from marginalized or progressive social movements in the U.S. Research is increasingly finding that posts discussing racism, inequality, or LGBTQ issues – typically associated with left/liberal advocacy – are more likely to be flagged or removed, even when they don’t violate rules. A 2024 study in Proceedings of the National Academy of Sciences provides clear evidence: posts in which users described their personal experiences of racism were disproportionately flagged as “toxic” content by automated moderation models (pubmed.ncbi.nlm.nih.gov). In a real-world field experiment, five widely used AI content filters (including ones from major tech companies) were given actual social media posts. The result: when a user said they were the victim of racist behavior, the algorithms frequently misclassified that post as hate speech or harassment – presumably because it quoted slurs or negative language, even though the author was the target recounting the incident (reddit.com). Human moderators were also more likely to flag these accounts of racism for removal, perhaps out of discomfort or misunderstanding (reddit.com). In effect, the very people sharing anti-racist messages or seeking support after discrimination were getting “silenced” by the content moderation process. The study further showed that witnessing this kind of suppression had a chilling effect on Black users’ sense of belonging in online communities (reddit.com). This indicates a subtle but important bias: attempts to enforce rules against hate speech can backfire if algorithms cannot distinguish hateful content from content about hate. The consequence is an unequal burden on left-leaning discourse about race and justice – those posts are more likely to be taken down than, say, a politically opposite post that doesn’t trip the filters. (Indeed, platforms like Facebook have admitted that their early AI moderation struggled with context, leading to minority users’ content being removed erroneously (reddit.com).)
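The failure mode described in that study can be explored informally with any off-the-shelf toxicity classifier. The sketch below assumes the Hugging Face transformers library and the publicly available unitary/toxic-bert checkpoint – an assumption made for illustration; the PNAS study evaluated other commercial filters – and simply compares how a model scores a victim’s first-person account against a post with no quoted abuse.

```python
# Sketch assuming the `transformers` library and the public
# `unitary/toxic-bert` checkpoint; any off-the-shelf toxicity
# classifier could be substituted. Exact scores will vary by model.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    # First-person account of being targeted -- the kind of post the
    # PNAS study found gets over-flagged:
    "A customer called me a racial slur today and management did nothing. I'm still shaking.",
    # A post with no quoted abuse, for comparison:
    "Grateful for the coworkers who stood up for me today.",
]

for post in posts:
    print(toxicity(post), post)
# Classifiers that key on surface features (quoted slurs, negative emotion)
# tend to assign the victim's account a high toxicity score -- mistaking
# content *about* hate for hate itself, as the study describes.
```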
Related findings have emerged regarding content from LGBTQ+ and feminist perspectives. Activists have accused Instagram and Facebook of “shadowbanning” feminist, queer, or leftist content, meaning the content is not outright deleted but made harder to find or given less algorithmic reach without explanation. While hard to prove conclusively, there have been instances suggesting bias – for example, certain transgender and Black creators noticed their posts or hashtags would not appear in searches or would see unexplained drops in engagement (independent.co.uk). In mid-2020, amid the Black Lives Matter protests, Instagram’s leadership acknowledged concerns that Black users’ posts were being algorithmically hidden or not widely distributed. Instagram head Adam Mosseri publicly stated that the company would investigate its algorithms for racial or systemic bias, noting a need to “keep bias out of these decisions” and improve the fairness of content distribution and recommendation (independent.co.uk). Instagram even updated its policies around that time, clarifying what kind of content doesn’t get promoted on the Explore tab, after accusations that it was suppressing Black voices and protest content (independent.co.uk). This came alongside TikTok’s admission that its algorithm inadvertently suppressed posts with hashtags like #BlackLivesMatter (for which TikTok apologized) (independent.co.uk). These episodes underscore that algorithmic moderation, if not carefully designed, can reinforce existing social biases – often to the detriment of left-leaning advocacy for marginalized groups.
In summary, content moderation on Meta platforms has shown a pattern of over-enforcement against left-leaning social justice content, whether due to poorly tuned algorithms or policy choices. Posts by progressive activists, minority users, or those challenging power structures sometimes face greater scrutiny and higher odds of removal. By contrast, content from right-wing actors, even when borderline (e.g. aggressive rhetoric that skirts hate speech rules), has sometimes been left up in the name of neutrality or free expression (washingtonpost.com). Indeed, internal Facebook documents (exposed by The Washington Post) revealed that when Facebook researchers proposed more aggressively filtering out the worst hate speech (much of which targets racial minorities), top executives vetoed the change. They feared it would “tilt the scales” by removing too much derogatory content aimed at protected groups – a move certain conservative partners might complain about (washingtonpost.com). As a result, Facebook kept a “race-blind” approach that, in practice, left more racist posts online – overwhelmingly affecting Black users – to avoid any perception of anti-right bias (washingtonpost.com). This exemplifies how attempts to appear politically neutral have led to outcomes skewed against minorities and those speaking up on their behalf. Such decisions effectively favored a conservative stance (in this case, the “free speech” to use offensive language) at the expense of the left-leaning push for stronger hate-speech protections.
Instagram and Threads: Newer Platforms, Similar Patterns?
Instagram, being image-focused, has historically been less dominated by hard politics than Facebook. However, in recent years Instagram has also become a venue for political expression and has faced parallel bias issues. We’ve noted above Instagram’s challenges during the 2020 BLM movement and again during the 2023 Gaza conflict – in both cases, users reported left-aligned content being hidden or removed at scale. One difference is that Instagram’s feed algorithm (and the newer Threads platform’s feed) is somewhat opaque and still evolving. Instagram previously downplayed political content in users’ main feeds (especially after 2021, when Meta intentionally reduced the amount of civic and political content shown) (about.fb.com). However, in early 2025 Meta announced it would “phase back” political content into people’s feeds, acknowledging that the prior suppression was too blunt an approach (about.fb.com). This policy reversal came amid a broader shift championed by Mark Zuckerberg toward lighter moderation and more “free expression” on all Meta platforms (about.fb.com).
The reaction to these changes split along ideological lines. Left-leaning influencers on Instagram and Threads expressed alarm that Meta’s loosening of content moderation – including ending fact-checking and allowing more controversial speech – was “pandering” to right-wing interests (notably the return of Donald Trump and his base to Meta platforms) (businessinsider.com). “We are shifting to the other side of things,” one progressive content creator said, worried that extreme and misleading conservative content would now proliferate unchecked (businessinsider.com). In contrast, conservative influencers cheered the policy overhaul as a win for “free speech,” applauding that Meta will restore “political recommendations” in feeds and rely on community-driven fact-checking instead (businessinsider.com). Adam Mosseri (Instagram’s head) even stated that creators who post about politics should “feel comfortable” doing so now, as Meta will stop over-enforcing rules and will show politically oriented posts to users again (businessinsider.com). This indicates that Instagram (and by extension, Threads) is moving away from any prior algorithmic throttling of political content. While that might sound neutral, observers note that reducing moderation often benefits those who were previously moderated more often – which, given the misinformation and hate speech data, tends to be right-leaning pages. Thus, left-leaning users worry the new laissez-faire approach could once again tilt the playing field, allowing far-right narratives (previously limited by fact-check labels or removals) to spread widely and drown out more nuanced left-leaning voices (businessinsider.com).
Threads, launched by Meta in July 2023 as a Twitter-like microblogging platform, has had limited study so far. Early user demographics and anecdotes suggested Threads attracted a more moderate-to-liberal user base initially – partly because many liberals fleeing Twitter’s chaos under Elon Musk joined Threads, and also because overt hate speech or disinformation was less prevalent under Threads’ early rules. Some conservatives even claimed Threads was “a liberal echo chamber,” opting to stay away (threads.com). However, it’s important to distinguish user composition from algorithmic bias. Threads uses algorithms (similar to Instagram’s) to recommend posts, and no evidence has shown that it systematically downranks either left or right content at this stage. If anything, the perception of Threads being “biased toward liberals” came from the community norms at launch, not the code. Over time, if Meta fully integrates Threads into its family of apps and applies the same engagement-driven logic, it could exhibit the same tendencies as Facebook’s feed. Given Meta’s 2024–2025 pivot to easing restrictions, we might expect Threads to gradually allow more controversial political content and rely on algorithms (and user tools like “Community Notes”) to sort truth from falsehood. In practice, this could mean that the loudest, most emotive content – often from far-right agitators or hyper-partisan sources – will gain traction on Threads just as it does on Facebook, unless Meta’s algorithms are fundamentally redesigned.
At present, it’s too early to conclusively say Threads’ algorithm is biased against left-leaning content – no comprehensive audit has been published. But the concerns are informed by Meta’s track record on Facebook and Instagram. If Threads prioritizes high engagement, it could inherit the same pattern where outrage-oriented content (usually right-leaning populist rhetoric) gets amplified more. Meta’s own announcement in January 2025 emphasized “returning to our free expression roots” and admitted that previous efforts to curb misinformation resulted in “too much harmless content” being censored (about.fb.com). Meta explicitly said it would stop demoting content that fact-checkers had flagged and would raise the bar for what gets taken down automatically (about.fb.com). While this applies platform-wide, it will likely shape Threads as well. The net effect is a moderation pullback that many analysts interpret as favoring right-wing speech (since in recent years, right-leaning posts were more frequently flagged for COVID-19 falsehoods, election conspiracies, hate speech, etc., compared to mainstream left-leaning posts). Left-leaning users on Threads and Instagram have voiced exactly this worry: that Meta’s new stance will embolden extremist content and make progressive voices more vulnerable to harassment and misinformation campaigns (businessinsider.com).
Conclusion
Evidence from the past few years indicates that Meta’s algorithmic systems have not been politically neutral in outcome – and the imbalance has often been to the detriment of left-leaning content. On Facebook, right-wing pages and personalities consistently receive more algorithmic amplification, fueled by engagement-hungry algorithms that reward provocative content (mediamatters.org). Internal adjustments have even been made (as in 2017) specifically to avoid angering conservatives, at the cost of reduced visibility for progressive news (businessinsider.com). On Instagram, episodes of apparent bias (whether via mistaken AI flags or policy overreach) have silenced Black activists, pro-Palestinian voices, and others advocating for left-wing causes (hrw.org; independent.co.uk). And as Meta’s newer policies favor lighter moderation across Facebook, Instagram, and Threads, many observers fear this will further skew the landscape by unleashing content that historically skews right (e.g. inflammatory or unverified claims) while eroding protections that benefited vulnerable groups (businessinsider.com).
Meta’s official stance denies any intentional political bias, and indeed some of these effects arise indirectly – from algorithms optimizing for engagement or from flawed “neutral” policies that inadvertently punish one side. Nonetheless, the real-world impact is that left-leaning content (progressive journalism, social justice advocacy, minority viewpoints) often faces greater hurdles in both visibility and moderation. Meanwhile, many right-leaning actors have thrived on Meta’s platforms, sometimes by pushing the limits of disinformation and hate speech policies. As one watchdog study concluded, there is little credible evidence that Facebook or Instagram systematically suppress conservative viewpoints – if anything, the platforms amplify them, given the engagement they generate (mediamatters.org). By contrast, there are documented instances of left-leaning voices being throttled or censored, whether by design or by accident (businessinsider.com; hrw.org).
Going forward, transparency and independent oversight will be critical. Experts call for audits of recommendation algorithms and moderation practices to identify biases (itif.org). Meta has taken steps like sharing some data with researchers and even acknowledging the need to “keep bias out” of its systems (independent.co.uk), but concrete results remain to be seen. For users and policymakers, the findings so far suggest that the “playing field” of online discourse is uneven. Ensuring that neither left nor right voices are unfairly muted – and that algorithms don’t privilege incendiary content at the cost of informed dialogue – is an ongoing challenge. The recent revelations and research provide a clearer picture that algorithmic bias against left-leaning content is real and has multiple facets (feed algorithms that favor the right’s tactics, and moderation rules that disproportionately hit the left’s causes). Correcting this will likely require deliberate changes in how platforms rank content and enforce policies, so that free expression is truly balanced and not skewed by the invisible hand of the algorithm.
Sources: Recent analyses and reports were used to compile these findings, including whistleblower accounts and studies by watchdog groups and academics. Key references include Media Matters engagement studies (mediamatters.org), investigations by the Wall Street Journal and Mother Jones into Facebook’s News Feed tweaks (businessinsider.com), Human Rights Watch’s report on censorship of Palestinian content (hrw.org), a PNAS study on racial bias in content moderation (pubmed.ncbi.nlm.nih.gov), and Meta’s own policy statements (about.fb.com; businessinsider.com), among others. These are cited throughout for verification and further reading.


