by Belle Torek
On January 7, 2025, Meta CEO Mark Zuckerberg announced a complete overhaul of content policies across Meta platforms. These changes include the end of fact-checking, thinned-down moderation efforts, and a dismantled Hateful Conduct policy that expressly permits abuse against LGBTQ+ people while forbidding the same abuses against all other communities. In the days that followed, Meta also announced the termination of its broader DEI efforts, further signaling an abdication of its commitment to inclusion. Here, we explain what these changes mean for the LGBTQ+ community and offer paths forward, both on and off Meta platforms.
When Mark Zuckerberg announced sweeping changes to Meta’s content moderation policies, he framed the move as a bold defense of free speech. But many, especially members of the LGBTQ+ community and allies, worry about what this means for safety on Meta’s platforms and fear this marks an open invitation for Meta users to engage in anti-LGBTQ+ abuse that will disempower and marginalize the community.
To better understand the gravity of this situation, it’s worth reflecting on how the LGBTQ+ community has historically engaged with Meta platforms. Perhaps the most well-known example arose in 2013 when the Human Rights Campaign led the community to “turn the Internet red” in a groundbreaking campaign for marriage equality. Public figures, political leaders, and everyday users alike changed their Facebook profile pictures to HRC’s red equal sign, resulting in a viral movement that elevated awareness and galvanized public support for LGBTQ+ equality. By empowering both community members and allies, this campaign played a crucial role in building public support for marriage equality, ultimately contributing to the 2015 Obergefell decision that granted same-sex couples the right to marry. This victory was further codified with the 2022 passage of the Respect for Marriage Act, which enshrined federal protections for marriage equality into law and safeguarded these rights against future challenges. This is what is at stake as we move into this new era, and the community is forced to question whether such a movement could flourish on Meta platforms today.
The changes announced last week — which will be implemented across all Meta-owned platforms, including Facebook, Instagram, Threads, and WhatsApp — are not just another policy update, but a shift that will normalize anti-LGBTQ+ misinformation and intensify anti-LGBTQ+ harassment. In the wake of these changes, some LGBTQ+ people will likely self-censor or withdraw from online spaces altogether, while others may suffer physical violence that began as online threats. All of this poses a critical threat to LGBTQ+ representation, equality, and democracy.
Meta’s newly announced content moderation policies can largely be broken into three core shifts:
- A rewritten Hateful Conduct policy that carves out express exceptions permitting anti-LGBTQ+ abuse;
- The replacement of third-party fact-checking with a crowdsourced community notes system; and
- Thinned-down moderation efforts that deprioritize harmful content so long as it is legal.
Despite being framed as moves toward transparency and free expression, these policy shifts actually create fertile ground for misinformation, identity-based harassment, and abuse, and threaten LGBTQ+ inclusion and safety across Meta. Unfortunately, these changes are not the only recent threat to LGBTQ+ visibility.
Last year, Meta notably made the decision to limit the reach of political content across Instagram. This change, which excluded accounts that share political content from algorithmic amplification or recommendation, significantly curtailed the reach of several pro-equality Instagram accounts, including those of the Human Rights Campaign and GLAAD. This development came in addition to Meta’s history of shadowbanning LGBTQ+ content, often by incorrectly flagging it as sexually explicit.
Settings that create barriers to LGBTQ+ visibility, paired with policies that condone anti-LGBTQ+ abuse, perpetuate a hostile online ecosystem for LGBTQ+ people. The sections that follow delve further into these policies and offer guidance to empower our community in the face of these threats.
Although the new moderation changes are bad for all users, Meta’s updated guidelines take specific aim at LGBTQ+ people. While Meta’s policies broadly prohibit users from insulting people or groups by calling them mentally ill, they make troubling exceptions for LGBTQ+ people (see Image 1).
Similarly, while calls for the exclusion of people and groups based on characteristics like race or religion are still forbidden, when it comes to sexual orientation or gender identity, such calls are expressly permitted (see Images 2 and 3).
Meta generally prohibits calls for economic exclusion, but expressly permits calls for the exclusion of people based on gender identity and sexual orientation from certain professions: the military, law enforcement, and education. It also allows calls for gender-based exclusion from spaces including restrooms, sports, health and support groups, and schools (see Images 2 and 3).
Image 1
Image 2
Image 3
Adding insult to injury, the policy employs the term “transgenderism” (see Image 1), a term often wielded by anti-trans activists to delegitimize transgender people. Not only is this word socially and scientifically invalid, but its use in Meta’s Community Standards signals a disturbing alignment with anti-LGBTQ+ political rhetoric.
These exceptions reflect Meta leadership’s failure to either understand or care about the lived experiences and safety of LGBTQ+ people. By institutionalizing this kind of bias, Meta’s policies legitimize, and arguably even incentivize, discrimination. They embolden bad actors and deepen the challenges that LGBTQ+ people already experience in navigating digital spaces.
At first glance, the introduction of community notes may seem like a democratizing effort to hold users accountable for misleading, or outright false, information. However, in practice, we have witnessed the shortcomings of this system on X, where bad-faith actors regularly weaponize community notes to amplify biased, inaccurate narratives and attack information integrity. As several noteworthy media and disinformation experts have demonstrated, fact-checking programs answer concrete questions about events that actually occurred; they were never a forum for embellishment or censorship in the first place.
For LGBTQ+ communities, this shift is especially dangerous. Now, anti-LGBTQ+ actors will almost certainly be able to amplify the spread of disinformation, or content misgendering or deadnaming transgender people, by using community notes to share pseudoscientific “context” under the guise of accurate information.
Meta’s decision to deprioritize harmful content, so long as it is legal, further exacerbates these risks. By focusing only on content that violates legal standards, Meta effectively gives users across all its platforms carte blanche to spread the harassment, hate speech, and disinformation that may fall just shy of illegality, but still cause irreparable harm to the safety of LGBTQ+ people. One of the major downstream impacts of this is a chilling effect on the speech of LGBTQ+ users, many of whom already limit their online presence or self-censor to avoid abuse.
This laissez-faire approach to moderation bears significant implications for the community, especially in combination with the new Hateful Conduct policy. Now, posts falsely describing LGBTQ+ identity as mental illness or calling for the exclusion of LGBTQ+ people are explicitly permitted; they will not only perpetuate harmful stereotypes but also embolden bad actors and extremist groups who seek to marginalize LGBTQ+ voices.
The cumulative impact of these policy changes on the lives, experiences, and safety of LGBTQ+ people cannot be overstated. By legitimizing hateful rhetoric and cutting back moderation efforts designed to protect targets of hate, Meta is actively designing an online ecosystem where LGBTQ+ people will disproportionately suffer harassment and abuse, without any access to recourse. Abusive content will proliferate and, in doing so, threaten to render platforms like Facebook, Instagram, Threads, and WhatsApp unsafe, and therefore unusable, for many users. Of course, an environment where only the most empowered and the most threatening voices are heard is not a true free speech environment at all.
The ultimate responsibility for creating a safer, more inclusive platform lies with Meta. It is incumbent on Meta to implement and enforce policies that protect its users, rather than leaving them to fend for themselves in an increasingly hostile online environment. Social media companies are private actors: as such, the First Amendment empowers them to engage in robust content moderation efforts, and to make decisions about what kind of content they want to have on their platforms. When these platforms fail to exercise this discretion to ensure a safe experience for their users, marginalized voices experience outsize abuse, chilling effects on their expression, and devastating resulting harms, ranging from poor physical and mental health outcomes, to social isolation and withdrawal from public life, to incitement and physical violence. Meta itself previously acknowledged this at the top of its former hate speech policy, noting that the proliferation of hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.”
Moreover, these consequences aren’t limited to users of Meta’s platforms. Policies that normalize harmful rhetoric have a ripple effect, shaping public attitudes and emboldening those who seek to attack LGBTQ+ rights. In a broader sense, Meta’s decision to permit calls for exclusion based on sexual orientation and gender identity risks legitimizing discrimination as acceptable discourse outside the Metaverse.
These outcomes are not hypothetical. We have seen similar patterns emerge across fringe platforms, where an unwillingness to keep users safe has led to an uptick in harassment and disinformation, and where bots and trolls drown out meaningful discourse. But Meta is unlikely to become just another fringe platform: its continued scale and influence make this situation all the more dire. Given these grave implications for the community, HRC recently joined fellow members of Meta’s external civil rights advisory group in expressing our significant concern with these policy changes.
As some LGBTQ+ users consider migrating to alternative platforms or stepping back from social media altogether, many others will choose to remain on Meta-owned platforms in the absence of comparable spaces for building community and maintaining connections. Our community should never have to bear the burden of protecting itself from harms that companies like Meta have chosen to enable, but for those who continue to engage, here are some steps to help navigate these challenges and foster the safest possible user experiences:
These steps, though not a substitute for effective platform moderation, can help LGBTQ+ users foster safer and more affirming digital spaces as the fight for broader accountability and more inclusive policies continues.
As a community, in addition to prioritizing our safety and staying empowered, we must:
While we still strongly urge Meta to change course, for now it is up to us to take action, and together we will rise to the challenge. We must continue to show up for our community, stay resilient, and speak out against hate. Our fearless speech matters, and it is a key lever in our continued fight for equality.