Misperceived

A story about common confusion

I.

It began with a study.

In December of 2025, Stanford researchers analyzed 2.2 billion social media posts looking for a pattern. They wanted to know what percentage of users posted severely toxic content. Not rudeness, not sarcasm, but speech so hateful that 90% of people would flag it as problematic1.

With this data in hand, they then asked thousands of people to answer a simple question:

Take a guess.
What percentage of social media users do you think post severely toxic content?

II.

The bar

Imagine walking into a bar with 100 people. Three of them are screaming about politics, about each other, about nothing. But the bouncer, who gets paid based on how long you stand there staring, has wired those three into the sound system and turned it up to ten.

You walk in, hear the roar, and conclude that this place is full of lunatics, never hearing the 97 people having normal conversations a few feet away.

That's social media. The bouncer is an algorithm. And you have definitely been the bystander.

Pick a topic. Any topic. The room often looks like this:

Reading this, you'd think the country is split between unhinged extremes. It's not. And the degree to which it's not is the most important thing platforms aren't telling you.

III.

See the room

Let's scale a social media platform down to 100 users. This is the room:

The actual room: 97 regular users, and 3 users who have ever posted severely toxic content. On most platforms, ~3% of accounts produce about a third of all content, and engagement ranking amplifies high-reaction posts from that prolific few until they fill your feed.

The room didn't change. The users didn't change. But engagement-based ranking — the bouncer wired into the sound system — amplified the loudest voices until they dominated your feed. You scrolled enough toxicity and your brain performed a kind of ambient demography, concluding that the behavior must be as widespread as the content. The feed became a census.

This pattern repeats across platforms. On Twitter/X, toxic tweets receive ~86% more retweets and ~27% more visibility than non-toxic ones, 0.3% of users shared 80% of all contested news14, and just 6% of users produce roughly 73% of all political tweets16. On TikTok, 25% of users produce 98% of all public videos15. The specific numbers vary. The dynamic is the same: a small minority of highly active users overwhelms the majority.
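To see how a modest engagement premium translates into feed dominance, here is a toy simulation in Python. The numbers are illustrative only, loosely drawn from the figures above (3% of users, an ~86% engagement premium for toxic posts, heavier posting volume from the prolific few); it models no real platform's ranker.

```python
import random

random.seed(0)

# 100 users; 3 have posted toxic content. Toxic posts get roughly
# an 86% engagement premium, and toxic posters post far more often.
# (Illustrative numbers only; this models no real platform's ranker.)
users = [{"id": i, "toxic": i < 3} for i in range(100)]

posts = []
for u in users:
    n_posts = 12 if u["toxic"] else 1      # the prolific few post more
    for _ in range(n_posts):
        base = random.uniform(1, 10)       # baseline engagement
        boost = 1.86 if u["toxic"] else 1.0
        posts.append({"toxic": u["toxic"], "engagement": base * boost})

# An engagement-ranked feed surfaces the top 20 posts.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:20]

toxic_share_of_users = sum(u["toxic"] for u in users) / len(users)
toxic_share_of_feed = sum(p["toxic"] for p in feed) / len(feed)
print(f"toxic users:         {toxic_share_of_users:.0%}")
print(f"toxic posts in feed: {toxic_share_of_feed:.0%}")
```

Even with toxic users at 3% of the room, the ranked feed ends up dominated by their posts: the feed measures engagement, not prevalence.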

IV.

This is not just about what we see on social media

If this were just about the tone of our social posts, it wouldn't matter very much. But the distortion compounds into something much more serious.

Pattern 1 The majority goes silent3

When people look at the feed and assume they're outnumbered, they often self-censor. They go quiet, or they leave the platform entirely. They cede the space to users with more extreme politics.

Pattern 2 The loud minority thinks it's the majority5

The minority who post aggressively end up with a distortion of their own: believing they are part of the majority.

A study of 17 extremist forums (neo-Nazi groups, radical environmentalists, single-issue militants) found the same pattern everywhere: the more someone posted, the more they believed the public agreed with them. Participation breeds false consensus.

Pattern 3 Everyone gets each other wrong6

Both sides develop wildly inaccurate beliefs about who the other side actually is. Try it yourself:

What percentage of Democratic supporters do you think are gay, lesbian, or bisexual?
What percentage of Republican supporters do you think earn over $250,000 a year?

The distortion extends to policy beliefs. Step through to see the perception gap on the issue of immigration.

Source: More in Common (2019) & Moore-Berg et al., PNAS 2020. Illustrative.
Pattern 4 Politicians follow the perceived room, not the real one

Elected officials are very good at sensing political sentiment. It's literally their job. (They are not elected to correct people's beliefs.)

Politicians who can build a coalition around a perceived belief are more likely to win. They position themselves against an opponent who doesn't exist, but whom their supporters believe exists.

And remember: most of our politics now happens on social media. Candidates often read the same distorted feed. They are unlikely to change their minds.

The Overton window shifts. Not because opinion changed, but because perception did.

Pattern 5 Misperception turns into hostility7

When you believe the other side is extreme, you become more willing to treat them as a threat.

Both Democrats and Republicans vastly overestimate how many on the other side support political violence. The result is a populace primed to assume the other side is ready to do horrible things.

"What percentage of the other side supports political violence?"
Democrats estimate that 35.5% of Republicans support political violence, about 3.4× the actual figure (implying a real rate near 10%). Republicans estimate that 37.1% of Democrats support political violence, about 4.0× the actual figure (implying a real rate near 9%).
Both sides were wrong by 3 to 4 times. When researchers corrected these beliefs, partisan hostility dropped.

Each step feeds the next. The distortion is self-reinforcing.

V.

Knowing isn't enough

Okay. So now you know that a small minority dominates the feed.

You know that Republicans and Democrats actually have a far more nuanced set of opinions about contested issues.

Does that fix it? Not really. You also know that everyone else doesn't know it. And if the world continues operating as if the distortion is real, you should probably act the same — even though you know it's wrong. The room hasn't changed, even if you know people inside it are confused.

This is called a common knowledge problem.

Private knowledge
You've read the stat. But you have no idea who else has. The feed still looks the same. You still assume you're outnumbered. You stay quiet.

Steven Pinker lays this out cleanly in his excellent recent book When Everyone Knows That Everyone Knows8. Learning a fact changes what you know. Seeing it displayed publicly, where you know everyone else can see it too, changes what everyone knows, and subsequently how everyone acts.

Social media has no public square. It has 300 million private windows, each showing a different distortion of the same room. Making what we hold in common visible to everyone at once could change that radically.

VI.

The Intervention

So what can we do about this? Fortunately, there's some good evidence showing how it can be fixed.

Multiple studies show that when misperceptions are corrected in a public way, hostility drops. Mernyk et al. found that a single correction reduced partisan hostility for a full month7. Lee et al. found that correcting overestimates of toxic users improved how people felt about their country and each other1.

We can do this today.

Imagine every post on a contested topic had a quiet link beneath it. Not a fact-check. Not a label. Not a warning. Just a question:

How do people actually feel about this?

Let's explore an example that cuts across political identity:

Money in Politics

83% of Americans support a constitutional amendment to limit money in politics. 81% are concerned about the influence of money on elections, including 78% of Republicans and 90% of Democrats. 75% say unlimited spending weakens democracy. Only 15% believe unlimited political spending is protected free speech.

And yet nothing changes, because everyone assumes the other side is fine with it. The feed is full of people defending their team's donors and attacking the other team's. It looks like a 50/50 war. It's not. It's an 80%+ consensus that can't see itself.

@real_talk_politics · 2h
Everyone complains about money in politics but the second their candidate gets a massive donation they shut up real fast. You don't hate money in politics. You hate when the OTHER side has more of it.
♡ 11,847 · 💬 6,203 · ↻ 2,891

VII.

Why this isn't fact-checking

Fact-checking is a top-down approach that often feels like someone telling you what to think. This is just showing you what people already think.

For years, content moderation has been perceived as removing speech. This simply adds context. Nothing is censored. Nothing is labeled. The loudest voices can keep posting. They just can't monopolize your model of reality anymore.

The Lee et al. study found something worth sitting with: when people learned the real number, they felt better. Better about their country. Better about each other. The cynicism lifted1. We're walking around with a distorted picture of who we are, and it's making us worse to each other. Not because of anything real, but because of a design choice in a content-ranking algorithm.

It works for video too

Short-form video is the fastest-growing vector for political distortion. The same dynamic applies — a small minority of creators produce the vast majority of political content — but video bypasses the pause that text gives you. Community Check can adapt. Tap through to see how.

@liberty_caucus_tv: "Money IS free speech. Deal with it. 🇺🇸 Citizens United was CORRECT. If a corporation wants to spend $100M on political ads that's called FREEDOM. Campaign limits = government censorship. You just hate it when YOUR side gets outspent 🔥 #FreeSpeech #CitizensUnited"
♡ 284K · 💬 18.2K · ↻ 41K

A political video goes viral: 284K likes, 18K comments. The feed shows outrage. But what do people actually think?

See technical specs for how it works below ↓

VIII.

This doesn't require new technology

Every platform already has the data. They already survey users. They already know the base rates. They already have the infrastructure to display context beneath posts. They just don't have the incentive.

But the unseen majority is the public. And the public deserves to know itself.

A tiny minority, dominating the feed. That's all it ever was. The rest of us were here the whole time, quiet and decent and waiting to be seen.

References
1 Lee, Neumann, Zaki & Hancock, "Americans overestimate how many social media users post harmful content," PNAS Nexus, 4(12), 2025. n=1,090. Benchmark: Kumar et al., "Understanding the Behaviors of Toxic Accounts on Reddit," WWW '23, 2023. 3.1% of accounts produced 33.3% of all comments.
2 Grinberg et al., "Fake news on Twitter during the 2016 U.S. presidential election," Science, 363(6425), 2019. 0.1% of users accounted for nearly 80% of contested news sources shared.
3 Noelle-Neumann, "The Spiral of Silence," J. Communication, 24(2), 1974.
4 Hampton et al., "Social Media and the 'Spiral of Silence'," Pew Research, 2014.
5 Wojcieszak, "False Consensus Goes Online," Public Opinion Quarterly, 72(4), 2008.
6 Ahler & Sood, "The Parties in Our Heads," J. Politics, 80(3), 2018. 342% overestimate.
7 Mernyk et al., "Correcting Inaccurate Metaperceptions," PNAS, 119(16), 2022. n=4,741. Effects lasted 1 month.
8 Pinker, When Everyone Knows That Everyone Knows: Common Knowledge and the Mysteries of Money, Power, and Everyday Life, 2025.
10 Moore-Berg, Ankori-Karlinsky, Hameiri & Bruneau, "Exaggerated meta-perceptions predict intergroup hostility between American political partisans," PNAS, 117(26), 2020. ~80% of both parties overestimated opposing party hostility by 50-300%. See also: "America's Divided Mind," Beyond Conflict, 2020.
11 Sparkman, Geiger & Weber, "Americans experience a false social reality by underestimating popular climate policy support by nearly half," Nature Communications, 13, 4779, 2022. n=6,119. 80% of Americans support siting renewables locally; perceived support: 43%.
12 Yudkin, Hawkins & Dixon, "The Perception Gap," More in Common, 2019. n=2,100 via YouGov. Average overestimation of opposing party's extreme views: ~55% estimated vs ~30% actual.
13 More in Common, "Americans' Environmental Blind Spot," 2022. 73% of Republicans support U.S. clean energy leadership; Republicans estimate only 33% of their own party agrees.
14 Baribi-Bartov, Munger & Pan, "Supersharers of fake news on Twitter," Science, 384(6700), 2024. 0.3% of users shared 80% of contested news during the 2020 U.S. election.
15 Pew Research Center, "How U.S. Adults Use TikTok," 2024. 25% of users produce 98% of all public videos.
16 Bail, Breaking the Social Media Prism, Princeton University Press, 2021; Pew Research Center, 2021. 6% of U.S. Twitter users produce ~73% of all political tweets.

Common Questions

Honest objections deserve honest answers. These are the questions skeptics from every political perspective are most likely to ask.

What stops coordinated groups from flooding the surveys?

You can't flood a system that chooses its respondents randomly. Community Check would use stratified random sampling — the gold standard in survey methodology (Groves et al., Survey Methodology, 2nd ed., Wiley, 2009). You don't volunteer to respond. You're selected, like jury duty. Each user would respond once per question per 90-day cycle. Coordinated response patterns would be anomaly-detected and excluded. The sampling algorithm and exclusion criteria would be open-source and auditable.

This is the same methodology behind national polls that reliably measure opinion across 330 million people using samples of just 1,000–2,000 respondents. The key isn't sample size — it's random selection. A platform with hundreds of millions of users has an even larger pool to draw from, making representative sampling more robust, not less. Right now, a single viral post from one account can shape the perceived consensus of millions. Community Check would replace that with N>100,000 randomly selected responses — orders of magnitude larger than any national poll, and a dramatically higher bar than the status quo.
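The stratified design sketched above can be illustrated in a few lines of Python. This is a minimal sketch of proportionate stratified sampling; the user pool, the single age-band stratum, and the sizes are all hypothetical stand-ins for a real survey design.

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical user pool: 100,000 users, each tagged with one
# stratum (age band here; a real design would stratify on more).
BANDS = ["18-29", "30-49", "50-64", "65+"]
population = [{"id": i, "age_band": random.choice(BANDS)}
              for i in range(100_000)]

def stratified_sample(pool, n):
    """Proportionate stratified sampling: draw n users so each
    stratum appears in proportion to its share of the pool."""
    strata = defaultdict(list)
    for user in pool:
        strata[user["age_band"]].append(user)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(pool))  # proportional quota
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(population, 1_000)
print(len(sample))  # ~1,000 (per-stratum rounding can shift it slightly)
```

Because respondents are drawn from the pool rather than volunteering, a brigade can't inflate its own share: its members are sampled at the same rate as everyone else.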

Who decides which questions get asked, and how do you keep them neutral?

In the ideal implementation, questions are governed by a bridging algorithm — the same approach Community Notes uses. Questions are proposed by a diverse pool of contributors and only enter the active taxonomy if they earn approval from contributors who historically disagree with each other. Loaded or partisan questions are filtered out structurally, not by any single editorial board. AAPOR standards for neutral question design apply: balanced language, all reasonable response options, no leading framing.

For the open-source starting point, questions come from established polling organizations (Pew, Gallup, AP-NORC) with published methodology. The full question taxonomy is open — any researcher or journalist can audit the wording. That's a level of transparency no social media algorithm currently offers.

Won't this silence people who hold minority views?

Right now, the system already silences the actual majority. The spiral of silence — people self-censoring because they falsely believe they're in the minority — is one of the most replicated findings in political communication (Noelle-Neumann, J. Communication, 1974). Hampton et al. (Pew Research, 2014) found social media makes this worse: people who sensed their Facebook network disagreed with them were less likely to speak up both online and in person. Community Check breaks that cycle.

It also explicitly displays minority positions — when 15% hold a view, that number appears clearly. A minority position accurately shown at 15% is far healthier than one that looks like 50% through amplification or 0% through suppression. Everyone benefits from seeing the real picture.

Polls have famously missed elections. Why trust these numbers?

Election forecasting and opinion measurement are different things. Community Check doesn't predict elections. It measures policy preferences — "Do you support background checks?" — which are far more stable and far easier to measure than vote intention. When Pew reports 87% support for background checks across 15 years of polling with N=5,000+, that's a measurement with a published margin of error, not a prediction.

The platform sample adds N>100,000 — 50–100x larger than typical national polls, with margins of error below ±0.5%. That's an extraordinarily reliable signal, and it updates continuously (Lee et al., PNAS Nexus, 2025).
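The margin-of-error claim is easy to check with the standard formula for a simple random sample, z·sqrt(p(1−p)/n), evaluated at the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    evaluated at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1,000:   ±{margin_of_error(1_000):.1%}")    # ±3.1%
print(f"n=100,000: ±{margin_of_error(100_000):.2%}")  # ±0.31%
```

At n = 100,000 the worst-case margin is about ±0.31%, comfortably under the ±0.5% figure above; a typical national poll at n = 1,000 sits near ±3.1%.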

Isn't this just another way of shaping what I see?

Your perception is already being shaped — by algorithms that prioritize engagement over accuracy. Community Check simply makes additional information visible: what a representative sample of people actually believe. You can agree, disagree, or ignore it entirely.

Think of nutrition labels. The Nutrition Labeling and Education Act of 1990 didn't tell people what to eat — it made the information available. Community Check does the same for public opinion: standardized, transparent data beneath content that is already shaping how you see the world.

How is this different from Community Notes?

They solve different problems. Community Notes evaluates whether specific claims are true or false, written by self-selected volunteers rated via a bridging algorithm (Wojcik et al., arXiv:2210.15723, 2022). Community Check doesn't assess truth — it shows what people think about the policy topic a post discusses. A post can be entirely accurate and still create a distorted picture of where the public stands.

The data source matters too. Community Notes contributors self-select in — and More in Common (2019) found that the most politically engaged users have the largest perception gaps (nearly 3x more distorted than disengaged users). Community Check uses random sampling and peer-reviewed national surveys. Both tools are valuable; they complement each other.

Won't showing majority opinion pressure people to conform?

The research consistently shows the opposite. The social norms approach — correcting misperceived norms by showing accurate data — has been validated across 200+ studies (Berkowitz, Changing the Culture of College Drinking, Hampton Press, 2004). Tankard & Paluck (Social Issues and Policy Review, 2016) found that accurate norm information corrects misperceptions without coercion — it reveals what people already privately believe, rather than pressuring them into something new.

Mernyk et al. (PNAS, 2022, n=4,741) showed this directly: correcting inaccurate metaperceptions reduced support for partisan violence, with effects lasting ~26 days. People didn't conform — they recalibrated, and felt better about each other as a result.

What if the majority is wrong?

Community Check doesn't claim majority opinion equals truth. It provides a map of what people actually think — which is valuable precisely when your estimate of the room is off by 200–400%, as Ahler & Sood (J. Politics, 2018) documented. If 70% of people believe something you disagree with, knowing that number helps you understand the world you're operating in. Hiding it doesn't make the disagreement go away.

Both majority and minority positions are always displayed with their numbers. This isn't "the crowd says you're wrong." It's "here's what the room actually looks like" — and that's useful no matter where you stand in it.

Wouldn't most posts have no Community Check at all?

Correct — by design. Community Check activates only when reliable polling data exists, a documented perception gap has been identified, and a post reaches >10K impressions. That covers ~50–100 major policy questions. Posts about niche topics or emerging controversies without polling data get no Community Check.

The topic-matching confidence threshold is 0.8 — if the system isn't sure, it stays silent. False positives are worse than gaps. This is intentionally focused on the specific, well-documented cases where perception gaps are largest: gun policy, climate, immigration, healthcare, money in politics. Start where the data is strongest, and expand from there.
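The activation gate described above reduces to a few boolean checks. Here is a minimal sketch with hypothetical field names; only the >10K-impression and 0.8-confidence thresholds come from the text.

```python
# Sketch of the activation gate described above. The thresholds
# (>10K impressions, 0.8 match confidence) come from the text; the
# data structures and field names are hypothetical.

def should_show_check(post, topic_match):
    """Display a Community Check only when all conditions hold."""
    return (
        topic_match["confidence"] >= 0.8       # sure about the topic
        and topic_match["has_polling_data"]    # reliable survey exists
        and topic_match["has_perception_gap"]  # documented gap
        and post["impressions"] > 10_000       # meaningful reach
    )

post = {"impressions": 250_000}
match = {"confidence": 0.93, "has_polling_data": True,
         "has_perception_gap": True}
print(should_show_check(post, match))  # True

# Below the confidence threshold, the system stays silent.
match_unsure = dict(match, confidence=0.55)
print(should_show_check(post, match_unsure))  # False
```

The design choice is that every condition must pass: when the system is unsure, it shows nothing, because a wrongly matched statistic is worse than a missing one.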

What if the system gets captured or manipulated?

The architecture prevents this. Data comes from independent polling organizations — not governments, not platforms. The sampling algorithm is open-source. Question wording is published. Methodology is auditable. Quarterly transparency reports detail every step from sampling to display.

Compromising it would require simultaneously infiltrating multiple independent polling organizations, altering open-source code inspected by thousands of researchers, and evading anomaly detection. That's a high bar. Today's platform algorithms shape public perception at scale with zero transparency and zero public oversight. Community Check raises the baseline significantly.

Shouldn't I think for myself instead of caring what others believe?

Independent thinking requires accurate inputs. Right now, the feed is giving you wildly inaccurate ones. Sparkman et al. (Nature Communications, 2022, n=6,119) found Americans underestimate popular climate policy support by nearly half — 80% actually support renewable energy siting, but people estimate 43%. Moore-Berg et al. (PNAS, 2020) found partisans overestimate the other side's hostility by roughly 2x. These aren't matters of opinion — they're factual errors about the world around you.

Community Check doesn't ask you to care what others think. It gives you an accurate picture so your independent opinions are based on reality, not on an algorithmically curated distortion of it.

Don't corrections like this backfire?

Correcting metaperceptions — beliefs about what others believe — works differently than correcting factual beliefs. Factual corrections can trigger defensiveness. But learning "the other side is less extreme than you thought" tends to be relieving, not threatening. It lowers the temperature.

Lee et al. (PNAS Nexus, 2025, n=1,090) found that correcting overestimates of toxic social media users improved positive emotions and reduced perceived moral decline. Mernyk et al. (PNAS, 2022, n=4,741) found effects lasting ~26 days from a single correction. Community Check targets this same mechanism — not what you believe, but what you believe others believe. That's where the distortion lives, and that's where the correction is most effective.

Technical Specification

How Community Check would work in practice, from data sources to platform integration.