A story about common confusion
In December of 2025, Stanford researchers analyzed 2.2 billion social media posts looking for a pattern. They wanted to know what percentage of users posted severely toxic content. Not rudeness, not sarcasm, but speech so hateful that 90% of the world would flag it as problematic [1].
With this data in hand, they then asked thousands of people to answer a simple question:
Imagine walking into a bar with 100 people. Three of them are screaming about politics, about each other, about nothing. But the bouncer, who gets paid based on how long you stand there staring, has wired those three into the sound system and turned it up to ten.
You walk in, hear the roar, and conclude: this place is full of lunatics. Never hearing the 97 people having normal conversations a few feet away.
That's social media. The bouncer is an algorithm. And you have definitely been the bystander.
Pick a topic. Any topic. The room often looks like this:
Reading this, you'd think the country is split between unhinged extremes. It's not. And the degree to which it's not is the most important thing platforms aren't telling you.
Let's scale a social media platform down to 100 users. This is the room:
The room didn't change. The users didn't change. But engagement-based ranking — the bouncer wired into the sound system — amplified the loudest voices until they dominated your feed. You scrolled enough toxicity and your brain performed a kind of ambient demography, concluding that the behavior must be as widespread as the content. The feed became a census.
This pattern repeats across platforms. On Twitter/X, toxic tweets receive ~86% more retweets and ~27% more visibility than non-toxic ones, 0.3% of users shared 80% of all contested news [14], and just 6% of users produce roughly 73% of all political tweets [16]. On TikTok, 25% of users produce 98% of all public videos [15]. The specific numbers vary. The dynamic is the same: a small minority of highly active users overwhelms the majority.
If this were just about the tone of our social posts, it wouldn't matter very much. But this distortion compounds into something much more serious.
Pattern 1: The majority goes silent [3]
When the majority looks at the feed and assumes it's outnumbered, people self-censor. They go quiet, or they leave the platform entirely. They cede the space to users with more extreme politics.
Pattern 2: The loud minority thinks it's the majority [5]
The minority who post aggressively develop their own distortion: they come to believe they are the majority.
A study of 17 extremist forums (neo-Nazi groups, radical environmentalists, single-issue militants) found the same pattern everywhere: the more someone posted, the more they believed the public agreed with them. Participation breeds false consensus.
Pattern 3: Everyone gets each other wrong [6]
Both sides develop wildly inaccurate beliefs about who the other side actually is. Try it yourself:
The distortion extends to policy beliefs. Step through to see the perception gap on the issue of immigration.
Pattern 4: Politicians respond to the distortion
Elected officials are very good at sensing political sentiment. It's literally their job. (They are not elected to correct people's beliefs.)
Politicians who can build a coalition around a perceived belief are more likely to win. So they position themselves against an opponent who doesn't exist, but whom their supporters believe does.
And remember: most of our politics now happens on social media. Candidates read the same distorted feed as everyone else, and nothing in it gives them a reason to change their minds.
The Overton window shifts. Not because opinion changed, but because perception did.
Pattern 5: Misperception turns into hostility [7]
When you believe the other side is extreme, you become more willing to treat them as a threat.
Both Democrats and Republicans vastly overestimate how many on the other side support political violence. The result is a populace primed to assume the other side is ready to do horrible things.
Each step feeds the next. The distortion is self-reinforcing.
Okay. So now you know that a small minority dominates the feed.
You know that Republicans and Democrats actually hold far more nuanced opinions on contested issues than the feed suggests.
Does that fix it? Not really. You also know that everyone else doesn't know it. And if the world continues operating as if the distortion is real, you should probably act the same — even though you know it's wrong. The room hasn't changed, even if you know people inside it are confused.
This is called a common knowledge problem.
Steven Pinker lays this out cleanly in his excellent recent book When Everyone Knows That Everyone Knows [8]. Learning a fact changes what you know. Seeing it displayed publicly, where you know everyone else can see it too, changes what everyone knows, and in turn how everyone acts.
Social media has no public square. It has 300 million private windows, each showing a different distortion of the same room. Making the thoughts we hold in common visible to everyone at once could radically change that.
So what can we do about this? Fortunately, there's some good evidence showing how it can be fixed.
Multiple studies show that when misperceptions are corrected in a public way, hostility drops. Mernyk et al. found that a single correction reduced partisan hostility for a full month [7]. Lee et al. found that correcting overestimates of toxic users improved how people felt about their country and each other [1].
We can do this today.
Imagine every post on a contested topic had a quiet link beneath it. Not a fact-check. Not a label. Not a warning. Just a question:
Let's explore an example that cuts across political identity:
83% of Americans support a constitutional amendment to limit money in politics. 81% are concerned about the influence of money on elections, including 78% of Republicans and 90% of Democrats. 75% say unlimited spending weakens democracy. Only 15% believe unlimited political spending is protected free speech.
And yet nothing changes, because everyone assumes the other side is fine with it. The feed is full of people defending their team's donors and attacking the other team's. It looks like a 50/50 war. It's not. It's an 80%+ consensus that can't see itself.
This is not a poll under the post. That would just measure the same distortion. Community Check draws from a random sample of platform users, surveyed independently of the content. The sample is statistically representative. The results update continuously. And critically: everyone sees the same numbers.
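To make the mechanics concrete, here is a minimal sketch of the payload such a widget might render beneath a post; every name and field below is an illustrative assumption, not a real platform API:

```python
# A minimal sketch of the data a Community Check widget might render.
# All field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CommunityCheckResult:
    question: str           # neutral wording drawn from the open question taxonomy
    support_pct: float      # share of sampled users who agree
    oppose_pct: float       # share who disagree
    sample_size: int        # randomly selected respondents, not volunteers
    margin_of_error: float  # published alongside the numbers
    updated_at: datetime    # results refresh continuously

# The same object is served to everyone -- identical numbers under identical posts.
example = CommunityCheckResult(
    question="Do you support limiting money in politics?",
    support_pct=83.0,
    oppose_pct=15.0,
    sample_size=100_000,
    margin_of_error=0.4,
    updated_at=datetime.now(timezone.utc),
)
```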
The feed shows you what people say. Community Check shows you what people think. The difference is the entire problem.
Nothing is censored. Nothing is labeled. The loud posters can keep posting. They just can't monopolize your model of reality anymore, because the actual room is now visible, right there beneath the post, to everyone.
Fact-checking is a top-down approach that often feels like someone telling you what to think. This is just showing you what people already think.
Content moderation has long been perceived as removing speech. This simply adds context.
The Lee et al. study found something worth sitting with: when people learned the real number, they felt better. Better about their country. Better about each other. The cynicism lifted [1]. We're walking around with a distorted picture of who we are, and it's making us worse to each other. Not because of anything real, but because of a design choice in a content-ranking algorithm.
Short-form video is the fastest-growing vector for political distortion. The same dynamic applies — a small minority of creators produce the vast majority of political content — but video bypasses the pause that text gives you. Community Check can adapt. Tap through to see how.
Every platform already has the data. They already survey users. They already know the base rates. They already have the infrastructure to display context beneath posts. They just don't have the incentive.
But the unseen majority is the public. And the public deserves to know itself.
A tiny minority, dominating the feed. That's all it ever was. The rest of us were here the whole time, quiet and decent and waiting to be seen.
Honest objections deserve honest answers. These are the questions skeptics from every political perspective are most likely to ask.
You can't flood a system that chooses its respondents randomly. Community Check would use stratified random sampling — the gold standard in survey methodology (Groves et al., Survey Methodology, 2nd ed., Wiley, 2009). You don't volunteer to respond. You're selected, like jury duty. Each user would respond once per question per 90-day cycle. Coordinated response patterns would be anomaly-detected and excluded. The sampling algorithm and exclusion criteria would be open-source and auditable.
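As a sketch of how those constraints compose, assume users are records with hypothetical flagged_anomalous and last_asked fields; the function and parameter names below are illustrative, not a specification:

```python
# Sketch: stratified random selection with a 90-day per-question cooldown
# and exclusion of accounts flagged by anomaly detection.
import random
from datetime import timedelta

CYCLE = timedelta(days=90)  # one response per user per question per cycle

def select_respondents(users, question_id, stratum_of, per_stratum, now):
    """Randomly pick eligible users within each stratum (e.g. region x age)."""
    strata = {}
    for user in users:
        if user.get("flagged_anomalous"):
            continue  # coordinated response patterns are excluded
        last = user.get("last_asked", {}).get(question_id)
        if last is not None and now - last < CYCLE:
            continue  # already answered this question this cycle
        strata.setdefault(stratum_of(user), []).append(user)

    selected = []
    for members in strata.values():
        k = min(per_stratum, len(members))
        selected.extend(random.sample(members, k))  # selection, like jury duty
    return selected
```

Here stratum_of might bucket a user into a region-by-age cell so every cell is sampled at a known rate; because respondents are drawn rather than self-selected, flooding the question with volunteers accomplishes nothing.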
This is the same methodology behind national polls that reliably measure opinion across 330 million people using samples of just 1,000–2,000 respondents. The key isn't sample size — it's random selection. A platform with hundreds of millions of users has an even larger pool to draw from, making representative sampling more robust, not less. Right now, a single viral post from one account can shape the perceived consensus of millions. Community Check would replace that with N>100,000 randomly selected responses — orders of magnitude larger than any national poll, and a dramatically higher bar than the status quo.
In the ideal implementation, questions are governed by a bridging algorithm — the same approach Community Notes uses. Questions are proposed by a diverse pool of contributors and only enter the active taxonomy if they earn approval from contributors who historically disagree with each other. Loaded or partisan questions are filtered out structurally, not by any single editorial board. AAPOR standards for neutral question design apply: balanced language, all reasonable response options, no leading framing.
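A toy version of the bridging rule, collapsed to two historically opposed camps (Community Notes actually uses matrix factorization; the camp labels and threshold below are illustrative assumptions):

```python
# Sketch: a proposed question enters the taxonomy only if it clears a
# support threshold in BOTH camps of contributors who historically disagree.

def bridged_approval(votes, history, min_support=0.7):
    """
    votes:   {contributor_id: bool} on the proposed question
    history: {contributor_id: {"camp": "A" or "B"}} from past rating behavior
    """
    camp_a = [c for c in votes if history[c]["camp"] == "A"]
    camp_b = [c for c in votes if history[c]["camp"] == "B"]
    if not camp_a or not camp_b:
        return False  # approval from one side alone is never enough
    support_a = sum(votes[c] for c in camp_a) / len(camp_a)
    support_b = sum(votes[c] for c in camp_b) / len(camp_b)
    return support_a >= min_support and support_b >= min_support
```

A loaded question tends to poll well in one camp and poorly in the other, so it fails the joint threshold structurally; no editor has to rule on it.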
For the open-source starting point, questions come from established polling organizations (Pew, Gallup, AP-NORC) with published methodology. The full question taxonomy is open — any researcher or journalist can audit the wording. That's a level of transparency no social media algorithm currently offers.
Right now, the system already silences the actual majority. The spiral of silence — people self-censoring because they falsely believe they're in the minority — is one of the most replicated findings in political communication (Noelle-Neumann, J. Communication, 1974). Hampton et al. (Pew Research, 2014) found social media makes this worse: people who sensed their Facebook network disagreed with them were less likely to speak up both online and in person. Community Check breaks that cycle.
It also explicitly displays minority positions — when 15% hold a view, that number appears clearly. A minority position accurately shown at 15% is far healthier than one that looks like 50% through amplification or 0% through suppression. Everyone benefits from seeing the real picture.
Election forecasting and opinion measurement are different things. Community Check doesn't predict elections. It measures policy preferences — "Do you support background checks?" — which are far more stable and far easier to measure than vote intention. When Pew reports 87% support for background checks across 15 years of polling with N=5,000+, that's a measurement with a published margin of error, not a prediction.
The platform sample adds N>100,000 — 50–100x larger than typical national polls, with margins of error below ±0.5%. That's an extraordinarily reliable signal, and it updates continuously (Lee et al., PNAS Nexus, 2025).
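Those margins follow from the standard formula at 95% confidence (z = 1.96), taking the worst case p = 0.5; a worked sketch of the arithmetic, not the platform's exact estimator:

```latex
\mathrm{MoE} = z\sqrt{\frac{p(1-p)}{n}}
% n = 1{,}000:     \pm 1.96\sqrt{0.25/1000}       \approx \pm 3.1\%
% n = 100{,}000:   \pm 1.96\sqrt{0.25/100{,}000}  \approx \pm 0.31\%
```

At n = 1,000, a typical national poll, the margin is about ±3.1%; at n = 100,000 it drops to about ±0.31%, consistent with the sub-half-percent figure above.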
Your perception is already being shaped — by algorithms that prioritize engagement over accuracy. Community Check simply makes additional information visible: what a representative sample of people actually believe. You can agree, disagree, or ignore it entirely.
Think of nutrition labels. The Nutrition Labeling and Education Act of 1990 didn't tell people what to eat — it made the information available. Community Check does the same for public opinion: standardized, transparent data beneath content that is already shaping how you see the world.
They solve different problems. Community Notes evaluates whether specific claims are true or false, written by self-selected volunteers rated via a bridging algorithm (Wojcik et al., arXiv:2210.15723, 2022). Community Check doesn't assess truth — it shows what people think about the policy topic a post discusses. A post can be entirely accurate and still create a distorted picture of where the public stands.
The data source matters too. Community Notes contributors self-select in — and More in Common (2019) found that the most politically engaged users have the largest perception gaps (nearly 3x more distorted than disengaged users). Community Check uses random sampling and peer-reviewed national surveys. Both tools are valuable; they complement each other.
The research consistently shows the opposite. The social norms approach — correcting misperceived norms by showing accurate data — has been validated across 200+ studies (Berkowitz, Changing the Culture of College Drinking, Hampton Press, 2004). Tankard & Paluck (Social Issues and Policy Review, 2016) found that accurate norm information corrects misperceptions without coercion — it reveals what people already privately believe, rather than pressuring them into something new.
Mernyk et al. (PNAS, 2022, n=4,741) showed this directly: correcting inaccurate metaperceptions reduced support for partisan violence, with effects lasting ~26 days. People didn't conform — they recalibrated, and felt better about each other as a result.
Community Check doesn't claim majority opinion equals truth. It provides a map of what people actually think — which is valuable precisely when your estimate of the room is off by 200–400%, as Ahler & Sood (J. Politics, 2018) documented. If 70% of people believe something you disagree with, knowing that number helps you understand the world you're operating in. Hiding it doesn't make the disagreement go away.
Both majority and minority positions are always displayed with their numbers. This isn't "the crowd says you're wrong." It's "here's what the room actually looks like" — and that's useful no matter where you stand in it.
Correct — by design. Community Check activates only when reliable polling data exists, a documented perception gap has been identified, and a post reaches >10K impressions. That covers ~50–100 major policy questions. Posts about niche topics or emerging controversies without polling data get no Community Check.
The topic-matching confidence threshold is 0.8 — if the system isn't sure, it stays silent. False positives are worse than gaps. This is intentionally focused on the specific, well-documented cases where perception gaps are largest: gun policy, climate, immigration, healthcare, money in politics. Start where the data is strongest, and expand from there.
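Those criteria compose into a simple gate; the sketch below mirrors the stated thresholds, with function and field names as illustrative assumptions:

```python
# Sketch: Community Check activates only when every criterion is met.
IMPRESSION_THRESHOLD = 10_000   # post must exceed 10K impressions
MATCH_CONFIDENCE_MIN = 0.8      # topic-matching confidence floor

def should_show_check(post, topic_match):
    """Return True only if all activation criteria hold; default to silence."""
    if post["impressions"] <= IMPRESSION_THRESHOLD:
        return False
    if topic_match["confidence"] < MATCH_CONFIDENCE_MIN:
        return False  # if the system isn't sure, it stays silent
    topic = topic_match["topic"]
    if not topic.get("has_reliable_polling"):
        return False  # no polling data, no Community Check
    if not topic.get("documented_perception_gap"):
        return False
    return True
```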
The architecture prevents this. Data comes from independent polling organizations — not governments, not platforms. The sampling algorithm is open-source. Question wording is published. Methodology is auditable. Quarterly transparency reports detail every step from sampling to display.
Compromising it would require simultaneously infiltrating multiple independent polling organizations, altering open-source code inspected by thousands of researchers, and evading anomaly detection. That's a high bar. Today's platform algorithms shape public perception at scale with zero transparency and zero public oversight. Community Check raises the baseline significantly.
Independent thinking requires accurate inputs. Right now, the feed is giving you wildly inaccurate ones. Sparkman et al. (Nature Communications, 2022, n=6,119) found Americans underestimate popular climate policy support by nearly half — 80% actually support renewable energy siting, but people estimate 43%. Moore-Berg et al. (PNAS, 2020) found partisans overestimate the other side's hostility by roughly 2x. These aren't matters of opinion — they're factual errors about the world around you.
Community Check doesn't ask you to care what others think. It gives you an accurate picture so your independent opinions are based on reality, not on an algorithmically curated distortion of it.
Correcting metaperceptions — beliefs about what others believe — works differently than correcting factual beliefs. Factual corrections can trigger defensiveness. But learning "the other side is less extreme than you thought" tends to be relieving, not threatening. It lowers the temperature.
Lee et al. (PNAS Nexus, 2025, n=1,090) found that correcting overestimates of toxic social media users improved positive emotions and reduced perceived moral decline. Mernyk et al. (PNAS, 2022, n=4,741) found effects lasting ~26 days from a single correction. Community Check targets this same mechanism — not what you believe, but what you believe others believe. That's where the distortion lives, and that's where the correction is most effective.
How Community Check would work in practice, from data sources to platform integration.