Late 2022 — 2023 · Released by independent journalists · Twitter / X

The Twitter Files

Twelve threads of internal Twitter documents released to selected journalists shortly after the Musk acquisition. The revelations reshaped public understanding of the relationship between platforms and government.

  1. #1 · 2022-12-02 · released by Matt Taibbi

    Hunter Biden laptop story suppression (Oct 2020)

    The first Twitter Files thread documented how Twitter suppressed the New York Post's October 2020 reporting on Hunter Biden's laptop — blocking links to the story and locking accounts that shared it, including the Post's own account. The files showed that senior Twitter executives made the call without a formal government request, citing a (since-disputed) internal policy about 'hacked materials.'

    • Twitter's policy team and executive leadership actively blocked the NY Post's Biden laptop story without a legal demand from any government agency.
    • Requests to suppress the story came from both Democratic and Republican political operatives via Twitter's 'partner support' portal — but only the Democratic requests were acted upon before the election.
    • Senior executives including Vijaya Gadde (head of policy) and Yoel Roth (head of trust & safety) were directly involved in the decision.
    • The 'hacked materials' justification was applied despite Twitter's own staff acknowledging there was no evidence the laptop material was actually hacked.
    • After internal debate, the ban was partially relaxed within 24 hours — but reputational and virality damage to the story had already been done.
  2. #2 · 2022-12-08 · released by Bari Weiss

    Visibility filtering / "shadow banning"

    The second thread, released by Bari Weiss and her team at The Free Press, revealed Twitter's internal toolkit for limiting the reach of accounts and tweets without notifying users — a practice the company had publicly denied. Internally these tools were called 'visibility filtering,' but they functionally suppressed content from trending, searches, and suggested follows without telling account holders.

    • Twitter maintained a system of tiers including 'Do Not Amplify,' 'Search Blacklist,' and 'Trends Blacklist' that could be applied to accounts or individual tweets.
    • High-profile conservative accounts — including Turning Point USA founder Charlie Kirk and Rep. Dan Crenshaw — were placed on these lists.
    • The system was applied asymmetrically, with conservative accounts disproportionately affected in the files reviewed.
    • Twitter employees used the term "visibility filtering" internally while publicly denying they engaged in shadow banning.
    • The decisions were often made by mid-level trust & safety staff with limited formal review.
  3. #3 · 2022-12-09 · released by Matt Taibbi

    The Trump ban — internal deliberations

    The third thread revealed the chaotic internal process inside Twitter on January 6–8, 2021 as executives debated and ultimately decided to permanently suspend Donald Trump's account. The files showed real-time Slack messages and emails as employees argued about whether Trump's tweets violated policy, with intense external political pressure shaping the outcome.

    • Internal Twitter staff were split on whether Trump's January 6 tweets actually violated Twitter rules — several policy staff argued the tweets did not meet the bar for a permanent ban.
    • The decision was influenced by pressure from advertisers and public statements by employees threatening to quit if Trump was not banned.
    • Twitter CEO Jack Dorsey was not involved in real-time decisions on January 6 and 7; the calls were made by Gadde, Roth, and a small group of policy executives.
    • Employees discussed the risks of setting a precedent for banning a sitting head of state and the signal it would send globally.
    • A 'Scaled Enforcement' team effectively fast-tracked the ban under a 'glorification of violence' policy that had not been applied to comparable tweets from other leaders.
  4. #4 · 2022-12-10 · released by Michael Shellenberger

    Lead-up to the Trump ban — pressure and context

    The fourth thread, released by journalist Michael Shellenberger, provided the broader context for the Trump ban — including pressure from congressional Democrats beginning months before January 6, 2021, and how Twitter's policy team had been preparing a justification framework for the ban for some time.

    • Democratic members of Congress had been lobbying Twitter for months to ban Trump, with formal letters citing his COVID misinformation and election rhetoric.
    • Twitter's policy team had internally prepared 'ban scenarios' for Trump as early as mid-2020, anticipating the need to act.
    • Twitter's partner support system gave certain political campaigns and media outlets expedited access to moderation tools unavailable to ordinary users.
    • The files showed that concerns about Trump's account were raised repeatedly in 2020 but action was delayed until after the election.
    • Employees who opposed the ban were overruled and their objections were not escalated to Dorsey.
  5. #5 · 2022-12-12 · released by Matt Taibbi

    Final Trump ban decision — the actual calls made

    The fifth thread reconstructed the moment-by-moment decision on January 8, 2021 to permanently ban Trump. Taibbi documented the specific internal communications — Slack threads, emails, and a live Google doc — that constituted the final decision process, including who had final authority and how the 'glorification of violence' rationale was applied.

    • The final decision to permanently ban Trump was made in a Google doc that Vijaya Gadde and her team edited in real time on January 8.
    • Twitter's own Trust & Safety lead Yoel Roth had argued that Trump's tweets on January 8 did not qualify under existing policy — but was overruled.
    • Executives cited a 'coded' reading of Trump's language (references to 'the American Patriots' and '75 million great American Patriots') as implied calls to further violence.
    • The decision was made within approximately six hours of Trump's January 8 tweets, unusually fast by Twitter's own process standards.
    • No formal appeal process was available to Trump's account and none was offered.
  6. #6 · 2022-12-16 · released by Matt Taibbi

    FBI and Twitter — the formalized relationship

    The sixth thread documented the extensive, formalized relationship between the FBI and Twitter's trust & safety team — including the volume of content-moderation requests the FBI sent to Twitter and the internal processes Twitter had developed to handle them.

    • The FBI sent Twitter hundreds of requests to review or take down accounts during the 2020 election period — far more than previously known.
    • Twitter had a formal weekly or bi-weekly meeting cadence with the FBI's Foreign Influence Task Force dedicated to flagged accounts.
    • Twitter staff were often skeptical of FBI requests and did not uniformly comply — some accounts flagged by the FBI were found not to violate policy.
    • The FBI paid Twitter approximately $3.4M for processing government legal demands, per internal records.
    • The volume and regularity of FBI-Twitter contact went well beyond typical law-enforcement cooperation and amounted to an embedded relationship.
  7. #7 · 2022-12-19 · released by Michael Shellenberger

    FBI coordination on Hunter Biden laptop story

    The seventh thread, by Shellenberger, focused specifically on the FBI's involvement in priming Twitter — and other platforms — to suppress or treat skeptically any reporting related to the Hunter Biden laptop ahead of the 2020 election, including by labeling anticipated stories as Russian disinformation.

    • The FBI had briefed Twitter executives in the months before the 2020 election to be alert for a 'hack-and-leak' operation related to Hunter Biden.
    • That briefing primed Twitter's trust & safety team to react instantly when the NY Post story dropped in October 2020.
    • Internal Twitter communications showed Yoel Roth referencing the FBI's pre-election briefing as partial justification for suppressing the laptop story.
    • The FBI's briefing did not specifically mention the NY Post story or the Biden laptop — it was a general warning — but Twitter treated it as covering those facts.
    • FBI agents who had direct knowledge the laptop was authentic did not correct the platform's misunderstanding.
  8. #8 · 2022-12-20 · released by Lee Fang

    Pentagon covert influence-ops accounts on Twitter

    The eighth thread, reported by journalist Lee Fang of The Intercept, revealed that Twitter had knowingly allowed a network of US military-linked accounts to conduct covert influence operations — even after being alerted to them — because they were operated on behalf of the US government.

    • Twitter maintained a whitelist of accounts linked to US military PSYOP operations, shielding them from removal despite knowing they violated Twitter's stated ban on platform manipulation.
    • These accounts were used to spread US government messaging in Arabic, Farsi, and Urdu — targeting audiences in Iran, Afghanistan, and the broader Middle East.
    • Twitter employees raised internal concerns about the accounts but were told by management to leave them active.
    • The revelation was particularly awkward given Twitter's simultaneous removal of thousands of accounts it attributed to Russian and Chinese influence operations.
    • Stanford Internet Observatory researchers had flagged many of the accounts, but their findings were not acted upon.
  9. #9 · 2023-01-13 · released by Matt Taibbi

    Twitter's broader government and agency relationships

    The ninth thread expanded the government-platform relationship picture beyond the FBI to include DHS, the Office of the Director of National Intelligence (ODNI), and the State Department's Global Engagement Center — showing a whole-of-government approach to shaping platform moderation.

    • Multiple federal agencies beyond the FBI had standing relationships with Twitter for content moderation requests, including DHS and the State Department.
    • The DHS's Cybersecurity and Infrastructure Security Agency (CISA) ran a formal 'switchboard' system to channel government content removal requests to platforms.
    • Stanford Internet Observatory, the University of Washington's Center for an Informed Public, and other academic groups acted as intermediaries between government agencies and social media platforms on moderation decisions.
    • Twitter employees expressed concern internally that they were being used as an enforcement arm for government speech preferences.
    • The breadth of coordination went far beyond what platforms had publicly disclosed in their transparency reports.
  10. #10 · 2023-01-09 · released by David Zweig

    COVID-19 content moderation — suppressing true information

    The tenth thread, by journalist David Zweig, examined Twitter's internal handling of COVID-19 content and argued that the platform suppressed factually accurate information — including from credentialed epidemiologists — that contradicted official guidance from the CDC and WHO.

    • Twitter's COVID moderation policy was explicitly aligned with CDC and WHO guidance, meaning that accurate information contradicting those bodies could be removed or suppressed.
    • Tweets by Stanford epidemiologist Jay Bhattacharya, a co-author of the Great Barrington Declaration, were placed on Twitter's 'Trends Blacklist' after he tweeted about the costs of lockdowns.
    • Internal records showed Twitter staff debating whether to suppress a tweet by a Harvard epidemiologist that accurately cited mask-study data questioning universal mandates.
    • The moderation system created a chilling effect: credentialed scientists self-censored rather than risk having their accounts flagged.
    • Government agencies including CDC had accounts with special reporting privileges that allowed them to flag content for rapid review.
  11. #11 · 2023-01-13 · released by Matt Taibbi

    'Virality Project' — coordinated COVID narrative management

    The eleventh thread documented the 'Virality Project,' a coordination effort between Stanford Internet Observatory and multiple government-linked entities that systematically flagged COVID-related content across all major platforms for suppression — including content that was factually accurate or based on legitimate scientific debate.

    • The Virality Project coordinated across Twitter, Facebook, Google, TikTok, and other platforms — creating a single chokepoint for narrative management that operated across the entire information ecosystem.
    • Content flagged by the Virality Project included true vaccine side-effect reports, questions about natural immunity, and reporting on breakthrough COVID infections in vaccinated individuals.
    • Weekly briefing calls included representatives from the platforms and from government-linked academic centers.
    • The Virality Project's stated mission was combating 'misinformation,' but internal documents showed it flagging content solely because it could 'generate vaccine hesitancy' — regardless of accuracy.
    • Twitter's trust & safety team complied with the vast majority of Virality Project flagging requests.
  12. #12 · 2023-01-17 · released by Matt Taibbi

    Compelled speech — the "anti-vax" label policy

    The twelfth thread examined how Twitter, under pressure from external groups and government-linked researchers, developed policies that amounted to compelled speech — requiring the platform to affirmatively add warning labels and counter-messaging to content that was not misinformation but merely skeptical of vaccine mandates.

    • The Virality Project pushed Twitter to apply 'informational' labels not just to misinformation but to any content that expressed hesitancy about COVID vaccines, even if the underlying facts were accurate.
    • Twitter executives debated internally whether applying such labels to accurate content was ethically defensible and potentially counterproductive.
    • Some Twitter staff argued that the label regime was causing users to distrust the platform's overall moderation as politically motivated.
    • The policy effectively created a two-tier information system: official messaging was amplified, while dissenting-but-true content was suppressed or labeled.
    • The files showed requests from DHS's CISA asking platforms to add positive vaccine messaging proactively to users' timelines — a form of government-directed editorial intervention.