    What the Twitter Files Reveal About Free Speech and Social Media


    In the hours after the January 6th insurrection, executives at Twitter had to decide what to do about Donald Trump’s account. On one level, the decision seemed straightforward. The President, having been voted out of office, had repeatedly insisted that a fair election had been stolen, summoned what would become a violent crowd of his supporters to Washington and directed them to the Capitol, where they tried to forcibly stop the official tally that would remove him from office. Trump had communicated much of this effort on Twitter itself; accounts had been suspended for far less. But on another level the situation was murkier. Twitter had developed an overlapping network of formal rules and internal review boards that governed its use, and had chosen to largely exempt public figures from the scrutiny it directed at most accounts. This choice had insulated Trump from punishment in the past. Why, exactly, should this time be any different?

    At that precise moment, Twitter’s C.E.O. and co-founder, Jack Dorsey, was vacationing in French Polynesia. On the morning of January 7th, he e-mailed employees, saying that it was important that Twitter stick to its prior policies. But in the course of the morning things began to change. Shortly before noon, Yoel Roth, Twitter’s Global Head of Site Integrity, sent a message to a colleague. “GUESS WHAT,” Roth wrote. “Jack just approved repeat offender for civic integrity.” The new policy, Roth explained, established an escalating system of five strikes, through which repeat offenses could lead to a permanent ban. “Progress!” Roth’s colleague wrote back. That afternoon and evening, executives at the company went back and forth trying to figure out what this approach meant. Notably, Roth confirmed that the “public-interest exception” had been suspended in Trump’s case. Beyond that, the course was not yet clear: for now, Twitter would wait, and see what the President would do.

    We know the details of these internal conversations because of the ongoing publication of the Twitter Files, a serial investigation into the way the company has managed sensitive public issues, commissioned by its new owner, Elon Musk. Not long after Musk bought Twitter, in October, he reached out to a few prominent journalists, each of them at least broadly sympathetic to Musk’s view that Twitter’s past moderation decisions reflected its own entrenchment in the liberal establishment, and were therefore effectively suppressing conservative and other dissenting views. Among them were Matt Taibbi, the gonzo political writer, formerly of Rolling Stone; Bari Weiss, the ex-Times Opinion writer; the environmental-policy wonk Michael Shellenberger, who made his name by opposing the climate left; and the investigative reporter Lee Fang, of the Intercept. These writers had reputations already, but, even so, to receive a summons from the wealthiest person in the world must have been thrilling. “At dinner time on December 2, I received a text from Elon Musk,” Weiss wrote recently. “Was I interested in looking at Twitter’s archives, he asked. And how soon could I get to Twitter HQ? Two hours later I was on a flight.”

    Musk set at least one condition: that the reports be published first on Twitter itself. Because of this, and because the journalists he chose tend to write polemically and have fierce online cliques of supporters and opponents, the Twitter Files have arrived pre-factionalized. Conservatives have cheered their publication, while many progressives have either ignored them or rolled their eyes. But as they have been published over the past several weeks, the files have been at once among the most interesting and the most complicated journalistic documents of the Trump era: complicated in that their tone is often propagandistic and their evidence frustratingly partial, but interesting in that they show how various political actors sought to influence a period in global politics (beginning, roughly, with the Syrian war and continuing through the pandemic) defined by fights over communication and information. The files cut a haphazard tunnel, in other words, through one of the richest substrates in politics, and readers are left to bend, squint, strike a match, and dig through it, trying to figure out exactly how information on social media was managed during this era—and, crucially, whether that era has ended.

    Trump tweeted twice on the morning of January 8th. At nine-forty-five, he wrote, “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” He followed that with a shorter post an hour later, saying, “To all those who have asked, I will not be going to the Inauguration on January 20th.” Several public figures (among them Michelle Obama) had called that morning for Twitter to permanently ban the President, and by the afternoon there was also some public pressure stemming from the company itself: the Washington Post published a joint letter from more than three hundred employees demanding that Trump be removed from the platform. The trust and safety team, though, scrutinized Trump’s morning tweets for violations of its standards and saw nothing wrong. “I’m not seeing clear or coded incitement in the DJT tweet,” one Twitter official wrote. “No violation of our standards at this time.”

    But Vijaya Gadde, the company’s general counsel, asked another team to consider the same tweets, and one member of that team offered another way of viewing them: praising the “75,000,000 great American Patriots who voted for me” needed to be read in the context of January 6th. “He is the leader of a violent extremist group who is glorifying the group and its recent actions,” the team member wrote. The argument wound on, but eventually this gloss—that Trump’s obnoxious but also facially anodyne tweet demanding respect for his supporters was in fact inciting violence—won the day. About six hours later, following an all-hands meeting, the company announced that Trump was banned from the platform indefinitely, “due to the risk of further incitement of violence.”

    Two years later, Trump is still not back on the platform. (Musk invited him to return in November, but the ex-President declined.) The Twitter Files suggest that the company made a subjective determination, some might say a commonsensical one—that Trump had gone too far—and then found a legalistic rationale for doing what it wanted. Of course, businesses do things like this all the time, but in a company that had come to play such a central role in convening global political speech, this whiff of arbitrariness was bound to set off alarms. As noted by Bari Weiss, who authored the installment of the Twitter Files on Trump’s ban, heads of state around the world objected. Angela Merkel’s spokesperson called the decision “problematic.” Emmanuel Macron told an audience that he didn’t want to live in a world in which these decisions were made by “a private player.” Alexei Navalny, the Russian dissident politician, called it “an unacceptable act of censorship.”

    Interestingly, a critical note was struck within Twitter itself. The company’s C.T.O., Parag Agrawal, who was soon to take over from Dorsey as C.E.O. (Musk fired him last October), wrote to a colleague, “I think a few of us should brainstorm the ripple effects [of Trump’s ban].” His message reads as if Agrawal thought Twitter might have bitten off more than it could chew. He wrote that evening, “Centralized content moderation IMO has reached a breaking point.”

    Certainly, Twitter was practicing quite a bit of what Agrawal called “centralized content moderation.” Consider the case of Jay Bhattacharya, mentioned briefly in another installment of the Twitter Files. From the early days of the pandemic, Bhattacharya, a health-policy professor at Stanford, was one of the most prominent intellectuals calling for more lenient COVID restrictions in the U.S. and abroad. (Together with Martin Kulldorff of Harvard and Sunetra Gupta of Oxford, Bhattacharya was one of the three authors of the Great Barrington Declaration of October, 2020, which argued that ending lockdowns for all but the most vulnerable would allow countries to quickly achieve herd immunity.) Bhattacharya detailed his views in many ways—he gave talks to other experts, spoke on television, wrote op-eds—and he tweeted. As Weiss revealed in a Twitter Files dispatch in early December, Bhattacharya was at one point placed on a “trends blacklist,” a tool meant to keep even viral tweets from appearing on Twitter’s “trending” search bar and intended to limit an account’s over-all reach without visibly restricting it. Weiss suggests that the decision was likely made by a small group of senior Twitter executives.

    Now, Bhattacharya is not the guy who got the pandemic right. In a Wall Street Journal op-ed in March, 2020, he claimed that COVID was only one-tenth as deadly as the flu; in January, 2021, he argued in the Indian newspaper ThePrint that a mass vaccination program in that country would do more harm than good. We should be glad that most governments did not take this advice. You could also make a case that declining to include a tweet in the “trending” section is a pretty mild course of action, amounting to a decision not to promote: if the Times opinion editors had passed on Bhattacharya’s op-ed before the Journal accepted it, no one would accuse them of suppressing free speech. Bhattacharya’s account currently has more than three hundred and fifty thousand followers; he was never suspended. Still, he was making a sincere policy argument, as experts and pundits have done on op-ed pages for as long as there’s been a free press. This wasn’t “misinformation.” Why would it be seen as so dangerous that it needed to be suppressed?

    The backbeat of the Twitter Files is a heightened sensitivity to the power of information. Three weeks before the 2020 election, the New York Post published a splashy story implying, without much evidence, that emails on Hunter Biden’s laptop revealed that he had connected a Ukrainian businessman with his father. Twitter reacted by suspending the Post’s account and also by suspending the accounts of people who promoted the story, among them the White House press secretary, Kayleigh McEnany. (“At least pretend to care,” a Trump campaign staffer fumed to his Twitter contact, lobbying for her reinstatement.) As New York magazine’s Eric Levitz detailed, the story was wildly oversold. But ban the Post? Pharmacies did not respond to the Hunter Biden story by pulling the Post from their shelves; bodegas did not turn to an all-Daily News lineup. Something about the technical ease of online suppression made it more likely to happen.

    The most eyebrow-raising revelations in the Twitter Files, documented mostly by Matt Taibbi and Lee Fang, concern the extent to which the F.B.I. and the Pentagon were interested in controlling what was seen on the platform. According to Taibbi’s reporting, there were more than a hundred and fifty e-mails between Roth and the F.B.I. from January, 2020, to November, 2022. Some of these seem to have been more or less normal investigative queries, but many were requests that the company take action to restrict accounts that the F.B.I. had flagged for supplying misinformation. As Taibbi pointed out, some of these requests were absurd—one concerned a parody account of the pro wrestler the Undertaker, which primarily tweeted about soiling himself. (It was banned the same day.) The F.B.I. also flagged cases where the “misinformation” was obviously a joke: “I want to remind republicans to vote tomorrow, Wednesday November 9,” @fromma, the subject of an F.B.I. request to Twitter, tweeted. Down a different archival tunnel, Fang discovered that Twitter had long been coöperating with the Pentagon to help the U.S. government amplify accounts (often in Arabic or Russian) with friendly, and sometimes manufactured, perspectives. You don’t have to be an especially cynical reader of American history to realize that, if there is a new tool that allows for “centralized content moderation” of political information, the F.B.I. is going to take an interest in it. Still, in this context, “centralized content moderation” sounds downright Orwellian.
