Affected studies were tracking hate speech, child safety, and misinformation.

100+ researchers say they stopped studying X, fearing Elon Musk might sue them

At a moment when misinformation about the Israel-Hamas war is rapidly spreading on X (formerly Twitter)—mostly by verified X users—many researchers have given up hope that it will be possible to closely monitor this kind of misinformation on the platform, Reuters reported.

According to a “survey of 167 academic and civil society researchers conducted at Reuters’ request by the Coalition for Independent Technology Research” (CITR) in September, more than 100 studies about X have been canceled, suspended, or switched to focus on another platform since Elon Musk began limiting researchers’ access to X data last February. Researchers told Reuters that includes studies on hate speech and child safety, as well as research tracking the “spread of false information during real-time events, such as Hamas’ attack on Israel and the Israeli airstrikes in Gaza.”

The European Union has already threatened X with fines if the platform fails to stop the spread of Israel/Hamas disinformation. In response, X has reported taking actions to curb misinformation, like removing newly created Hamas-affiliated accounts and accounts manipulating trending topics, working with partner organizations to flag terrorist content, actioning “tens of thousands of posts,” and proactively monitoring for antisemitic speech.

But it’s not immediately clear if X is doing enough to reduce potential risks. External social media researchers have typically depended on crunching X’s real-time data to assess growing threats on the platform, and Reuters’ survey shows how much harder it has become for some researchers to continue doing that work. Researchers also told Reuters that another factor hampering research was Musk’s lawsuit against the Center for Countering Digital Hate. “The majority of survey respondents”—104 out of 167—told Reuters they fear “being sued by X over their findings or use of data.”

Meanwhile, X’s content moderation efforts have continued to be heavily scrutinized as X struggles to prove that it’s containing the spread of misinformation and hate speech under Musk’s new policies.

Most recently, X CEO Linda Yaccarino had to step in—amid outcry from X advertisers and staff—to remove a pro-Hitler post that went viral on the platform, The Information reported. X later claimed that the post was removed because it broke platform rules, not because of the backlash, but X’s efforts to proactively monitor antisemitic speech seemingly failed there. And nobody’s sure why X’s global escalation team delayed action, although it’s possible the team feared that removing the post would be considered censorship and draw the ire of Musk, the “free speech absolutist.”

In February, the CITR published a letter warning that Musk charging high fees for access to Twitter data that was previously free “will disrupt critical projects from thousands of journalists, academics, and civil society actors worldwide who study some of the most important issues impacting our societies today.” Currently, X offers three paid tiers for researchers to access data, costing between $100 and $42,000 per month. Reuters reported that CITR’s survey quantifies “for the first time” the number of studies canceled since these fees were imposed.

Cutting off researchers from X’s application programming interface (API) could leave X users more vulnerable to hate speech, misinformation, and disinformation, Reuters reported. Some researchers who are attempting to keep their studies alive told Reuters that they’ve resorted to manually analyzing posts on the platform.

That slow approach threatens to reduce the quality of their findings at a time when there may be more posts for researchers to wade through. X’s current policy is to limit the reach of “lawful but awful” posts rather than remove them, drawing criticism from regulators who have historically pushed social media platforms to do more content moderation. X hopes to persuade regulators that its policy is the right way of doing things. The platform recently provided a transparency report to the EU (to comply with the Digital Services Act), explaining that it was determined to prove that “free expression and platform safety can coexist.” While X vowed to remove “dangerous and illegal content and accounts,” as well as respond to reports of illegal content, X will allow most other posts that are not considered to be violent threats, targeted harassment, or privacy violations, the report said.

“X is reflective of real conversations happening in the world, and that sometimes includes perspectives that may be offensive, controversial, and/or narrow-minded to others,” X’s report said. “While we welcome everyone to express themselves on X, we will not tolerate behavior that harasses, threatens, dehumanizes, or uses fear to silence the voices of others.”

Although X’s API fees and legal threats seemingly have silenced some researchers, X has found other partners to support its own research. In a blog post last month, Yaccarino named the Technology Coalition, the Anti-Defamation League (another group Musk threatened to sue), the American Jewish Committee, and the Global Internet Forum to Counter Terrorism (GIFCT) among groups helping X “keep up to date with potential risks” and supporting X safety measures. GIFCT, for example, recently helped X identify and remove newly created Hamas accounts.

But X partnering with outside researchers isn’t a substitute for external research, as it seemingly leaves X in complete control of how research findings are characterized to users. Unbiased research will likely become increasingly hard to come by, Reuters’ survey suggested.

For example, in July, X said that Sprinklr, a software company that helps brands track customer experiences, supplied X with data that X Safety used to claim that “more than 99 percent of content users and advertisers see on Twitter is healthy.” But a Sprinklr spokesperson this week told Reuters that the company could not confirm X’s figures, explaining that “any recent external reporting prepared by Twitter/X has been done without Sprinklr’s involvement.”
