Social media trails like these are becoming a recurrent feature in violent events ranging from synagogue massacres to bombing plots.
“Before they become real, they percolate online, courtesy of a social media ecosystem that is ubiquitous, barely moderated and well suited to helping aggrieved people find each other,” my colleagues write.
Experts in online extremism say the plot exposed by federal and state officials this week highlights the stakes for social media companies to address violent posts on their platforms.
“Social media companies have been allowing these communities to build and grow, ignoring the mounting evidence that memes, posts and images encouraging violence can and do translate into actual violence,” Cindy Otis, a former CIA analyst and vice president of analysis for the Alethea Group, which tracks online threats, told my colleagues. “Not only have many of these Michigan pages and groups been on Facebook for years, the Facebook algorithm actively recommended other militia-related groups and pages to join, allowing each page and group to expand their reach.”
The companies’ checkered record on extremism is a driving force behind efforts in Washington to regulate them.
Lawmakers have held several hearings about the companies’ handling of online extremism, including one last month where House lawmakers examined “social media’s role in radicalizing America.”
“We’ve had several hearings over the past few years trying to shine light on these problems and imploring the social media companies to act,” Rep. Frank Pallone Jr. (D-N.J.), chair of the House Energy and Commerce Committee, said at that hearing. “It’s clear they won’t do it on their own.”
The latest incident could add to growing pressure in Washington to overhaul Section 230, a decades-old legal provision shielding tech companies from lawsuits over the photos, videos and posts people share on their services. Democrats in particular have criticized the companies for not taking enough responsibility for harmful content on their services, contending legal changes may be the only way to get tech companies to change their ways.
Facebook says it’s cooperating with the FBI investigation into the kidnapping plot.
There was also a Facebook page called “Michigan Militia Corps, Wolverines” that used language similar to that of the group behind the plot, which authorities said was called the “Wolverine Watchmen.” Posts included an image of men holding massive firearms, accompanying a link to an article that said, “Trump sides with protesters against Michigan governor.” A page affiliated with the Wolverine Watchmen had around 3,000 likes and was active until at least the end of August, according to Otis’s research.
“We remove content, disable accounts and immediately report to law enforcement when there is a credible threat of imminent harm to people or public safety,” the company said in a statement.
TikTok, meanwhile, removed the account of one of the suspects, Brandon Caserta. In one video, Caserta appears in a Hawaiian shirt; in another, he warned that the “price of freedom is eternal vigilance,” according to a recording from the Detroit News. The company removed the account in line with its policies “to reduce potential glorification of harm or martyrdom,” spokeswoman Hilary McQuaide told my colleagues.
Our top tabs
Experts say it’s unlikely that the president used a green screen in a recent social media video, contradicting online speculation.
The viral speculation started appearing on Twitter almost as soon as Trump posted the video yesterday afternoon from the South Lawn of the White House, my colleague Rachel Lerman writes.
“Trump faked this video released today,” one Twitter user posted. “It’s shot in front of a green screen, the background is fake & on 3-second loop.”
And actor George Takei weighed in, garnering more than 3,500 retweets:
Yet video experts say the elements people are pointing to — the blurry grass, a background that appears to loop and the shadows — are probably a result of Twitter’s standard compression of videos posted on its service.
“I don’t see clear visual evidence that the video is shot with a green screen, the crisp shadows appear to be consistent with the sun as the light source, and the reverberation in the audio does not sound like it is indoors,” Hany Farid, a professor at the University of California at Berkeley who researches digital forensics, tells Rachel.
White House spokesman Judd Deere said the president was filmed on the South Lawn and did not use a green screen.
It’s just the latest example of unsubstantiated rumors proliferating online since Trump’s coronavirus diagnosis, with people speculating that the president might not be as healthy as his doctors say.
Facebook permanently banned an Arizona marketing firm running a domestic ‘troll farm’ in support of Trump.
The firm, Rally Forge, was “working on behalf” of Turning Point Action, an affiliate of Turning Point USA, the prominent conservative youth organization, Facebook’s investigation found. The company took down 200 accounts and 55 pages on the social network, as well as 76 accounts on the company’s subsidiary Instagram, my colleague Isaac Stanley-Becker writes.
“The fake accounts, some with either cartoonlike Bitmoji profiles or images generated by artificial intelligence, complemented the real accounts of users involved in the effort, which largely entailed leaving comments sympathetic to President Trump and other conservative causes across social media,” Isaac wrote.
Facebook did not penalize Turning Point USA or its president, Charlie Kirk, who spoke at the Republican National Convention. The company could not determine the extent to which the group’s leaders were aware of the use of fake accounts and other violations. Twitter also suspended 262 accounts involved in the operation because of “platform manipulation and spam.”
Experts in disinformation criticized Facebook’s decision not to penalize the group financing the activity.
“If, once exposed, there are no consequences, others will try it, too,” Philip N. Howard, director of the Oxford Internet Institute, told Isaac. “Long term, the industry shoots itself in the foot because limited action diminishes our trust in the authenticity of public life on the platforms. There’s been worry about white nationalists or other extremists using Russia’s tactics; now, it’s also the teenager around the corner who’s on the payroll of a troll operation.”
Researchers found a spike in hostile online comments targeting Chinese Americans after Trump’s coronavirus diagnosis.
In an analysis of 2.7 million tweets posted in the three days after Trump announced his diagnosis on Twitter, the Anti-Defamation League, a civil rights group, found an 85 percent spike in language associated with hostility against Asians, compared with the day before. The announcement set off many online conversations accusing China of purposely infecting the president.
Prominent politicians played a role in driving the conversation. In one since-deleted tweet to her 394,000 followers, pro-Trump former congressional candidate DeAnna Lorraine wrote that “China must pay for giving Trump COVID,” and promised that “we will have justice.”
Trump himself has led the way in blaming China for the coronavirus, repeatedly calling the virus the “China plague.”
“From the birther scandal to lies about immigrants to his attempt to blame China for his own failure to contain the coronavirus, Donald Trump has built his presidency on perpetuating conspiracy theories and racism,” Rep. Judy Chu (D-Calif.), chair of the Congressional Asian Pacific American Caucus, said in a statement.
Inside the industry
Before you log off
We could all use a little nostalgia TV in 2020.