How the War on Terror Set the Stage for Today's Moderation Wars

This week, the Supreme Court is hearing two cases that could upend the way we’ve come to understand freedom of speech on the internet. Both Gonzalez v. Google and Twitter v. Taamneh ask the court to reconsider how the law interprets Section 230, the statute that protects companies from legal liability for user-generated content. Gizmodo will be running a series of pieces about the past, present, and future of online speech.

There were surely a number of events that led a quiet middle-class teenager to become one of the most influential terrorist propagandists of all time, but one of the first was that he set up a Blogspot account.

In 2003, Samir Khan was barely 18 years old when he launched “InshallahShaheed,” which translates to “Martyr, God Willing,” a blog in which he poured out his thoughts about why America deserved “hellfire” for the wars in Iraq and Afghanistan. Born in Saudi Arabia and raised in Queens, Khan was a naturalized U.S. citizen and, ostensibly, the product of a normal childhood. At some point, he decided that he hated America and wanted to join a holy war against it. Through his blog, Khan swiftly became something of an icon and used his status to foster ties to various high-ranking al-Qaeda members.

Eventually, Khan would become the editor of Inspire, the terrorist group’s web magazine dedicated to recruiting western Muslims to violent jihad. The magazine, distributed only as a downloadable PDF, was full of grisly stuff, including articles advocating for the murder of U.S. government employees and one infamous article encouraging would-be terrorists to build bombs in their mom’s kitchen. It would also reportedly serve as the inspiration for numerous real-world terrorist attacks, including the Boston Marathon bombing.

As for Khan, he seemed to relish his role as the mouthpiece for the world’s most feared terror group. “I am proud to be a traitor to America,” he wrote, in one notorious screed. He predicted a future in which America would be overrun by jihadists.


Of course, that future never materialized. Roughly a year after making that post, Khan was silenced permanently. In September of 2011, while living in Yemen, Hellfire missiles from a U.S. Predator drone struck the convoy the 25-year-old blogger was traveling in, killing him. The government said that the primary target of the strike had been Anwar al-Awlaki, another U.S. citizen, a friend of Khan’s, and, through his online videos, one of the most influential radical clerics at the time. The targeted assassination of both men was unprecedented for many reasons, not least of which was that it involved the killing of two U.S. citizens without a trial or even a coherent legal pretext.

Two rights groups, including the ACLU, later sued the U.S. government over the drone strike, arguing that its actions were unconstitutional. Hina Shamsi, director of the ACLU’s National Security Project, characterized the lawsuit as a challenge to “the constitutionality of [the government’s] killing of American citizens without due process, based on vague and constantly changing legal criteria and secret evidence that was never presented to a court.”

“At the time, the government was taking really unprecedented and extraordinary positions. It was claiming the power to use lethal force against its own citizens and arguing that the court should have no role at all to play in reviewing its actions,” Shamsi told Gizmodo. As to Khan’s role as a propagandist, Shamsi notes that Khan was never officially charged with a crime. “The government can’t kill people based on their speech alone [in this country]—that’s pretty fundamental,” she said.

However, whether Khan was technically guilty of a crime or not, the truth was that he had been the mouthpiece for some truly horrendous stuff. Walking a fine line between incitement to violence and the constitutional gray zone where rhetorical ugliness is tolerated, Khan’s online presence was an early example of what has become the fundamental dilemma of the social media age: how to deal with internet speech that’s considered undesirable.

It’s a dilemma that still plagues us, raising questions with no easy answers: What kind of speech should be allowed? What shouldn’t? And what should be done about the speech that crosses the line?

This week, the Supreme Court heard two cases that challenged our understanding of Section 230 of the Communications Decency Act, the landmark 1996 law that gives broad legal immunity to web platforms and shields them from legal action as a result of the content they host. One case, Gonzalez v. Google, sought to hold Google and its subsidiary, YouTube, partially responsible for the ISIS terrorist attacks that took place in Paris in 2015. The lawsuit, which was filed by one of the victims’ parents, argues that Google “aided and abetted” one of the shooters in the incident. YouTube had failed to take down ISIS videos from its platform, and later the videos were allegedly recommended to the shooter. The other case made a similar argument about Twitter’s past hosting of terrorism-related material.

It’s interesting that these issues continue to haunt social media platforms because, for a very long time, extremist content was a problem that said platforms really didn’t want to admit existed. And, because of the protections provided by Section 230, they hadn’t really worried about it.

The Middle East Media Research Institute, or MEMRI, which researches the proliferation of Islamist extremist content online, spent years attempting to get major tech companies to take action against extremists. During the early years of the social media industry, it was mostly a lost cause. MEMRI’s executive director, Steven Stalinsky, remembers one particular meeting he and his colleagues had with the senior policy team at Google back in December 2010. According to him, the meeting was most memorable because of how much “screaming” it involved.

“We were being yelled at by their lawyers. It went on for a long time,” Stalinsky recalled, in a phone call with Gizmodo. Stalinsky said that, at that particular meeting, Google’s team was upset about numerous reports that MEMRI had put out accusing the tech giant of hosting terrorist content. Indeed, at the time, it wasn’t unusual to see YouTube videos that involved al-Qaeda adherents proselytizing violent jihad. Despite a large amount of this kind of content floating around its video hosting site, Google wasn’t very good at taking it down.

Twitter had a similar problem on its hands. In the early days of the microblogging platform, radical extremists flocked to the service, setting up shop to spread their gospel. Many extremist sheikhs used accounts to advocate for jihad, with little apparent awareness of or action from Twitter’s management. When ISIS emerged, it too found Twitter incredibly useful. By one count in 2015, the group had tens of thousands of followers on the platform.

“They didn’t want to deal with it,” Stalinsky said, of the social media platforms. “They were preoccupied with other stuff and I don’t think they saw moderation as a major priority at the time. A lot of these companies were created by pretty young guys who were very good at coding but weren’t really ready for the national security implications of what they’d built,” he added.

It wasn’t until Islamic State fighters began using YouTube and Twitter to host videos of American journalists getting beheaded that the major platforms were finally forced to confront their own inaction. The gruesome killing of American journalist James Foley, in particular, became a flashpoint for change. “That was absolutely the turning point,” said Stalinsky. “There was so much government pressure, so much bad press—it was impossible for them not to do something about it.”

YouTube acknowledges that the platform has put markedly more effort into its content moderation strategies in recent years. When reached for comment by Gizmodo, a company representative said: “With respect to our policies prohibiting violent extremist content, we’ve been very transparent over the last several years about our efforts in this space, and the dramatic increase in investments starting in 2016-2017.” The representative added that, today, the platform uses a combination of “machine learning technology and human review” to catch violent videos; additionally, the platform’s Intelligence Desk, a group of specialized analysts, “work to identify potentially violative trends before they spread to our platform,” the representative said.

Still, not everybody is happy with Big Tech’s efforts to clean itself up. After platforms started paying closer attention to the content they were hosting, community guidelines expanded, and account suspensions became routine. It wasn’t just terrorists getting booted from platforms anymore, it was a whole lot of different kinds of people. As a result, complaints from folks who felt they’d been undeservedly “canceled” or “shadowbanned” rose, and allegations of political bias—of all different stripes—became a staple.

Aaron Terr, director of public advocacy at the free speech organization FIRE, said the major platforms’ moderation strategies may be well-intentioned, but they’re a bit of a mess overall.

“Right now you have lists of complex and vague rules, enforced without tr

Source: Gizmodo
