Later On

A blog written for those whose interests more or less match mine.

Twitter Suspensions Reveal the Company’s Skewed Views on ‘Extremism’

Jillian York reports at Motherboard:

Every society engages in censorship. Whether from church or state or otherwise, the desire to suppress information seems a natural human impulse, albeit one variable in its manifestations. Most of us readily accept certain kinds of censorship—think child sexual abuse imagery—but are reluctant to call it by its name.

The restriction of content we deem beyond the pale is still, in fact, censorship. The word “censorship” is not itself a value judgement, but a statement of fact, an explanation for why something that used to be, no longer is. The American Civil Liberties Union defines “censorship” as “the suppression of words, images, or ideas that are ‘offensive’, [that] happens whenever some people succeed in imposing their personal political or moral values on others.” The definition further notes that censorship can be carried out by private groups—like social media companies—as well as governments. And when carried out by unaccountable actors (be they authoritarian governments or corporations) through opaque processes, it’s important that we question it.

According to Twitter’s latest transparency report, the company suspended more than 377,000 accounts for “promoting extremism.” Twitter said that 74 percent of extremist accounts were found by “internal, proprietary spam-fighting tools”—in other words, algorithms and filters built to find spam, but employed to combat the spectre of violent extremism.

Few have openly questioned this method, which is certainly not without error. In fact, the filtering of actual spam inspired more of a debate back in the day—in 1996, residents of the town of Scunthorpe, England, were prevented from signing up for AOL accounts due to the profanity contained within their municipality’s name, leading to the broader realization that filters intended to catch spam or obscenity can have overreaching effects. The “Scunthorpe problem” has arisen time and time again when companies, acting with good intentions, have filtered legitimate names or content.

The Scunthorpe problem demonstrates that when we filter content—even for legitimate reasons or through democratic decisions—innocuous speech, videos, and images are bound to get caught in the cracks. After all, you can’t spell socialism without “Cialis”.
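The failure mode described above can be sketched in a few lines. This is a hypothetical illustration, not any company's actual filter: a naive blocklist that matches substrings will flag "socialism" because it literally contains "cialis", just as AOL's profanity filter once flagged "Scunthorpe".

```python
# Minimal sketch of the "Scunthorpe problem": a naive substring
# blocklist catches innocuous words along with the intended targets.
# The blocklist term here is illustrative only.
BLOCKLIST = {"cialis"}

def naive_filter(text: str) -> bool:
    """Return True if any blocklisted term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(naive_filter("Buy Cialis now"))  # True -- the intended catch
print(naive_filter("socialism"))       # True -- a false positive
print(naive_filter("social media"))    # False
```

Real moderation systems are far more elaborate, but the underlying tension is the same: any rule broad enough to catch the targeted content will also sweep up legitimate speech unless it is carefully tuned and audited.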

We know that companies, using low-wage human content moderators and algorithms, undoubtedly make mistakes in their policing of content. To err is human, and algorithms are built and implemented by humans, lest we forget. But when a company takes charge of ridding the world of extremism, with minimal to no input from society at large, there’s something more insidious going on.

Twitter’s deeming of some content—but not other content—as “extremist” is, after all, a value judgement. Although there’s little transparency beyond numbers, much of the banned content matches up neatly with the US government’s list of designated terrorist organizations. We don’t know what kinds of terms Twitter uses to weed out the accounts, but accounts expressing support for Islamic terror organizations seem to make up the bulk of takedowns. Meanwhile, neo-Nazis like Richard Spencer are rewarded with a “verified” checkmark—intended to signify a confirmed identity, but often used and seen as a marker of celebrity.

By choosing to place its focus on the faraway spectre of ISIS—rather than the neo-Nazis closer to home—Twitter is essentially saying that “extremism” is limited to those scary bearded men abroad, a position not unlike that of the American media. In fact, extremism is a part of our new, everyday reality, as elected officials opt for racist and sexist policies and as President Trump eggs on his most ardent white supremacist fans, offering tacit support for their vile views. As white supremacist hate gains ground, companies seem caught unaware, and unwilling or unprepared to “tackle” it the way they have Islamic extremism.

The question of whether to censor, of what to censor, is an important one, one that must be answered not by corporations but through democratic and inclusive processes. As a society, we may in fact find that censoring extremism on social platforms helps prevent further recruitment, or saves lives, and we may decide that it’s worth the potential cost. At that point, we could work to develop tools and systems that seek to prevent collateral damage, to avoid catching the proverbial dolphins in the tuna nets. . .

Continue reading.

Written by LeisureGuy

3 April 2017 at 3:06 pm
