Could Facebook and YouTube’s New Anti-Terror Software Censor Conservatives?
Bad news, extremists. Facebook and YouTube have announced that they are updating their software to automatically remove content deemed “extreme,” by which we can only assume they mean videos associated with terrorist groups or other violent actors. The aim is to make it harder for hate and violence to spread through these popular channels, which is undoubtedly laudable. Yet I can’t help wondering whether this policy is wise, even in the unlikely event that it succeeds.
Until now, these sites have relied on users to report objectionable content and keep it from remaining publicly available. Apparently, not enough users were outraged by ISIS-related content to complain; at least, that would be the implication. And while it’s easy to see the efficiency gains in automating this process, automation also raises new questions.
The biggest one for me is this: how do these algorithms decide what counts as “extremist” and what doesn’t? With human judgment removed from the equation, it’s hard to see how a piece of software can know enough to distinguish a video that should be taken down from one making a satirical or political point. Machines are notoriously bad at detecting sarcasm and irony, for example.
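The announcements were light on detail, but reporting at the time pointed to fingerprint matching against databases of already-banned videos. As a purely illustrative sketch (none of this reflects Facebook’s or YouTube’s actual systems, and every name here is made up), here is how a naive fingerprint-based filter might work, and why it cannot exercise judgment:

```python
import hashlib

# Hypothetical blocklist: fingerprints of videos that a human reviewer has
# already labeled "extremist". In practice this would be a shared database;
# here it is seeded with one made-up entry for illustration.
KNOWN_EXTREMIST_HASHES = {
    hashlib.sha256(b"previously banned clip").hexdigest(),
}

def fingerprint(video_bytes: bytes) -> str:
    """Reduce a video to a fixed fingerprint. A deployed system would use a
    perceptual hash that survives re-encoding; plain SHA-256 keeps the
    sketch short."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_remove(video_bytes: bytes) -> bool:
    """Flag any upload whose fingerprint is on the blocklist. Note what is
    missing: context. A satirist, a journalist, or a documentarian quoting
    the same footage produces the same fingerprint and is removed just as
    automatically as the original uploader."""
    return fingerprint(video_bytes) in KNOWN_EXTREMIST_HASHES

# An exact re-upload is caught; a clip altered by even one byte is not.
assert should_remove(b"previously banned clip")
assert not should_remove(b"previously banned clip, re-encoded")
```

The sketch also previews the evasion problem discussed next: exact matching means a trivially re-encoded copy slips through, while a news report embedding the identical clip gets taken down.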
Of course, the specifics of the algorithms are being kept a closely guarded secret, for the simple reason that extremist groups that know what the filters look for could disguise their videos to evade them. That’s a reasonable explanation, but it doesn’t answer the concern about pulling down perfectly legitimate content by mistake. And even if the program works perfectly and flags only what its masters consider “extremist,” what’s to stop them from designating conservative, Tea Party or libertarian organizations as among those too hot for TV?
We know that Facebook has a troubled history of political bias. We also know that the FBI considers anti-government sentiment extremist, by which measure I must certainly be on a list somewhere. To what extent will this policy change protect people, and to what extent will it be used to suppress unpopular but legitimate opinions? That’s my worry.
Now, of course, Facebook and YouTube are both private companies, meaning they have the right to remove any content they don’t like, for any reason whatsoever. It’s their site and they get to choose what goes on it. Fair enough. But The Hill describes these companies as responding to “pressure from President Obama and European leaders.” What sort of pressure are they reacting to, I wonder? When government gets involved, there is an important distinction between a company policy freely chosen for the benefit of users and shareholders, and one adopted because of threats, explicit or implied, of government retaliation.
The internet has always been a haven of free speech, where anything goes and nothing is over the line. This is a double-edged sword: to allow meaningful political dissent, you also have to tolerate hateful, indefensible things said by potentially violent people. I think the benefits of such a system outweigh the drawbacks by a rather large margin. But if the web’s most powerful companies start engaging in automatic, unthinking censorship of speech they find offensive, we may be in danger of losing the greatest forum for free thinking and new ideas in the history of mankind. Let’s hope it doesn’t come to that.