Facebook Plans to Use Artificial Intelligence to Screen for Extremist Posts

Facebook announced Thursday that it plans to use artificial intelligence to help remove inappropriate content from the social media platform.

CEO Mark Zuckerberg wrote in a post that the effort would be directed at removing terrorist content, but suggested that other “controversial posts” could be removed as well.

Noting that human reporting does not always catch terrorist posts in a timely fashion, he explained, “That’s why we’re also building artificial intelligence that lets us find potential terrorist content and accounts faster than people can.”

The software uses “natural language understanding” and “image matching” to find content. “And when we identify pages, groups, posts or profiles that support terrorism,” Zuckerberg wrote, “we use algorithms to find related material across our platform.”
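
Facebook did not publish implementation details, but “image matching” of this kind is often done with perceptual hashing: a known extremist image is reduced to a compact fingerprint, and any upload whose fingerprint falls within a small Hamming distance of it is flagged for review. The Python sketch below is a minimal illustration of that general technique, not Facebook’s actual system; the file names and the distance threshold are illustrative assumptions, and it requires the Pillow imaging library.

```python
# Minimal sketch of perceptual "image matching" via an 8x8 average hash.
# Illustrative only, not Facebook's actual system. Requires Pillow.
from PIL import Image

def average_hash(path, size=8):
    """Reduce an image to a 64-bit fingerprint: convert to grayscale,
    shrink to 8x8, then set one bit per pixel that is above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare a new upload against a known image.
THRESHOLD = 5  # illustrative; a production system would tune this
known = average_hash("known_extremist_image.jpg")   # hypothetical path
upload = average_hash("new_upload.jpg")             # hypothetical path
if hamming(known, upload) <= THRESHOLD:
    print("Possible match, route to human review")
```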

“There’s an area of real debate about how much we want AI filtering posts on Facebook,” Zuckerberg conceded. “It’s a debate we have all the time and won’t be decided for years to come. But in the case of terrorism, I think there’s a strong argument that AI can help keep our community safe and so we have a responsibility to pursue it.”

The CEO also announced the launch of Facebook’s “Hard Questions” blog, which will tackle such issues as:

How do we make sure social media is good for democracy?

How aggressively should social media companies monitor and remove controversial posts and images from their platforms?

Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?

Who gets to define what’s false news — and what’s simply controversial political speech?

In a post Zuckerberg linked to when announcing the blog, Facebook’s vice president for public policy and communications, Elliot Schrage, elaborated that the blog will be a place for the social media giant to explain its editorial decisions.

“As we proceed, we certainly don’t expect everyone to agree with all the choices we make. We don’t always agree internally,” wrote Schrage. “We’re also learning over time, and sometimes we get it wrong. But even when you’re skeptical of our choices, we hope these posts give a better sense of how we approach them — and how seriously we take them.”

In May 2016, during the presidential campaign, the tech blog Gizmodo broke a story, based on accounts from former Facebook news curators, that the platform regularly suppressed conservative news and injected liberal topics into its “trending” news section.

Zuckerberg strongly denied that Facebook engaged in this practice.

“We have found no evidence that this report is true. If we find anything against our principles, you have my commitment that we will take additional steps to address it,” he responded at the time.

To further reassure conservatives that Facebook was acting as a fair online arbiter, the company invited various conservative personalities to its headquarters in Menlo Park, Calif., that month.

After Donald Trump’s upset victory over Hillary Clinton last fall, Facebook faced criticism for allowing purveyors of so-called “fake news” to flourish, which supposedly helped swing the election in Trump’s favor.

Zuckerberg wrote a post on Facebook days after the election expressing his doubts that his platform played any such role.

“Of all the content on Facebook, more than 99 percent of what people see is authentic,” he related. “Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.”

He added, “We have already launched work enabling our community to flag hoaxes and fake news, and there is more we can do here. We have made progress, and we will continue to work on this to improve further.”

The clear danger in trying to identify and remove “fake news” is that the effort can become a guise for taking down content with which Facebook’s editors simply disagree.

Liftable Media CEO Patrick Brown commended Facebook following the presidential election for its “insistence upon free and open dialogue and debate.”

He contended that “fake news has suddenly come to the foreground for one main reason. For many years the same few media entities have largely controlled the media discourse in this country. This includes the major TV networks, The New York Times, The Washington Post, etc. They have decided what is newsworthy, what is worthy of discussion, and, ultimately, what is ‘true.’”

Joe Miller, publisher of Restoring Liberty, the most popular political blog in Alaska, conjectured that Facebook may be using artificial intelligence to give itself cover.

“Given Facebook’s atrocious past track record in suppressing free speech under the guise of ‘social responsibility,’ its new hybrid approach involving artificial intelligence is probably just a thinly disguised way to shift direct responsibility for political manipulation away from Zuckerberg,” said Miller.

Shaun Hair, Liftable’s vice president of digital content, said, “The stated good intentions of Facebook still assume that they are worthy to hold the keys of truth for the rest of us.”

He added, “The purveyors of false news usually get shut down over time because the truth wins.”
