Facebook has said that efforts to use artificial intelligence and other automated techniques to delete terrorism-related posts are “bearing fruit” but more work is needed.
The firm said that 99% of the material it removes relating to Al Qaeda and so-called Islamic State is now detected by the company itself rather than reported by its users.
But it acknowledged that it had to do more work to identify other groups.
Facebook reports progress in removing extremist content
Facebook said on Wednesday that it was removing 99 percent of content related to the militant groups Islamic State and al Qaeda before users reported it, as it prepared for a meeting with European authorities on tackling extremist content online.
Eighty-three percent of “terror content” is removed within one hour of being uploaded, Monika Bickert, head of global policy management, and Brian Fishman, head of counter-terrorism policy at Facebook, wrote in a blog post.
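The two figures quoted above are simple ratios over removed content. As a minimal sketch, assuming straightforward definitions (Facebook's published methodology is not detailed in these excerpts), the "proactive" rate is the share of removals detected by automated systems before any user report, and the one-hour rate is the share of removals completed within 60 minutes of upload:

```python
def proactive_rate(auto_detected: int, total_removed: int) -> float:
    """Share of removed posts first detected by automated systems
    rather than user reports (assumed definition, for illustration)."""
    return auto_detected / total_removed

def within_one_hour_rate(removal_minutes: list[float]) -> float:
    """Share of removals completed within 60 minutes of upload."""
    return sum(m <= 60 for m in removal_minutes) / len(removal_minutes)

# Hypothetical numbers matching the reported ratios:
print(proactive_rate(99, 100))                    # 0.99
print(within_one_hour_rate([5, 30, 120, 45]))     # 0.75
```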
Facebook is using AI to try to prevent suicide
Facebook is using artificial intelligence to address one of its darkest challenges: stopping suicide broadcasts.
The company said Monday that a tool that lets machines sift through posts and videos and flag those suggesting someone may be at risk of suicide is now available to most of its 2 billion users; availability had previously been limited to certain users in the United States. The aim of the artificial intelligence program is to find and review alarming posts sooner, since time is a critical factor in preventing suicide.
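The workflow described here, automatically scoring posts and routing the most alarming ones to human reviewers first, can be sketched in miniature. This is an illustrative toy only: Facebook's actual classifier is proprietary, and the phrase list, scoring function, and threshold below are invented for the example.

```python
# Toy stand-in for a trained classifier: score posts against a small
# list of concerning phrases, then surface high scorers for review.
CONCERNING_PHRASES = ["want to end it", "no reason to live", "say goodbye"]

def risk_score(post: str) -> float:
    """Fraction of concerning phrases present in the post (0.0 to 1.0)."""
    text = post.lower()
    hits = sum(phrase in text for phrase in CONCERNING_PHRASES)
    return hits / len(CONCERNING_PHRASES)

def triage(posts: list[str], threshold: float = 0.3) -> list[str]:
    """Return posts at or above the threshold, highest risk first,
    so human reviewers see the most alarming content soonest."""
    scored = [(risk_score(p), p) for p in posts]
    flagged = [(s, p) for s, p in scored if s >= threshold]
    flagged.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in flagged]
```

The design point the example illustrates is the one in the article: because time matters, flagged posts are prioritized by score rather than reviewed in arrival order.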
Facebook to expand artificial intelligence to help prevent suicide
Facebook Inc will expand its pattern recognition software, which detects users who may have suicidal intent, to other countries after successful tests in the U.S., the world's largest social media network said on Monday.