After facing criticism from European Union leaders following a string of terrorist attacks in the UK, Facebook on Thursday outlined the ways it's stepping up its efforts to curb extremist content on its social network, including its use of artificial intelligence.
Company officials said in a blog post Thursday that Facebook will use AI in conjunction with human reviewers to find and remove "terrorist content" immediately, before other users see it.
The company says it has expanded its use of artificial intelligence to identify possible terrorist postings and even block or remove them without human intervention. In rare cases where it uncovers evidence of imminent harm, it promptly informs the authorities. Facebook also announced a new email address - email@example.com - where users can send feedback or flag problems it should address. "What we see is terrorist actors and their supporters start to understand the kind of things that we're doing and they try to change what they do and we have to be reactive to that," the company said.
Facebook restated its stance against ISIS and Al Qaeda, offering transparency on how it handles content that supports terrorism, attempts to recruit from the platform or spreads terrorist propaganda.
Facebook and other internet companies face growing government pressure to identify and prevent the spread of terrorist propaganda and recruiting messages on their services.
Facebook has declared that it is actively fighting terrorism online, and that it is using artificial intelligence (AI) to do so.
Here is how Facebook is using some of its most advanced technology to stop the uploading and promotion of terrorist content on its platforms.
When a page, a group, a post or a profile is identified as supporting terrorism, Facebook uses an algorithm to surface pages or profiles that may be related to it.
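Facebook has not published the details of that algorithm, but the idea it describes - fanning out from one flagged entity to connected pages, groups and profiles - can be sketched as a breadth-first traversal over a connections graph. The graph, entity names and depth limit below are illustrative assumptions, not Facebook's actual data model.

```python
from collections import deque

def related_entities(flagged_id, connections, max_depth=2):
    """Breadth-first fan-out from a flagged entity: collect pages,
    groups or profiles reachable within max_depth hops, so they can
    be queued for human review."""
    seen = {flagged_id}
    queue = deque([(flagged_id, 0)])
    candidates = []
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # do not expand beyond the hop limit
        for neighbor in connections.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                candidates.append(neighbor)
                queue.append((neighbor, depth + 1))
    return candidates

# Toy graph of shared admins / cross-posting links (made up for illustration).
graph = {
    "page_A": ["profile_B", "group_C"],
    "profile_B": ["page_D"],
    "group_C": [],
    "page_D": ["profile_E"],
}
print(related_entities("page_A", graph))  # → ['profile_B', 'group_C', 'page_D']
```

In a real system the edges would come from signals such as shared administrators or reshared material, and the candidates would feed the human review pipeline rather than being removed automatically.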
Outside of that scope, Facebook has made strategic partnerships with governments and industry leaders to join forces in the fight against terror.
The new algorithms have helped the company to "dramatically reduce the time period that terrorist recidivist accounts are on" the social networking site.
Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other social media sites such as Google and Twitter to do more to remove militant content and hate speech.
J.M. Berger, a fellow with the International Centre for Counter-Terrorism at The Hague, said a large part of the challenge for companies like Facebook is figuring out what qualifies as terrorism - a definition that might apply to more than statements in support of groups like the Islamic State.

Facebook is also working with other social media companies to create a shared database of digital signatures of removed content - known as hashes - to ensure that people can't simply post the same content to Twitter or YouTube. The company's human reviewers, many of whom have backgrounds in law enforcement, collectively speak nearly 30 languages.
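The shared-database idea can be sketched in a few lines. Note the simplification: the industry database reportedly uses perceptual hashes that survive re-encoding (in the spirit of PhotoDNA), whereas the cryptographic hash below only catches byte-identical re-uploads. All function names here are hypothetical.

```python
import hashlib

# Stand-in for the shared cross-platform database of hashes of
# removed terrorist content. Real deployments use perceptual hashes;
# SHA-256 is a simplified exact-match substitute for illustration.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Compute the digital signature ("hash") of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def register_removed_content(content: bytes) -> None:
    """When one platform removes content, it contributes the hash."""
    shared_hashes.add(fingerprint(content))

def is_known_terrorist_content(content: bytes) -> bool:
    """Any participating platform checks new uploads against the database."""
    return fingerprint(content) in shared_hashes

register_removed_content(b"propaganda-video-bytes")
print(is_known_terrorist_content(b"propaganda-video-bytes"))  # True
print(is_known_terrorist_content(b"unrelated-photo-bytes"))   # False
```

Sharing only hashes, rather than the content itself, lets the companies block re-uploads without redistributing the banned material between platforms.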