SAN FRANCISCO — Facebook has been under pressure for its failure to remove violence, nudity, hate speech and other inflammatory content from its site. Government officials, activists and academics have long pushed the social network to disclose more about how it deals with such posts.
Now, Facebook is pulling back the curtain on its efforts.
On Tuesday, the Silicon Valley company published numbers for the first time detailing how much and what type of content it takes down from the social network. In an 86-page report, Facebook revealed that it deleted 865.8 million posts in the first quarter of 2018, the vast majority of which were spam, with a minority of posts related to nudity, graphic violence, hate speech and terrorism.
Facebook also said it disabled 583 million fake accounts in the same period, and estimated that fake accounts still made up 3 to 4 percent of its monthly active users.
Guy Rosen, Facebook’s vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content. The inaugural report was intended to “help our teams understand what is happening” on the site, he said. Facebook hopes to continue publishing reports about its content removal every quarter.
The social network is aiming for more transparency after a turbulent period. Facebook has been under fire for a proliferation of false news, divisive messages and other inappropriate content on its site, which in some cases have led to real-life incidents. Graphic violence continues to be widely shared on Facebook, especially in countries like Myanmar and Sri Lanka, stoking tensions and helping to fuel attacks and violence.
Congress last week released more than 3,000 Facebook ads linked to Russia around the 2016 presidential election, the most comprehensive look at the misinformation campaign mounted on the social network.
Facebook has separately been grappling with a data privacy scandal over the improper harvesting of millions of its users’ information by political consulting firm Cambridge Analytica. Mark Zuckerberg, Facebook’s chief executive, has said that the company needs to do better and has pledged to curb the abuse of its platform by bad actors.
On Monday, as part of an attempt to improve protection of its users’ information, Facebook said it had suspended roughly 200 third-party apps that collected data from its members while it undertook a thorough investigation.
Tuesday’s report on content removal is another step by Facebook to clean up its site. But the figures the company published were limited. Facebook declined to provide examples of graphically violent posts or hate speech that it removed, for example. And the company said it had taken down more posts from its site in the first three months of 2018 than it had during the last quarter of 2017, but it gave no specific figures from previous years, making it hard to assess how much it had stepped up its efforts.
Still, Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, said she welcomed Facebook’s report.
“It’s a good move and it’s a long time coming,” she said. “But it’s also frustrating because we’ve known that this has needed to happen for a long time. We need more transparency about how Facebook identifies content, and what it removes going forward.”
Facebook previously declined to reveal its content removal efforts, citing a lack of internal metrics. Instead, it published a country-by-country breakdown of how many requests it received from governments to obtain Facebook data or to restrict content from Facebook users in each country. Those figures did not specify what type of data the governments asked for or what posts were restricted. Facebook also published its latest country-by-country report on Tuesday.
According to Tuesday’s report, about 97 percent of all the content that Facebook removed from its site in the first quarter was spam. About 2.4 percent of the deleted content involved nudity, Facebook said, with even smaller percentages of posts removed for graphic violence, hate speech and terrorism.
Facebook attributed the increase in content removal in the first quarter to improved artificial intelligence programs that could detect and flag offensive content. Mr. Zuckerberg has long pointed to A.I. as the main tool for helping Facebook sift through the billions of pieces of content that people post to its site every day.
“If we do our job really well, we can be in a place where every piece of content is flagged by artificial intelligence before our users see it,” said Alex Schultz, Facebook’s vice president of data analytics. “Our goal is to drive this to 100 percent.”
According to the new report, Facebook’s A.I. flagged 99.5 percent of the terrorism-related content the company removed, which amounted to roughly 1.9 million pieces in the first quarter. The A.I. also detected 95.8 percent of the 21 million posts taken down for nudity.
But Facebook still relied on human moderators to identify hate speech, because automated programs have a hard time understanding context and culture. Of the 2.5 million pieces of hate speech Facebook removed in the first quarter, 38 percent were detected by A.I., according to the new report.
Facebook said it also removed 3.4 million posts that had graphic violence, 85.6 percent of which were detected by A.I.
The company did not break down the numbers of graphically violent posts by geography, even though Mr. Schultz said that at times of war, people in certain countries would be more likely to see graphic violence than others. He said that in the future, Facebook hoped to publish country-specific numbers.
The report also did not include any figures on the amount of false news on Facebook, because the company does not have an explicit policy on removing misleading news stories, Mr. Schultz said. Instead, Facebook has tried to deter the spread of misinformation by removing spam sites that profit from advertisements running alongside false news, and by removing the fake accounts that spread such stories.