Facebook announced on Wednesday that, last quarter, it removed more than 8.7 million images and other content that violated its policies against child nudity or the sexual exploitation of children, using AI and machine learning technology.
Antigone Davis, Facebook's Global Head of Safety, said in a blog post: "The new AI and machine learning technology, which Facebook developed over the last year, removed 99 percent of those posts before anyone reported them. We have also removed accounts that promote this type of content."
“We have specially trained teams with backgrounds in law enforcement, online safety, analytics, and forensic investigations, which review content and report findings to NCMEC,” Davis said.
She added, "In the coming months, Facebook will join Microsoft and other industry partners to begin building tools for smaller companies to prevent the grooming of children online."