Normally, the job of reviewing potentially harmful images was done by a team of human content moderators, a job that "is notoriously terrible, psychologically injuring workers who have to comb through the depths of depravity, from child porn to beheadings," reports TechCrunch. But now Facebook's director of engineering for Applied Machine Learning, Joaquin Candela, has revealed that the company is putting its AI to work on the front lines, scanning and flagging harmful images as users upload them, allowing the company to block them before any human ever sees them.
"We have more offensive photos being reported by AI algorithms than by people," Candela told TechCrunch. "The higher we push that to 100%, the fewer offensive photos have actually been seen by a human." MG