If someone tries to post copyrighted material on Facebook, like a music video that isn’t theirs, the odds are good that the service’s systems will detect it based on the video file’s unique fingerprint. If this fingerprint, or “hash,” matches an entry on a known list of copyrighted material, the posted video is flagged and off it goes into the digital ether.
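The matching step can be sketched in a few lines. This is a simplified illustration, not any company’s actual system: production systems use perceptual hashes that survive re-encoding and cropping, whereas the cryptographic hash below (SHA-256) only catches byte-identical copies. The banned-hash database here is a hypothetical stand-in.

```python
import hashlib

# Hypothetical database of fingerprints of known banned files
# (illustrative values only).
BANNED_HASHES = {
    hashlib.sha256(b"known infringing video bytes").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Compute an exact-match fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_banned(data: bytes) -> bool:
    """Flag an upload if its fingerprint appears on the banned list."""
    return fingerprint(data) in BANNED_HASHES
```

With this scheme, an identical re-upload of a flagged file matches immediately, while any unlisted file passes through untouched.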
Blocking Extremist Content
According to a new report from Reuters and its anonymous sources, YouTube, Facebook, and other internet giants have started automating the process of taking down extremist content. The move comes as a huge triumph for various activist groups and governments who have urged web-based companies to aid in the fight against terrorism. Many extremist organizations have used social media and the web as a key tool in recruiting, communicating, and spreading propaganda.
The companies allegedly using this automated technology have not yet confirmed their methods, but according to Reuters, “numerous people familiar with the technology said that posted videos could be checked against a database of banned content to identify new postings of, say, a beheading or a lecture inciting violence.”
Hashing software, which recognizes unique digital fingerprints and has long been used to counter child pornography, has been expanded to cover video and audio files that may have been uploaded by extremist groups.
The Counter Extremism Project (CEP), a non-profit organization, announced earlier this month that it had created a technology specifically designed to help organizations police extremist content on their platforms. While some of the companies already doing policing of their own have discussed the Counter Extremism Project’s tools, it could not be confirmed whether any have taken the organization up on its offer.
Whatever the case, Facebook and Google have joined forces against terrorism. And this dynamic duo shouldn’t be taken lightly.