Facebook, Twitter, Microsoft, and YouTube have announced that they're working together to build a massive database of images and videos used by terrorist networks to spread propaganda and reach potential recruits. The database will alert these companies when something on their sites likely violates their terms of use.
Seems like a good idea, right?
The Details
As of now, details are a little unclear. Each image and video in the database will have a "unique digital fingerprint," and companies will be alerted when that content shows up on their sites. At the start of the program, companies will only add "the most extreme and egregious terrorist images and videos" to the database.
The idea is that these images and videos will be the most likely to violate the terms of use of all of the platforms, which makes removing that content an easy decision if it shows up somewhere else. Each company gets to decide which images and videos should be in the database. Each also keeps the power to decide whether to remove matching content if it appears on its own site.
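To picture how such a system might work under the hood, here's a minimal sketch. To be clear, none of the companies have described their implementation; the function names below are invented, and a plain cryptographic hash stands in for whatever "unique digital fingerprint" they actually use (a perceptual hash would be a more realistic choice, since it can still match re-encoded or lightly edited copies).

```python
# Rough sketch of the hash-matching concept, NOT the companies' actual system.
# A plain SHA-256 hash is used purely for illustration.
import hashlib

# Hypothetical shared database: fingerprints contributed by each company.
shared_fingerprints = set()

def fingerprint(file_path: str) -> str:
    """Compute a fingerprint (here, a SHA-256 hash) of a media file."""
    sha = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def contribute(file_path: str) -> None:
    """A company flags an extreme image or video and shares its fingerprint."""
    shared_fingerprints.add(fingerprint(file_path))

def check_upload(file_path: str) -> bool:
    """On upload, report whether the content matches a known fingerprint.
    Even on a match, each company decides for itself whether to remove it."""
    return fingerprint(file_path) in shared_fingerprints
```

The key point the sketch captures is that only fingerprints are shared, and a match only triggers an alert; removal remains each platform's own call.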
That's pretty much all we know so far. Twitter hopes "this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online." But beyond that, details are thin. Who will decide which content to contribute? Will companies set up independent review committees for this sort of thing? Or will they just add a few things now and then when they feel like it?
A Poor Record
It's really hard to police content online. Whether it's misinformation, abuse, terrorist recruitment videos, or one of the myriad other kinds of bad content online, there's a minefield of issues inherent in the process. Who decides which groups qualify as terrorists? How is the decision made to remove content? Will the companies go out looking for it, or rely on users to report it?
Microsoft, in its statement on online terrorist content, said this:
When terrorist content on our hosted consumer services is brought to our attention via our online reporting tool, we will remove it.
And they provide a link to a form used specifically for reporting terrorist content. But how likely is someone to actually seek out and use that form? They also say that they'll "remove links to terrorist-related content from Bing only when that takedown is required of search providers under local law."
Once again, that shifts the responsibility elsewhere.
Issues of Responsibility
The Electronic Frontier Foundation has written extensively about the difficulties in policing this type of content online. In a 2014 post, they criticized the UK's plan to compel ISPs to block "extremist content" without transparency and accountability procedures. Human rights groups have criticized France, too, for having very wide definitions of terrorist activity.
Both Twitter and Facebook have rather poor records of policing content on their sites. Although Twitter has deleted over 100,000 ISIS-related accounts and begun policing hate groups more aggressively, it still has a reputation for letting extremism and abuse run rampant on its platform. Facebook, too, seems content to keep taking a hands-off approach under the banner of free speech.
Of course, it's quite difficult to manage this sort of responsibility. I'm not saying it's easy. But tech companies are adept at distancing themselves from what their technology is used for. That includes the propagation of terrorist images, videos, recruitment drives, and other content. In August 2016, a British report stated that Facebook, Twitter, and YouTube were "consciously failing to combat the use of their sites to promote terrorism and killing."
The establishment of a database of images and videos doesn't seem likely to change that. It's encouraging that they're taking action, but it's hard to imagine it being more than half-hearted.
Could This Be a Privacy Issue?
Because companies will be manually adding items to the database after users report them, it seems unlikely that your own images or videos could accidentally find their way in, which is encouraging. However, the creation of a massive central database full of uniquely identified "bad" content is worrying.
This is largely because the people in charge at the time get to decide what counts as "bad," especially if governments get involved. You might be part of the majority one day and part of a persecuted minority the next. The deployment of this type of technology makes that sort of change even scarier.
Of course, there's no indication that these companies will use this database for anything but lax enforcement of their own terms of use. But the fact that private organizations have developed and are deploying this technology might have some people worried. Governments surely have surveillance tools at least this powerful already, but its spread could be a sign that corporations will soon have the means to collect far more information.
As of right now, there's no reason to worry that your privacy will be endangered by the terrorist content database. But privacy and advocacy organizations would do well to keep an eye on the deployment of this type of technology elsewhere.
A Step Forward?
It's good to see that tech companies are stepping up and being proactive about terrorist content on their platforms. But their record is full of lax enforcement and willful neglect. Will this tool help turn that around? It seems unlikely. (Unless they automate it, which comes with a whole raft of other issues.) In my opinion, this database won't change much, and Twitter, Facebook, Microsoft, and YouTube may very well have conceived it as a public relations measure rather than one actually meant to do good.
What do you think? Will this database help fight the spread of terrorist content online? Or is it an empty gesture in an effort to garner some public admiration? Share your thoughts in the comments below!