I’d like to talk about a simple problem with Google which I think partially explains why technology companies seem so utterly useless at preventing the spread of terrorist content.
In November last year, Google said it had fixed a bug in its search results for visa waivers, after a BBC investigation forced it to pay attention to the problem.
This “bug” allowed people to use Google’s advertising network to promote sites that charged to file visa waivers. Instead of users going directly to the US Government’s website – where an application still costs about $14 – these sites would charge up to $99 for “checking” the application.
The problem was first identified in 2009. It wasn’t “fixed” until 2018, nine years later.
I even fell for it myself in August 2009.
Why did it take so long?
Google’s solution to this problem – like a hammer that only sees nails – was to “develop a machine learning process to wipe out unofficial Esta ads.”
That process took nine years.
In the meantime, countless people were paying far more than they should have to enter their details into a very simple online form.
Of course, Google met its minimum obligation of investigating and taking down any ads that users reported, but the companies taking advantage of Google’s incompetence could simply register a new domain name and resubmit – a trivial operation.
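To see how weak that kind of report-and-takedown defence is, here is a minimal sketch of an exact-match domain blocklist – the sort of filter a reported ad would be checked against. All names and domains here are hypothetical; the point is simply that registering a near-identical domain defeats the check entirely.

```python
# Hypothetical sketch: why takedown-by-reported-domain is trivially evaded.
blocked_domains = {"esta-visa-checker.example"}  # domains users have reported


def ad_allowed(landing_domain: str) -> bool:
    # Exact-match check: only previously reported domains are rejected.
    return landing_domain not in blocked_domains


print(ad_allowed("esta-visa-checker.example"))  # False: reported and blocked
print(ad_allowed("esta-visa-check3r.example"))  # True: same scam, new domain
```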
The BBC even submitted some unofficial ads to Google itself, which its algorithms dutifully approved for display.
(It turns out that this process didn’t even fix the problem. It’s still possible to see unofficial advertising for ESTA visa waivers on Google. This story found dozens of fake ads charging up to $100, days after Google said it had fixed the problem.)
Now apply this problem to media-savvy terrorist attacks
Google’s mindset of “do it first, collect data, and improve it over time” (or Facebook’s “move fast and break things”) has dramatic consequences when more malicious operators take advantage of these companies’ weaknesses.
Google, YouTube, Facebook and Twitter have all been far too slow in taking down videos of the New Zealand terrorist attack. Even though they all have large, dedicated and sophisticated moderation teams seeking to remove this information, they are unable to stop people re-uploading videos.
The obvious question is: why don’t technology companies do it manually? I agree with Alex Hern, who says they could have “one person – just one – to sit there searching for ‘New Zealand terror attack’ and just delete the obvious reposts that keep popping up on that search term.”
So why don’t they? I also agree with Alex’s explanation: they have “a desire to build scalable systems rather than one-off applications of human labour.”
Technology solutions:
- Create a new algorithm to identify suspect terrorist uploads
- Use content ID matching algorithms (a minimal sketch follows these lists)
- Use AI-enhanced moderation
Human solutions:
- Disable video uploads temporarily
- Manually delete videos
- Employ editors to approve questionable content first
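To make “content ID matching” concrete, here is a minimal sketch of fingerprint-based re-upload detection, with entirely hypothetical names. Real content ID systems use perceptual hashes that survive re-encoding and cropping; this sketch uses an exact SHA-256 digest, which also shows why the approach struggles – changing a single byte produces a brand-new fingerprint.

```python
import hashlib

# Hypothetical sketch of fingerprint-based re-upload detection.
known_fingerprints: set[str] = set()  # digests of videos already taken down


def fingerprint(video_bytes: bytes) -> str:
    # Exact content fingerprint. Real content ID systems use perceptual
    # hashes so that re-encoded or cropped copies still match.
    return hashlib.sha256(video_bytes).hexdigest()


def register_removed(video_bytes: bytes) -> None:
    known_fingerprints.add(fingerprint(video_bytes))


def is_known_reupload(video_bytes: bytes) -> bool:
    return fingerprint(video_bytes) in known_fingerprints


original = b"...original video bytes..."
register_removed(original)

print(is_known_reupload(original))            # True: byte-identical re-upload
print(is_known_reupload(original + b"\x00"))  # False: a trivially altered copy slips through
```

The gap between the exact match above and a robust perceptual match is precisely where the engineering effort goes – and where re-uploaders keep slipping through.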
Technology companies can’t fix every problem with an algorithm
Just as big tech doesn’t invest in ideas that don’t “scale”, it won’t invest in solving problems unless there’s a scalable solution. Technology companies think they can “fix” unsolvable problems with maths. They think they can “fix” the problem of terrorists sharing their content with an algorithm, just as they think they can “fix” the problem of people being scammed over ESTA forms.
As the ESTA example shows, they can’t.
Humans might not be as fast as algorithms, but they’re cleverer, and technology companies need to wise up too.