Google is taking measures to improve its ability to spot objectionable content sixfold within a matter of weeks, according to a report in Fast Company, as the company tries to reassure advertisers and European leaders alarmed by some of its online fare.
YouTube and rival Facebook are racing to improve AI-powered technology that can spot hate speech, such as racism, in online videos, including content used by terrorists for recruiting.
Although Google and YouTube’s approach will rely heavily on the machine learning techniques the company is developing, humans aren’t being removed from the equation. YouTube will add more independent (human) experts to its ‘Flagger’ programme, and Google is giving grants to 50 NGOs so they can help advise on particular types of content.
Both companies have come under increasing political pressure, in Europe especially, to do more to quash extremist content, with politicians in the UK and Germany among those pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist material.
YouTube has already suffered a backlash from advertisers worried about the safety of their brands next to distasteful videos. Earlier this year it lost millions in advertising revenue when major brands temporarily paused spending after it was revealed their names were appearing next to videos espousing extremist views.
The UK’s prime minister also called for international agreements between allied, democratic governments to “regulate cyberspace to prevent the spread of extremism and terrorist planning”.
Last week, Facebook revealed how it is attempting to tackle terrorists and extremists on its own platform, saying it would use AI-powered software and hire more counterterrorism experts after leaders of the U.K. and France threatened new laws to punish companies whose content stays online long enough for terrorists to spread their message.
In Germany, meanwhile, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.
Beyond the threat of such fines becoming law, Google also has a commercial incentive to act: the advertiser backlash earlier this year over ads being displayed alongside extremist content saw several companies pull their ads from the platform.
Google subsequently updated the platform’s guidelines to stop ads being served against controversial content, including videos containing “hateful content” and “incendiary and demeaning content”, so their makers could no longer monetize them via Google’s ad network. For this measure to be successful, however, the company still needs to be able to reliably identify such content.
Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating its plan of action: it details four additional steps it says it will take, while conceding that more action is needed to limit the spread of violent extremism.