Any fact-checker who works in the media has a straightforward but challenging job: make sure all the claims in an article are true. Are simple facts, like the distance between two cities, accurate? Are the quotes correct? Are broader statements true? It’s an important task, and in an era of outright fake news—especially considering the 2016 election and the upcoming midterms—it’s becoming even more crucial.

To tackle this larger issue, researchers from MIT, along with collaborators at institutions in Qatar and Bulgaria, have been working on ways to use artificial intelligence to help humans make sense of the complicated media landscape. They realized that before building an AI that can fact-check individual claims, an important first step was to gauge how reliable the news outlets themselves are.

So they set out to build an AI that could evaluate both how factual different news sites are and how politically biased.

To train their system, they started with data on 1,066 websites listed in Media Bias/Fact Check, a resource that rates news outlets. The AI then analyzed information about each outlet, drawing on sources like articles from the site itself, its Wikipedia page, its Twitter account, and even its URL. Using these signals, the system predicted a site's factuality with about 65 percent accuracy and its political bias with about 70 percent accuracy.
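To make the idea concrete, here is a minimal sketch of that kind of pipeline: text gathered about an outlet from several sources is turned into features, concatenated, and fed to an ordinary classifier. The toy data, labels, feature extraction, and model choice below are illustrative assumptions, not the researchers' actual implementation.

```python
# Minimal sketch (not the researchers' code): predict an outlet's factuality
# from text describing the outlet itself. In the study, 1,066 outlets were
# labeled using Media Bias/Fact Check; here we use two toy examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import scipy.sparse as sp

# Each outlet is described by text collected from several sources.
SOURCES = ["articles_text", "wikipedia_text", "twitter_bio", "url"]

toy_outlets = [
    {"articles_text": "in-depth report with named sources and corrections",
     "wikipedia_text": "award-winning newspaper founded in 1890",
     "twitter_bio": "official account of the daily herald",
     "url": "dailyherald.com"},
    {"articles_text": "shocking truth they do not want you to know",
     "wikipedia_text": "",
     "twitter_bio": "exposing the real news 100% uncensored",
     "url": "real-truth-news.xyz"},
]
toy_labels = ["high", "low"]  # factuality label for each outlet

# One TF-IDF vectorizer per information source; the resulting feature
# blocks are concatenated column-wise into a single matrix.
vectorizers = {s: TfidfVectorizer() for s in SOURCES}

def featurize(outlets, fit):
    blocks = []
    for s in SOURCES:
        texts = [o.get(s, "") for o in outlets]
        blocks.append(vectorizers[s].fit_transform(texts) if fit
                      else vectorizers[s].transform(texts))
    return sp.hstack(blocks)

X = featurize(toy_outlets, fit=True)
clf = LogisticRegression(max_iter=1000).fit(X, toy_labels)

# Score a new, unseen outlet with the same feature pipeline.
new_site = [{"articles_text": "local council budget analysis",
             "wikipedia_text": "regional public broadcaster",
             "twitter_bio": "news and weather for the region",
             "url": "regionnews.org"}]
print(clf.predict(featurize(new_site, fit=False)))
```

The same setup extends to predicting political bias by swapping in bias labels; the actual research combined richer features than this sketch, which is how it reached the accuracy figures above.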

Of course, the MIT research group isn't the only one using AI to analyze language like this: Perspective, an AI system built by Jigsaw (part of Google's parent company, Alphabet), automatically scores the toxicity of reader comments, and Facebook has turned to AI to help curb hate speech in Myanmar.
