2017-05-22

Most Americans these days get their main news from Google or Facebook, two tools that rely heavily on algorithms. A 2015 study showed that the way a search engine like Google selects and prioritises search results about political candidates can influence voters’ preferences.

Similarly, it has been shown that tweaking the algorithms behind the Facebook newsfeed can influence voter turnout in American elections. If Mark Zuckerberg were ever to run for president, he would theoretically have an enormously powerful tool at his disposal. (Note: a recent article in The Guardian investigated the misuse of big data and social media in the context of the Brexit referendum.)

Algorithms are everywhere in our everyday lives and exert a great deal of power in our society. They prioritise, classify, connect and filter information, constantly making automated decisions on our behalf. But as long as those algorithms remain a ‘black box’, we don't know exactly how these decisions are made.

Are these algorithms always fair? Examples of possible racial bias in algorithms include the risk scores calculated for prisoners who are up for parole or release (white people appear to receive favourable scores more often) and the service quality of Uber in Washington DC (waiting times are shorter in predominantly white neighbourhoods). Such unfair results may not be due to the algorithms alone, but the lack of transparency remains a concern.

So what is going on in these algorithms, and how can we make them more accountable? 

A lot of interesting investigative journalism can still be done in this field. Generally, by ‘poking’ at an algorithm and seeing how it responds - correlating output to input - we can try to figure out how it works (a minimal sketch of this approach follows below). Investigative journalists can play this game: collect and analyse the data, and determine whether the results are unfair or discriminatory, or whether they lead to other negative or undesirable consequences (censorship, law breaking, violations of privacy, false predictions…).
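To make that concrete, here is a minimal sketch in Python of such an input-output audit. Everything in it is hypothetical: black_box_wait_time stands in for an opaque live system (say, a ride-hailing wait-time estimator queried per neighbourhood), and the area labels are made up for illustration.

```python
import random
from statistics import mean

# Hypothetical stand-in for an opaque system (e.g. a ride-hailing
# wait-time estimator). In a real audit this would be a call to the
# live service; it is simulated here so the sketch is runnable.
def black_box_wait_time(area: str) -> float:
    base = 5.0
    penalty = 6.0 if area.startswith("B") else 0.0  # simulated bias
    return base + penalty + random.uniform(-1.0, 1.0)

def probe(areas, trials=50):
    """Poke the black box repeatedly and correlate output with input."""
    return {a: mean(black_box_wait_time(a) for _ in range(trials))
            for a in areas}

if __name__ == "__main__":
    # The area labels are hypothetical; a real audit would use, for
    # example, postcodes grouped by demographic data.
    for area, avg in sorted(probe(["A1", "A2", "B1", "B2"]).items()):
        print(f"{area}: average simulated wait {avg:.1f} min")
```

The pattern is the core of most algorithm audits: hold everything fixed except one input, vary it systematically, and repeat enough times to average out noise before comparing the outputs across groups.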

There are plenty of methodological challenges to deal with, however. You can only really understand why you’re seeing the results you’re getting if you have deep technical knowledge of how a system was built, and there are feedback loops between the algorithms and the people who design them. Algorithms are unstable, dynamic systems; results can change from one day to the next, so tracking over time may be needed (a sketch of that idea follows below). The appropriate size and dimensions of the sample need to be decided, as well as the variables to consider. Then there are plenty of legal and regulatory aspects to look into.
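As an illustration of the tracking-over-time point, the following Python sketch records a timestamped snapshot of a ranking each time it runs, so that changes can be compared later. The query_ranking function is a made-up stand-in for the system under study, not any real API.

```python
import csv
import datetime as dt
import random

# Made-up stand-in for the system under study; a real audit would
# query the live algorithm (search rankings, feeds, prices, ...).
def query_ranking(term: str) -> list:
    pages = [f"result-{i}" for i in range(10)]
    random.shuffle(pages)  # simulate day-to-day instability
    return pages[:5]

def record_snapshot(term: str, path: str = "snapshots.csv") -> None:
    """Append today's top results so rankings can be compared over time."""
    today = dt.date.today().isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for rank, page in enumerate(query_ranking(term), start=1):
            writer.writerow([today, term, rank, page])

if __name__ == "__main__":
    # Run once a day (e.g. from a cron job) to build up a time series.
    record_snapshot("candidate X")
```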

But perhaps most importantly, we need to ask ourselves what our expectations are. What do we consider to be ‘fair’ algorithms? Different people will have different views on that, but we probably shouldn’t let the algorithms keep deciding it for us.  

Any journalist interested in investigating algorithmic accountability can visit algorithmtips.org for help getting started.

© Katrien Vanherck