On algorithms, accountability and the perils of “instant implementation” of innovation

Warsaw, 20 July 2017

Zuzanna Warso, Helsinki Foundation for Human Rights

2017 marks the 20th anniversary of the Council of Europe Convention on Human Rights and Biomedicine. After two decades, it has become clear that the ethical and human rights challenges posed by scientific and technological progress are not confined to the fields of medicine and biology. The Digital Born Media Carnival, which took place from 14 to 18 July 2017 in Kotor, Montenegro, brought together academics, activists, human rights lawyers, and journalists. It offered a much-needed multi-disciplinary space to discuss some of the most pressing questions concerning the impact of new digital technologies on the rights and freedoms of individuals.

This blogpost does not offer a summary of the discussions that took place, but rather follows up on the issues that resonated most with the author.

Algorithmic transparency

The issue of algorithmic transparency was a hot topic during the conference. Transparency carries the promise of disarming the mysterious formulas (referred to as “black boxes”) that increasingly influence our lives. The struggle for transparency should be applauded. That said, I believe that when we view the use of algorithms through the lens of human rights, we should pay at least as much attention to the need for accountability, i.e., the responsibility of institutions and companies for decisions made by algorithms. There are three reasons to be cautious about focusing solely on transparency. First, it would spare us the disappointment inevitably associated with the hope that, as soon as we gain precise knowledge of the structure of an algorithm (or any other black box), the problem of algorithmic injustice will disappear. Second, in the struggle for transparency one is up against businesses that are very good at keeping (trade) secrets, which makes full transparency difficult to obtain in practice. Third, and most importantly, if the use of algorithms is coupled with machine learning and artificial intelligence, achieving transparency will become increasingly difficult. Already, no one really knows how the most advanced algorithms do what they do. In light of this, alongside efforts to understand how algorithms work, we should carefully examine and assess the outputs they produce. To do that, we will need reliable data. This may be challenging: how are we to gather enough data to assess whether online services discriminate when providing individualized and divergent user experiences? Crowdsourcing research data (as ProPublica did in its research on Facebook ads) may help overcome this specific challenge. If the data show that a company’s practices lead to discrimination, this should be a firm enough basis to challenge those practices, even without exact knowledge of how the algorithms do what they do.
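
To make the idea of examining outputs rather than inner workings more concrete, here is a minimal sketch of what an output audit could look like. It is illustrative only: the group labels, records and the 0.8 threshold are invented assumptions (loosely inspired by “four-fifths”-style disparity checks), not a legal test or any organisation’s actual methodology.

```python
from collections import defaultdict

# Hypothetical crowdsourced audit records: each entry pairs a participant's
# group label with the outcome the service gave them (e.g. whether a
# particular ad or offer was shown).
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def outcome_rates(records):
    """Share of favourable outcomes observed for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the best-off group
    (a rough disparity check, not a finding of discrimination by itself)."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}

rates = outcome_rates(records)
print(rates)                 # e.g. {'group_a': 0.67, 'group_b': 0.33}
print(flag_disparity(rates)) # groups whose outcomes lag noticeably behind
```

Nothing in this sketch requires knowing how the underlying algorithm works; it only requires enough reliable data about who got which outcome, which is exactly where crowdsourced research can help.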

The use of algorithms in governing content (particularly hate speech)

At the conference, I heard several times that algorithms should not be used to detect hate speech, because hate speech depends on context. I was told that algorithms are not good with context, and that you still need an actual human being to tell whether something is in fact hate speech rather than a joke. Even if we agreed that machines are (and will remain) unable to assess context, the use of algorithms (and possibly AI) in governing content seems inevitable. The volume of online content will sooner or later necessitate automation in governing it, if it hasn’t already. Bearing this in mind, I think our focus should be on advocating two things. First, the right to an explanation, so that it is clear to anyone whose content is removed (or, in a more extreme scenario, who is denied the possibility to publish something) why the service provider doesn’t allow it. Second, the right to appeal, i.e., effective, quick and transparent procedures for challenging decisions made automatically.
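
As an illustration of what these two rights could mean in practice, the sketch below shows a hypothetical decision record that an automated moderation system might attach to every removal. It is not any platform’s actual system; the class, field and policy names are invented for the example, assuming a setup where each automated decision carries a human-readable reason and can be routed to human review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationDecision:
    """Hypothetical record attached to every automated removal, so the
    affected user can see why it happened and contest it."""
    content_id: str
    rule_violated: str   # which policy the classifier matched
    explanation: str     # human-readable reason shown to the user
    confidence: float    # classifier confidence, exposed for later review
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_open: bool = True  # every automated decision starts out appealable

    def appeal(self, user_statement: str) -> dict:
        """Open an appeal that routes the case to human review."""
        return {
            "content_id": self.content_id,
            "original_explanation": self.explanation,
            "user_statement": user_statement,
            "route": "human_review",
        }

decision = ModerationDecision(
    content_id="post-123",
    rule_violated="hate-speech-policy",
    explanation="Automated classifier flagged a slur directed at a protected group.",
    confidence=0.91,
)
print(decision.appeal("This was a quotation used to criticise the slur."))
```

The point of the structure is that the explanation and the appeal route exist by design, rather than being bolted on after users complain.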

The problem of “instant implementation” of new developments

The implementation of any technology should be preceded by careful research and testing, and by an examination of its ethical, legal (including human rights) and societal impacts, especially if there are good reasons to assume that, once used outside the lab, it may negatively affect the rights and freedoms of individuals. However, this does not seem to be the case at present. There is a rush to implement new technological solutions instantly and on a mass scale, with little time reserved to analyse results, correct errors and mitigate negative impacts. The fine line between research (development) and implementation has blurred to the point where we can no longer locate it. In fact, by using digital services (most notably social media) we are constantly participating in one big experiment, a research project whose methodology is fuzzy. Adequate time is not reserved to critically assess and test results in consultation with relevant stakeholders (e.g., civil society organisations). This is unacceptable. People should not be treated like lab rats, and technological innovation requires precaution and due consideration of ethical principles and societal values.