The public conversation about emerging technologies like AI and algorithmic decision-making often seems to be shaped more by Hollywood than by empirical knowledge. This is even visible in the way we speak about technology.
People tend to describe technical procedures with metaphors borrowed from human interaction (»the computer sees«, »hears« or »thinks«). This can create a perception of technology as an acting individual and obscure the humans in charge behind it.
At the same time, new technologies are often put on a pedestal: capabilities are projected onto technology that it simply does not have. This seems to stem partly from sci-fi-influenced expectations and partly from a lack of understanding of how technological development actually works.
This overestimation of technology’s capabilities, combined with the perception of technology as an acting individual, can lead to a situation where developers, designers and decision-makers are no longer seen as responsible for their creations. Computer-based decisions suddenly seem to exist in an accountability-free space (as in a recent example where Lufthansa was able to publicly blame its algorithm for price increases).
We saw a first step towards a possible solution in educating people about the human factor in the development of a machine-learning algorithm. In a three-week project, we therefore built a game-like website that lets users design an ML algorithm themselves.
Within the web app, users could choose between different machine-learning examples, from predictive policing to credit scoring. A clean and simple UI then guided them through training an algorithm by making just a few simple decisions. The questions users had to answer were simplified versions of the real decisions software architects face when designing a machine-learning algorithm.
The process ended on a screen that confronted users with possible issues, such as biases they had introduced through their decisions.
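To make the connection between a design decision and a resulting bias concrete, here is a minimal sketch (our own illustration, not the project's actual code) of one such decision in the predictive-policing example: training on historical arrest records. All district names and numbers are invented.

```python
# Two districts with the SAME underlying crime rate.
true_crime_rate = {"district_a": 0.10, "district_b": 0.10}

# Historical patrols were concentrated in district A, so more crimes
# were *recorded* there, even though the true rates are equal.
past_patrol_share = {"district_a": 0.8, "district_b": 0.2}
recorded_crimes = {
    d: true_crime_rate[d] * past_patrol_share[d] * 1000
    for d in true_crime_rate
}

# A naive "algorithm": allocate future patrols in proportion to
# recorded crime -- the kind of simple decision the app asked users to make.
total = sum(recorded_crimes.values())
new_patrol_share = {d: crimes / total for d, crimes in recorded_crimes.items()}

print(new_patrol_share)  # district A again receives 80% of patrols
```

The skewed historical data reproduces itself: the seemingly neutral rule sends most patrols back to district A, which will in turn generate more recorded crime there, creating a feedback loop.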
In this way the app not only taught people about the creation process of machine-learning algorithms and possible biases in the example cases; it also conveyed the impact of human decisions on the result. Our aim was to show that machine-learning decisions are shaped by human choices and are therefore not necessarily neutral.
This project was created as part of the course ‘Efficiency and Madness’ by Stephanie Hankey (Executive Director of the Tactical Tech Collective).
Team: Tim J. Peters, Nadine Stammen