A WebDays participant laughing, with the audience blurred in the background
Digital youth education

Brave new world?

Life with AI and Automated Decision-Making

These were three exciting days. Within record time, the 60 young people who had travelled to Berlin for WebDays 2019 got a handle on the complex issue of artificial intelligence (AI) and began reflecting on its significance for their own lives. By the end of the event, they had very clear ideas of how a future with AI needs to be shaped – and had come up with a set of well-founded, concrete appeals to policymakers. WebDays 2019 took place from 29 November to 1 December 2019.

05.12.2019 / Stephanie Bindzus

In her opening address on the limits and opportunities afforded by artificial intelligence, Stefanie Kaste from the D21 initiative gave the all-clear for the medium term. Classroom teachers won’t be replaced by the Terminator anytime soon, she believed. “Strong AI”, meaning artificial intelligence that is at least the functional equal of human intelligence, remains a remote utopian idea. Current AI solutions are designed to address very specific problems. Stefanie Kaste understood AI to be a tool, reminding the audience that humans – we – decide how this tool is deployed, what data is used to “feed” AI so it can learn, and how this data is managed.

Promote it or monitor it?

Kaste used two examples to illustrate how the lines have blurred between promoting AI and monitoring it. In the US, a teaching method known as School of One has shown how AI can be used to provide a personalised daily curriculum to each student that is aligned with their individual learning requirements. In this case, the use of AI has led to a demonstrable improvement in students’ achievements. In China, too, AI is frequently used in schools: cameras record what happens in the classroom to allow teachers to analyse how much attention students are paying and how well they understand what is being said, amongst other things, and take appropriate action.

So how should one handle the promises, benefits and risks of AI and digitalisation? Over the three-day event, the young participants split into five Discussion Hubs to find answers, each focusing on one aspect of the topic.

Two additional keynote presentations provided much food for thought and even more facts. Dr Thilo Hagendorff, media ethicist at the University of Tübingen, spoke about AI applications in everyday life. He quickly managed to turn a superficially dry subject into a highly relevant issue. Almost everyone is aware that we encounter AI in our daily lives, whether at the deposit bottle return machine at the supermarket or when using voice assistants or automatic translation. However, there is much less awareness of the use of AI for predictive modelling, for instance in predictive policing, creditworthiness checks, or even to predict pregnancies. During the presentation, the speaker demonstrated a live facial-recognition app that estimated his age. But that wasn’t its only capability, Hagendorff warned: facial-recognition AI also claims to be able to assess a person’s health or sexual orientation. He used another illustrative example to show how fake news can be generated on demand. While the automatically generated article on the audience’s chosen subject – is the Earth flat? – didn’t necessarily convince everyone in the room of the theory, it clearly demonstrated how little time it takes to produce a seemingly well-researched article.

Not much imagination is needed to recognise that while AI does have its benefits, there is a considerable risk that certain AI applications and automated decision-making will be used for nefarious purposes or to discriminate against certain groups of people.

A reflection of society

In her presentation, Kristina Penner from the NGO AlgorithmWatch raised awareness of the benefits and risks of what is known as Automated Decision-Making, or ADM. She explored the democratic implications of using algorithms for decision-making by offering up some current examples. For instance, Denmark is already using ADM to forecast which children are at risk of neglect. In Spain, ADM is used to predict recidivism among juvenile delinquents. In 2020, Austria plans to introduce a system whereby unemployed individuals are assigned to certain categories depending on the likelihood of their finding work. This assignment to a category will determine whether or not jobseekers are offered training or other forms of support. The system is highly controversial for a number of reasons, notably because women are automatically assigned a lower score.

The systems we use are a reflection of society, believes Penner. The algorithms themselves are unbiased: they work exclusively according to mathematical rules. Yet prior to application, it is humans who decide which systems are used and how they are designed. What data is used to feed them? What criteria are applied? Often enough, existing disadvantages and discriminatory systems are replicated in ADM, sometimes inadvertently, sometimes not. Awareness is hence crucial to prevent negative consequences. In the case of Austria, for instance, one measure would be to eliminate the lower scoring on the basis of gender.

“Society cannot allow itself to be monitored by the system; instead, the system needs to be monitored by society!”
– WebDays participants

The WebDays participants in their Discussion Hubs brought all this information to the table and discussed the influence of AI and ADM on their own environment, giving due regard to the present but in particular also to the future. How will humans coexist in a world where AI and ADM have become part of our way of life? What precautions need to be put in place and what adjustments and investments should be made now?

Among the main aspects that participants highlighted were disclosure and transparency on the part of government agencies and companies in regard to the data they use and share; information and participation in decision-making at all levels of society when it comes to AI and ADM; lawful and fair solutions to prevent discrimination; and the sustainable use of AI in education. Many of these appeals were accompanied by clear proposals.

Not a dystopian scenario

In the closing panel discussion, the Hub participants were able to put a number of pressing questions to the panellists and share their demands with them. On the panel were YouTuber Rayk Anders, former WebDays contributor Frederick Hamsa-Feld, Dr Janis Kossahl from the consumer policy division of the Federal Ministry of Justice and Consumer Protection, and data protection activist Malte Spitz. The fundamental question underlying all comments was: How can AI and ADM be deployed and made accessible in a reasonable and beneficial manner while preventing discrimination, manipulation and data misuse?

In the end, there were no conclusive answers. Yet Rayk Anders, who at the beginning of the three-day event had spoken of a “dystopia”, agreed to revise his outlook in light of what he had heard from the committed and well-informed participants of WebDays. His fellow panellists agreed. A glance at the Federal Government’s new Youth Strategy, which was published the week after WebDays ended, reveals that the chapter on digitalisation doesn’t just make reference to the WebDays event in 2018. It also states that “all adolescents and young adults have a right to digital participation”. That is an encouraging first step and reason to hope that the appeals voiced by the young participants will indeed be heard.

Several young people sitting at a table, working on laptops
About digital youth education

The internet has become a cultural and communication space in its own right. Digital youth education helps young people to navigate this space responsibly and to use it for social and political participation.