6th of January, 2020
Popular imagination is still dominated by science-fiction visions of an omnipotent, self-conscious AI. Meanwhile, the clear and present danger to democracy is not a general AI that may one day become autonomous, but rather the erosion of citizens’ autonomy through the proliferation of narrow AI. Unexpectedly, a comparative study of national AI strategies in Europe (including the UK’s strategy) reveals that possible (mis)uses of AI tools in politics are almost completely ignored. At the same time, the findings offer insights into ways of dealing with those challenges by fostering trust in public institutions and promoting a new digital literacy among the general public.
Background: AI and autonomy
The central value of democratic participation is autonomy – the assumption that citizens’ decisions are based on free choice.
The advent of machine learning, and especially of deep learning algorithms that aim at altering human behaviour, has already transformed the way political campaigns are conducted. Political attitudes are being influenced not only by micro-targeted political ads or deep fakes, but also by even less transparent mechanisms, such as big nudging and content recommendation algorithms, which create filter bubbles and deepen ideological polarisation. It is unclear to what extent these tools are being used – and by whom. The self-determination of citizens has thus been called into question.
According to a 2018 Eurobarometer survey, 75% of Britons are concerned about disinformation and misinformation on the Internet in the pre-election period, and 73% about the use of their personal data to target political advertising. If those concerns are not addressed properly, the government risks further undermining trust in the democratic process.
Research into national AI strategies in Europe
The study focused on the national AI strategies of the member states of the European single market. As of January 2020, sixteen of those countries had already issued their strategies, and four others had prepared draft versions. The aim of the research was to establish the level of awareness of the possible threats AI poses to the democratic process and to identify the proposed countermeasures.
The most striking finding is that AI’s impact on the democratic process and politics is almost completely absent from the strategies. Nevertheless, the majority of them focus strongly on ethical considerations, frequently mentioning autonomy and freedom as core values in the development of human-centric AI. In many cases, specialised public bodies – such as ethics committees – have been established to research the current and future challenges of AI and to oversee the use of algorithms in public institutions.
Furthermore, the review shows an important difference in focus with regard to education. In most cases, the strategies call for a reform of the educational system to meet the demands of the economy, but do not provide plans for educating the general public. One exception is Finland, which puts public understanding of AI at the centre of its strategy and has already launched a free online course, Elements of AI (www.elementsofai.com).
Some of the most important insights come from the Danish strategy, which prioritises preserving citizens’ high confidence in the management of personal information by public authorities (at 83% in 2017, according to Statistics Denmark, p. 27). That confidence, along with the high level of mutual trust within society, may serve as a basis for introducing AI technologies into various aspects of social life, even if the technologies themselves are not fully understood by all citizens.
Shortcomings of the UK’s approach to AI
The UK’s business-centric AI Sector Deal (2018) neglects ethical and political challenges. While a specialised body – the Centre for Data Ethics and Innovation (CDEI) – has been established and has already produced some initial reports, its impact on the general public is very limited. Furthermore, the Deal’s proposals regarding education in AI technologies are limited to students. Measures to improve the general public’s understanding of the opportunities and perils of AI are absent from the Sector Deal.
Aside from addressing the threats of political misuse of AI technologies directly, relevant government policy should engage with at least two areas:
- Rather than focusing entirely on educating students for the benefit of the economy, a new, universal digital literacy has to be promoted across society (especially among older age groups), for example by introducing online courses in the basics of AI.
- Because of the complexity of AI technologies, it is necessary to view the political challenges of AI within the broader context of trust between citizens and public institutions. That means it is vital to give the CDEI and other institutions more power to engage with the general public.
National AI strategies (as of January 2020)