Is artificial intelligence a threat to our political liberty?

This is a brief presentation prepared for the KU Leuven AI Summer School 2022. I discuss how AI might be perceived as adverse to the three concepts of liberty: liberal or negative freedom, positive freedom, and republican or neo-Roman liberty. By endangering our freedom, AI may also impair the human political agency that we need to address the most pressing global challenges.


My interview with Quentin Skinner on the meaning of freedom in the digital age has just been published

In August 2021 I had one of my greatest professional experiences: I was able to interview Quentin Skinner, a world-leading intellectual historian. The focus of the interview was the concept of liberty (or freedom) in the context of the digitalisation of social and political life.

Quentin Skinner points out that before the arrival of liberalism with its negative vision of freedom as non-interference, another conception of liberty had been almost universally accepted. Some theorists call that other conception “republican”, while Skinner prefers to call it “neo-Roman”. It says that the essence of liberty is to be free from dependence or domination, understood as someone else’s arbitrary power over our life.

In the interview we talk about how the neo-Roman concept of liberty enables us to see that the threat posed by digital corporations is not just about privacy – it is about our status as free persons.

The interview has been published in “History of European Ideas”:

Online political disinformation is a real problem. But what about misinformation superspreaders? (+ “Best Abstract” award)

At the 2021 ECREA Political Communication Section Interim Conference (organized this time by the National University of Political Studies and Public Administration in Bucharest), my colleague Jakub Jakubowski and I presented our findings on the scale of disinformation in the comment sections of two of the largest Polish news websites.

Analysing two large datasets (2.5M and 0.5M comments), we found that on one of the websites (the 3rd largest audience among news portals in Poland) more than 7% of the comments could be classified as disinformation. On the second website (the 4th largest audience) the scale of disinformation was ten times smaller, chiefly because that website requires registration before users are able to comment.

But in our presentation, entitled “Spreading the (misinformation) disease. Discourse in the web portals and political decision-making during COVID-19 pandemic and 2020 presidential campaign in Poland”, we also pointed out that while focusing on online propaganda and manipulation, we cannot forget about offline misinformation superspreaders, namely politicians.

Here are our slides:

On the second day of the conference we received great news:


A Challenge of Predictive Democracy: Algorithmic Governance and Value of Politics

On 12 March I will be speaking at the conference “Intelligent and Autonomous: Emergent Digital Technologies and the Challenges of Disinformation, Security, and Regulation”, an interdisciplinary online conference organized by Vytautas Magnus University.

Over the last decade we have witnessed a creeping technological revolution. As a result of developments in the field of machine learning, narrow artificial intelligence tools have become ubiquitous. The promise of powerful algorithms that know us better than we know ourselves means, at least to some people, that one day we could get rid of the flawed, ineffective political process and replace it with AI-powered decision-making systems that could recognize and satisfy the preferences of all citizens.

Such ideas have been labelled “government by algorithm” or “algorithmic governance”. Approaching these proposals from the perspective of political theory, I would like to introduce the concept of predictive democracy, which will serve as an intuition pump, helping us understand the possible implications of algorithmic governance for democratic politics. Using that tool, I will argue that ideas of algorithmic governance may be misguided, as they are frequently built upon an overly simplistic understanding of politics. Far from being a mere tool for allocating goods in response to existing preferences, politics is a set of practices directly related, on the one hand, to citizens’ sense of agency and belonging, and, on the other, to the legitimacy of political institutions.

OK Boomer? But what about the Millennials’ role in Cambridge Analytica scandal?

During my lectures on the social, political and ethical challenges of Artificial Intelligence I talk, of course, about the Cambridge Analytica scandal. And while the catchphrase “OK boomer” has recently been gaining a lot of attention, we cannot forget that, at least in this case, Millennials are also to blame.

Aleksandr Kogan (b. 1985/86), Christopher Wylie (b. 1989), Brittany Kaiser (b. 1986) – not to mention Mark Zuckerberg (b. 1984) – even if some of them express remorse today (like Wylie and Kaiser), they all contributed to an enormous abuse of new technologies’ potential, including the artificial intelligence tools that were used to profile voters and micro-target political advertising.

After watching the film “The Great Hack” and reading a few chapters of her book “Targeted”, I don’t really trust Kaiser. She seems to be interested mainly in self-promotion. I am much more convinced by Wylie, who – at least partially – redeemed himself with his book “Mindf*ck: Cambridge Analytica and the Plot to Break America”. In the book he describes his career in detail – from a Canadian Liberal Party volunteer fascinated by the Obama campaign to a co-creator of the tools that helped elect Donald Trump and bring about Brexit. He also unveils the methods of online psychological warfare that are currently being employed not in the name of democracy, but against its very essence.

AI experts need help from ethicists and social scientists

After reading a few books on artificial intelligence published in the last couple of months, my impression is that IT experts, and AI experts in particular, desperately need support from social scientists and humanities researchers. As it turns out, the dynamic technological development has not been accompanied by sufficient reflection on the ethical, social and political consequences of implementing AI-based solutions.

Melanie Mitchell, a computer scientist, points out in her book “Artificial Intelligence: A Guide for Thinking Humans” that too much attention has been given to the potential problems of the future development of so-called strong AI (the kind we know from many science fiction stories, including films like “The Terminator” or “The Matrix”). Meanwhile, not enough attention has been paid to the problems caused by already existing algorithms of weak AI. Stuart Russell, in his book “Human Compatible: Artificial Intelligence and the Problem of Control”, calls for rethinking the relations between humans and increasingly intelligent machines. Those machines should realise human aims and values – not the other way around. And again, Russell writes not about strong AI, but about the algorithms that we encounter in our everyday lives.

For example, the algorithms that choose the content we see on social media aim at maximising the probability that we will click on the content recommended to us. The best way to achieve that aim is not to present us with content similar to what we already like, but to change our preferences – to make those preferences more predictable. As a result, users see more uniform (and, in the case of politics, more extreme) content. With a bit of drama, Russell notes that “the consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO”.
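The preference-shaping dynamic Russell describes can be illustrated with a toy simulation. This is my own sketch, not Russell’s model – the function, its parameters and all numbers are invented for illustration: a feed that always offers content slightly more extreme than the user’s current position, while the user’s position adapts to what the feed shows, drives even a moderate user toward the extreme.

```python
def simulate(start=0.55, pull=0.05, drift=0.5, steps=100):
    """Toy model of an engagement-maximising feed.

    Political positions lie on a 0..1 spectrum (0.5 is the centre).
    Each round the feed shows content slightly more extreme than the
    user's current taste, and the user's taste then drifts toward
    what was shown.
    """
    taste = start
    for _ in range(steps):
        direction = 1 if taste >= 0.5 else -1          # push away from the centre
        shown = min(max(taste + direction * pull, 0.0), 1.0)
        taste += drift * (shown - taste)               # preferences adapt to the feed
    return taste

# A mildly right-of-centre and a mildly left-of-centre user both
# end up near the opposite extremes of the spectrum.
print(simulate(0.55), simulate(0.45))
```

The point of the sketch is that no single recommendation is dramatic – each shown item differs from the user’s taste only by `pull` – yet the feedback loop ratchets preferences to an extreme, which is exactly why such systems make users more predictable.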

But perhaps most surprising is the fact that the need for better and wider public and governmental control over new technologies is today expressed by people from the commercial sector. This is the message of the book “Tools and Weapons: The Promise and the Peril of the Digital Age”, authored not by some anti-market leftist, but by Brad Smith, the president of Microsoft!

The ethical reflection of all these authors is not especially deep. They usually have no knowledge of the achievements of the philosophy and ethics of technology (for example, the name of Hans Jonas is not mentioned in any of these books). But the authors cannot be blamed for lacking knowledge in a field that is beyond their expertise. Rather, social scientists and humanities researchers should use this opportunity and take part in the discussions about the proliferation of AI technologies and the resulting transformation of our social life. Their voice is urgently needed.

Shatema Threadcraft on the spectacular and the mundane

On Thursday I had a chance to participate for the first time in the seminar of the LSE Political Theory Group, and to listen to Shatema Threadcraft’s extremely interesting and insightful talk about the complicated relationship between spectacular black deaths and the more ‘mundane’ crimes perpetrated mostly against black women in the US.

Having just read Shatema’s 2016 book – entitled “Intimate Justice: The Black Female Body and the Body Politic” – I was very interested in her theoretical approach, and after the seminar I had the pleasure of talking with her about it. (And my copy of the book now has a wonderful dedication from the author.)

It was a very enjoyable evening with a diverse, international group of fellow political theorists. I wonder if there is any place but London where you can get that?

My visit at the LSE has just begun

I arrived in London two days ago to begin my eight-month stay as a Visiting Fellow at the LSE Department of Government. During that time I will be working on a research project on ideology and political science. But I also hope to interact with the many splendid scholars who work at the LSE. The Department of Government is one of the leading political science institutions in the world, and it is a privilege to be able to spend any time here.

Being in London at a time of Brexit-related uncertainty is also a kind of political science adventure in itself. It will be extremely interesting to observe the process of the UK leaving the EU in real time. Or staying, for that matter, because even the most knowledgeable political scientists cannot predict the outcome of the current turmoil.

Talking Ethics in Psychiatric Genetics

On 29 January, as a mere social scientist, I had the privilege of talking before an international audience of hard scientists: psychiatrists and geneticists who gathered at the team meeting of the EnGage project. EnGage is funded by COST and aims to construct a pan-European network of professionals who work in psychiatric genetic counseling and testing.

My task was to talk about applied ethics in these two fields. I tried to show how the most fundamental principles of medical ethics – respect for patient autonomy, beneficence, non-maleficence, and justice – are translated into the area of psychiatric genetics, and I reviewed some of the recent literature on the subject.

The meeting took place at the Centre of Medical Biology, Poznań University of Medical Sciences.

You can take a look at my presentation below.