OK Boomer? But what about the Millennials’ role in Cambridge Analytica scandal?

During my lectures on the social, political and ethical challenges of Artificial Intelligence I talk, of course, about the Cambridge Analytica scandal. And while the catchphrase “OK boomer” has recently been gaining a lot of attention, we cannot forget that, at least in that case, Millennials are also to blame.

Aleksandr Kogan (born 1985/86), Christopher Wylie (b. 1989), Brittany Kaiser (b. 1986) – not to mention Mark Zuckerberg (b. 1984) – even if some of them express remorse today (like Wylie and Kaiser), they all contributed to the enormous abuse of new technologies’ potential, including tools of artificial intelligence that were used to profile voters and micro-target political advertising.

After watching the movie “The Great Hack” and reading a few chapters of her book “Targeted”, I don’t really trust Kaiser. She seems to be interested mainly in self-promotion. I am much more convinced by Wylie, who – at least partially – redeemed himself with his book “Mindf*ck: Cambridge Analytica and the Plot to Break America”. In it he describes his career in detail – from a Canadian Liberal Party volunteer fascinated by the Obama campaign to a co-creator of the tools that helped elect Donald Trump and bring about Brexit. He also unveils the methods of online psychological warfare that are currently being employed not in the name of democracy, but against its very essence.

AI experts need help from ethicists and social scientists

After reading a few books on artificial intelligence that have been published in the last couple of months, my impression is that IT and in particular AI experts desperately need support from social scientists and humanities researchers. As it turns out, the dynamic technological development has not been accompanied by sufficient reflection on the ethical, social and political consequences of implementation of AI-based solutions.

Melanie Mitchell, a computer scientist, points out in her book “Artificial Intelligence: A Guide for Thinking Humans” that too much attention has been given to the potential problems of the future development of so-called strong AI (the kind we know from many science fiction stories, including movies like “The Terminator” or “The Matrix”). Meanwhile, not enough attention has been paid to the problems caused by already existing algorithms of weak AI. Stuart Russell, in his book “Human Compatible: Artificial Intelligence and the Problem of Control”, calls for rethinking the relations between humans and ever more intelligent machines. Those machines should realise human aims and values – not the other way around. And again, Russell writes not about strong AI, but about the algorithms that we encounter in our everyday lives.

For example, the algorithms that choose the content we see on social media aim at maximising the probability that we will click on the content recommended to us. The best way to achieve that aim is not to present us with content similar to what we already like, but to change our preferences – to make them more predictable. As a result, users see more uniform (and, in the case of politics, more extreme) content. With a bit of drama, Russell notes that “the consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO”.

But perhaps most surprising is the fact that the need for better and wider public and governmental control over new technologies is expressed today by people from the commercial sector. This is the message of a book entitled “Tools and Weapons: The Promise and the Peril of the Digital Age”, authored not by some anti-market leftist, but by Brad Smith, the president of Microsoft!

The ethical reflection of all these authors is not especially deep. They usually have no knowledge of the achievements of the philosophy and ethics of technology (for example, the name of Hans Jonas is not mentioned in any of these books). But the authors cannot be blamed for lacking knowledge in a field beyond their expertise. Social scientists and humanities researchers, on the other hand, should use this opportunity and take part in the discussions about the proliferation of AI technologies and the resulting transformation of our social life. Their voice is urgently needed.

Shatema Threadcraft on the spectacular and the mundane

On Thursday I had a chance to participate for the first time in the seminar of the LSE Political Theory Group, and to listen to Shatema Threadcraft’s extremely interesting and insightful talk about the complicated relationship between spectacular black deaths and the more ‘mundane’ crimes perpetrated mostly against black women in the US.

Having just read Shatema’s 2016 book – entitled “Intimate Justice: The Black Female Body and the Body Politic” – I was very interested in her theoretical approach. And after the seminar I had the pleasure of talking with her about it. (My copy of the book now bears a wonderful dedication from the author.)

It was a very enjoyable evening with a diverse, international group of fellow political theorists. I wonder if there is any place other than London where you can get that?

My visit at the LSE has just begun

I arrived in London two days ago to begin my eight-month stay as a Visiting Fellow at the LSE Department of Government. During that time I will be working on a research project on ideology and political science. But I also hope to interact with the many splendid scholars who work at the LSE. The Department of Government is one of the leading political science institutions in the world, and it is a privilege to be able to spend any time here.

Being in London at a time of Brexit-related uncertainty is also a kind of adventure in political science in itself. It will be extremely interesting to observe the process of the UK leaving the EU in real time. Or staying, for that matter, because even the most knowledgeable political scientists cannot predict the outcome of the current turmoil.

Talking Ethics in Psychiatric Genetics

On 29 January, as a mere social scientist, I had the privilege of speaking before an international audience of hard scientists: psychiatrists and geneticists gathered at the team meeting of the EnGage project. EnGage is funded by COST and aims at constructing a pan-European network of professionals who work in psychiatric genetic counseling and testing.

My task was to talk about applied ethics in these two fields. I tried to show how the most fundamental principles of medical ethics – respect for patient autonomy, beneficence, non-maleficence, and justice – are translated into the area of psychiatric genetics. I also reviewed some of the recent literature on this subject.

The meeting took place at the Centre of Medical Biology, Poznań University of Medical Sciences.

You can take a look at my presentation below.