After reading several books on artificial intelligence published in the last couple of months, my impression is that IT experts, and AI experts in particular, desperately need support from social scientists and humanities researchers. As it turns out, rapid technological development has not been accompanied by sufficient reflection on the ethical, social and political consequences of implementing AI-based solutions.
Melanie Mitchell, a computer scientist, points out in her book “Artificial Intelligence: A Guide for Thinking Humans” that too much attention has been given to potential problems with the future development of so-called strong AI (the kind we know from many science fiction stories, including films like “The Terminator” or “The Matrix”). Meanwhile, not enough attention has been paid to the problems caused by already existing weak-AI algorithms. Stuart Russell, in his book “Human Compatible: Artificial Intelligence and the Problem of Control”, calls for rethinking the relationship between humans and increasingly intelligent machines: those machines should realise human aims and values – not the other way around. And again, Russell writes not about strong AI, but about the algorithms we encounter in our everyday lives.
For example, the algorithms that choose the content we see on social media aim to maximise the probability that we will click on what has been recommended to us. The best way to achieve that aim is not to present us with content similar to what we already like, but to change our preferences – to make them more predictable. As a result, users see more uniform (and, in the case of politics, more extreme) content. With a bit of drama, Russell notes that “the consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO”.
But perhaps most surprising is that the need for better and wider public and governmental control over new technologies is today being voiced by people from the commercial sector. This is the message of a book entitled “Tools and Weapons: The Promise and the Peril of the Digital Age”, written not by some anti-market leftist, but by Brad Smith, the president of Microsoft!
The ethical reflection in all these books is not especially deep. The authors are usually unfamiliar with the achievements of the philosophy and ethics of technology (the name of Hans Jonas, for example, appears in none of them). But they cannot be blamed for lacking knowledge in a field beyond their expertise. Social scientists and humanities researchers, on the other hand, should seize this opportunity and take part in the discussions about the proliferation of AI technologies and the resulting transformation of our social life. Their voice is urgently needed.