AI does not happen to us: we make it happen, we are responsible
We are increasingly seeing Artificial Intelligence (AI) being advertised and applied as a magical technology with the capacity to solve any and every problem humankind has ever faced. At the same time, most people have little understanding of how AI works and what its capabilities and shortcomings are. But AI is not magic. AI is a technology, an engineered system. We, people, make AI happen; it does not appear out of nowhere, nor is it simply there by itself. People are the ones introducing AI, and we are aware of its potential dangers, so it is up to us to do something about them. This means that we are in control of when, where, and how AI is applied, but it also means that we are responsible for the impact AI will have on society and individuals.
Even though recent developments in AI would almost make one believe that AI is a new technology which has ‘suddenly’ taken over the world, AI is in fact over 60 years old as a field of research (dating back to the Dartmouth workshop), and the idea of intelligent machines is probably as old as humanity. Machines have been making decisions for us for quite some time. Algorithms decide on the best routing path for our mobile communications and on when it is best to buy or sell stocks; electronic gates in a train station decide whether to grant you access to the platforms based on the information they read from your travel card or ticket; search engines decide which of the trillions of internet pages you are most likely to want to see when you do an online search. So, decisions made by machines are nothing new. What makes AI decisions different is their increasing complexity and potential impact, the fact that they often result from algorithms that are hard, or even impossible, to understand, and the fact that they are often made without direct human intervention. However, people are the ones who determine the optimisation goals and the utility functions at the basis of machine learning algorithms: we decide what the machine should be maximising.
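To make that last point concrete, consider the following minimal sketch (plain Python with toy data; every name and number in it is a hypothetical illustration, not any particular system). The optimisation loop itself is purely mechanical; the utility function it climbs is a human choice, and a different choice would make the machine pursue a different goal.

    # A minimal sketch: the machine only optimises an objective that a
    # person wrote down. All data and names here are hypothetical.

    def utility(w, data):
        # A HUMAN choice: score a linear predictor by (negated) squared
        # error. Choosing a different utility, e.g. one that penalises
        # unfair outcomes, would make the machine maximise something else.
        return -sum((y - w * x) ** 2 for x, y in data)

    def gradient(w, data):
        # Derivative of the utility above with respect to the parameter w.
        return sum(2 * x * (y - w * x) for x, y in data)

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # toy (x, y) observations
    w = 0.0
    for _ in range(200):
        w += 0.01 * gradient(w, data)  # ascend the utility we defined

    print(f"learned weight: {w:.2f}")  # the machine maximised OUR objective

Running this prints a weight near 2.04; the point is not the number, but that nothing in the loop decides what counts as "good". That decision was made in the definition of utility, by a person.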
In order to ensure that the dystopian futures that the media and science fiction like to highlight do not become reality, AI systems must be introduced in ways that build trust and understanding, and that respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems has become one of the most influential areas of research in recent years, and has led to several initiatives from both researchers and practitioners, including the IEEE initiative on Ethics of Autonomous Systems and the European Guidelines for Trustworthy AI. I am honoured to be part of these two initiatives.
In all areas of application, AI reasoning must be able to take into account societal values and moral and ethical considerations, weigh the respective priorities of values held by different stakeholders in multiple multicultural contexts, explain its reasoning, and guarantee transparency. As the capabilities for autonomous decision-making grow, perhaps the most important issue to consider is the need to rethink responsibility. That is, our responsibility! Whatever their level of autonomy, social awareness, or ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, and algorithms are needed to integrate societal, legal, and moral values into technological developments in AI at all stages of development (analysis, design, construction, deployment, and evaluation). These frameworks must deal with the autonomous reasoning of the machine, but, most importantly, they must guide our design choices, regulate the scope of AI systems, ensure proper data stewardship, and help individuals determine their own involvement.
Responsible Artificial Intelligence is fundamentally about human responsibility for the development of intelligent systems in line with fundamental human principles and values, to ensure human flourishing and well-being in a sustainable world. Responsible AI is more than ticking some ethical boxes in a report, developing some add-on features, or installing switch-off buttons in AI systems.
It is up to us to decide. We are responsible.