The new “wave of AI”, more specifically machine learning and deep learning, is currently revolutionizing applications in many domains. One of the early and impressive examples is, of course, the big leap we have seen in the content analysis and understanding of images. Machine learning and deep learning techniques are offering their power and potential in many domains, from automated driving to health care, from Industry 4.0 to renewable energy. When we talk about AI, the term Explainable AI is coming into focus as an important and relevant aspect of AI. A “black box” AI should be understandable and readable by a human, initially such that its results can be understood by human experts.
Humans who use and are affected by AI are, of course, not experts in AI but everyday individuals, both in their work lives and in their personal daily lives. In a life in which AI methods and approaches influence our day-to-day actions, our decision-making, our control, and our freedom, we need to design AI-driven systems that are oriented toward users and their needs and requirements. We must put users in a position to accept, understand, control, and potentially object to AI. We need to make AI first and foremost beneficial to all users. This keynote will look into the challenges of user-centered AI, what this means in different fields of application, and where we need new methods and tools to put the user in the lead in the decades of AI to come.