Multimedia analysis has recently made spectacular improvements in both quality and sophistication. Over the last half-decade we have seen rapid progress in tasks like image and video tagging, object detection and activity recognition, generating descriptive captions, and more. Some of these have been deployed and are in widespread use on our smartphones and on our social media platforms. We have also seen recent research work, including our own, on computing more abstract features of multimedia, such as person-counting from CCTV, computing visual salience, estimating the aesthetics of images and videos, and computing video memorability. The common methodology used across most of these applications is of course machine learning, in all its forms, from convolutional neural networks to simple regression and support vector machines. Much of the research in our field is about wrestling with machine learning to optimise its performance in multimedia analysis tasks, and this run of rapid progress does not look like ending anytime soon, though it will eventually reach its high-water mark. When it does reach the point at which it cannot get any better, what then? Generative machine learning (ML) is a recent form of media analysis which turns the conventional approach on its head: its methodology is to train a model and then generate new data. Example applications of generative ML include DeOldify, which colourises black-and-white images and video clips, and Generative Adversarial Networks (GANs), which can generate DNA sequences, 3D models of replacement teeth, impressionist paintings, and of course video clips, some known as deepfakes. Putting aside the more nefarious applications of deepfakes, what is the potential for generative forms of multimedia? In the short to medium term we can speculate that it would include things like movie augmentation, but how far can it go, and could it replicate human creativity?
In this talk I will introduce some of the recent forms of generative multimedia and discuss how far I believe we could go with this exciting new technology.
Medicine stands apart from other areas where AI can be applied. While we have seen advances in other data-rich fields, it is not the volume of data that makes medicine so hard; it is the challenge of extracting actionable information from the complexity of the data. These challenges make medicine the most exciting area for anyone who is really interested in the frontiers of machine learning, giving us real-world problems whose solutions are societally important and potentially affect us all. Think Covid-19! In this talk I will show how AI and machine learning are transforming medicine and how medicine is driving new advances in machine learning, including new methodologies in automated machine learning, interpretable and explainable machine learning, dynamic forecasting, and causal inference. I will also discuss our experiences in implementing such AI solutions nationally, in the UK, in order to fight the current Covid-19 pandemic, as well as how they can be adapted for international use.
The new “wave of AI”, more specifically machine learning and deep learning, is currently revolutionizing applications in many domains. One of the early and impressive examples is of course the big leap we have seen in content analysis of images and image understanding. Machine learning and deep learning techniques are offering their power and potential in many domains, from automated driving to health care, from Industry 4.0 to renewable energy. When we talk about AI, the term Explainable AI is coming into focus as an important and relevant aspect of AI. A “black box” AI should be understandable and readable by a human, initially such that the results of the AI can be understood by human experts.
Humans who use and are affected by AI are of course not experts in AI but everyday individuals, in their working lives and in their personal daily lives. In a life in which AI methods and approaches will influence our day-to-day actions, our decision-making powers, our control and our freedom, we need to design AI-driven systems that are oriented around users, their needs and their requirements. We must put them in a position to accept, to understand, to control, and potentially to object to AI. We need to make AI first and foremost beneficial to all users. This keynote will look into the challenges of user-centered AI, what this means in different fields of application, and where we need new methods and tools to put the user in the lead in the decades of AI to come.
The first camera phone was sold in 2000, when taking pictures with your phone was an oddity, and sharing pictures online was unheard-of. Today, barely twenty years later, the smartphone is more camera than phone. How did this happen? This transformation was enabled by advances in computational photography — the science and engineering of making great images from small-form-factor mobile cameras. Modern algorithmic and computing advances, including machine learning, have changed the rules of photography, bringing to it new modes of capture, post-processing, storage, and sharing. In this talk, I’ll give a brief history of digital and computational photography and describe some of the key recent advances of this technology, including burst photography and super-resolution.