Today we’re speaking with Tim Miller, associate professor in the School of Computing and Information Systems at The University of Melbourne. Tim’s primary area of expertise is artificial intelligence, with a particular emphasis on explainable artificial intelligence (XAI).
Tim describes his background as being at the intersection of artificial intelligence, interaction design, and cognitive science/psychology, so we knew this was going to be a fascinating conversation!
Tim does not disappoint, and we cover a wide range of topics including:
- What explainable AI is, and why it is such a hot topic today
- How to define intelligence, and the role philosophy has to play in AI
- What can go wrong with a lack of visibility into AI, and why explainability is so important
- The current issue of trust when it comes to AI, and why the worst-case scenarios tend to get most of the media attention
- The problem of bias in AI and how it can be mitigated. In particular, we discuss the topic of “predictive policing” and some of the issues it presents
- Tim’s thoughts (as a leading expert on transparency in AI) on the My Health Record program
- GDPR and what it means for explainability and the AI community in general
- The concept of using AI to inspect, detect, and solve problems with other forms of AI
- Where Tim believes we are as a community in terms of AI advancement, and what needs to happen in order to take further leaps in the development of AI
- Tim’s appearances at upcoming AI events