How can we understand Artificial Intelligence?

Artificial Intelligence (AI) algorithms are revolutionizing many areas of science, engineering, and economics, with enormous consequences for society. AI systems, together with algorithms from the related disciplines of machine learning and statistical inference, are in daily use to help professionals make medical and diagnostic decisions, sentencing and parole decisions, insurance-coverage decisions, and many other consequential choices.

The latest advances in this new wave of AI come from a collection of methods known as “Deep Learning.” One criticism leveled at Deep Learning algorithms is that they are uninterpretable: these black-box systems make better predictions than ever before, but they cannot tell us why, nor can their designers. The issue grows more pressing as AI systems come under increasing public scrutiny and their influence on society expands each year. The European Union, for instance, has introduced the General Data Protection Regulation (GDPR), which includes a “right to explanation” whereby users can ask for an explanation of any algorithmic decision made about them.

Machine learning is also widely used in the sciences to find hidden patterns and structure in large, complex data. The ultimate goal of machine learning and AI in the sciences is to help build knowledge for humans, yet the machine-learned system is often just as opaque as the original data and becomes yet another object of study (i.e., an uninterpretable model on which to run virtual experiments). Keeping humans in the loop of scientific experiments is therefore key to AI-driven scientific discovery.

Workshop Questions

  • How do researchers understand the inner workings of a fully trained artificial neural network? How can such networks be designed to be interpretable?
  • Is there a trade-off between predictive accuracy and interpretability, and if so, how should it be managed? Do interpretable models have the same expressive power and trainability?
  • What issues of interpretability and transparency do scientists encounter in the course of their own research?
  • What tools can we use to interpret machine learning systems, and do they bias our understanding?
  • Are discrete, compositional structures of knowledge more interpretable than continuous vector representations?