Program

08:00: Breakfast & Registration
09:00: Introductory Remarks
09:10: Keynote – Steven Skiena

Interpreting Word and Graph Embeddings

AI applications employing Natural Language Processing (NLP) components are increasingly used in the legal, business, and medical professions to summarize text, interpret text, and make decisions informed by text. Word and graph embeddings (e.g., word2vec) are vector representations that provide powerful ways to reduce large data sets to concise features readily applicable to a variety of problems in NLP and data science. I will introduce word/graph embeddings and present examples where we have used them to study a variety of historical and cultural phenomena, including shifts in the meaning of words, changing attitudes towards gender, and biases associated with ethnic and other minorities. These applications suggest that vector representations can be interpretable when used in the right way on the right task.
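As a concrete illustration (not material from the talk), the sketch below shows one simple way word2vec embeddings can be probed for semantic shift: train a model on corpora from two different eras, using the gensim library, and compare a target word's nearest neighbours in each. The corpora and the target word "broadcast" are toy placeholders; real studies use large historical text collections and more careful comparison techniques.

```python
# A minimal, hedged sketch of probing word2vec embeddings for semantic shift.
# Assumes gensim 4.x; the two "corpora" below are toy placeholders.
from gensim.models import Word2Vec

corpus_a = [  # stand-in for an older corpus (agricultural sense of "broadcast")
    ["the", "farmer", "broadcast", "seed", "over", "the", "field"],
    ["workers", "broadcast", "grain", "by", "hand", "each", "spring"],
]
corpus_b = [  # stand-in for a modern corpus (media sense of "broadcast")
    ["the", "station", "will", "broadcast", "the", "game", "tonight"],
    ["networks", "broadcast", "news", "every", "evening"],
]

def train(corpus):
    # Tiny vectors and min_count=1 only because the toy corpora are tiny.
    return Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50, seed=0)

model_a, model_b = train(corpus_a), train(corpus_b)

# Comparing a word's neighbours within each model avoids having to align
# the two embedding spaces.
for name, model in [("era A", model_a), ("era B", model_b)]:
    print(name, model.wv.most_similar("broadcast", topn=3))
```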

09:50: Keynote – David Jensen

The Deep Connections Between Interpretability and Causation

Modern technologies for machine learning, particularly deep neural networks, have produced a recent string of impressive successes on perceptual tasks, including image classification, text translation, and voice recognition.  However, these technologies are now being applied to more cognitive tasks, such as vehicle control, criminal sentencing recommendation, and medical diagnosis.  Such tasks have vastly higher consequences for error, and they emphasize the need for additional properties such as transparency, interpretability, and robustness.  Many researchers view these as “add-ons”, properties that can be obtained by encapsulating deep models with additional layers that provide these services.  In this talk, I argue that more fundamental change is needed.  Transparency, interpretability, and robustness are properties of what is learned, rather than properties of how it is learned. Specifically, nearly all current technologies for machine learning aim for associational models, and such models pose fundamental barriers to transparency, interpretability, and robustness.  Instead, far more machine learning research should focus on causal models, which much more naturally support such properties.  This view exposes the false dichotomy of accuracy and interpretability, helps draw clearer connections between machine learning and scientific discovery, and provides a more formal and precise description of the compositional structure of interpretable models.

10:30: Break
10:50: Keynote – Jenn Wortman Vaughan

Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning

Machine learning models are now routinely deployed in domains ranging from criminal justice to healthcare. With this newfound ubiquity, ML has moved beyond academia and grown into an engineering discipline. To that end, interpretability tools have been designed to help data scientists and machine learning practitioners better understand how ML models work. However, there has been little evaluation of the extent to which these tools achieve this goal. We study data scientists’ use of two existing interpretability tools, the InterpretML implementation of GAMs and the SHAP Python package. We conduct a contextual inquiry (N=11) and a survey (N=197) of data scientists to observe how they use interpretability tools to uncover common issues that arise when building and evaluating ML models. Our results indicate that data scientists over-trust and misuse interpretability tools. Furthermore, few of our participants were able to accurately describe the visualizations output by these tools. We highlight qualitative themes for data scientists’ mental models of interpretability tools. We conclude with implications for researchers and tool designers.
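For readers unfamiliar with the two tools named above, the following is a minimal, hedged sketch of how they are typically invoked; the dataset and model are illustrative stand-ins, not the materials used in the study, and recent versions of shap, interpret, and scikit-learn are assumed.

```python
# Illustrative use of the SHAP Python package and InterpretML's glass-box GAM
# (Explainable Boosting Machine). The diabetes dataset stands in for a real
# application; nothing here reproduces the study itself.
import shap
from interpret.glassbox import ExplainableBoostingRegressor
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)

# SHAP: post-hoc, per-prediction feature attributions for a black-box model.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer(X.iloc[:200])
shap.plots.beeswarm(shap_values)        # global summary of feature effects
shap.plots.waterfall(shap_values[0])    # explanation of a single prediction

# InterpretML: an inherently interpretable GAM whose learned shape functions
# can be inspected directly.
ebm = ExplainableBoostingRegressor(random_state=0).fit(X, y)
ebm_global = ebm.explain_global()       # per-feature shape functions
```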

11:30: Keynote – Peter Koo

Uncovering Biological Insights with Deep Learning

Deep learning has the potential to make a significant impact in biology and healthcare, but a major challenge is understanding the reasons behind a model's predictions. Here, I will focus on how deep learning is being leveraged to understand the functional impact of mutations in the genome, with the broader aim of gaining mechanistic insights into complex diseases, including cancer. Although deep learning is largely considered a “black box”, I will show how treating a high-performing model as an in silico laboratory can help to distill the knowledge it learns from big, noisy genomics data. By employing randomized trials in our in silico experimental design, we can identify genomic patterns that are causally linked to the model's predictions, thereby allowing us to peer into its decision-making process. I will highlight how this interpretability approach has helped to elucidate biological insights for RNA-protein interactions and then discuss open challenges that remain.
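To make the idea of an in silico randomized experiment concrete, here is a hedged sketch in the spirit of the approach described above: embed a candidate motif into random background sequences and measure the change in a model's output. The model below is a trivial stand-in for a trained genomics network, and the motif, sequence length, and position are arbitrary choices for illustration.

```python
# A hedged sketch of an in silico "randomized trial" for a genomics model.
# model_predict is a placeholder; in practice it would wrap a trained network.
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = "ACGT"

def one_hot(seq):
    # Encode a DNA string as a (length, 4) one-hot matrix.
    return np.eye(4)[[ALPHABET.index(b) for b in seq]]

def random_sequences(n, length):
    return ["".join(rng.choice(list(ALPHABET), size=length)) for _ in range(n)]

def embed_motif(seq, motif, pos):
    # Overwrite the background sequence with the motif at a fixed position.
    return seq[:pos] + motif + seq[pos + len(motif):]

def model_predict(batch):
    # Stand-in for a trained model's predict(); this toy "model" simply
    # scores G/A content so the experiment below recovers a positive effect.
    g, a = ALPHABET.index("G"), ALPHABET.index("A")
    return np.array([x[:, g].mean() + x[:, a].mean() for x in batch])

# Randomized design: identical random backgrounds with and without the
# candidate motif, so the difference in mean prediction estimates the
# motif's effect on the model's output.
backgrounds = random_sequences(n=500, length=100)
motif, pos = "GATAAG", 47
control = np.stack([one_hot(s) for s in backgrounds])
treatment = np.stack([one_hot(embed_motif(s, motif, pos)) for s in backgrounds])

effect = model_predict(treatment).mean() - model_predict(control).mean()
print(f"Estimated effect of embedding {motif} on model output: {effect:.4f}")
```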

12:10: Lunch & Poster Session
14:00: Keynote – Jeff Kao

Beyond Analytical Interpretability

Models deployed in the wild can interact with the world in unexpected ways, sometimes with life-altering consequences. Interpretability is a key part of building better machine learning systems. But a purely analytical approach to interpretability that ignores a model's emergent role in the wider social system is incomplete, and potentially even dangerous. Examples from investigative journalism, tech, and law warn that, counterintuitively, an interpretable algorithm on its own can lead to more harmful outcomes. These examples may also serve as lessons for how best to discuss, monitor, and improve machine learning systems in the future.

14:40: Keynote – David Doermann

The Role of Explainability in Human-Machine Partnerships

Explainability in AI is a key component that facilitates transparency, trust, and traceability. Without it, we are dealing with black boxes whose performance is based only on a predefined training set and which have a limited ability to evolve. We will have systems that either work or don't, but that cannot effectively take input from or give input to a true human partner. In this talk, we will highlight the requirements for developing a “complementary relationship” that evolves into a “cooperative interaction between men and electronic computers”. As we continue to move toward a time when computation and speed are no longer a substitute for true intelligence, and seek to move past “Humans in the Loop”, we must develop systems that can co-adapt. This requires these systems to justify their answers in much the same way we do. Only then will we be able to develop true Human-Machine Partnerships.

15:20: Break
15:40: Panel Discussion
17:10: Concluding Remarks