An Explainable AI Approach to Large Language Model Assisted Causal Model Auditing and Development

Yanming Zhang, Brette Fitzgibbon, Dino Garofolo, Akshith Kota, Eric Papenhausen, Klaus Mueller

Room: 110

2023-10-22T03:00:00Z (GMT-0600)
Exemplar figure: ChatGPT has rich knowledge about the real world. We propose to use it as an auditor for causal networks. Our method presents ChatGPT with a causal network and prompts it for insights about edge directionality, confounders, and mediating variables, resulting in a more accurate and detailed causal model. To summarize the large volume of text produced by ChatGPT, we designed several visualizations: the Causal Debate Chart, the Causal Environment Chart, and the Confounder/Mediator Chart.

Causal networks are widely used in many fields, including epidemiology, social science, medicine, and engineering, to model the complex relationships between variables. While it can be convenient to algorithmically infer these models directly from observational data, the resulting networks are often plagued with erroneous edges. Auditing and correcting these networks may require domain expertise that is frequently unavailable to the analyst. We propose the use of large language models such as ChatGPT as an auditor for causal networks. Our method presents ChatGPT with a causal network, one edge at a time, to produce insights about edge directionality, possible confounders, and mediating variables. We ask ChatGPT to reflect on various aspects of each causal link, and we then produce visualizations that summarize these viewpoints, enabling the human analyst to direct the edge, gather more data, or test further hypotheses. We envision a system where large language models, automated causal inference, and the human analyst and domain expert work hand in hand as a team to derive holistic and comprehensive causal models for any given case scenario. This paper presents first results obtained with an emerging prototype.
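The per-edge auditing loop described above can be sketched as follows. This is a minimal illustration only: the prompt wording, the example edges, and the function name are assumptions for demonstration, not the authors' actual implementation.

```python
# Hypothetical sketch of the per-edge LLM auditing loop.
# The prompt wording and example edges are illustrative assumptions.

def build_audit_prompt(cause: str, effect: str) -> str:
    """Compose a prompt asking an LLM to audit one causal edge."""
    return (
        f"Consider the proposed causal link: '{cause}' -> '{effect}'.\n"
        f"1. Is the direction of this edge plausible, or should it be reversed?\n"
        f"2. Could a confounder influence both '{cause}' and '{effect}'?\n"
        f"3. Could a mediating variable lie between '{cause}' and '{effect}'?\n"
        "Answer each question and briefly justify your reasoning."
    )

# An algorithmically inferred network is audited one edge at a time.
edges = [("smoking", "lung cancer"), ("exercise", "heart disease")]
prompts = [build_audit_prompt(c, e) for c, e in edges]

for p in prompts:
    print(p, end="\n\n")
# Each prompt would then be sent to the LLM, and the collected responses
# summarized in visualizations such as the Causal Debate Chart.
```

In the envisioned system, the responses to these prompts are parsed and aggregated across edges before being presented to the analyst.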