CIS Seminar: “Forward and Inverse Causal Inference in a Tensor Framework”
March 17, 2022 at 12:30 PM - 1:30 PM
Developing causal explanations for correct results, or for failures, from mathematical equations and data is important for building trustworthy artificial intelligence and for retaining public trust. Causal explanations are germane to “right to an explanation” statutes that apply to data-driven decisions, such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions consistent with Donald Rubin’s quantitative definition of causality, in which “A causes B” means “the effect of A is B”, a measurable and experimentally repeatable quantity. Computer graphics may be viewed as addressing questions analogous to forward causal inference, which addresses the “what if” question and estimates the change in effects given a delta change in a causal factor. Computer vision may be viewed as addressing questions analogous to inverse causal inference, which addresses the “why” question, defined here as the estimation of causes given a forward causal model and a set of observations that constrain the solution set. Tensor factor analysis, also known as structural equations with multimode latent variables, is a suitable and transparent framework for modeling the mechanism that generates observed data. Tensor factor analysis has been employed to represent the causal factor structure of data formation in econometrics, psychometrics, and chemometrics since the 1960s. More recently, it has been successfully employed to represent cause and effect in computer vision and computer graphics, and for prediction and dimensionality reduction in machine learning tasks.
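To make the forward/inverse framing concrete, the sketch below illustrates one standard multilinear model from the tensor-factor-analysis literature: a Tucker-style forward model, where a small core tensor acts on per-mode factor matrices to generate observations, and a higher-order SVD (HOSVD) as the inverse step that recovers a multilinear factorization from the observations alone. This is a minimal illustrative example of the general technique, not the speaker's specific method; the tensor sizes and factor names are invented for the demo.

```python
import numpy as np

def mode_unfold(T, mode):
    # Mode-n matricization: arrange the tensor as a matrix whose rows
    # index the chosen mode and whose columns index all other modes.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # n-mode product: multiply matrix M into tensor T along `mode`.
    T = np.moveaxis(T, mode, 0)
    out = (M @ T.reshape(T.shape[0], -1)).reshape((M.shape[0],) + T.shape[1:])
    return np.moveaxis(out, 0, mode)

rng = np.random.default_rng(0)

# Forward model ("what if"): a core tensor (the generating mechanism)
# acts on three hypothetical causal-factor matrices to produce the data.
core = rng.standard_normal((2, 3, 4))
factors = [rng.standard_normal(shape) for shape in [(5, 2), (6, 3), (7, 4)]]
data = core
for mode, U in enumerate(factors):
    data = mode_multiply(data, U, mode)

# Inverse problem ("why"): recover a multilinear factorization from the
# observations alone via HOSVD (truncated to the known multilinear rank).
est_factors = []
for mode in range(data.ndim):
    U, _, _ = np.linalg.svd(mode_unfold(data, mode), full_matrices=False)
    est_factors.append(U[:, :core.shape[mode]])

est_core = data
for mode, U in enumerate(est_factors):
    est_core = mode_multiply(est_core, U.T, mode)

# Re-synthesize the data from the estimated core and factors; for data of
# exact multilinear rank the reconstruction is exact up to rounding error.
recon = est_core
for mode, U in enumerate(est_factors):
    recon = mode_multiply(recon, U, mode)

err = np.linalg.norm(recon - data) / np.linalg.norm(data)
print(f"relative reconstruction error: {err:.2e}")
```

Changing one factor matrix and re-running the synthesis loop answers a “what if” question; the HOSVD step answers a “why” question by expressing the observations in terms of estimated multimode latent variables.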

