The lab works on foundational questions at the intersection of causal inference and AI. A core thread concerns what happens when data are imperfect: observations go missing not at random but for reasons intertwined with the very quantities under study. We have built the theoretical foundations for recovering causal and probabilistic queries from such incomplete data using graphical models, and have developed causal discovery algorithms that operate even in the presence of missing data. A second thread extends causal inference beyond the standard i.i.d. assumption, developing methods for settings where data points are networked, exchangeable, or otherwise dependent. Most recently, the lab has turned to acting agents, asking how causality can equip them to plan reliably when the world shifts beneath their feet.
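Why missingness that depends on the quantity under study is harmful can be seen in a toy simulation (purely illustrative, not the lab's method; the dropout mechanisms below are invented for the example). When observations are missing completely at random, the observed mean stays close to the truth; when larger values are more likely to go unrecorded, the naive estimate is systematically biased.

```python
import math
import random

random.seed(0)

n = 100_000
values = [random.gauss(0.0, 1.0) for _ in range(n)]  # true mean is 0

# Missing completely at random: every value is kept with probability 0.5,
# independently of its magnitude.
mcar = [v for v in values if random.random() < 0.5]

# Missing not at random: the probability of being recorded shrinks as the
# value grows (a sigmoid in -v), so large values are under-represented.
mnar = [v for v in values if random.random() < 1.0 / (1.0 + math.exp(v))]

def mean(xs):
    return sum(xs) / len(xs)

print(f"MCAR sample mean: {mean(mcar):+.3f}")  # close to zero
print(f"MNAR sample mean: {mean(mnar):+.3f}")  # biased downward
```

The MCAR estimate recovers the truth from the observed data alone; the MNAR estimate does not, which is why recovering queries under such missingness requires extra structure, such as the graphical-model assumptions the lab studies.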