Lifted First-Order Probabilistic Inference

Eyal Amir

Probabilistic inference algorithms are widely employed in artificial intelligence. Among their many applications are tracking partially observed systems, medical diagnosis, automatic tutoring, and computational biology. Such applications use large, complex models that are difficult to engineer and learn. To address these difficulties, several methods have emerged that use relational probabilistic specifications. These specification languages can abstract over large classes of objects. Unfortunately, most probabilistic inference algorithms are specified and carried out at the propositional level. At that level, every random variable is treated as distinct from every other, and the real-world structure of objects and relationships is ignored. In the last decade, many algorithms accepting relational specifications have been proposed, but at the inference stage they still operate on a mostly propositional representation. In this talk I will present an exact inference algorithm that operates directly at the relational level, and that can be applied to any relational model (specified in a language that generalizes undirected probabilistic graphical models). I will discuss how further research can improve our results, and how different applications can benefit from this advance. I will conclude with our experiments, which show superior performance in comparison with propositional exact inference. (Joint work with Rodrigo Braz and Dan Roth)
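To convey the intuition behind the relational-level savings described above, here is a minimal illustrative sketch (not the talk's algorithm). It assumes the simplest possible case: a single unary factor phi applied identically to each of n interchangeable Boolean variables. Propositional inference sums over all 2^n joint assignments, while a lifted computation exploits the symmetry and reduces the sum to a single exponentiation; the function names are hypothetical.

```python
import itertools

def propositional_sum(phi, n):
    """Sum the product of an identical unary factor phi over all 2**n
    joint assignments of n Boolean variables (exponential in n)."""
    total = 0.0
    for assignment in itertools.product([0, 1], repeat=n):
        product = 1.0
        for value in assignment:
            product *= phi[value]
        total += product
    return total

def lifted_sum(phi, n):
    """Lifted computation: since the factor is identical for every
    object, the sum factorizes as (phi[0] + phi[1]) ** n, so the
    cost no longer grows with the number of joint assignments."""
    return (phi[0] + phi[1]) ** n

# Toy factor over a Boolean variable.
phi = {0: 0.3, 1: 2.0}
print(propositional_sum(phi, 10))
print(lifted_sum(phi, 10))
```

Both calls return the same partition-function value, but the lifted version never enumerates assignments; the actual algorithm in the talk handles far richer relational models than this symmetric special case.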