CSE doctoral student Tara Safavi receives Rackham Predoctoral Fellowship

The fellowship will advance her work on inferring relational world knowledge in machines using explicit and implicit representations.

Tara Safavi

CSE PhD candidate Tara Safavi has received a Rackham Predoctoral Fellowship to support her work in relational knowledge representation and inference. 

Endowing machines with the ability to recognize and manipulate relations among objects – whether spatial, temporal, social, lexical, causal, or otherwise – is a long-standing goal in artificial intelligence, as relational reasoning is central to human abilities like categorization, planning, and problem-solving.

There are several open questions at the heart of this yet-unreached goal, one of which Tara’s dissertation investigates in depth: How should we represent relational world knowledge in machines, such that machines may accurately and efficiently infer new knowledge using this representation?

In her dissertation, Tara argues that such representations must combine explicit structure (i.e., structure captured by a graph) with implicit structure (i.e., structure learned by directly modeling free text). Her arguments center primarily on the task of link prediction, in which a machine learning algorithm is trained to predict novel relationships between entities. Link prediction is a natural focus because many knowledge inference problems involving correlated data samples can either be framed as link prediction or include a link prediction component – for example, recommending encyclopedic entities like Wikipedia pages or generating novel scientific hypotheses.
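To make the link prediction task concrete, the following is a minimal illustrative sketch in the style of the TransE model (Bordes et al., 2013), which scores a candidate triple by how close the head entity's embedding, translated by the relation's embedding, lands to the tail entity's embedding. The entities, relation, and dimensionality here are invented for illustration; this is a toy example, not a method from Tara's dissertation.

# Toy sketch of link prediction over knowledge-base triples with
# TransE-style scoring. Entity/relation names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # embedding dimensionality, arbitrary for this sketch

entities = ["Ann_Arbor", "Michigan", "Detroit"]
relations = ["located_in"]

# Randomly initialized embeddings; a real model would learn these by
# minimizing a ranking loss that prefers observed triples over
# corrupted (false) ones.
ent_vec = {e: rng.normal(size=dim) for e in entities}
rel_vec = {r: rng.normal(size=dim) for r in relations}

def score(head, relation, tail):
    """Lower is better: distance between head + relation and tail."""
    return np.linalg.norm(ent_vec[head] + rel_vec[relation] - ent_vec[tail])

# Link prediction: rank candidate tails for (Ann_Arbor, located_in, ?)
candidates = sorted(
    (e for e in entities if e != "Ann_Arbor"),
    key=lambda tail: score("Ann_Arbor", "located_in", tail),
)
print(candidates)  # best-scoring candidate first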

In Part I of her dissertation, she considers explicitly structured knowledge representations, namely heterogeneous entity-relation knowledge bases (KBs) that are constructed according to a predefined schema. She first demonstrates the advantages of such graph representations, showing that small-scale personalized machine learning tasks over sparse heterogeneous datasets can be improved by augmenting such datasets with semantic KB-style relations. She then explores the drawbacks of learning over explicitly structured knowledge representations alone, showing that “structure-only” methods are limited by the KB’s (lossy) construction and cannot fully model the relational semantics of world knowledge without learning directly over diverse natural language realizations of such relations.
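As a concrete picture of what "constructed according to a predefined schema" means, here is a toy sketch of a triple store whose relation vocabulary is fixed in advance. The entity and relation names are hypothetical; the point of the sketch is the lossy-construction limitation noted above, in that relations the schema does not anticipate simply cannot be recorded.

# Toy sketch of an explicitly structured KB: entity-relation triples
# constrained by a predefined schema. Names are invented.
SCHEMA = {"author_of", "affiliated_with"}  # the only allowed relation types

kb = set()

def add_fact(head, relation, tail):
    # Facts outside the schema cannot be stored. Free text expresses
    # many relations (e.g., "pioneered", "was inspired by") that the
    # schema has no type for, so KB construction is lossy.
    if relation not in SCHEMA:
        raise ValueError(f"relation {relation!r} not in schema")
    kb.add((head, relation, tail))

add_fact("Marie_Curie", "affiliated_with", "University_of_Paris")
add_fact("Marie_Curie", "author_of", "Traite_de_radioactivite")
# add_fact("Marie_Curie", "pioneered", "radioactivity")  # would raise ValueError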

Motivated by these findings, in Part II of her dissertation she proposes link prediction methods in knowledge bases using neural contextual language models (LMs), which implicitly acquire and represent schema-less relational knowledge by pretraining and fine-tuning on large unstructured natural language corpora like Wikipedia. In applications related to commonsense, encyclopedic, and scientific knowledge inference, she argues that KBs and LMs can be treated as complementary knowledge representations: KBs provide high-precision data storage, but methods that model KB structure alone are limited by the graph’s predefined schema. Contextual LMs, on the other hand, store factual knowledge at relatively low precision, but they are more flexible and expressive for relation modeling because they learn the structure of text directly.
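One common way to see how a contextual LM stores schema-less relational knowledge is a cloze-style probe, in the spirit of the LAMA benchmark (Petroni et al., 2019). The sketch below assumes the Hugging Face transformers package and the bert-base-uncased checkpoint; it is an illustration of the general idea, not the dissertation's method.

# Sketch of probing a pretrained masked LM for relational knowledge.
# Requires: pip install transformers (downloads the model on first run).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The LM acquired this knowledge implicitly from pretraining text,
# with no predefined schema; any relation expressible in language
# can be queried this way, though precision varies.
for prediction in fill("Ann Arbor is located in [MASK].", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))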

Tara graduated from the University of Michigan with a BS in Computer Science in April 2017, earning Highest Distinction and High Honors. As an undergraduate, she served as an instructional aide (IA) for three different classes in the EECS Department. In 2017, she was the recipient of the College of Engineering Marian Sarah Parker Prize and an Outstanding Research Award from the EECS Department.

As she was completing her undergraduate studies in 2017, Tara was selected as a recipient of the Google Women Techmakers Scholarship to continue her studies in computer science and engineering. In 2018, Tara was awarded an NSF Graduate Research Fellowship. Tara received a Best Student Paper Award at the IEEE International Conference on Data Mining (ICDM 2019) for “Distribution of Node Embeddings as Multiresolution Features for Graphs.” During her PhD, she has conducted research internships at Microsoft Research and Bloomberg, and plans to undertake two more internships at Microsoft Research and the Allen Institute for Artificial Intelligence. 

Tara is currently a peer mentor for students applying for the NSF GRFP and other fellowships. She has previously helped develop and teach a middle-school computing program, has served on the board of Seven Mile Coding in Detroit, and co-founded the U-M WISE Girls Who Code Club. She is advised by Prof. Danai Koutra.

About the Rackham Predoctoral Fellowship

The Rackham Predoctoral Fellowship supports outstanding doctoral students who have achieved candidacy and are actively working on dissertation research and writing. The fellowship seeks to support students working on dissertations that are unusually creative, ambitious, and risk-taking.