Emily Liu
I am a machine learning researcher and engineer interested in developing a scientific understanding of how large models learn, represent information, and generalize. My background spans efficient training and inference, representation learning, and robustness under distribution shift.
More recently, my interests have expanded to include mechanistic interpretability, online and continual learning in foundation models and recommender systems, and the science of generalization in large-scale models. I am especially drawn to the algorithmic principles that allow models to scale reliably while remaining computationally tractable and environmentally responsible.
I completed my Master of Engineering in Computer Science at MIT, where I was advised by Dr. Caroline Uhler; my thesis examined causal representation learning for predicting the effects of genetic perturbations in single cells. I also hold bachelor’s degrees in Computer Science and Mathematics from MIT. I currently work at ByteDance on large-scale recommendation and multimodal LLM-based ranking models, focusing on model optimization, debiasing, and production reliability. My broader goal is to connect rigorous scientific understanding with practical, trustworthy, and sustainable model deployment at scale.
Find an up-to-date version of my CV here.
