Teresa Yeo

Researching things

I am a Research Scientist at Google DeepMind. My research focuses on scalable methods for continual adaptation, including generating targeted training data and gradient-free adaptation methods. More recently, I've been interested in adapting generative models to create visually rich and interactive outputs.

I was a postdoc at the Singapore-MIT Alliance for Research and Technology, working on neurosymbolic methods for efficient adaptation. I did my PhD at EPFL, supervised by Amir Zamir, on making models more reliable under changing environments. In my past life, I was a quant in New York and London, building systematic investment strategies for equity portfolios (or, a glorified coin flipper).
Feb 2026
I started as a Research Scientist at Google DeepMind in Singapore!
Dec 2025
Our workshops on Test-Time Updates and Catch, Adapt and Operate have been accepted at ICLR 2026. See you in Rio!
TMLR 2025

Controlled Training Data Generation with Diffusion Models

T. Yeo*, A. Atanov*, H. Benoit^, A. Alekseev^, R. Ray, P. Esmaeil Akhoondi, A. Zamir

TMLR 2025

An Analysis of Model Robustness across Concurrent Distribution Shifts

M. Jeon*, S. Choi*, H. Choi, T. Yeo

ECCV 2024

ViPer: Visual Personalization of Generative Models via Individual Preference Learning

S. Salehi, M. Shafiei, R. Bachmann, T. Yeo, A. Zamir

NeurIPS 2023

🔍 Spotlight

4M: Massively Multimodal Masked Modelling

D. Mizrahi, R. Bachmann, O. F. Kar, T. Yeo, M. Gao, A. Dehghan, A. Zamir

ICCV 2023

Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback

T. Yeo, O. F. Kar, Z. Sodagar, A. Zamir

NeurIPS 2022

Task Discovery: Finding the Tasks that Neural Networks Generalize on

A. Atanov, A. Filatov, T. Yeo, A. Sohmshetty, A. Zamir

CVPR 2022

🎤 Oral

3D Common Corruptions and Data Augmentation

O. F. Kar, T. Yeo, A. Atanov, A. Zamir

ICCV 2021

🎤 Oral

Robustness via Cross-domain Ensembles

T. Yeo*, O. F. Kar*, A. Sax, A. Zamir

CVPR 2020

🎤 Oral

Robust Learning Through Cross-Task Consistency

A. Zamir*, A. Sax*, T. Yeo, O. F. Kar, N. Cheerla, R. Suri, Z. Cao, J. Malik, L. Guibas

AAAI 2019

🎤 Oral

Iterative Classroom Teaching

T. Yeo, P. Kamalaruban, A. Singla, A. Merchant, T. Asselborn, L. Faucon, P. Dillenbourg, V. Cevher

2017-2024

EPFL

Ph.D. in Computer Science

Advisors: Amir Zamir, Pierre Dillenbourg

Thesis: Making Computer Vision Models Robust and Adaptive

2015-2016

University of Cambridge

M.Phil. in Machine Learning and Machine Intelligence

Thesis: Bayesian Optimization for Natural Language Processing

2024-2026

Postdoctoral Researcher Singapore-MIT Alliance for Research and Technology

Neurosymbolic methods for efficient adaptation.
2018-2023

Teaching Assistant EPFL

Fall 2021: CS503 Visual Intelligence: Machines and Minds
Spring 2018, 2019, 2020: EE559 Deep Learning
Fall 2019: CS433 Machine Learning
2016-2017

Data Scientist Shift Technology

Designed and implemented models for automated fraud detection.
2013-2015

Quantitative Researcher UBS

Researched systematic investment strategies for equity portfolios.
2023 - Present

Reviewer

NeurIPS, ICLR, CVPR, ICCV, ECCV

2026

The 3rd Test-Time Updates Workshop Co-organizer

ICLR

2026

Catch, Adapt, and Operate: Monitoring ML Models Under Drift Workshop Co-organizer

ICLR

2025

Test-Time Adaptation Workshop Co-organizer

ICML