Katie Kang

I am a PhD student in the Berkeley Artificial Intelligence Research (BAIR) lab at UC Berkeley, advised by Sergey Levine and Claire Tomlin. Previously, I completed my undergraduate degree at Berkeley.

Email  /  Google Scholar

Research

I am broadly interested in robustness, generalization, and decision-making.

Unfamiliar Finetuning Examples Control How Language Models Hallucinate
Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
In Submission
arxiv
Deep Neural Networks Tend To Extrapolate Predictably
Katie Kang, Amrith Setlur, Claire Tomlin, Sergey Levine
ICLR 2024
arxiv / code
Multi-Task Imitation Learning for Linear Dynamical Systems
Thomas Zhang*, Katie Kang*, Bruce Lee*, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni
L4DC 2023
arxiv
Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control
Katie Kang, Paula Gradu, Jason Choi, Michael Janner, Claire Tomlin, Sergey Levine
ICML 2022
arxiv / website / talk / blog post
Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots
Katie Kang, Gregory Kahn, Sergey Levine
CoRL 2021
NeurIPS Robot Learning Workshop 2020 [Best Paper Award]
arxiv / video / website
Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight
Katie Kang*, Suneel Belkhale*, Gregory Kahn*, Pieter Abbeel, Sergey Levine
ICRA 2019
arxiv / video / code / news
Teaching
EE 126 (Probability and Random Processes), Fall 2019, Spring 2019, Fall 2018

CS 70 (Discrete Math and Probability), Summer 2018
