Research
I am broadly interested in better understanding the inner workings of deep learning. My recent research has focused on characterizing and controlling how LLMs generalize.
What Do Learning Dynamics Reveal About Generalization in LLM Reasoning?
Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, Aviral Kumar
In Submission
arxiv / code

Unfamiliar Finetuning Examples Control How Language Models Hallucinate
Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
NAACL 2025
arxiv / code

Deep Neural Networks Tend To Extrapolate Predictably
Katie Kang, Amrith Setlur, Claire Tomlin, Sergey Levine
ICLR 2024
arxiv / code

Multi-Task Imitation Learning for Linear Dynamical Systems
Thomas Zhang*, Katie Kang*, Bruce Lee*, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni
L4DC 2023
arxiv

Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control
Katie Kang, Paula Gradu, Jason Choi, Michael Janner, Claire Tomlin, Sergey Levine
ICML 2022
arxiv / website / talk / blog post

Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots
Katie Kang, Gregory Kahn, Sergey Levine
CoRL 2021
NeurIPS Robot Learning Workshop 2020 [Best Paper Award]
arxiv / video / website

Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight
Katie Kang*, Suneel Belkhale*, Gregory Kahn*, Pieter Abbeel, Sergey Levine
ICRA 2019
arxiv / video / code / news