Ivan Lee  李一帆


I am a Computer Science PhD student at UC San Diego, advised by Taylor Berg-Kirkpatrick in the BergLab. My research focuses on context compaction in language models, multi-agent research automation, and emergent alignment: alternative post-training methods that elicit cooperative behavior without explicit alignment supervision. Before returning to academia, I worked in advertising and education technology. I earned my MSc from UC San Diego and my BSc from UC Davis.


selected publications

  1. The Format Tax
     Ivan Lee, Loris D’Antoni, and Taylor Berg-Kirkpatrick
     arXiv 2026
  2. Optical Context Compression Is Just (Bad) Autoencoding
     Ivan Lee, Cheng Yang, and Taylor Berg-Kirkpatrick
     arXiv 2025
  3. Is Attention Required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability
     Ivan Lee, Nan Jiang, and Taylor Berg-Kirkpatrick
     ICLR 2024
  4. Readability ≠ Learnability: Rethinking the Role of Simplicity in Training Small Language Models
     Ivan Lee and Taylor Berg-Kirkpatrick
     COLM 2025
     Oral Spotlight (top 5.7%). Highlighted by Chris Manning: "Best thing I’ve seen at COLM 2025 so far."

honors & awards

  • Gold Reviewer Award, ICML 2026. Top 25% of reviewers.
  • Oral Spotlight, COLM 2025. Top 5.7% of accepted papers.

invited participation

  • Schmidt Sciences Trustworthy AI Convening, New Orleans. March 2026.

teaching

Teaching Assistant, UC San Diego (instructor: Taylor Berg-Kirkpatrick):

  • DSC 258R: Natural Language Processing. Spring 2026.
  • CSE 251A / 151A: Machine Learning. Winter 2026.

service

Reviewing: ACL Rolling Review (ARR) 2026; ACL Student Research Workshop 2025; COLM 2026; ICLR 2025, 2026; ICML 2026.