Herbert Zhenghao Zhou | 周正浩


Room 210, Dow Hall

370 Temple Street

New Haven, CT 06511

Welcome! I am a third-year PhD student in the Department of Linguistics at Yale University. I am deeply grateful to be advised by Robert Frank and Tom McCoy. I am an active member of the CLAY Lab and the Language & Brain Lab. I graduated from Washington University in St. Louis in 2022 with a B.S. in Computer Science & Mathematics and PNP (Philosophy-Neuroscience-Psychology, a philosophy-centric cognitive science program, with a concentration in linguistics). I grew up in Shanghai, China.

Broadly speaking, I am interested in the intersection of computational linguistics and psycholinguistics, with the long-term goal of formally characterizing human and machine intelligence, especially language capabilities. I have been exploring various research topics, including:

  • Cognitively/neurally plausible computational models of sentence-level language processing and production, currently focusing on modeling structural priming (or, more generally, linguistic adaptation);
  • Behavioral- and algorithmic-level understanding of the in-context learning (ICL) capabilities of large language models (LLMs), using methods from both mechanistic interpretability and neuro-compositional computation;
  • Psycholinguistic experiments on Tree-Adjoining Grammar (TAG)-based theories of sentence production.

See more details in the Research tab.

Outside academia, I enjoy books 📖 and coffee ☕️ (you can always find me in cafés over the weekends), music 🎼 and museums 🏛️ (I sing in the Contour A Cappella group at Yale), and biking 🚲 and hiking ⛰️ (though never professionally, as I enjoy the casual flow). Always excited to talk about research!

news

Oct 14, 2024 [Future] I will give a talk at the LSA 2025 Annual Meeting (January 9-12, 2025) as part of the symposium Dynamic Field Theory for unifying discrete and continuous aspects of linguistic representations. I will present a Dynamic Field Theory-based model of structural priming. Stay posted!
Sep 07, 2024 I presented my poster titled Is In-context Learning a Type of Gradient-based Learning? Diagnosing with the Inverse Frequency Effect in Structural Priming at AMLaP 2024 @ Edinburgh, UK. I talked with a lot of structural priming people, and I can’t help falling in love with Scotland 😭
Aug 28, 2024 My third year of PhD started today! And this is my first semester TAing at Yale: see you in the Neural Network Models of Linguistic Structure class!
Aug 09, 2024 I spent the past two weeks at ESSLLI 2024 at KU Leuven, Belgium. I gave a presentation titled Language Models Show Gradient Inverse Frequency Effects in Structural Priming: Implications for In-Context Learning in the LACO (Language & Computation) track of the Student Session. Inspiring courses, gorgeous cities, precious friendships! Will miss those shiny days… 🥹
Jul 27, 2024 I went to SCiL 2024 @ Irvine, CA and CogSci 2024 @ Rotterdam, Netherlands.

selected publications

  1. Mechanism of Symbol Processing for In-Context Learning in Transformer Networks
    Paul Smolensky, Roland Fernandez, Zhenghao Herbert Zhou, Mattia Opper, and Jianfeng Gao
    Oct 2024
    arXiv:2410.17498 [cs.AI]
  2. Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming
    Zhenghao Zhou, Robert Frank, and R. Thomas McCoy
    Jun 2024
    arXiv:2406.18501 [cs]
  3. What affects Priming Strength? Simulating Structural Priming Effect with PIPS
    Zhenghao Zhou and Robert Frank
    In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
  4. Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best
    Michael Wilson, Zhenghao Zhou, and Robert Frank
    In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023