Herbert Zhenghao Zhou | 周正浩


Room 210, Dow Hall

370 Temple Street

New Haven, CT 06511

Welcome! I am a third-year PhD student in the Department of Linguistics at Yale University. I am deeply grateful to be advised by Robert Frank and Tom McCoy. I am an active member of the CLAY Lab and the Language & Brain Lab. I graduated from Washington University in St. Louis in 2022 with a B.S. in Computer Science & Mathematics and PNP (Philosophy-Neuroscience-Psychology, a philosophy-centric cognitive science program) with a concentration in linguistics. I grew up in Shanghai, China.

Broadly speaking, I am interested in the intersection of computational linguistics and psycholinguistics, with the long-term goal of formally characterizing human and machine intelligence, specifically language capabilities. I have been exploring various research topics, including:

  • Cognitively and neurally plausible computational models of sentence-level language processing and production, currently focusing on modeling structural priming (or, more generally, linguistic adaptation);
  • Behavioral- and algorithmic-level understanding of the in-context learning (ICL) capabilities of large language models (LLMs), using methods from both mechanistic interpretability and neuro-compositional computation;
  • Psycholinguistic experiments on Tree-Adjoining Grammar (TAG)-based theories of sentence production.

See more details in the Research tab.

Outside academia, I enjoy books 📖 and coffee ☕️ (you can always find me in cafés on the weekends), music 🎼 and museums 🏛️ (I sing in the Contour A Cappella group at Yale), and biking 🚲 and hiking ⛰️ (though never professionally, as I enjoy the casual flow). Always excited to talk about research!

news

Feb 18, 2025 New preprint Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility is now on arXiv💿! Check it out if you are curious about how LLMs perform on anaphora accessibility, inspired by the dynamic semantics framework, as well as how humans do! It was a great experience collaborating with Miranda Zhu, and many thanks to our very supportive advisors Simon and Bob!
Jan 22, 2025 Our paper Is In-Context Learning a Type of Error-Driven Learning? Evidence from the Inverse Frequency Effect in Structural Priming was accepted to NAACL🎉! See you in Albuquerque, New Mexico at the end of April~
Jan 13, 2025 This semester I will be the teaching fellow for Computational Psycholinguistics. Very excited to work with Tom McCoy on this newly offered course at Yale!
Jan 10, 2025 I gave a talk at the LSA Annual Meeting 2025 as part of the Dynamic Field Theory for unifying discrete and continuous aspects of linguistic representations symposium (hooray to everyone in the group; we all did a great job🍻). I presented a Dynamic Field Theory-based model of structural priming, titled Error-Driven Learning in DFT: A Case Study with Structural Priming.
Sep 07, 2024 I presented my poster titled Is In-context Learning a Type of Gradient-based Learning? Diagnosing with the Inverse Frequency Effect in Structural Priming at AMLaP 2024 @ Edinburgh, UK. I talked with a lot of structural priming people, and I couldn't help falling in love with Scotland 😭

selected publications

  1. Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility
    Xiaomeng Zhu*, Zhenghao Zhou*, Simon Charlow, and Robert Frank
    Feb 2025
    arXiv:2502.14119 [cs]
  2. Mechanism of Symbol Processing for In-Context Learning in Transformer Networks
    Paul Smolensky, Roland Fernandez, Zhenghao Herbert Zhou, Mattia Opper, and Jianfeng Gao
    Oct 2024
    arXiv:2410.17498 [cs.AI]
  3. Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming
    Zhenghao Zhou, Robert Frank, and R. Thomas McCoy
    Jun 2024
    arXiv:2406.18501 [cs]
  4. What affects Priming Strength? Simulating Structural Priming Effect with PIPS
    Zhenghao Zhou and Robert Frank
    In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
  5. Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best
    Michael Wilson, Zhenghao Zhou, and Robert Frank
    In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023