Herbert Zhenghao Zhou | 周正浩

Room 210, Dow Hall
370 Temple Street
New Haven, CT 06511
Welcome! I am a third-year PhD student in the Department of Linguistics at Yale University. I am deeply grateful to be advised by Robert Frank and Tom McCoy. I am an active member of the CLAY Lab and the Language & Brain Lab. I graduated from Washington University in St. Louis in 2022 with a B.S. in Computer Science & Mathematics and PNP (Philosophy-Neuroscience-Psychology, a philosophy-centric cognitive science program, with a linguistics concentration). I grew up in Shanghai, China.
Broadly speaking, I am interested in the intersection of computational linguistics and psycholinguistics, with the long-term goal of formally characterizing human and machine intelligence, specifically their language capabilities. I have been exploring various research topics, including:
- Cognitively / neurally plausible computational models of sentence-level language processing and production, currently focusing on modeling structural priming (or more generally, linguistic adaptation);
- Behavioral- and algorithmic-level understanding of the in-context learning (ICL) capabilities of large language models (LLMs), with methods from both mechanistic interpretability and neuro-compositional computation;
- Psycholinguistic experiments on Tree-Adjoining Grammar (TAG)-based theories of sentence production.
See more details in the Research tab.
Outside academia, I enjoy books 📖 and coffee ☕️ (you can always find me in cafés over the weekends), music 🎼 and museums 🏛️ (I sing in the Contour A Cappella group at Yale), biking 🚲 and hiking ⛰️ (never professionally, as I enjoy the casual flow), etc. Always excited to talk about research!
news
| Date | News |
| --- | --- |
| Feb 18, 2025 | New preprint Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility is on arXiv 💿! Check it out if you are curious about how LLMs perform on anaphora accessibility, inspired by the dynamic semantics framework, and how humans do, too! It was a great experience collaborating with Miranda Zhu, and many thanks to our very supportive advisors Simon and Bob! |
| Jan 22, 2025 | Paper Is In-Context Learning a Type of Error-Driven Learning? Evidence from the Inverse Frequency Effect in Structural Priming accepted to NAACL 🎉! See you in Albuquerque, New Mexico at the end of April~ |
| Jan 13, 2025 | This semester I will be the teaching fellow for Computational Psycholinguistics. Very excited to work with Tom McCoy on this newly offered course at Yale! |
| Jan 10, 2025 | I gave a talk at the LSA Annual Meeting 2025 as part of the symposium Dynamic Field Theory for Unifying Discrete and Continuous Aspects of Linguistic Representations (hooray to everyone in the group, we all did a great job 🍻). I presented a Dynamic Field Theory-based model of structural priming, titled Error-Driven Learning in DFT: A Case Study with Structural Priming. |
| Sep 07, 2024 | I presented my poster titled Is In-context Learning a Type of Gradient-based Learning? Diagnosing with the Inverse Frequency Effect in Structural Priming at AMLaP 2024 in Edinburgh, UK. I talked with many structural priming researchers, and I can't help falling in love with Scotland 😭 |