Herbert Zhenghao Zhou | 周正浩


📍 New Haven, CT

📧 herbert.zhou@yale.edu

🌗 computational psycholinguist

Welcome! I am a rising fourth-year PhD student in the Department of Linguistics at Yale University. I am deeply grateful to be advised by Robert Frank and Tom McCoy. I am an active member of the CLAY Lab and the Language & Brain Lab. I graduated from Washington University in St. Louis in 2022 with a B.S. in Computer Science & Mathematics and PNP (a philosophy-centric cognitive science program). I grew up in Shanghai, China.

My research interests lie at the intersection of computational linguistics and psycholinguistics, with the goal of understanding the mechanisms by which humans and AI models incrementally represent and process human language. I am particularly interested in algorithmic- and mechanistic-level interpretability: to what extent do AI models implement the processing mechanisms we find in humans, and to what extent can knowledge from AI models inform us about human cognition? I use methods from targeted behavioral evaluation, mechanistic interpretability, and human psycholinguistic experiments to draw insights from the bidirectional interaction between cognitive science and AI/NLP. See more details in the Research tab.

Outside academia, I enjoy books 📖 and coffee ☕️ (you can always find me in cafés over the weekends), music 🎼 and museums 🏛️ (I sing in the Contour A Cappella group at Yale), and biking 🚲 and hiking ⛰️ (never professionally, as I enjoy the casual flow).

news

Jun 12, 2025 I will go to the LSA Summer Institute again! So many precious memories and friendships gained in 2023 @ UMass Amherst, and I'm excited to meet some old and new friends in Eugene in July 💫!
May 15, 2025 Paper Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility accepted to ACL🎉! My awesome collaborator Miranda Zhu and I have decided to present the paper virtually. Don't hesitate to reach out if you are interested!
Feb 18, 2025 New preprint Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility on ArXiv💿! Check it out if you are curious about how LLMs perform on anaphora accessibility, inspired by the dynamic semantics framework, as well as how humans do! It was a great experience collaborating with Miranda Zhu, and many thanks to our very supportive advisors Simon and Bob!
Jan 22, 2025 Paper Is In-Context Learning a Type of Error-Driven Learning? Evidence from the Inverse Frequency Effect in Structural Priming accepted to NAACL🎉! See you in Albuquerque, New Mexico at the end of April~
Jan 13, 2025 This semester I will be the teaching fellow for Computational Psycholinguistics. Very excited to work with Tom McCoy on this newly offered course at Yale!

selected publications

  1. Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility
    Xiaomeng Zhu*, Zhenghao Zhou*, Simon Charlow, and Robert Frank
    arXiv preprint, Feb 2025
    arXiv:2502.14119 [cs]
  2. Mechanism of Symbol Processing for In-Context Learning in Transformer Networks
    Paul Smolensky, Roland Fernandez, Zhenghao Herbert Zhou, Mattia Opper, and Jianfeng Gao
    arXiv preprint, Oct 2024
    arXiv:2410.17498 [cs.AI]
  3. Is In-Context Learning a Type of Error-Driven Learning? Evidence from the Inverse Frequency Effect in Structural Priming
    Zhenghao Zhou, Robert Frank, and R. Thomas McCoy
    In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025
  4. What affects Priming Strength? Simulating Structural Priming Effect with PIPS
    Zhenghao Zhou and Robert Frank
    In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
  5. Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best
    Michael Wilson, Zhenghao Zhou, and Robert Frank
    In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023