Publications
A list of peer-reviewed abstracts, conference proceedings, preprints, and journal articles. An asterisk (*) indicates equal contribution.
2025
- Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility. Xiaomeng Zhu*, Zhenghao Zhou*, Simon Charlow, and Robert Frank. arXiv preprint arXiv:2502.14119 [cs], Feb 2025.
We present a hierarchy of natural language understanding abilities and argue for the importance of moving beyond assessments of understanding at the lexical and sentence levels to the discourse level. We propose the task of anaphora accessibility as a diagnostic for assessing discourse understanding, and to this end, present an evaluation dataset inspired by theoretical research in dynamic semantics. We evaluate human and LLM performance on our dataset and find that LLMs and humans align on some tasks and diverge on others. Such divergence can be explained by LLMs’ reliance on specific lexical items during language comprehension, in contrast to human sensitivity to structural abstractions.
@inproceedings{zhu2025meaning,
  title = {Meaning {Beyond} {Truth} {Conditions}: {Evaluating} {Discourse} {Level} {Understanding} via {Anaphora} {Accessibility}},
  shorttitle = {Evaluating {Discourse} {Level} {Understanding} via {Anaphora} {Accessibility}},
  url = {https://arxiv.org/abs/2502.14119},
  doi = {10.48550/arXiv.2502.14119},
  urldate = {2025-02-19},
  publisher = {arXiv},
  author = {Zhu, Xiaomeng and Zhou, Zhenghao and Charlow, Simon and Frank, Robert},
  month = feb,
  year = {2025},
  note = {arXiv:2502.14119 [cs]},
}
- Is In-Context Learning a Type of Error-Driven Learning? Evidence from the Inverse Frequency Effect in Structural Priming. Zhenghao Zhou, Robert Frank, and R. Thomas McCoy. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025.
Large language models (LLMs) have shown the emergent capability of in-context learning (ICL). One line of research has claimed that ICL is functionally equivalent to gradient descent, a type of error-driven learning mechanism. In this paper, we introduce a new way of diagnosing whether ICL is functionally performing error-driven learning. Our approach is based on the inverse frequency effect (IFE), a phenomenon in which an agent’s behavior is influenced to a greater degree when presented with improbable examples as compared to more likely ones. The IFE has previously been identified in psycholinguistics, where humans exhibit the IFE in the context of structural priming (the tendency for people to produce sentence structures they have encountered recently). In that context, the IFE has been used as evidence that human structural priming must involve error-driven learning mechanisms. In our experiments, we simulated structural priming with ICL and found that LLMs indeed display the IFE, with the effect being stronger in larger models. We conclude that at least in the case we studied, ICL is indeed a type of error-driven learning, supporting the hypothesis that an error signal is implicitly computed in the forward pass during ICL. Our results suggest that both humans and LLMs make use of error-driven processing mechanisms in on-line processing.
@inproceedings{zhou-etal-2025-context,
  title = {Is {In}-{Context} {Learning} a {Type} of {Error}-{Driven} {Learning}? {Evidence} from the {Inverse} {Frequency} {Effect} in {Structural} {Priming}},
  author = {Zhou, Zhenghao and Frank, Robert and McCoy, R. Thomas},
  editor = {Chiruzzo, Luis and Ritter, Alan and Wang, Lu},
  booktitle = {{Proceedings} of the 2025 {Conference} of the {Nations} of the {Americas} {Chapter} of the {Association} for {Computational} {Linguistics}: {Human} {Language} {Technologies} ({Volume 1}: {Long} {Papers})},
  month = apr,
  year = {2025},
  address = {Albuquerque, New Mexico},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2025.naacl-long.586/},
  urldate = {2025-05-02},
  pages = {11712--11725},
  isbn = {979-8-89176-189-6},
}
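As a purely illustrative companion to the abstract above (not the paper's code, models, or stimuli), the sketch below shows one way to operationalize structural priming under ICL: compare a model's preference for a prepositional-object (PO) dative continuation with and without a PO prime in the context. The checkpoint (gpt2), the sentences, and the log-probability scoring are placeholder assumptions.

```python
# Illustrative sketch only: measure an ICL structural priming effect as a shift
# in the model's preference for a prepositional-object (PO) dative continuation.
# The checkpoint, sentences, and scoring below are assumptions for demonstration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in for the larger LLMs studied
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of token log-probabilities of `continuation` given `context`."""
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    return sum(log_probs[i, full_ids[0, i + 1]].item()
               for i in range(ctx_len - 1, full_ids.shape[1] - 1))

# Dative alternation: a PO prime, then a target preamble with PO vs. DO continuations.
prime_po = "The teacher gave a book to the student. "
target = "The chef sent"
po_cont = " a letter to the critic."
do_cont = " the critic a letter."

po_pref_primed = (continuation_logprob(prime_po + target, po_cont)
                  - continuation_logprob(prime_po + target, do_cont))
po_pref_base = (continuation_logprob(target, po_cont)
                - continuation_logprob(target, do_cont))
print("PO priming effect (log-odds shift):", po_pref_primed - po_pref_base)
```

Under this kind of setup, an inverse frequency effect would surface as a larger preference shift when the prime pairs a verb with its less probable structure.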
2024
- Mechanism of Symbol Processing for In-Context Learning in Transformer Networks. Paul Smolensky, Roland Fernandez, Zhenghao Herbert Zhou, Mattia Opper, and Jianfeng Gao. arXiv preprint arXiv:2410.17498 [cs.AI], Oct 2024.
Large Language Models (LLMs) have demonstrated impressive abilities in symbol processing through in-context learning (ICL). This success flies in the face of decades of predictions that artificial neural networks cannot master abstract symbol manipulation. We seek to understand the mechanisms that can enable robust symbol processing in transformer networks, illuminating both the unanticipated success, and the significant limitations, of transformers in symbol processing. Borrowing insights from symbolic AI on the power of Production System architectures, we develop a high-level language, PSL, that allows us to write symbolic programs to do complex, abstract symbol processing, and create compilers that precisely implement PSL programs in transformer networks which are, by construction, 100% mechanistically interpretable. We demonstrate that PSL is Turing Universal, so the work can inform the understanding of transformer ICL in general. The type of transformer architecture that we compile from PSL programs suggests a number of paths for enhancing transformers’ capabilities at symbol processing.
@inproceedings{smolensky_tgt_2024,
  title = {Mechanism of {Symbol} {Processing} for {In}-{Context} {Learning} in {Transformer} {Networks}},
  shorttitle = {Mechanisms of {Symbol} {Processing} in {Transformers}},
  url = {https://arxiv.org/abs/2410.17498},
  doi = {10.48550/arXiv.2410.17498},
  urldate = {2024-10-23},
  publisher = {arXiv},
  author = {Smolensky, Paul and Fernandez, Roland and Zhou, Zhenghao Herbert and Opper, Mattia and Gao, Jianfeng},
  month = oct,
  year = {2024},
  note = {arXiv:2410.17498 [cs.AI]},
}
2023
- What affects Priming Strength? Simulating Structural Priming Effect with PIPS. Zhenghao Zhou and Robert Frank. In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023.
The Gradient Symbolic Computation (GSC) framework has been proposed as a general model of human cognitive processing (Cho et al. 2020, Smolensky et al. 2022). In this study, we use the Parallelism In Producing Syntax (PIPS) model (Brehm et al. 2022), one computational instantiation of the GSC framework, to simulate the structural priming effect (e.g., Bock 1986) in sentence production. We focus on English dative alternation and demonstrate that the PIPS model can qualitatively reproduce the lexically independent priming effect, the lexical boost effect, and, under some conditions, the inverse frequency effect shown in humans. This demonstrates the potential of GSC as a general framework to simulate the process of human sentence production. We leave the underlying mechanisms of how priming effects arise in the PIPS model for future work.
@inproceedings{zhou_what_2023,
  title = {What affects {Priming} {Strength}? {Simulating} {Structural} {Priming} {Effect} with {PIPS}},
  shorttitle = {What affects {Priming} {Strength}?},
  author = {Zhou, Zhenghao and Frank, Robert},
  editor = {Hunter, Tim and Prickett, Brandon},
  booktitle = {Proceedings of the {Society} for {Computation} in {Linguistics} 2023},
  month = jun,
  year = {2023},
  volume = {6},
  pages = {413--417},
  address = {Amherst, MA},
  url = {https://openpublishing.library.umass.edu/scil/article/id/947/},
  doi = {10.7275/s0rv-1p15},
  urldate = {2024-09-02},
}
- Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best. Michael Wilson, Zhenghao Zhou, and Robert Frank. In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023.
Past work (Linzen et al., 2016; Goldberg, 2019, a.o.) has used the performance of neural network language models on subject-verb agreement to argue that such models possess structure-sensitive grammatical knowledge. We investigate what properties of the model or of the training regimen are implicated in such success in sequence to sequence transformer models that use the T5 architecture (Raffel et al., 2019; Tay et al., 2021). We find that larger models exhibit improved performance, especially in sentences with singular subjects. We also find that larger pre-training datasets are generally associated with higher performance, though models trained with less complex language (e.g., CHILDES, Simple English Wikipedia) can show more errors when trained with larger datasets. Finally, we show that a model's ability to replicate psycholinguistic results does not correspondingly improve with more parameters or more training data: none of the models we study displays a fully convincing replication of the hierarchically-informed pattern of agreement behavior observed in human experiments.
@inproceedings{wilson_subject-verb_2023,
  title = {Subject-verb agreement with {Seq2Seq} transformers: {Bigger} is better, but still not best},
  shorttitle = {Subject-verb agreement with {Seq2Seq} transformers},
  author = {Wilson, Michael and Zhou, Zhenghao and Frank, Robert},
  editor = {Hunter, Tim and Prickett, Brandon},
  booktitle = {Proceedings of the {Society} for {Computation} in {Linguistics} 2023},
  month = jun,
  year = {2023},
  pages = {278--288},
  address = {Amherst, MA},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2023.scil-1.24},
  doi = {10.7275/d5gb-v650},
  urldate = {2024-09-02},
  keywords = {subject-verb agreement, transformer language models, sequence to sequence models, agreement attraction},
}
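As a minimal, hypothetical illustration of the kind of agreement probe described in the abstract above (not the paper's materials or evaluation code), the sketch below scores singular vs. plural verb fillers with a T5-style seq2seq model using its span-infilling format. The checkpoint (t5-small), the sentence frame, and the loss-based scoring are assumptions.

```python
# Illustrative sketch only: probe subject-verb agreement in a T5-style seq2seq
# model by scoring verb fillers for a masked span. The checkpoint, frame, and
# loss-based scoring are assumptions, not the paper's setup.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

def filler_score(frame: str, filler: str) -> float:
    """Higher is better: negated total loss for producing the filler in the masked span."""
    inputs = tok(frame, return_tensors="pt")
    labels = tok(f"<extra_id_0> {filler} <extra_id_1>", return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean cross-entropy per label token
    return -loss.item() * labels.shape[1]  # approximate total log-probability

# Agreement-attraction style frame: a plural attractor between subject and verb.
frame = "The key to the cabinets <extra_id_0> on the table."
print("is:", filler_score(frame, "is"), "are:", filler_score(frame, "are"))
```

An evaluation of the sort reported in the paper would sweep such frames over many items, model sizes, and training sets; this snippet only shows the scoring idea.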