Publications
Publications by category in reverse chronological order, generated by jekyll-scholar.
An asterisk (*) indicates equal contribution.
2025
- Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility. Xiaomeng Zhu*, Zhenghao Zhou*, Simon Charlow, and Robert Frank. Feb 2025. arXiv:2502.14119 [cs]
We present a hierarchy of natural language understanding abilities and argue for the importance of moving beyond assessments of understanding at the lexical and sentence levels to the discourse level. We propose the task of anaphora accessibility as a diagnostic for assessing discourse understanding, and to this end, present an evaluation dataset inspired by theoretical research in dynamic semantics. We evaluate human and LLM performance on our dataset and find that LLMs and humans align on some tasks and diverge on others. Such divergence can be explained by LLMs’ reliance on specific lexical items during language comprehension, in contrast to human sensitivity to structural abstractions.
@article{zhu2025meaning,
  title = {Meaning {Beyond} {Truth} {Conditions}: {Evaluating} {Discourse} {Level} {Understanding} via {Anaphora} {Accessibility}},
  shorttitle = {Evaluating {Discourse} {Level} {Understanding} via {Anaphora} {Accessibility}},
  url = {https://arxiv.org/abs/2502.14119},
  doi = {10.48550/arXiv.2502.14119},
  urldate = {2025-02-19},
  publisher = {arXiv},
  author = {Zhu, Xiaomeng and Zhou, Zhenghao and Charlow, Simon and Frank, Robert},
  month = feb,
  year = {2025},
  note = {arXiv:2502.14119 [cs]},
}
2024
- Mechanism of Symbol Processing for In-Context Learning in Transformer Networks. Paul Smolensky, Roland Fernandez, Zhenghao Herbert Zhou, Mattia Opper, and Jianfeng Gao. Oct 2024. arXiv:2410.17498 [cs.AI]
Large Language Models (LLMs) have demonstrated impressive abilities in symbol processing through in-context learning (ICL). This success flies in the face of decades of predictions that artificial neural networks cannot master abstract symbol manipulation. We seek to understand the mechanisms that can enable robust symbol processing in transformer networks, illuminating both the unanticipated success, and the significant limitations, of transformers in symbol processing. Borrowing insights from symbolic AI on the power of Production System architectures, we develop a high-level language, PSL, that allows us to write symbolic programs to do complex, abstract symbol processing, and create compilers that precisely implement PSL programs in transformer networks which are, by construction, 100% mechanistically interpretable. We demonstrate that PSL is Turing Universal, so the work can inform the understanding of transformer ICL in general. The type of transformer architecture that we compile from PSL programs suggests a number of paths for enhancing transformers’ capabilities at symbol processing.
@misc{smolensky_tgt_2024,
  title = {Mechanism of {Symbol} {Processing} for {In}-{Context} {Learning} in {Transformer} {Networks}},
  shorttitle = {Mechanisms of {Symbol} {Processing} in {Transformers}},
  url = {https://arxiv.org/abs/2410.17498},
  doi = {10.48550/arXiv.2410.17498},
  urldate = {2024-10-23},
  publisher = {arXiv},
  author = {Smolensky, Paul and Fernandez, Roland and Zhou, Zhenghao Herbert and Opper, Mattia and Gao, Jianfeng},
  month = oct,
  year = {2024},
  note = {arXiv:2410.17498 [cs.AI]},
}
- Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming. Zhenghao Zhou, Robert Frank, and R. Thomas McCoy. Jun 2024. arXiv:2406.18501 [cs]
Large language models (LLMs) have shown the emergent capability of in-context learning (ICL). One line of research has explained ICL as functionally performing gradient descent. In this paper, we introduce a new way of diagnosing whether ICL is functionally equivalent to gradient-based learning. Our approach is based on the inverse frequency effect (IFE) – a phenomenon in which an error-driven learner is expected to show larger updates when trained on infrequent examples than frequent ones. The IFE has previously been studied in psycholinguistics because humans show this effect in the context of structural priming (the tendency for people to produce sentence structures they have encountered recently); the IFE has been used as evidence that human structural priming must involve error-driven learning mechanisms. In our experiments, we simulated structural priming within ICL and found that LLMs display the IFE, with the effect being stronger in larger models. We conclude that ICL is indeed a type of gradient-based learning, supporting the hypothesis that a gradient component is implicitly computed in the forward pass during ICL. Our results suggest that both humans and LLMs make use of gradient-based, error-driven processing mechanisms.
@misc{zhou_is_2024,
  title = {Is {In}-{Context} {Learning} a {Type} of {Gradient}-{Based} {Learning}? {Evidence} from the {Inverse} {Frequency} {Effect} in {Structural} {Priming}},
  shorttitle = {Is {In}-{Context} {Learning} a {Type} of {Gradient}-{Based} {Learning}?},
  url = {http://arxiv.org/abs/2406.18501},
  doi = {10.48550/arXiv.2406.18501},
  urldate = {2024-09-02},
  publisher = {arXiv},
  author = {Zhou, Zhenghao and Frank, Robert and McCoy, R. Thomas},
  month = jun,
  year = {2024},
  note = {arXiv:2406.18501 [cs]},
}
2023
- What affects Priming Strength? Simulating Structural Priming Effect with PIPS. Zhenghao Zhou and Robert Frank. In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
The Gradient Symbolic Computation (GSC) framework has been proposed as a general model of human cognitive processing (Cho et al. 2020, Smolensky et al. 2022). In this study, we use the Parallelism In Producing Syntax (PIPS) model (Brehm et al. 2022), one computational instantiation of the GSC framework, to simulate the structural priming effect (e.g., Bock 1986) in sentence production. We focus on English dative alternation and demonstrate that the PIPS model can qualitatively reproduce the lexically independent priming effect, the lexical boost effect, and under some conditions, the inverse frequency effect shown in humans. This demonstrates the potential of GSC as a general framework to simulate the process of human sentence production. We leave the question of how priming effects arise in the PIPS model for future work.
@inproceedings{zhou_what_2023,
  title = {What affects {Priming} {Strength}? {Simulating} {Structural} {Priming} {Effect} with {PIPS}},
  shorttitle = {What affects {Priming} {Strength}?},
  author = {Zhou, Zhenghao and Frank, Robert},
  booktitle = {Proceedings of the {Society} for {Computation} in {Linguistics} 2023},
  month = jun,
  year = {2023},
  volume = {6},
  pages = {413--417},
  address = {Amherst, MA},
  url = {https://openpublishing.library.umass.edu/scil/article/id/947/},
  doi = {10.7275/s0rv-1p15},
  editor = {Hunter, Tim and Prickett, Brandon},
  urldate = {2024-09-02},
}
- Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best. Michael Wilson, Zhenghao Zhou, and Robert Frank. In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
Past work (Linzen et al., 2016; Goldberg, 2019, a.o.) has used the performance of neural network language models on subject-verb agreement to argue that such models possess structure-sensitive grammatical knowledge. We investigate what properties of the model or of the training regimen are implicated in such success in sequence to sequence transformer models that use the T5 architecture (Raffel et al., 2019; Tay et al., 2021). We find that larger models exhibit improved performance, especially in sentences with singular subjects. We also find that larger pre-training datasets are generally associated with higher performance, though models trained with less complex language (e.g., CHILDES, Simple English Wikipedia) can show more errors when trained with larger datasets. Finally, we show that a model's ability to replicate psycholinguistic results does not correspondingly improve with more parameters or more training data: none of the models we study displays a fully convincing replication of the hierarchically-informed pattern of agreement behavior observed in human experiments.
@inproceedings{wilson_subject-verb_2023,
  title = {Subject-verb agreement with {Seq2Seq} transformers: {Bigger} is better, but still not best},
  shorttitle = {Subject-verb agreement with {Seq2Seq} transformers},
  author = {Wilson, Michael and Zhou, Zhenghao and Frank, Robert},
  booktitle = {Proceedings of the {Society} for {Computation} in {Linguistics} 2023},
  month = jun,
  year = {2023},
  pages = {278--288},
  address = {Amherst, MA},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2023.scil-1.24},
  doi = {10.7275/d5gb-v650},
  editor = {Hunter, Tim and Prickett, Brandon},
  urldate = {2024-09-02},
  keywords = {subject-verb agreement, transformer language models, sequence to sequence models, agreement attraction},
}