Large Language Models (LLMs) have demonstrated impressive abilities in symbol processing through in-context learning (ICL). This success flies in the face of decades of predictions that artificial neural networks cannot master abstract symbol manipulation. We seek to understand the mechanisms that can enable robust symbol processing in transformer networks, illuminating both the unanticipated success, and the significant limitations, of transformers in symbol processing. Borrowing insights from symbolic AI on the power of Production System architectures, we develop a high-level language, PSL, that allows us to write symbolic programs to do complex, abstract symbol processing, and create compilers that precisely implement PSL programs in transformer networks which are, by construction, 100% mechanistically interpretable. We demonstrate that PSL is Turing Universal, so the work can inform the understanding of transformer ICL in general. The type of transformer architecture that we compile from PSL programs suggests a number of paths for enhancing transformers’ capabilities at symbol processing.
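To make the production-system idea concrete, here is a minimal Python sketch of the generic match/select/act cycle that Production System architectures are built on. This is an illustration only, not the PSL language or its transformer compiler described above; the rule format and working-memory representation are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Production:
    name: str
    condition: Callable[[Set], bool]   # test over working memory
    action: Callable[[Set], Set]       # returns updated working memory

def run(productions, working_memory, max_steps=10):
    """Repeatedly fire the first production whose condition matches,
    until no production matches or the step budget is exhausted."""
    for _ in range(max_steps):
        for p in productions:
            if p.condition(working_memory):
                working_memory = p.action(working_memory)
                break
        else:                          # no rule fired: halt
            break
    return working_memory

# Toy rule set (invented for illustration): copy the input symbol
# into an output slot, then stop once the copy exists.
rules = [
    Production(
        name="copy",
        condition=lambda wm: ("input", "A") in wm and ("output", "A") not in wm,
        action=lambda wm: wm | {("output", "A")},
    ),
]

print(run(rules, {("input", "A")}))
# -> {('input', 'A'), ('output', 'A')}
```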
Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming
Large language models (LLMs) have shown the emergent capability of in-context learning (ICL). One line of research has explained ICL as functionally performing gradient descent. In this paper, we introduce a new way of diagnosing whether ICL is functionally equivalent to gradient-based learning. Our approach is based on the inverse frequency effect (IFE) – a phenomenon in which an error-driven learner is expected to show larger updates when trained on infrequent examples than on frequent ones. The IFE has previously been studied in psycholinguistics because humans show this effect in the context of structural priming (the tendency for people to produce sentence structures they have encountered recently); the IFE has been used as evidence that human structural priming must involve error-driven learning mechanisms. In our experiments, we simulated structural priming within ICL and found that LLMs display the IFE, with the effect being stronger in larger models. We conclude that ICL is indeed a type of gradient-based learning, supporting the hypothesis that a gradient component is implicitly computed in the forward pass during ICL. Our results suggest that both humans and LLMs make use of gradient-based, error-driven processing mechanisms.
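As a concrete illustration of why an error-driven learner is expected to show the IFE, here is a minimal delta-rule sketch (not the paper's model or any LLM): a frequent structure already has a high prior probability, so observing it as a prime yields a small prediction error and a small update, while an infrequent structure yields a large error and a large update. The prior probabilities and learning rate below are invented for the example.

```python
def delta_update(p_structure, lr=0.1):
    """Size of an error-driven (delta-rule) update after observing a
    prime with the given structure. p_structure is the learner's prior
    probability of producing that structure (high for frequent
    structures, low for rare ones); the observed target is 1.0."""
    error = 1.0 - p_structure       # prediction error
    return lr * error               # magnitude of the update

# Invented prior probabilities for two alternating structures:
frequent_prime = delta_update(p_structure=0.8)     # more frequent alternant
infrequent_prime = delta_update(p_structure=0.2)   # less frequent alternant

print(f"update after a frequent prime:    {frequent_prime:.3f}")    # 0.020
print(f"update after an infrequent prime: {infrequent_prime:.3f}")  # 0.080
```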
2023
What affects Priming Strength? Simulating Structural Priming Effect with PIPS
Zhenghao Zhou, and Robert Frank
In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
The Gradient Symbolic Computation (GSC) framework has been proposed as a general model of human cognitive processing (Cho et al. 2020, Smolensky et al. 2022). In this study, we use the Parallelism In Producing Syntax (PIPS) model (Brehm et al. 2022), one computational instantiation of the GSC framework, to simulate the structural priming effect (e.g., Bock 1986) in sentence production. We focus on the English dative alternation and demonstrate that the PIPS model can qualitatively reproduce the lexically independent priming effect, the lexical boost effect, and, under some conditions, the inverse frequency effect shown in humans. This demonstrates the potential of GSC as a general framework to simulate the process of human sentence production. We leave the underlying mechanisms of how priming effects arise in the PIPS model for future work.
Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best
Michael Wilson, Zhenghao Zhou, and Robert Frank
In Proceedings of the Society for Computation in Linguistics 2023, Jun 2023
Past work (Linzen et al., 2016; Goldberg, 2019, a.o.) has used the performance of neural network language models on subject-verb agreement to argue that such models possess structure-sensitive grammatical knowledge. We investigate what properties of the model or of the training regimen are implicated in such success in sequence-to-sequence transformer models that use the T5 architecture (Raffel et al., 2019; Tay et al., 2021). We find that larger models exhibit improved performance, especially in sentences with singular subjects. We also find that larger pre-training datasets are generally associated with higher performance, though models trained with less complex language (e.g., CHILDES, Simple English Wikipedia) can show more errors when trained with larger datasets. Finally, we show that a model's ability to replicate psycholinguistic results does not correspondingly improve with more parameters or more training data: none of the models we study displays a fully convincing replication of the hierarchically-informed pattern of agreement behavior observed in human experiments.
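For readers unfamiliar with the evaluation setup, the sketch below illustrates the standard minimal-pair comparison used in this line of work (in the spirit of Linzen et al., 2016): the model succeeds on an item if it scores the grammatical verb form above the ungrammatical one. This is not the paper's code; `score` is a hypothetical stand-in for a model-specific scoring function, such as the log-probability a T5-style seq2seq model assigns to a candidate verb.

```python
def agreement_accuracy(items, score):
    """Minimal-pair accuracy: fraction of items where the model scores
    the grammatical verb higher than the ungrammatical one.

    items: (prefix, correct_verb, wrong_verb) triples, e.g.
           ("The keys to the cabinet", "are", "is")
    score: score(prefix, verb) -> float, higher = more preferred
           (hypothetical stand-in for a real model's log-probability)
    """
    hits = sum(score(p, good) > score(p, bad) for p, good, bad in items)
    return hits / len(items)

# Toy usage with a dummy scorer that always prefers plural verbs;
# a real evaluation would plug in the scoring function of a trained model.
dummy_score = lambda prefix, verb: 1.0 if verb in {"are", "were"} else 0.0
items = [("The keys to the cabinet", "are", "is")]
print(agreement_accuracy(items, dummy_score))   # 1.0
```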