Exploring High-Level Abstract Concepts in Large Language Models

Graduate Students: Mohamed Abdelwahab, Soliman Ali, Michael Murray, Ahsan Kaleem

Large Language Models (LLMs) appear to be capable of inferring high-level abstract concepts from their input, and of associating those concepts in order to do their impressive work for us. As the influence of LLMs expands, it is imperative to gain insight into their decisions. One way to do that is to develop probes that detect the presence or absence of a broad set of concepts within the embeddings computed in an LLM, which is, loosely speaking, what the model is "thinking" about. Such probes should be low-cost and easily applicable to any LLM, so that monitoring for many concepts is possible during normal operation.
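To make the idea concrete, the sketch below shows one minimal way such a probe could be built: extract a chosen layer's embeddings from an off-the-shelf model and fit a linear classifier on them. This is an illustrative sketch under stated assumptions, not the code used in this work; the model name "gpt2", the mean-pooling step, and the helper names are placeholders.

```python
# Minimal sketch of a linear probe on one layer's embeddings.
# Assumptions (not from the project description): a HuggingFace model
# ("gpt2" as a stand-in), mean-pooled token embeddings, and labelled
# texts where the concept is present (1) or absent (0).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def layer_embedding(text: str, layer: int) -> torch.Tensor:
    """Mean-pool the hidden states of one layer for a single text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding layer; layer i is hidden_states[i]
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

def train_probe(texts, labels, layer: int) -> LogisticRegression:
    """Fit a linear classifier (the probe) on one layer's embeddings."""
    X = torch.stack([layer_embedding(t, layer) for t in texts]).numpy()
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X, labels)
    return probe
```

Because the probe is just a linear classifier over a fixed embedding, it is cheap to train and to run, which is what makes monitoring many concepts at once plausible.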

In this research, we take the first steps toward the capability of creating many such probes by defining and executing examples of the key tasks needed. First, we carefully delineate a concept by creating a dataset of examples in which the concept is present and examples in which it is absent. Second, we train and test a set of linear probes to detect the concept at any layer of an LLM, including an exploration of how complex a probe is needed. Finally, we show that such probes can track concepts across larger contexts. We carry this process out for four separate concepts and three different LLMs. When the process is scaled to many more concepts, it will make it possible to monitor new models easily.
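The final step, tracking a concept across a larger context, could be sketched as below: the trained probe is applied at every token position, yielding a per-token trace of the concept's presence. This builds on the sketch above (reusing `model`, `tokenizer`, and a trained `probe`); applying a probe trained on pooled embeddings to individual token positions is an illustrative simplification, not necessarily the exact procedure used in this work.

```python
# Hedged sketch: score every token position of a longer input with a
# trained probe, to trace where in the context a concept registers.
# `model`, `tokenizer`, and `probe` come from the previous sketch.
import torch

def concept_trace(text: str, probe, layer: int):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    hs = out.hidden_states[layer].squeeze(0)   # (seq_len, hidden_dim)
    # Probability that the concept is present at each token position
    return probe.predict_proba(hs.numpy())[:, 1]
```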


Contact: Jonathan.Rose@utoronto.ca

The Edward S. Rogers Sr. Department of Electrical and Computer Engineering,

Faculty of Applied Science and Engineering, University of Toronto