Large language models as Markov chains
By Ievgen Redko
Ievgen Redko will give a talk on "Large language models as Markov chains".
Abstract
Large language models (LLMs) have proven to be remarkably efficient across a wide range of natural language processing tasks and well beyond them. However, a comprehensive theoretical analysis of the origins of their impressive performance remains elusive. In this talk, I will approach this challenging task by drawing an equivalence between generic autoregressive language models with a vocabulary of size T and a context window of size K and Markov chains defined on a finite state space of size O(T^K). I will derive several surprising findings related to the existence of a stationary distribution of the Markov chains that capture the inference power of LLMs, their speed of convergence to it, and the influence of the temperature on the latter. I will then present pre-training and in-context generalization bounds and show how the drawn equivalence allowed us to enrich their interpretation. Finally, I will illustrate these theoretical guarantees with experiments on several recent LLMs to highlight how they capture the behavior observed in practice.
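To make the state-space counting in the abstract concrete, below is a minimal, self-contained Python sketch, not the construction from the talk itself: it treats a toy next-token model over a vocabulary of size T with a context window of size K as a Markov chain whose states are the non-empty contexts of length at most K, so there are T + T^2 + ... + T^K = O(T^K) of them, and then reads off a stationary distribution from the resulting transition matrix. The "LLM" here is just a softmax over fixed random logits with a temperature parameter; T, K, and TEMPERATURE are illustrative choices.

```python
# Toy illustration (assumed construction, not the speaker's): an autoregressive
# model with vocabulary size T and context window K viewed as a Markov chain
# on the set of non-empty contexts of length <= K, i.e. O(T^K) states.
import itertools
import numpy as np

T, K = 2, 3            # toy vocabulary size and context window
TEMPERATURE = 1.0      # softmax temperature of the toy next-token model

# States: all token sequences of length 1..K.
states = [s for k in range(1, K + 1)
          for s in itertools.product(range(T), repeat=k)]
index = {s: i for i, s in enumerate(states)}

def next_token_probs(context, temperature=TEMPERATURE):
    """Toy stand-in for an LLM's next-token head: softmax of fixed random logits."""
    seed = hash(context) % (2**32)
    logits = np.random.default_rng(seed).normal(size=T)
    z = np.exp(logits / temperature)
    return z / z.sum()

# Transition kernel: append a token, keep only the last K tokens.
P = np.zeros((len(states), len(states)))
for s in states:
    probs = next_token_probs(s)
    for tok in range(T):
        nxt = (s + (tok,))[-K:]
        P[index[s], index[nxt]] += probs[tok]

assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution

# Stationary distribution: left eigenvector of P for the eigenvalue closest to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

print(f"states: {len(states)} (T + T^2 + ... + T^K = {sum(T**k for k in range(1, K + 1))})")
print("stationary mass on full-length contexts:",
      round(pi[[index[s] for s in states if len(s) == K]].sum(), 3))
```

In this toy chain the next-token probabilities have full support, so contexts shorter than K are transient and the computed stationary mass concentrates on the length-K states; lowering the temperature sharpens the transition rows without changing the state space.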
Biography
Ievgen Redko is a principal research scientist at Paris Noah’s Ark Lab, where he leads the time series analysis and transfer learning team. He obtained his Ph.D. in 2015 from Sorbonne Paris Cité University (Paris North) and defended his HDR in 2022 while holding an assistant professor position at Jean Monnet University of Saint-Étienne. Previously, Ievgen was a visiting professor at Aalto University and the Finnish Center for Artificial Intelligence, working in the Intelligent Robotics group. Prior to that, he held an assistant professor position at INSA Lyon, working on fundamental machine learning with applications in health-related areas. His recent research interests broadly include in-context learning of large pre-trained models, time series analysis for forecasting and classification, and their real-world applications.