Simply by training on massive amounts of data, large language models (LLMs) can produce highly fluent text across many languages. While structured linguistic resources—such as treebanks—and theoretically grounded linguistic frameworks have long played a central role in advancing NLP, the rise of LLMs raises important questions about their continued utility in this new paradigm. In this talk, I will present several applications of linguistic frameworks, including visual question answering, machine translation for low-resource languages, and classifier model interpretability. These applications demonstrate the role of linguistics in the age of LLMs. I will conclude by outlining promising directions for future research, highlighting open areas in NLP where linguistic frameworks can play a critical role.
Shira Wein has been an Assistant Professor in the Computer Science department at Amherst College since fall 2024. Before starting the position at Amherst, Shira received her Ph.D. in Computer Science from Georgetown University, where she was the recipient of a Clare Boothe Luce scholarship. Shira's research is in natural language processing, with particular interests in multilingual language modeling, computational semantics, and evaluation tools. She has previously conducted research at Google, the University of Southern California, and the NASA Jet Propulsion Laboratory. In addition to research, she is passionate about teaching courses across the computer science curriculum and mentoring students.