
Dr. Sarah Chen

Associate Professor of Computer Science

Stanford University

My research focuses on natural language processing, machine learning, and the intersection of language understanding with knowledge representation.


📍 Stanford, CA
🏛 Stanford University
✉ Email
🎓 Google Scholar
🆔 ORCID
💻 GitHub

Semantic Parsing in the Age of Large Language Models

Invited Talk — Stanford AI Seminar Series, Stanford, CA — April 2025

With the rise of large language models that can generate plausible answers to virtually any question, a provocative question has emerged in the NLP community: is semantic parsing still relevant? This talk argues emphatically that it is—and that the need for explicit semantic representations has never been greater. As language models are deployed in settings that require formal correctness (database queries, API calls, robotic instructions), the gap between natural language fluency and formal precision becomes a critical failure mode.

I present a retrospective on how the field of semantic parsing has evolved over the past decade, from feature-rich log-linear models to neural sequence-to-sequence approaches to today's LLM-based systems. I then discuss three areas where explicit semantic representations provide crucial advantages that LLMs alone cannot match: interpretability (every output can be formally verified), compositional generalization (grammar constraints prevent novel-but-invalid combinations), and formal verification (outputs can be statically checked against specifications before execution). The talk concludes with our vision for hybrid systems that leverage LLMs for understanding and symbolic parsers for precision.
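The hybrid vision sketched above (an LLM proposes, a symbolic layer verifies) can be illustrated with a toy example. The sketch below is an assumption on my part, not the speaker's system: `propose_candidates` stands in for a real LLM call, and the "grammar" is a deliberately tiny query language checked with a regular expression plus a schema lookup, so that fluent-but-invalid outputs are rejected before execution.

```python
import re

# Toy schema for a tiny query language (hypothetical, for illustration only):
#   query := "SELECT" field "FROM" table ("WHERE" field "=" "'" value "'")?
FIELDS = {"name", "year", "venue"}
TABLES = {"talks", "papers"}

QUERY_RE = re.compile(r"^SELECT (\w+) FROM (\w+)(?: WHERE (\w+) = '([^']*)')?$")

def is_valid(candidate: str) -> bool:
    """Statically check a candidate query against the grammar and schema
    before it is ever executed: the 'symbolic parser for precision' step."""
    m = QUERY_RE.match(candidate)
    if not m:
        return False  # not even grammatical
    field, table, where_field, _ = m.groups()
    if table not in TABLES or field not in FIELDS:
        return False  # grammatical but violates the schema
    return where_field is None or where_field in FIELDS

def first_valid(candidates):
    """Return the first candidate that passes verification, else None."""
    return next((c for c in candidates if is_valid(c)), None)

# An LLM might produce fluent but ill-formed proposals; only valid ones survive.
proposals = [
    "SELECT the venue FROM talks",                  # fluent English, not grammatical
    "SELECT venue FROM seminars",                   # unknown table
    "SELECT venue FROM talks WHERE year = '2025'",  # well-formed and schema-valid
]
print(first_valid(proposals))  # → SELECT venue FROM talks WHERE year = '2025'
```

In a real system the verifier would be a full grammar-constrained parser or a static type checker over the target formalism, but the division of labor is the same: the language model supplies candidates, and only formally verified outputs reach the database, API, or robot.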


© 2026 Sarah Chen.