Dr. Sarah Chen
Associate Professor of Computer Science
Stanford University
My research focuses on natural language processing, machine learning, and the intersection of language understanding and knowledge representation.
Talks
Building Trustworthy Language Models Through Semantic Grounding
NeurIPS 2025 Workshop on Reliable AI, December 2025
Keynote address on our lab's work bridging semantic entailment verification and large language model reasoning. Covered faithfulness metrics, grounding techniques, and practical evaluation frameworks for trustworthy generation.
From Parsing to Reasoning: The Evolution of Structured NLP
Google DeepMind Research Seminar Series, September 2025
Invited talk tracing the arc from classical semantic parsing to modern neurosymbolic approaches. Discussed how structured representations remain critical even as language models scale, and outlined open challenges in compositional generalization.
Knowledge-Grounded Language Generation
ACL 2025, July 2025
Half-day tutorial covering methods for grounding language generation in structured knowledge sources. Topics included retrieval-augmented generation, knowledge-graph-conditioned decoding, and evaluation of factual consistency.
Semantic Parsing in the Age of Large Language Models
MIT CSAIL Distinguished Lecture Series, March 2025
Colloquium exploring whether LLMs have made traditional semantic parsing obsolete. Argued that explicit semantic representations still provide crucial benefits for interpretability, compositional generalization, and formal verification of model outputs.