
Dr. Sarah Chen

Associate Professor of Computer Science

Stanford University

My research focuses on natural language processing, machine learning, and the intersection of language understanding and knowledge representation.



Building Trustworthy Language Models Through Semantic Grounding

Keynote Address — NeurIPS 2025 Workshop on Reliable AI, Vancouver, Canada — December 2025

As language models are deployed in increasingly consequential settings—from medical diagnosis to legal reasoning to scientific discovery—the question of trustworthiness has shifted from an academic concern to a practical imperative. This keynote presents our lab's work on building language models that are not only capable but verifiably reliable, focusing on the role of semantic grounding as a foundation for trustworthy generation.

The talk is organized around three pillars of trustworthy language modeling. First, I present our semantic entailment verification framework, which decomposes model outputs into atomic claims and validates each against a structured evidence graph. Second, I discuss our work on knowledge-grounded generation, showing how dynamic retrieval of structured knowledge can reduce hallucination rates by over 40% without sacrificing fluency. Third, I outline a vision for interpretable reasoning chains where every step can be traced back to specific evidence, enabling human auditors to identify exactly where and why a model's reasoning fails.
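The first pillar's core idea, decomposing an output into atomic claims and validating each against a structured evidence graph, can be sketched in miniature. Everything below (the `Claim` triple representation, the `EvidenceGraph` class, and all example data) is an illustrative assumption for exposition, not the lab's actual framework or API:

```python
# Toy sketch: verify atomic claims against an evidence graph of
# (subject, relation, object) triples. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """One atomic claim extracted from a model output."""
    subject: str
    relation: str
    obj: str

class EvidenceGraph:
    """A minimal evidence graph: a set of known-true triples."""
    def __init__(self, triples):
        self.triples = set(triples)

    def supports(self, claim: Claim) -> bool:
        # A claim is supported iff its triple appears in the graph.
        return (claim.subject, claim.relation, claim.obj) in self.triples

def verify(claims, graph):
    """Partition claims into (supported, unsupported) lists."""
    supported = [c for c in claims if graph.supports(c)]
    unsupported = [c for c in claims if not graph.supports(c)]
    return supported, unsupported

# Hypothetical example data.
graph = EvidenceGraph({
    ("aspirin", "treats", "headache"),
    ("aspirin", "drug_class", "NSAID"),
})
claims = [
    Claim("aspirin", "treats", "headache"),   # grounded
    Claim("aspirin", "treats", "influenza"),  # would be flagged
]
supported, unsupported = verify(claims, graph)
```

In a real system the exact-match lookup would be replaced by entailment checking over the graph, but the structure, claims in, a per-claim verdict out, is what makes each failure traceable to a specific missing or contradicting piece of evidence.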


© 2026 Sarah Chen.