
Dr. Sarah Chen

Associate Professor of Computer Science

Stanford University

My research focuses on natural language processing, machine learning, and the intersection of language understanding with knowledge representation.


📍 Stanford, CA
🏛 Stanford University
✉ Email · 🎓 Google Scholar · 🆔 ORCID · 💻 GitHub

Faithful Reasoning in the Age of Foundation Models

Keynote Address — ACL 2026, Dublin, Ireland — August 2026

This keynote address surveys the rapidly evolving landscape of reasoning in large language models, focusing on the critical distinction between apparent reasoning ability and genuine faithfulness. As foundation models become increasingly capable of producing fluent, step-by-step explanations for their outputs, the research community faces an urgent question: do these reasoning chains reflect the model's actual computation, or are they post-hoc rationalizations that may mask systematic errors?

Drawing on our lab's recent work on semantic entailment verification, knowledge-grounded generation, and neurosymbolic methods, I present a unified framework for thinking about faithfulness across the reasoning pipeline. The talk covers three themes: (1) how to define and measure faithfulness at the level of individual reasoning steps, (2) how structured knowledge can serve as an anchor for verifiable multi-hop reasoning, and (3) how symbolic constraints can complement neural flexibility to achieve compositional generalization without sacrificing the expressiveness that makes LLMs so powerful.
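Theme (1) can be made concrete with a toy sketch: treat a reasoning chain as step-level faithful only if each step is entailed by the premises together with the steps that precede it. The token-overlap "entailment" scorer below is a deliberately crude stand-in for a real NLI model, and the names (`entails`, `step_faithful`) are illustrative, not from the talk itself.

```python
# Toy sketch of step-level faithfulness checking.
# NOTE: `entails` is a crude token-overlap proxy, not a real
# entailment model; it only illustrates the checking loop.

def entails(premises: list[str], hypothesis: str, threshold: float = 0.5) -> bool:
    """Crude entailment proxy: fraction of hypothesis tokens
    that also appear somewhere in the premises."""
    premise_tokens = set(" ".join(premises).lower().split())
    hyp_tokens = hypothesis.lower().split()
    if not hyp_tokens:
        return True
    overlap = sum(1 for tok in hyp_tokens if tok in premise_tokens)
    return overlap / len(hyp_tokens) >= threshold

def step_faithful(context: list[str], steps: list[str]) -> list[bool]:
    """Check each reasoning step against the context plus all prior steps."""
    verdicts = []
    seen = list(context)
    for step in steps:
        verdicts.append(entails(seen, step))
        seen.append(step)  # later steps may build on this one
    return verdicts

context = ["socrates is a man", "all men are mortal"]
steps = ["socrates is a man", "therefore socrates is mortal"]
print(step_faithful(context, steps))  # → [True, True]
```

The design point is that verification is incremental: each step is judged against everything established so far, so an unsupported leap is flagged at the step where it occurs rather than only at the final answer.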


© 2026 Sarah Chen.