
Dr. Sarah Chen

Associate Professor of Computer Science

Stanford University

My research focuses on natural language processing, machine learning, and the intersection of language understanding with knowledge representation.


📍 Stanford, CA
🏛 Stanford University
✉ Email · 🎓 Google Scholar · 🆔 ORCID · 💻 GitHub

CS 224N: Natural Language Processing with Deep Learning

Winter 2024, Winter 2025, Winter 2026 — Stanford University

CS 224N is Stanford's flagship graduate course on natural language processing, offering a comprehensive introduction to the cutting-edge neural network methods that power modern NLP systems. The course covers the full pipeline from foundational representations (word vectors, contextual embeddings) through model architectures (recurrent networks, attention mechanisms, transformers) to advanced topics including pre-training, fine-tuning, and alignment of large language models.

The course is designed for graduate students and advanced undergraduates with a solid foundation in machine learning and linear algebra. Students engage with the material through five programming assignments that build progressively from implementing word2vec to fine-tuning transformer models, culminating in a substantial final research project where students tackle an open NLP problem of their choosing. Past projects have led to publications at top venues including ACL, EMNLP, and NeurIPS.
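To give a flavor of where the assignment sequence begins: the first assignment centers on the skip-gram objective with negative sampling that underlies word2vec. The following is a minimal illustrative sketch in PyTorch, not the course's starter code; the class name, variable names, and shapes are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGramNS(nn.Module):
    """Skip-gram with negative sampling, reduced to its core.

    Illustrative sketch only; hyperparameters and names are hypothetical.
    """

    def __init__(self, vocab_size: int, dim: int = 100):
        super().__init__()
        self.center = nn.Embedding(vocab_size, dim)   # "input" vectors for center words
        self.context = nn.Embedding(vocab_size, dim)  # "output" vectors for context words

    def forward(self, center_ids, context_ids, negative_ids):
        # center_ids: (B,), context_ids: (B,), negative_ids: (B, K)
        v = self.center(center_ids)        # (B, D) center-word vectors
        u_pos = self.context(context_ids)  # (B, D) true context vectors
        u_neg = self.context(negative_ids) # (B, K, D) sampled negatives

        # Maximize log sigmoid(u_pos . v) + sum_k log sigmoid(-u_neg_k . v)
        pos_score = (v * u_pos).sum(dim=-1)                        # (B,)
        neg_score = torch.bmm(u_neg, v.unsqueeze(-1)).squeeze(-1)  # (B, K)
        loss = -(F.logsigmoid(pos_score) + F.logsigmoid(-neg_score).sum(dim=-1))
        return loss.mean()
```

Keeping separate center and context embedding tables mirrors the original word2vec formulation; typically only the center table is retained as the final word vectors.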

Since taking over as lead instructor in 2024, I have expanded the curriculum to include dedicated modules on LLM reasoning, faithfulness evaluation, and responsible deployment—reflecting the rapidly evolving landscape of the field. The course regularly enrolls 400+ students and has consistently been rated among the top graduate courses in the School of Engineering.

Topics Covered

  • Word Vectors and Distributional Semantics
  • Neural Network Foundations for NLP
  • Recurrent Neural Networks and Language Modeling
  • Attention Mechanisms and the Transformer Architecture (a minimal sketch of scaled dot-product attention appears after this list)
  • Pre-Training and Transfer Learning (BERT, GPT, T5)
  • Question Answering and Reading Comprehension
  • Machine Translation and Sequence-to-Sequence Models
  • Natural Language Generation and Decoding Strategies
  • Large Language Models: Scaling, Alignment, and RLHF
  • Faithfulness, Factuality, and Trustworthy Generation
  • Multimodal NLP: Vision-Language Models
  • Ethics and Responsible NLP Deployment
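As referenced in the attention item above, the computation at the heart of the transformer unit is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a textbook sketch in PyTorch, offered only as a pointer to the material; the function name and mask convention are assumptions for this example, not course code.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    q: (..., L_q, d_k), k: (..., L_k, d_k), v: (..., L_k, d_v).
    mask (optional): positions equal to 0 are blocked from attending.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (..., L_q, L_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                       # (..., L_q, d_v)
```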
