1. What This Paper Does
Imagine you ask a very smart assistant to solve a math word problem. If you just say "answer this," the assistant might blurt out a number without thinking it through—and often get it wrong. But if you first show the assistant a few examples of how to think step by step, suddenly it can solve much harder problems. That is the core insight of this paper.
Wei et al. introduce chain-of-thought (CoT) prompting, a remarkably simple technique: instead of giving a language model plain input-output examples in a few-shot prompt, you include intermediate reasoning steps—a "chain of thought"—in each example. The model then learns to produce its own chain of thought before arriving at an answer. No fine-tuning, no new training data, no architectural changes—just a different way of writing your prompt.
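A minimal sketch of what such a prompt looks like in practice. The exemplar below follows the arithmetic style used in the paper (the well-known tennis-ball example), but the exact wording, the `build_cot_prompt` helper, and the single-exemplar setup are illustrative choices, not the paper's verbatim prompt, which uses several exemplars:

```python
# A chain-of-thought few-shot prompt: each exemplar pairs a question with
# intermediate reasoning steps before the final answer, so the model imitates
# that step-by-step pattern on the new question. Exemplar text is illustrative.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a CoT exemplar, then leave 'A:' open for the model to complete."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
)
print(prompt)
# The prompt string would then be sent to any text-completion LLM API;
# the model is expected to produce its own reasoning chain before the answer.
```

The only difference from standard few-shot prompting is the exemplar's answer field: instead of `A: 11`, it spells out the reasoning that leads to 11.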