
    Let Claude think (chain of thought prompting) to increase performance

    While these tips apply broadly to all Claude models, you can find prompting tips specific to extended thinking models in the extended thinking tips guide.

    When faced with complex tasks like research, analysis, or problem-solving, giving Claude space to think can dramatically improve its performance. This technique, known as chain of thought (CoT) prompting, encourages Claude to break down problems step-by-step, leading to more accurate and nuanced outputs.

    Before implementing CoT

    Why let Claude think?

    • Accuracy: Stepping through problems reduces errors, especially in math, logic, analysis, or generally complex tasks.
    • Coherence: Structured thinking leads to more cohesive, well-organized responses.
    • Debugging: Seeing Claude's thought process helps you pinpoint where prompts may be unclear.

    Why not let Claude think?

    • Increased output length may impact latency.
    • Not all tasks require in-depth thinking. Use CoT judiciously to ensure the right balance of performance and latency.
    Use CoT for tasks that a human would need to think through, like complex math, multi-step analysis, writing complex documents, or decisions with many factors.

    How to prompt for thinking

    The chain of thought techniques below are ordered from least to most complex. Less complex methods take up less space in the context window, but are also generally less powerful.

    CoT tip: Always have Claude output its thinking. Without outputting its thought process, no thinking occurs!
    • Basic prompt: Include "Think step-by-step" in your prompt.
      • Lacks guidance on how to think, which is not ideal if the task is very specific to your app, use case, or organization.

    • Guided prompt: Outline specific steps for Claude to follow in its thinking process.
      • Lacks structure that makes it easy to strip out the thinking and separate it from the final answer.

    • Structured prompt: Use XML tags like <thinking> and <answer> to separate reasoning from the final answer.

    Examples
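
    Below is a minimal sketch of the structured approach using the Python SDK. The task, prompt wording, and model ID are illustrative placeholders rather than the documentation's own example; the pattern to note is asking for reasoning in <thinking> tags and the answer in <answer> tags, then stripping the thinking out afterwards.

    ```python
    import re
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Structured CoT: request reasoning in <thinking> tags and the final
    # answer in <answer> tags so the two can be separated afterwards.
    prompt = """You're a financial advisor. A client can put $10,000 into
    fund A (higher historical return, high volatility) or fund B (lower
    return, low volatility), and needs the money in 5 years.

    Think through the trade-offs step by step in <thinking> tags, then give
    your recommendation in <answer> tags."""

    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID; substitute the model you're targeting
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )

    text = response.content[0].text

    # Keep only the final answer; the <thinking> block can be logged for debugging.
    match = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    print(match.group(1).strip() if match else text)
    ```

    The same separation works with a guided prompt: list the specific steps Claude should reason through inside the prompt, and still ask for the final result inside <answer> tags so it is easy to extract.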


    Prompt library

    Get inspired by a curated selection of prompts for various tasks and use cases.

    GitHub prompting tutorial

    An example-filled tutorial that covers the prompt engineering concepts found in our docs.

    Google Sheets prompting tutorial

    A lighter-weight version of our prompt engineering tutorial via an interactive spreadsheet.
