
What Are Reasoning Models and When to Use Them

Reasoning models are AI models that spend extra computation time thinking through a problem before generating a response. Unlike standard chat models that produce answers immediately, reasoning models work through intermediate steps internally, which makes them significantly more accurate on math, logic, multi-step analysis, and complex decision-making tasks. They cost more and respond more slowly, but they get hard problems right where cheaper models fail.

How Reasoning Models Differ From Chat Models

A standard chat model like GPT-4.1-mini generates each word of its response one after another, choosing the most likely next word based on the conversation so far. This works well for most tasks, but it means the model cannot step back and reconsider its approach partway through a complex problem.

Reasoning models add an extra step before generating the visible response. They first produce a chain of thought, working through the problem step by step in an internal reasoning process. This means they can break complex problems into parts, check their work, consider alternative approaches, and arrive at more reliable answers. The trade-off is that this extra computation takes more time and costs more tokens.

Available Reasoning Models

GPT o3-mini

The primary reasoning model on the platform. GPT o3-mini is built specifically for tasks requiring logical thinking and multi-step problem solving. It is slower than chat models and costs more per request, but it achieves substantially higher accuracy on tasks involving math, data analysis, code debugging, and complex business logic.

Cost note: Reasoning models use significantly more tokens per request because of their internal thinking process. A single reasoning request might use 5 to 10 times more tokens than the same question answered by a chat model. Check the pricing comparison to understand the cost difference.
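As a rough illustration of that multiplier, here is a back-of-envelope sketch. The 5x-10x range comes from the cost note above; the 500-token figure is a made-up example, not platform pricing:

```python
def estimate_reasoning_tokens(chat_tokens: int, multiplier: int) -> int:
    """Estimate the tokens a reasoning model might use for a request
    that a chat model answers in `chat_tokens` tokens.

    `multiplier` reflects the 5-10x range cited in the cost note;
    actual usage varies by task, so treat this as a rough check only.
    """
    return chat_tokens * multiplier

# A question a chat model answers in 500 tokens might consume
# 2,500 to 5,000 tokens on a reasoning model:
low = estimate_reasoning_tokens(500, multiplier=5)    # 2500
high = estimate_reasoning_tokens(500, multiplier=10)  # 5000
```

Multiply those token counts by the per-token price of each model (see the pricing comparison) to see the real cost gap for your workload.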

When to Use a Reasoning Model

Reach for a reasoning model when getting a hard problem right matters more than speed or cost: math and quantitative analysis, data analysis that requires multi-step logic, code debugging, and complex business logic or decision-making. These are the tasks where the internal chain of thought pays for itself in accuracy.

When NOT to Use a Reasoning Model

Skip reasoning models for work a standard chat model already handles well: simple classification or routing, drafting routine text where natural writing matters more than logic, and high-volume or latency-sensitive steps where speed and cost dominate. Paying reasoning-model prices for these tasks adds delay and expense without improving results.

Using Reasoning Models in Workflows

The most cost-effective approach is to use reasoning models selectively within a larger workflow. For example, a workflow might use GPT-4.1-nano to classify incoming support tickets (cheap, fast), then route complex technical questions to a reasoning model for analysis (accurate, thorough), and use GPT-4.1-mini to draft the final response (natural writing). This way you only pay reasoning-model prices for the steps that actually need it.
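The routing pattern above can be sketched in plain Python. The model names are the ones used in this guide, and the plan is just a list of (step, model) pairs; how each step is actually executed depends on your platform's API, so treat this as a shape, not a working integration:

```python
def plan_workflow(is_complex: bool) -> list[tuple[str, str]]:
    """Build the (step, model) plan for one support ticket.

    A cheap model classifies every ticket, the reasoning model is
    invoked only for complex technical questions, and a mid-tier
    chat model drafts the final response. This keeps reasoning-model
    costs limited to the steps that actually need the accuracy.
    """
    steps = [("classify", "GPT-4.1-nano")]  # cheap, fast triage
    if is_complex:
        steps.append(("analyze", "GPT o3-mini"))  # accurate, thorough
    steps.append(("draft", "GPT-4.1-mini"))  # natural writing
    return steps

# Simple tickets never touch the reasoning model:
simple_plan = plan_workflow(is_complex=False)
# [('classify', 'GPT-4.1-nano'), ('draft', 'GPT-4.1-mini')]

complex_plan = plan_workflow(is_complex=True)
# [('classify', 'GPT-4.1-nano'), ('analyze', 'GPT o3-mini'),
#  ('draft', 'GPT-4.1-mini')]
```

In a real workflow, the "complex" signal would come from the classification step itself rather than being passed in by hand.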

You can configure which model to use at each step of a chain command workflow, mixing cheap models for simple steps with reasoning models for the hard parts.

Try reasoning models on the platform. See how they handle your most complex business questions.

Get Started Free