AI Model Pricing Comparison: Cost Per Token Explained

AI model pricing is based on tokens, which are the basic units of text that models process. On this platform, costs are expressed in credits (1 credit = $0.001, so 1,000 credits = $1). The cheapest models cost under 1 credit per typical request, while premium and reasoning models can cost 10 to 20 credits or more. Understanding token-based pricing helps you estimate costs and choose the right model for your budget.

What Are Tokens

Tokens are the units AI models use to process text. A token is roughly 3 to 4 characters of English text, or about three-quarters of a word, so a 100-word paragraph is approximately 130 to 140 tokens. AI models charge separately for input tokens (what you send) and output tokens (what the model generates), with output tokens typically costing 2 to 4 times as much per token as input. For a deeper explanation, see What Are Tokens and How AI Model Pricing Works.
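The characters-per-token rule of thumb above can be sketched as a quick budgeting helper. This is a heuristic only, not a real tokenizer; a library such as OpenAI's tiktoken gives exact counts.

```python
def estimate_tokens(text: str) -> int:
    """Rough token count: ~4 characters of English per token.

    A budgeting heuristic only; real tokenizers give exact counts.
    """
    return max(1, round(len(text) / 4))

# A ~100-word sample (500 characters) lands near the 130-140 token estimate.
sample = "word " * 100
print(estimate_tokens(sample))  # 125
```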

Platform Pricing by Model Tier

The platform uses a credit-based system with a 2x markup on raw API costs when using platform-provided API keys. If you connect your own API key, you pay the provider's raw cost with no platform markup on the AI model fee. All prices below are approximate credits per typical request (a few hundred tokens of input and output).
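The credit-to-dollar conversion above (1 credit = $0.001) can be captured in two small helpers; a minimal sketch for back-of-envelope math, not a billing implementation.

```python
CREDIT_USD = 0.001  # 1 credit = $0.001, per the platform's pricing

def credits_to_usd(credits: float) -> float:
    """Convert platform credits to dollars."""
    return credits * CREDIT_USD

def usd_to_credits(usd: float) -> float:
    """Convert a dollar budget to platform credits."""
    return usd / CREDIT_USD

print(credits_to_usd(1000))  # 1.0 -> 1,000 credits is $1
print(usd_to_credits(30))    # 30000.0 -> a $30 budget is 30,000 credits
```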

Cheap Tier

GPT-4.1-nano: Under 1 credit for most requests. At this price point, you can run thousands of classification or routing tasks per dollar. This is the model to use for high-volume, simple tasks where cost is the primary concern.

Mid Tier

GPT-4.1-mini: Roughly 2 to 4 credits per typical chatbot response. The most popular model for customer-facing chatbots because it balances quality and cost well.

Claude Sonnet: Comparable to GPT-4.1-mini in overall cost per request. Slightly different pricing per token but similar total cost for typical interactions.

Premium Tier

GPT-4.1: Roughly 5 to 10 credits per typical response. Use for high-quality content generation and complex analysis where mid-tier quality is not sufficient.

Claude Opus: Roughly 10 to 20 credits per typical response. The most expensive model available, but produces the highest quality output for demanding tasks.

Reasoning Tier

OpenAI o3-mini: Highly variable, roughly 5 to 25 credits per request depending on the complexity of the reasoning required. The model generates many internal tokens during its thinking process, all of which count toward the cost. Simple questions cost less; complex multi-step problems cost more.

Important: These are approximate costs for typical requests. Actual costs vary based on the length of your system prompt, conversation history, knowledge base results included, and the length of the model's response. Longer conversations with more context cost more per message.
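For budgeting in code, the ranges above can be collected into an illustrative lookup table. The 0.1-credit lower bound for GPT-4.1-nano is an assumption (the text only says "under 1 credit"), and all figures are approximate per-request costs, not per-token rates.

```python
# Approximate credits per typical request, taken from the tiers above.
# Ranges are (low, high); actual costs vary with context length.
APPROX_CREDITS = {
    "gpt-4.1-nano":  (0.1, 1),   # cheap tier (lower bound assumed)
    "gpt-4.1-mini":  (2, 4),     # mid tier
    "claude-sonnet": (2, 4),     # mid tier, comparable to gpt-4.1-mini
    "gpt-4.1":       (5, 10),    # premium tier
    "claude-opus":   (10, 20),   # premium tier
    "o3-mini":       (5, 25),    # reasoning tier, highly variable
}

def midpoint_credits(model: str) -> float:
    """Midpoint of a model's approximate range, for rough estimates."""
    low, high = APPROX_CREDITS[model]
    return (low + high) / 2

print(midpoint_credits("gpt-4.1-mini"))  # 3.0
```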

Input vs Output Token Pricing

All AI providers charge different rates for input tokens and output tokens. Input tokens include everything you send to the model: the system prompt, conversation history, RAG results from your knowledge base, and the user's message. Output tokens are the model's response. Since output tokens cost more, tasks that generate long responses cost proportionally more than tasks with short answers.

This has practical implications: a chatbot with a long, detailed system prompt costs more per message than one with a concise prompt, even if the responses are the same length. Optimizing your system prompt length is one of the easiest ways to reduce AI costs.
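The effect of prompt length can be sketched with a simple cost function. The per-1,000-token rates below are hypothetical, chosen only to show the pattern: same response length, different input sizes, different costs.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Cost of one request in credits, with separate input/output rates.

    Rates are credits per 1,000 tokens; the output rate is typically
    2-4x the input rate.
    """
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# Same 500-token response, long vs short system prompt (hypothetical rates):
long_prompt = request_cost(input_tokens=2000, output_tokens=500,
                           input_rate=1.0, output_rate=3.0)
short_prompt = request_cost(input_tokens=500, output_tokens=500,
                            input_rate=1.0, output_rate=3.0)
print(long_prompt, short_prompt)  # 3.5 2.0 -- trimming the prompt saves per message
```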

Platform Keys vs Your Own Keys

When using the platform's built-in API keys, a 2x markup is applied to the raw AI model cost. This covers the platform's API key management, routing, failover, and support. When you connect your own OpenAI or Anthropic API key, the AI model fee passes through at raw cost with no markup. You still pay the platform's per-request software fee (1 to 10 credits depending on the feature), but the AI portion has no added margin.

For low-volume users, platform keys are simpler because you do not need to manage separate API accounts. For high-volume users, connecting your own key can roughly halve the AI model portion of your costs, since the 2x markup no longer applies.
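The trade-off can be sketched per request, assuming a hypothetical raw model cost and software fee (both within the ranges stated above). The total below counts the model fee either way, whether it is billed through the platform or paid to the provider directly.

```python
CREDIT_USD = 0.001  # 1 credit = $0.001

def per_request_credits(raw_model_usd: float, software_fee_credits: float,
                        own_key: bool) -> float:
    """Total credits per request: the software fee plus the model fee,
    which carries a 2x markup on platform keys and none on your own key."""
    markup = 1.0 if own_key else 2.0
    return software_fee_credits + raw_model_usd * markup / CREDIT_USD

# Hypothetical: $0.002 raw model cost, 2-credit software fee.
print(per_request_credits(0.002, 2, own_key=False))  # 6.0 credits
print(per_request_credits(0.002, 2, own_key=True))   # 4.0 credits
```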

How to Estimate Monthly Costs

To estimate your monthly AI spending, multiply the average credits per request by the number of requests you expect. For example, a customer-facing chatbot on GPT-4.1-mini at roughly 3 credits per message, handling 10,000 messages per month, would cost about 30,000 credits, or $30.
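That arithmetic can be sketched in one small function. The volumes below are hypothetical; the per-request credit figures come from the tier list above.

```python
CREDIT_USD = 0.001  # 1 credit = $0.001

def monthly_usd(credits_per_request: float, requests_per_month: int) -> float:
    """Estimated monthly spend in dollars."""
    return credits_per_request * requests_per_month * CREDIT_USD

print(monthly_usd(3, 10_000))  # 30.0 -> mid-tier chatbot, ~$30/month
print(monthly_usd(20, 1_000))  # 20.0 -> premium analysis, ~$20/month
```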

See How AI Model Selection Affects Your Monthly Bill for more detailed scenarios and optimization strategies.

Start with any model and see real costs in your dashboard. Scale up or down as needed.