Understanding AI Models: GPT, Claude, and How to Choose
Why AI Model Choice Matters
Every AI-powered feature on the platform runs on an AI model, whether it is a chatbot answering customer questions, a workflow summarizing data, or a custom app generating reports. The model you choose affects three things: how good the output is, how fast it responds, and how much each request costs.
A reasoning model like OpenAI's o3-mini can solve complex logic problems that a cheaper model would get wrong, but it costs 5 to 10 times more per request. A fast model like GPT-4.1-mini handles routine conversations at 2 to 4 credits per message, while the same conversation on Claude Opus might cost 15 or more credits. Choosing the cheapest model that still produces accurate results for your specific task is the single most effective way to control your AI spending.
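To see how those per-message differences compound at volume, here is a minimal sketch. The credit figures are illustrative, taken from the ranges quoted above (the midpoint of 2 to 4 credits for GPT-4.1-mini, 15 credits for Claude Opus); they are not a published price list, and the model keys are informal labels, not platform identifiers.

```python
# Illustrative credit costs per message, based on the ranges quoted above.
CREDITS_PER_MESSAGE = {"gpt-4.1-mini": 3, "claude-opus": 15}

def monthly_credits(model: str, messages_per_month: int) -> int:
    """Total credits for a month of routine conversation traffic."""
    return CREDITS_PER_MESSAGE[model] * messages_per_month

print(monthly_credits("gpt-4.1-mini", 10_000))  # 30000 credits
print(monthly_credits("claude-opus", 10_000))   # 150000 credits
```

At 10,000 routine messages a month, the 12-credit gap per message becomes a 120,000-credit gap, which is why matching the model to the task matters most for high-volume features.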
The platform supports models from multiple providers, including OpenAI (GPT family) and Anthropic (Claude family). You can switch models per chatbot, per workflow step, or per custom app function without changing any other configuration.
Available AI Models on the Platform
The platform offers several model tiers, each designed for different types of work. All models are accessed through the same interface, so switching between them requires only changing a setting.
Chat Models
Chat models are the standard choice for conversations, content generation, and general-purpose tasks. GPT-4.1-mini is the default chat model, offering strong quality at low cost. Claude Sonnet is the default Claude chat model, with excellent instruction following and natural language quality. Both handle customer support, content writing, and data extraction well.
Reasoning Models
Reasoning models like OpenAI's o3-mini spend extra time thinking through problems before responding. They excel at math, logic, multi-step analysis, and tasks where accuracy matters more than speed. They cost more and respond more slowly, so they are best reserved for tasks where cheaper models make mistakes.
Cheap Models
Models like GPT-4.1-nano are designed for simple, high-volume tasks where cost matters most. They work well for classification, short answers, data formatting, and routing decisions. They are not ideal for long-form writing or complex reasoning, but at a fraction of the cost of premium models, they are perfect for tasks that do not need deep intelligence.
Premium Models
The most capable models, Claude Opus and GPT-4.1, deliver the highest quality output for demanding tasks. Use them for complex content creation, nuanced analysis, or situations where the output quality directly affects your business. These models cost significantly more per token but produce noticeably better results on difficult tasks.
How to Choose the Right Model
Start with the cheapest model that could work and test it on your actual use case. If the output quality is not good enough, move up one tier. Most businesses find that 80% of their AI usage works perfectly well on mid-tier chat models, with reasoning or premium models needed only for specific tasks.
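The "start cheap, test, move up a tier" approach can be sketched as a simple loop. This is a hypothetical illustration: the tier list, `run_task`, and `passes_quality_check` are placeholders for your own task runner and evaluation criteria, not platform APIs.

```python
# Cheapest tier first; escalate only when output fails your quality check.
TIERS = ["gpt-4.1-nano", "gpt-4.1-mini", "gpt-4.1"]

def cheapest_passing_model(run_task, passes_quality_check):
    """Return the first (cheapest) tier whose output passes the check."""
    for model in TIERS:
        if passes_quality_check(run_task(model)):
            return model
    return TIERS[-1]  # fall back to the most capable tier

# Toy usage: pretend only the mid-tier model and above produce "good" output.
result = cheapest_passing_model(
    run_task=lambda model: "good" if model != "gpt-4.1-nano" else "bad",
    passes_quality_check=lambda output: output == "good",
)
print(result)  # gpt-4.1-mini
```

The point of the loop is the ordering: you only ever pay for a more expensive tier after the cheaper one has demonstrably failed on your actual task.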
Consider these factors when choosing:
- Task complexity determines whether you need a reasoning model or a basic chat model
- Volume determines whether cost savings from a cheaper model add up significantly
- Speed requirements determine whether you can use a reasoning model (slower) or need a fast chat model
- Output quality standards determine whether a cheap model produces acceptable results
You can also use multiple models in one workflow, routing simple tasks to cheap models and complex tasks to premium ones. This gives you the best balance of quality and cost.
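A per-task routing table is one way to implement that split. This is a hedged sketch: the task categories and model names below are illustrative stand-ins, not actual platform identifiers or settings.

```python
# Route each task type to the cheapest suitable tier (illustrative mapping).
ROUTES = {
    "classification": "gpt-4.1-nano",    # simple, high-volume
    "customer_support": "gpt-4.1-mini",  # routine conversation
    "multi_step_analysis": "o3-mini",    # needs reasoning
    "flagship_content": "claude-opus",   # quality-critical output
}

def route(task_type: str) -> str:
    """Pick a model for a task, defaulting to the mid-tier chat model."""
    return ROUTES.get(task_type, "gpt-4.1-mini")

print(route("classification"))     # gpt-4.1-nano
print(route("unknown_task_type"))  # gpt-4.1-mini
```

Defaulting unknown task types to the mid-tier chat model keeps the router safe: an unrecognized task gets reasonable quality at moderate cost rather than failing or silently landing on the most expensive model.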
Ready to put AI models to work? Create your account and start building chatbots, workflows, and custom apps with the right model for every task.
Get Started Free