How AI Agents Read and Write to Databases

AI agents interact with databases to store results, retrieve context, update records, and track their own state. The agent reads data as part of its observation phase, uses AI to analyze or transform that data, then writes results back to the database as its action. Database steps in a workflow give agents persistent memory and the ability to affect real business systems.

Why Agents Need Database Access

Without database access, an agent can only respond to the current input and forget everything afterward. Database operations give agents the ability to build context over time, track what they have already processed, and store their outputs where other systems can use them. A customer service agent that cannot read order history is limited to generic responses. One that queries the orders table can give specific, accurate answers about shipping status, payment details, and return eligibility.

Database access also lets agents coordinate with each other. One agent writes a classification result to a record, and a downstream agent reads that classification to decide what action to take. This pattern replaces manual handoffs between teams with automated data flow between agents.

Reading Data: The Observation Phase

Most agent workflows begin with a database read. The agent queries for records that need processing, pulls customer information for context, or checks its own state to see where it left off. In Chain Commands, database read steps use the query functions to pull data by partition key and sort key.

Single Record Queries

When the agent needs one specific record, it queries by exact partition key and sort key. A support agent handling a ticket reads the customer record using the account ID as the partition key and "main" as the sort key. This returns the full customer profile including credits, active apps, and account settings.
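The exact-key lookup described above can be sketched in Python with an in-memory dict standing in for the table. The record shape, the account ID, and the query_exact helper are illustrative assumptions, not Chain Commands' actual API:

```python
# Hypothetical stand-in for a (partition key, sort key) table.
customers = {
    ("acct-42", "main"): {"credits": 120, "apps": ["chatbot"], "plan": "pro"},
}

def query_exact(table, pid, sk):
    """Return the single record matching an exact partition/sort key pair."""
    return table.get((pid, sk))

# Support agent reads one customer profile: account ID + "main".
profile = query_exact(customers, "acct-42", "main")
```

A miss (no matching record) returns nothing rather than raising, which is the case the error-handling section below addresses.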

Prefix Queries

When the agent needs a collection of related records, it queries by partition key with a sort key prefix. A feedback processing agent queries all conversation records for a specific chatbot by using the account ID as the partition key and the chatbot ID as the sort key prefix. This returns every conversation that matches, which the agent can then loop through and analyze.
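A prefix query can be sketched the same way. The sort-key format ("chatbot ID#conversation ID") and the records here are hypothetical examples:

```python
# Conversations keyed by (account ID, "<chatbot id>#<conversation id>").
conversations = {
    ("acct-42", "bot-1#conv-001"): {"messages": 4},
    ("acct-42", "bot-1#conv-002"): {"messages": 7},
    ("acct-42", "bot-2#conv-001"): {"messages": 2},
}

def query_prefix(table, pid, sk_prefix):
    """Return every record whose sort key starts with the given prefix."""
    return [rec for (p, sk), rec in sorted(table.items())
            if p == pid and sk.startswith(sk_prefix)]

# Every conversation for chatbot "bot-1", ready to loop through.
bot1_convs = query_prefix(conversations, "acct-42", "bot-1")
```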

Conditional Reads

Agents often need to read data only when certain conditions are met. A scheduled agent might check a status field first, then query the full dataset only if the status indicates new data is available. This avoids unnecessary database reads and reduces processing costs.
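The check-then-read pattern can be sketched as follows; the status record and the "item#" sort-key prefix are hypothetical:

```python
def conditional_read(table, pid):
    """Check a cheap status flag before running the full dataset query."""
    status = table.get((pid, "status"), {})
    if not status.get("has_new_data"):
        return []  # skip the expensive read entirely
    return [rec for (p, sk), rec in sorted(table.items())
            if p == pid and sk.startswith("item#")]

data = {
    ("acct-42", "status"): {"has_new_data": True},
    ("acct-42", "item#1"): {"value": 1},
}
rows = conditional_read(data, "acct-42")
```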

Writing Data: The Action Phase

After the AI processes data and makes a decision, the agent writes results back to the database. This could mean updating a status field, appending to an array, creating a new record, or modifying existing values. The write operation is what makes the agent's decision permanent and visible to other systems.

Field Updates

The most common write operation updates a single field on an existing record. An agent that classifies support tickets reads the ticket, sends it to the AI for classification, then updates the ticket's category field with the AI's response. The update function takes the table, partition key, sort key, field name, and new value.
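A minimal sketch of that update, mirroring the five arguments the article names (table, partition key, sort key, field, value); the helper itself is hypothetical, not the platform's real function:

```python
tickets = {("acct-42", "ticket-9001"): {"subject": "Refund request", "category": None}}

def update_field(table, pid, sk, field, value):
    """Set one field on an existing record."""
    record = table.get((pid, sk))
    if record is None:
        raise KeyError(f"no record for ({pid}, {sk})")
    record[field] = value

# The AI classified the ticket; persist its decision.
update_field(tickets, "acct-42", "ticket-9001", "category", "billing")
```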

Creating New Records

Some agents create new records as their primary output. A lead qualification agent might read raw form submissions, score each one with AI, then create a new record in the qualified leads table with the score, source, and recommended follow-up action. New records need both a partition key and a unique sort key.
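Record creation can be sketched like this; the "lead#" sort-key prefix and field names are illustrative assumptions:

```python
import uuid

def create_record(table, pid, fields):
    """Create a new row under the partition with a unique sort key."""
    sk = f"lead#{uuid.uuid4().hex}"  # guaranteed-unique sort key
    table[(pid, sk)] = dict(fields)
    return sk

qualified_leads = {}
new_sk = create_record(qualified_leads, "acct-42",
                       {"score": 87, "source": "webform", "next_step": "call"})
```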

Array Appends

When the agent needs to add to a list without replacing it, it reads the current array, appends the new item, and writes the full array back. A log analysis agent that tracks anomalies reads the existing anomaly list, adds the new finding, and writes the updated list. This pattern preserves history while adding new entries.
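The read-append-write cycle can be sketched as follows, with hypothetical record names:

```python
anomalies = {("acct-42", "log-watch"): {"findings": ["latency spike at 02:00"]}}

def append_item(table, pid, sk, field, item):
    """Read the current array, append the new item, write the array back."""
    record = table.setdefault((pid, sk), {})
    items = list(record.get(field, []))  # copy the existing history
    items.append(item)
    record[field] = items                # full-array write, as described above
    return items

append_item(anomalies, "acct-42", "log-watch", "findings", "error burst at 14:10")
```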

Common Database Patterns for Agents

Read, Process, Write

The fundamental pattern: read a record, send relevant fields to the AI, write the AI's output back to the same or different record. A product review agent reads a new review from the reviews table, sends the text to GPT-4.1-mini for sentiment analysis (about 4 credits), then writes the sentiment score and extracted keywords back to the review record.
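The fundamental pattern in sketch form. The stub below stands in for the model call (the article's GPT-4.1-mini step); its logic and the review record are invented for illustration:

```python
reviews = {("acct-42", "review-1"): {"text": "Love the new dashboard"}}

def stub_sentiment(text):
    """Stand-in for the AI step; a real workflow sends text to the model."""
    positive = any(w in text.lower() for w in ("love", "great", "excellent"))
    return {"sentiment": "positive" if positive else "neutral",
            "keywords": [w for w in text.lower().split() if len(w) > 4]}

# Read -> process -> write, all against the same record.
record = reviews[("acct-42", "review-1")]
analysis = stub_sentiment(record["text"])
record["sentiment"] = analysis["sentiment"]
record["keywords"] = analysis["keywords"]
```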

Queue Processing

The agent reads from a queue table where unprocessed items have a status of "pending." After processing each item, it updates the status to "completed" or "failed." This pattern prevents duplicate processing when the agent runs on a schedule, because items already marked as completed are skipped on the next run. The loop step handles iterating through the queue.
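A sketch of the status-driven queue, with a hypothetical job table and handler; on a rerun, items already marked completed fall through the status check:

```python
queue = {
    ("acct-42", "job-1"): {"status": "pending", "payload": "classify me"},
    ("acct-42", "job-2"): {"status": "completed", "payload": "done earlier"},
    ("acct-42", "job-3"): {"status": "pending", "payload": "also classify"},
}

def process_queue(table, handler):
    """Handle only pending items; completed ones are skipped on the next run."""
    handled = 0
    for key, rec in sorted(table.items()):
        if rec["status"] != "pending":
            continue
        try:
            rec["result"] = handler(rec["payload"])
            rec["status"] = "completed"
        except Exception:
            rec["status"] = "failed"  # leave a visible trace, never lose data
        handled += 1
    return handled

handled = process_queue(queue, str.upper)
```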

Aggregation

The agent reads multiple records, has the AI summarize or analyze them as a group, then writes the aggregated result to a summary record. A weekly report agent queries all support conversations from the past seven days, sends the batch to the AI for trend analysis, and writes the summary to a reports table. This works well with data processing agents.
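The seven-day window and summary write can be sketched as below; a plain topic count stands in for the AI trend analysis, and the record shapes are hypothetical:

```python
from datetime import datetime, timedelta

conversations = [
    {"ended": datetime(2024, 5, 6), "topic": "billing"},
    {"ended": datetime(2024, 5, 8), "topic": "billing"},
    {"ended": datetime(2024, 4, 1), "topic": "shipping"},  # outside the window
]

now = datetime(2024, 5, 10)
recent = [c for c in conversations if now - c["ended"] <= timedelta(days=7)]

# Stand-in for the AI analysis: count topics instead of calling a model.
topic_counts = {}
for c in recent:
    topic_counts[c["topic"]] = topic_counts.get(c["topic"], 0) + 1

# Write the aggregate to a summary record in a reports table.
reports = {("acct-42", "weekly-2024-W19"): {"conversations": len(recent),
                                            "topics": topic_counts}}
```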

State Tracking

The agent maintains its own state record that tracks what it has processed, when it last ran, and any persistent context it needs. On each run, the agent reads its state record first, processes only new items since its last run timestamp, then updates the state record with the current timestamp. This is essential for scheduled agents that run periodically.
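The state-record cycle in sketch form; the state shape and the timestamp field are assumptions:

```python
state = {("agent-1", "state"): {"last_run": 100}}
inbox = [{"ts": 90, "id": "a"}, {"ts": 120, "id": "b"}, {"ts": 150, "id": "c"}]

def run_scheduled(state_table, items, now):
    """Read state first, process only items newer than last_run, then save."""
    rec = state_table[("agent-1", "state")]
    fresh = [i for i in items if i["ts"] > rec["last_run"]]
    # ... the AI/processing steps would run on `fresh` here ...
    rec["last_run"] = now  # persist progress for the next run
    return fresh

fresh = run_scheduled(state, inbox, now=200)
```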

Working with Partition Keys and Sort Keys

The database uses a two-key system. The partition key (pid) groups related records together, typically by account ID or domain. The sort key identifies specific records within that group. Understanding this structure is important for building agents that query efficiently.

For the appData table, the partition key is the account ID and the sort key is the app name. Each app gets one row per account. Within that row, different fields store different types of data: the chatbot field holds an array of chatbot configurations, the settings field holds app preferences, and so on.

For the webhosting table, the partition key is the domain name and the sort key identifies the page or resource. Agent workflows that manage website content use this table to read and update pages programmatically.

When designing agent workflows, choose your query keys carefully. Querying by exact partition key and sort key returns one record instantly. Querying by partition key with a sort key prefix returns all matching records, which works well for batch processing but returns more data.

Batch Processing Database Records

Agents that process multiple records need to handle batches efficiently. The workflow reads all matching records with a prefix query, then uses a loop step to iterate through each record individually.

For each record in the loop, the agent sends the relevant data to the AI, waits for the response, and writes the result back. Each AI call costs credits (GPT-4.1-mini costs about 4 credits per call, GPT-5-nano costs about 1 credit), so plan accordingly when processing large batches. A batch of 200 records processed with GPT-4.1-mini costs roughly 800 credits.

Consider using cheaper models for straightforward classification tasks. If the agent is sorting records into three categories and the decision is usually obvious from the data, GPT-5-nano at 1 credit per call handles it well. Reserve more capable models for tasks where nuance matters.

Performance tip: When processing large batches, write results after each iteration rather than collecting all results and writing once at the end. This way, if the workflow fails partway through, the completed items are already saved and will not need reprocessing.
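The batch loop, cost arithmetic, and per-iteration writes can be combined in one sketch. The credit figures come from the text above; the length-based classifier and table keys are stand-ins:

```python
CREDITS_PER_CALL = {"gpt-4.1-mini": 4, "gpt-5-nano": 1}  # approximate, per the text

def process_batch(records, results, model="gpt-4.1-mini"):
    """Write each result as soon as it is ready, so a mid-batch failure
    leaves completed items saved rather than lost."""
    spent = 0
    for i, rec in enumerate(records):
        label = "long" if len(rec["text"]) > 20 else "short"  # stand-in AI call
        spent += CREDITS_PER_CALL[model]
        results[("acct-42", f"item-{i}")] = {"label": label}  # per-iteration write
    return spent

results = {}
cost = process_batch([{"text": "short note"}] * 200, results)
# 200 records x 4 credits = 800 credits, matching the estimate above
```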

Handling Database Errors

Database operations can fail for several reasons: the record does not exist, the table is temporarily throttled, or the data format is unexpected. Robust agents handle these cases with error handling branches in the workflow.

For missing records, add a conditional step after the read that checks whether data was returned. If the query returned nothing, the agent can create the record, skip the item, or log a warning, depending on what makes sense for the use case.

For write failures, the workflow can retry once or queue the failed write for manual review. The key principle is that the agent should never silently lose data. If a write fails, the agent should either retry successfully or leave a clear record of what failed and why.

For unexpected data formats, validate the structure before sending to the AI. If a required field is missing or the data type is wrong, route to a fallback path rather than sending malformed data to the AI model, which wastes credits and returns unreliable results.
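The three checks above (missing record, write failure handling aside, malformed data) reduce to a routing function run before the AI step; the branch names and record shape are hypothetical:

```python
def route_record(record):
    """Decide the branch before spending any AI credits."""
    if record is None:
        return "handle_missing"  # create the record, skip it, or log a warning
    if not isinstance(record.get("text"), str) or not record["text"]:
        return "fallback"        # malformed -- do not send to the model
    return "send_to_ai"

inbox = {("acct-42", "msg-1"): {"text": "Where is my order?"}}
ok = route_record(inbox.get(("acct-42", "msg-1")))
missing = route_record(inbox.get(("acct-42", "msg-9")))
```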

Setting Up Database Steps in Chain Commands

In the Chain Commands visual workflow builder, database operations are added as steps in your agent workflow. Each step specifies the operation type (read or write), the table, the keys, and the field to access.

Adding a Read Step

Create a new step in your workflow and configure it as a database query. Set the table name, partition key (which can reference a variable from a previous step), and sort key. The query result becomes available as a variable for subsequent steps. If you need all records matching a prefix, use the loop query variant.

Adding a Write Step

After the AI step produces its output, add a database update step. Set the table, partition key, sort key, field name, and the value to write. The value can be the AI's response, a transformed version of it, or a combination of AI output and data from previous steps. Use variables to reference data from any earlier step in the workflow.

Connecting Steps

The workflow connects read, AI, conditional, and write steps in sequence. A typical agent pattern looks like this: database read, pass data to AI, check AI response with a condition, then branch to different database write steps based on the condition result. The workflow and conditional logic guide covers how to connect these steps effectively.
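That typical sequence, collapsed into one sketch. The keyword check stands in for the AI step, and the queue names are invented:

```python
tickets = {("acct-42", "t-1"): {"text": "Site outage since 9am"}}

def run_workflow(table, key):
    """read -> AI -> condition -> branch writes, in sequence."""
    rec = table[key]                                                   # database read
    label = "urgent" if "outage" in rec["text"].lower() else "normal"  # stand-in AI step
    if label == "urgent":                                              # conditional branch
        rec["queue"] = "escalations"                                   # write path A
    else:
        rec["queue"] = "standard"                                      # write path B
    rec["label"] = label

run_workflow(tickets, ("acct-42", "t-1"))
```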
