Database Tools
Give an agent read-only access to one of your operational databases — PostgreSQL, MySQL, Microsoft SQL Server, or ClickHouse — and let it answer questions that combine document knowledge with live data.
Common cases:
- "How many open Tier-1 incidents do we have, and what does the runbook say about each?"
- "Compare last quarter's churn from our analytics warehouse to what the docs claim about retention drivers."
- "Find the customer record for acme.com and summarize the related support docs."
How it differs from a database connector
A database connector ingests rows into the search index ahead of time. A database tool keeps the database where it is and lets the agent query it live at answer time. The two complement each other:
| Approach | When to use |
|---|---|
| Connector | The data is reference content (product catalog, runbooks, knowledge tables) that doesn't change minute-to-minute. |
| Database tool | The data is operational (current incidents, today's metrics, customer state) and freshness matters. |
You can use both for the same database — index reference tables as a connector, expose the rest as a tool.
Configuring a database tool
In the team admin dashboard, open Database Tools → Add:
| Field | Description |
|---|---|
| Name | Human-readable name shown to admins. |
| Engine | postgres, mysql, mssql, or clickhouse. |
| Connection | Host, port, database, user, password (or SSL cert). Credentials are encrypted at rest. |
| Schema scope | Optional list of schemas / tables the agent is allowed to see. |
| Description | One paragraph describing what's in the database, in agent-friendly language. The agent uses this to decide when the tool is relevant. |
Once saved, the database is available to every agent in the team. Per-agent access can be locked down in the agent builder.
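As a rough sketch, the fields above map onto a tool definition shaped something like the following. The dictionary layout, secret reference, and values are illustrative assumptions, not the platform's actual storage format:

```python
# Hypothetical shape of a database tool definition. Field names mirror the
# table above; everything else (values, secret URI, exact structure) is assumed.
database_tool = {
    "name": "Analytics warehouse (read-only)",
    "engine": "postgres",  # one of: postgres, mysql, mssql, clickhouse
    "connection": {
        "host": "warehouse.internal.example.com",
        "port": 5432,
        "database": "analytics",
        "user": "agent_readonly",
        "password_secret": "vault://db-tools/analytics",  # stored encrypted at rest
    },
    # Optional allow-list of schemas/tables the agent may see.
    "schema_scope": ["reporting.revenue_daily", "reporting.churn_monthly"],
    # Agent-facing description: what the agent reads to decide when the tool is relevant.
    "description": (
        "Daily revenue and churn data, refreshed every 6 hours from the "
        "data warehouse. Use for questions about current metrics."
    ),
}
```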
What the agent can do
Agents get three database tools:
- `search_database_schema` — list tables and columns matching a description, so the agent can find the right tables for the question.
- `get_table_info` — inspect a single table's columns, types, and (for some engines) row count.
- `query_database` — run a single read-only query and get the rows back.
The agent typically uses these in sequence: discover schema → inspect promising tables → run one or two narrow queries → cite the results in its answer.
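A minimal sketch of that sequence, written against a hypothetical tool-calling client; the three function names match the tools above, but the signatures, return shapes, and SQL are illustrative assumptions:

```python
# Illustrative only: `tools` stands in for a hypothetical client exposing the
# three database tools; call signatures and return shapes are assumptions.

# 1. Discover schema: find tables that plausibly answer the question.
tables = tools.search_database_schema(description="monthly churn by customer segment")

# 2. Inspect the most promising table: columns, types, row count.
info = tools.get_table_info(table=tables[0]["name"])

# 3. Run one narrow, read-only query and cite the rows in the answer.
rows = tools.query_database(
    sql="""
        SELECT segment, churn_rate
        FROM reporting.churn_monthly
        WHERE month = date_trunc('month', now()) - interval '1 month'
        ORDER BY churn_rate DESC
        LIMIT 20
    """
)
```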
Safety guardrails
The platform enforces several invariants on every query the agent runs (a simplified sketch of these checks follows the list):
- Read-only: writes (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `TRUNCATE`, `GRANT`, etc.) are rejected at the tool layer before the database sees them.
- Per-query timeout: every query is capped at a wall-clock duration (default 30 seconds). Hitting the cap is treated as a query failure the agent can try to repair.
- Result row cap: results above the configured cap are truncated, with a note for the agent. This prevents an accidental `SELECT *` from exhausting the context window.
- SSRF protection: connection hosts are validated; agents cannot point a "database" at metadata endpoints, localhost, or private IP ranges (allowing private ranges is opt-in and typically only enabled for self-hosted deployments).
- Connection pool isolation: each database tool has its own small pool; one slow query can't starve others.
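As a concrete and deliberately simplified illustration of the first three invariants, a tool layer can reject write statements, push the timeout down to the engine, and truncate oversized results. This is a sketch assuming a Postgres DB-API connection; the keyword list, limits, and error handling are assumptions, not the platform's implementation:

```python
import re

# Deliberately minimal: a production guard would parse statements properly
# (multiple statements, CTEs with data-modifying clauses, etc.).
WRITE_KEYWORDS = re.compile(
    r"^\s*(INSERT|UPDATE|DELETE|DROP|TRUNCATE|GRANT|ALTER|CREATE)\b",
    re.IGNORECASE,
)

MAX_ROWS = 500          # assumed result row cap
TIMEOUT_MS = 30_000     # matches the default 30-second wall-clock cap


def run_guarded_query(conn, sql: str) -> dict:
    """Run a read-only query with a server-side timeout and a row cap."""
    if WRITE_KEYWORDS.match(sql):
        raise PermissionError("Write statements are rejected at the tool layer.")

    cur = conn.cursor()
    # Postgres-specific: enforce the per-query timeout on the server side.
    cur.execute(f"SET statement_timeout = {TIMEOUT_MS}")
    cur.execute(sql)

    rows = cur.fetchmany(MAX_ROWS + 1)      # fetch one extra row to detect overflow
    truncated = len(rows) > MAX_ROWS
    return {"rows": rows[:MAX_ROWS], "truncated": truncated}  # note surfaced to agent
```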
What the agent cannot do
- Execute arbitrary stored procedures (engine-dependent — defaults to blocked).
- Issue cross-database queries beyond the configured connection scope.
- Create temporary tables or session-scoped state that persists across calls.
- Run queries that exceed the schema scope you configured.
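One way to picture the last restriction: schema scope can be enforced at the discovery layer, so tables outside the configured allow-list are never surfaced to the agent in the first place. The names and result shape below are assumptions:

```python
# Illustrative scope filter; the configured scope and discovered-table shape are assumed.
SCHEMA_SCOPE = {"reporting.revenue_daily", "reporting.churn_monthly"}

def filter_to_scope(tables: list[dict]) -> list[dict]:
    """Drop any discovered table that falls outside the configured schema scope."""
    return [t for t in tables if f"{t['schema']}.{t['table']}" in SCHEMA_SCOPE]
```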
Best practices
- Give the database a clear description ("Daily revenue and churn data, refreshed every 6 hours from the data warehouse") — this is the single biggest lever for whether the agent uses the tool well.
- Restrict the schema scope to what you want the agent to reach. Even read-only access to your `users` table may be more than you want.
- Give the agent a read-only database role, not your application's user. Belt-and-braces: the tool layer blocks writes, but a role-level constraint is your second line of defense (a sketch of such a role follows this list).
- For ClickHouse, set per-query row scan limits in the connection — the engine's native scan-row cap is more efficient than client-side row truncation for large tables.
- When wrong queries would be costly to discover after the fact, pair high-impact natural-language-to-SQL agents with human approval of the query before it runs.
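For the read-only role recommendation, here is a Postgres-flavored sketch run from a small setup script. The role name, schema, database, and credentials are placeholders, and other engines have their own equivalents:

```python
# Hypothetical one-off setup script: create a read-only role for the agent.
# Role name, schema, database, and connection details are placeholders.
import psycopg2

DDL = """
CREATE ROLE agent_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE analytics TO agent_readonly;
GRANT USAGE ON SCHEMA reporting TO agent_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA reporting TO agent_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA reporting
    GRANT SELECT ON TABLES TO agent_readonly;
"""

with psycopg2.connect("dbname=analytics user=admin") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)   # the tool's "user" field then points at agent_readonly
```

Pointing the tool's connection at a role like this gives the tool-layer write block an engine-level backstop.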
Related
- Database Connectors — for ingesting reference data into search
- Custom Tools — wrap arbitrary HTTP endpoints
- Agents — overall agent framework