Adding Advanced Logging for Analytics
By default, StackAI captures analytics for every workflow run: run ID, input, output, user, status, latency, token count, and more. Sometimes, though, you need run data beyond what the built-in analytics expose.
For example, you may need the user's identity, the number of follow-ups, or timestamps to decide how the workflow should continue. In these scenarios, we recommend adding an external database, such as Airtable, Google Sheets, or Excel, to your workflow to capture this data on each run. This is especially useful when multiple agents work the same case over time: the database becomes the shared source of truth across runs.
When to use this approach
You need custom per-run metadata not exposed in the default analytics.
Multiple agents contribute to one process over days or weeks.
Follow-up logic depends on how many times a process has run for a record.
You need custom reports or dashboards from raw per-run data.
Why this matters
Shared state across agents: Agents read and update the same record.
Granular tracking: Store only the fields you care about.
Follow-up control: Enforce retries, escalation, and completion gates.
Custom reporting: Build stakeholder-friendly reporting on top.
Potential use cases
Case management: Track status, assessments, and follow-up counts.
Vendor outreach: Track outreach attempts, missing info, and escalation.
Lead qualification: Persist scoring and engagement history.
Document pipelines: Log processing status and flags per document.
Choosing a database
Any database with an API can work. Some options integrate more cleanly inside StackAI.
Recommended: Airtable
Airtable is usually the easiest option. StackAI has native actions for querying and updating records:
Write to Airtable: insert new rows.
Query Airtable: find records and return record IDs.
Update Airtable Record: update fields by record ID.
The Query Airtable action lets you prototype with natural language. Example: “Find the record where reference_number equals 12345.” Use the returned record ID in Update Airtable Record.
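If you prefer to call Airtable directly (for example, from a Code Execution node), the native actions map onto Airtable's REST API. Below is a minimal sketch of building the equivalent requests; the base ID, table name, and field names are placeholders, not values from this guide.

```python
from urllib.parse import quote

API_ROOT = "https://api.airtable.com/v0"  # Airtable REST API root


def build_query_url(base_id: str, table: str, field: str, value: str) -> str:
    # Airtable looks up records by formula, so encode a filterByFormula
    # such as {reference_number}='12345' into the query string.
    formula = f"{{{field}}}='{value}'"
    return f"{API_ROOT}/{base_id}/{quote(table)}?filterByFormula={quote(formula)}"


def build_update_body(fields: dict) -> dict:
    # PATCH body for /v0/{base_id}/{table}/{record_id}: send only the
    # fields you want to change; other fields are left untouched.
    return {"fields": fields}


# Hypothetical base/table names for illustration only.
url = build_query_url("appXXXXXXXX", "Runs", "reference_number", "12345")
body = build_update_body({"status": "completed"})
```

The query returns matching records with their record IDs; the PATCH body pairs with the record ID from that response, which mirrors the Query Airtable then Update Airtable Record flow above.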
Alternatives: Google Sheets, Excel, and others
Spreadsheets can work as a lightweight database, but most spreadsheet APIs update by cell address (like B5) rather than by field value. Plan for an extra lookup step: use the Code Execution tool to find the right row, then compute the target cell addresses.
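That lookup step can be sketched as follows: given the sheet's values as a 2D list (the shape most spreadsheet APIs return), find the row matching your identifier and compute the A1-style address of the cell to update. Column and field names here are illustrative.

```python
def find_cell(values, key_col, key, target_col):
    # values: 2D list from the sheet, where row 0 is the header row.
    # Returns the A1-style address of target_col in the row where
    # key_col equals key, or None if no row matches.
    header = values[0]
    ci_key = header.index(key_col)
    ci_target = header.index(target_col)
    for r, row in enumerate(values[1:], start=2):  # sheet rows are 1-based; header is row 1
        if row[ci_key] == key:
            col_letter = chr(ord("A") + ci_target)  # assumes 26 columns or fewer
            return f"{col_letter}{r}"
    return None


sheet = [
    ["reference_number", "status"],
    ["99999", "completed"],
    ["12345", "in_progress"],
]
address = find_cell(sheet, "reference_number", "12345", "status")  # "B3"
```

Pass the computed address into the spreadsheet update call so the write lands on the correct row.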
Implementation setup
You need three pieces:
Write action: create the record on first run.
Query action: find the record on later runs.
Update action: update counters, status, and other fields.
Design your database schema
Define the fields you want to track. Design for what later agents need to read or update.
Common fields:
User identifier (email, name, account ID)
Case or reference number
Timestamps (created, last updated)
Status or completion flag
Counters (like follow-up count)
Extracted results and classification outputs
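Putting those fields together, a first-run record might look like the sketch below. All field names and values are illustrative; adapt them to your own schema.

```python
# Illustrative shape of one run record (field names are assumptions).
record = {
    "user_email": "user@example.com",
    "reference_number": "12345",
    "created_at": "2024-05-01T09:30:00Z",
    "last_updated": "2024-05-01T09:30:00Z",
    "status": "in_progress",   # e.g. in_progress | completed
    "followup_count": 0,
    "summary": "Initial intake received.",
}
```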
Write initial data on the first run
At the end of your first-run path, write a new record. Use an LLM node to extract structured fields from context. Then pass the output into your write action.
Example extraction prompt:
“From the analysis above, extract: reference number (digits only), today’s date, user email, and a one-sentence summary. Output as a JSON object.”
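Because LLM output can drift, it helps to validate the extracted JSON before passing it to the write action. A minimal sketch, assuming the four fields named in the prompt above:

```python
import json

REQUIRED = {"reference_number", "date", "user_email", "summary"}


def parse_extraction(llm_output: str) -> dict:
    # Validate the LLM's JSON before it reaches the write action.
    data = json.loads(llm_output)  # raises ValueError on malformed JSON
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {sorted(missing)}")
    if not data["reference_number"].isdigit():  # the prompt asked for digits only
        raise ValueError("reference_number must be digits only")
    return data
```

A Code Execution node between the LLM node and the write action is one place to run this check.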
Query the database on subsequent runs
On follow-ups, query by a stable identifier. Use a reference number, email, or account ID. Pull this from the current input or memory.
For Airtable:
Start with Text to SQL to prototype.
Switch to direct SQL once it’s correct.
This avoids model misinterpretation in production.
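Once the natural-language prototype returns the right record, the direct SQL you switch to might look like the string below. The table and column names are assumptions; match them to your own schema, and parameterize the value in real use rather than interpolating it.

```python
reference_number = "12345"  # stable identifier from the current input or memory

# Direct SQL equivalent of "find the record where reference_number
# equals 12345" (table/column names are assumptions).
query = (
    "SELECT record_id, status, followup_count "
    "FROM runs "
    f"WHERE reference_number = '{reference_number}' "
    "LIMIT 1"
)
```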
Update the record
Use the record ID from the query. Update the fields you care about in the database tool, like:
Increment a follow-up counter
Change status from in_progress to completed
Append notes or results
If you need read-modify-write, add a Code Execution node. Extract the record ID and current field values. Then send the update.
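The read-modify-write step can be sketched like this: take one queried record, bump the counter, and build the fields for the update action. The response shape follows Airtable's records API; the field names and the three-attempt completion gate are assumptions for illustration.

```python
def build_followup_update(query_result: dict):
    # query_result: Airtable-style query response, e.g.
    # {"records": [{"id": "rec...", "fields": {...}}]}.
    # Returns (record_id, fields_to_update) for the update action.
    record = query_result["records"][0]
    count = record["fields"].get("followup_count", 0) + 1
    fields = {"followup_count": count, "status": "in_progress"}
    if count >= 3:  # example completion gate: stop after three follow-ups
        fields["status"] = "completed"
    return record["id"], fields


sample = {"records": [{"id": "recABC", "fields": {"followup_count": 2}}]}
record_id, fields = build_followup_update(sample)
```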
Tips
Use Send HTTP Request for bulk operations: Native actions can have row limits. Use the provider API to bypass those limits.
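When batching through the provider API, split records to stay under per-request limits (Airtable's write endpoints accept up to 10 records per request; check your provider's documentation for its own limit). A minimal chunking sketch:

```python
def chunk(records, size=10):
    # Yield successive batches of at most `size` records, sized for
    # providers that cap records per write request (e.g. Airtable's 10).
    for i in range(0, len(records), size):
        yield records[i:i + size]


batches = list(chunk(list(range(25))))  # three batches: 10, 10, 5
```

Each batch then becomes the body of one Send HTTP Request call.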
Build a “smart router” with LLM + If/Else: Constrain the LLM output to exact labels like "path_1". Match those labels in If/Else.
Treat the database as internal infrastructure: Don’t expose it to end users. Build a separate reporting agent if needed.
Use the StackAI Project node: Build a dedicated subagent for writing info to the database, and call that subagent from your main workflow.