
StackAI Docs


What Makes StackAI Unique

Quickly deploy agents for your enterprise company with a no-code platform anyone can use.

StackAI is an enterprise platform for building and deploying AI agents, with a strong focus on governance and security.

IT and Operations teams use StackAI to deploy internal applications that enhance and automate business processes, from simple use cases, such as chatbots that retrieve information from systems like Microsoft SharePoint, to sophisticated automations, such as performing in-depth research to generate investment memos.

StackAI's powerful orchestration engine and extensive integrations simplify and accelerate the automation of business processes. We provide the observability and controls necessary to deploy AI Agents across your organization. That is why banks, defense companies, and governments trust us to accelerate their transition to an AI-first organization and streamline their productivity.

End-to-End Experience

StackAI offers an end-to-end experience, allowing users to build both the logic of the AI agent using a drag-and-drop workflow builder and the interface of the AI agent by selecting pre-built user interfaces. Once the AI Agent is configured, it can be deployed and monitored with a few clicks.

AI Pipeline & Integrations

All components to build an AI application are available in StackAI with different levels of abstraction to help both non-technical and technical teams. The entire data pipeline needed for deploying AI applications can be configured in just a few steps.

From document indexing to retrieval, users can build Retrieval Augmented Generation (RAG) systems by simply dragging a “Knowledge Base” node into their workflow, with default settings optimized for 90% of the use cases. This significantly streamlines the development of AI applications for both non-technical and technical users (i.e., IT teams new to AI development).
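
As a rough illustration of what the retrieval step in a RAG system does, the sketch below ranks documents against a query by vector similarity. The "embedding" is deliberately simplified (word counts instead of a neural model); only the retrieval logic is representative of what a Knowledge Base node does conceptually.

```python
# Minimal sketch of RAG retrieval: embed the query, rank documents by
# similarity, return the closest matches as context for the LLM.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (a real system uses a neural model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Expense reports must be filed within 30 days.",
    "The cafeteria opens at 8am on weekdays.",
]
print(retrieve("When do I file an expense report?", docs))
```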

Function calling, implemented through the concept of “Tools,” is equally intuitive. Users only need to select the desired tools at the LLM level; no code or complex setup required.
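
The pattern behind "Tools" can be sketched in a few lines: the tool is described to the model as a schema, the model emits a structured call, and the orchestrator dispatches it to real code. The schema shape and names below are illustrative, not StackAI's internal format.

```python
# Illustrative sketch of the function-calling pattern that Tools implement.
import json

# 1. The tool is described to the model as a JSON schema.
get_weather_schema = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# 2. The model replies with a structured call instead of free text...
model_reply = json.dumps({"tool": "get_weather", "arguments": {"city": "Boston"}})

# 3. ...which the orchestrator dispatches to real code.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub; a real tool would call an API

call = json.loads(model_reply)
result = {"get_weather": get_weather}[call["tool"]](**call["arguments"])
print(result)
```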

StackAI also offers a wide range of integrations with both established systems (such as SharePoint, SAP, Workday, and Salesforce) and modern SaaS companies (like Exa AI, Snowflake, and Miro), making data ingestion seamless and efficient.

Implementation Support

StackAI provides the platform and forward-deployed engineers to help customers build complex use cases and deploy them with confidence. Given the pace of innovation in AI, we offer guidance both on AI strategy and on tactical development, from testing new AI models to experimenting with different agentic architectures.

StackAI supports all enterprise customers with weekly co-building sessions led by forward-deployed engineers, GenAI-focused hackathons (also known as Stackathons), and Quarterly Business Review (QBR) sessions to align on progress and roadmap priorities.

AI Governance

StackAI is SOC 2 Type II, HIPAA, and GDPR compliant, with ISO 27001 certification in progress. For organizations with strict data residency or sovereignty requirements, StackAI offers On-Premise deployment options, critical for industries like defense and finance. This capability sets us apart from most providers, who require customers to use their cloud solutions.

Granular Role-Based Access Control (RBAC) enables admins to precisely govern who can modify and interact with LLMs, edit Knowledge Bases, or publish Workflows. Every component, from interfaces to citations, can be secured, authenticated (e.g. via SSO), and production-locked to ensure transparent accountability and control.

Admins can enforce approval flows, protect production environments from accidental edits, and ensure only reviewed agents are launched, with version control of all changes. Furthermore, Admins receive notifications of the status of their agents in production in real-time.

On-Premise

StackAI's on-premise deployment offers enterprise-grade control, performance, and security by running entirely within your organization's infrastructure. This allows customers to have full control over their data, and deploy local LLMs that can be orchestrated in StackAI, resulting in fully controlled AI applications.

Our on-premise deployment includes Single Sign-On (SSO), and can be deployed in most cloud providers (including AWS, GCP, and Azure) or in your organization's servers.

Platform Overview

Projects Dashboard

Let's start by creating a new project in the Stack AI dashboard. Click on the "New Project" button in the top right corner of the dashboard.

Templates

Create a Quick Start project to open a blank project, or browse through a list of pre-built templates. You can choose the "Chat with Knowledge Base" template to build an application where your users can ask questions on a knowledge base you've uploaded.

You can search for a specific application you would like to integrate with a Large Language Model, like "Gmail," or you can look through the list of use cases on the left side of the screen.

Workflow View

Once you create a project, you will see an interface with 3 main components:

  • Canvas: the main component of the Stack AI tool, a 2D canvas where you can drag and drop nodes and connect them to build your workflow.

  • Sidebar: a large variety of functionality blocks, also called nodes, can be found on the left-hand side. These nodes represent components in the flow where data is received, processed, and returned from different services. Use Chat with Workflow at the bottom of the sidebar for advice on how to build, or to learn what an existing project does.

  • Control bar: a set of commands at the top right hand side with buttons to 'Save' (save the current version), 'Run' (execute the workflow as it is in the canvas), 'Share' (share the current version with another Stack AI user), and 'Publish' (make your workflow available externally).

The Workflow View is where you can build your project, by dragging and connecting the necessary nodes to create your agent. Once you are ready to deploy your agent to users, hit 'Publish' in the top right corner. If it's your first time publishing your project, you will be prompted to enter the Export View to choose an interface for your agent.

Other Important Views

Export View

By clicking the 'Export' button in the top left side bar, you will see a view with different interface options.

We offer pre-built interfaces for your AI chatbots that can be easily customized to match your brand's look and feel.

You can choose from a ChatGPT-style interface, a website chatbot, a voice interface, or deploy your chatbot via Slack, WhatsApp, or SMS. We offer the option to use Stack AI as a backend process, leveraging our APIs to send inputs, receive results, and build your own custom UI.
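
As a hedged sketch of the backend pattern, the snippet below builds a request to a published workflow. The endpoint URL, header, and payload field names are placeholders and assumptions; check your project's Export tab for the exact URL and schema.

```python
# Sketch of calling a published workflow as a backend API.
# URL, key, and field names are illustrative placeholders.
import json
import urllib.request

API_URL = "https://api.stack-ai.com/inference/v0/run/EXAMPLE_ORG/EXAMPLE_FLOW"  # hypothetical
API_KEY = "YOUR_API_KEY"  # placeholder

payload = {"in-0": "What is our refund policy?"}  # input node id -> user query
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; parse the JSON
# response to read the output node's value and render it in your own UI.
print(req.get_method(), req.full_url)
```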

You will find different customization options in the Export tab (name, logo, colors, etc.). Configure a custom domain if required and protect your chatbot with SSO or password.

A URL is generated for you to share with your colleagues.

IMPORTANT: whenever you make changes to your project in the Builder View, always remember to click the Publish button. This will update the assistant's user interface.

Analytics View

Clicking on the 'Analytics' button in the top left sidebar will display a summary of your workflow usage.

This page features four graphs that provide an overview of workflow utilization, along with a complete list of execution logs. The logs include valuable information such as the execution status, runtime, tokens consumed, and the workflow's inputs and outputs. Additionally, you can filter the analytics by date using the selector in the top left corner.

Manager View

Clicking the 'Manager' button in the top left sidebar will take you to a view with all user conversations with your workflow. To download all conversations, click the "Download" button. To clear the conversation history, click "Delete".

Evaluator View

Click 'Evaluator' to enter the Evaluator View. This view allows you to test your agent on a batch of inputs uploaded in a CSV file. You can have the output graded by another LLM, which will judge whether your agent produces desirable results. You can also have the LLM compare your agent's output to a gold-standard answer, or make suggestions. Just edit the prompt on the right side of the screen.

Security & Privacy

StackAI prioritizes data protection and compliance, making it suitable for industries with stringent regulatory requirements. Key security features include:

  • Compliance Certifications: Adherence to SOC 2 Type II, HIPAA, and GDPR standards ensures that data handling meets global regulatory requirements.

  • Guardrails: LLMs can deviate from their initial requirements by answering questions they were not prompted to answer. Guardrails allow our customers to ensure LLMs do not respond to questions or topics they are not supposed to address.

  • PII Protection: Built-in mechanisms detect and mask Personally Identifiable Information (PII), safeguarding sensitive data during processing.

  • Data Retention Policies: Organizations can define data retention durations, ensuring data is stored only as long as necessary.

  • No Data Training: StackAI ensures that user data is not used to train AI models as part of its enterprise agreements with providers, maintaining data confidentiality.
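
As a minimal illustration of the PII-protection idea above, the sketch below masks two common PII patterns with regular expressions. Real PII detection covers far more entity types and formats.

```python
# Minimal PII masking sketch: detect common patterns with regular
# expressions and replace them with placeholders before text is logged
# or sent onward. Real detection covers many more entity types.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(mask_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
```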

Multi-Factor Authentication

To turn on MFA, go to Settings -> Feature Access -> Authentication. Click Manage and turn on MFA. This will apply to all users in your organization.

New to Generative AI?

Understand how it works.

What is Generative AI?

Generative AI refers to a type of artificial intelligence that is capable of generating content. It involves the use of models trained to generate new data that mimic the distribution of the training data. Generative AI can create a wide array of content, including but not limited to text, images, music, and even synthetic voices.

What is an LLM?

LLM stands for Large Language Model. These models, such as GPT-4, are a type of artificial intelligence model that uses machine learning to produce human-like text. Large Language Models are trained on vast amounts of text data and can generate sentences by predicting the likelihood of a word given the previous words used in the text. They can be fine-tuned for a variety of tasks, including translation, question-answering, and writing assistance. These models are called "large" because they have a huge number of parameters. For example, GPT-4, one of the largest models as of today, has about 1.8 trillion adjustable parameters. Their large parameter count allows these models to capture a wide range of language patterns and nuances, but also makes them computationally expensive to train and use.
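
The core training objective, predicting the likelihood of the next word given the previous words, can be illustrated at toy scale with a bigram model (real LLMs condition on thousands of prior tokens with billions of parameters):

```python
# Toy bigram model: estimate P(next word | previous word) from counts,
# the same next-word-prediction objective LLMs are trained on.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev: str, word: str) -> float:
    """P(word | prev) estimated from bigram counts."""
    total = sum(counts[prev].values())
    return counts[prev][word] / total if total else 0.0

print(p_next("the", "cat"))  # 2 of the 4 words following "the" are "cat"
```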

What type of applications can I build with Generative AI?

Generative AI models have a wide range of potential applications across numerous fields. Here are some examples:

  1. Content Creation: These models can generate new pieces of text, music, or artwork. For example, AI could create music for a video game, generate a script for a movie, or produce articles or reports.

  2. Chatbots and Virtual Assistants: Generative models can be used to create conversational agents that can carry on a dialogue with users, generating responses to user queries in a natural, human-like manner.

  3. Image Generation and Editing: Generative Adversarial Networks (GANs) can generate realistic images, design graphics, or even modify existing images in significant ways, such as changing day to night or generating a person's image in the style of a specific artist.

  4. Product Design: AI can be used to generate new product designs or modify existing ones, potentially speeding up the design process and introducing new possibilities that human designers might not consider.

  5. Medical Applications: Generative AI can be used to create synthetic medical data, simulate patient conditions, or predict the development of diseases.

  6. Personalized Recommendations: AI models can generate personalized content or product recommendations based on user data.

  7. Data Augmentation: In situations where data is scarce, generative models can be used to create synthetic data to supplement real data for training other machine learning models.

What is an embedding?

Embeddings are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts. They are capable of capturing the context of a word in a document, its semantic and syntactic similarity, and its relation with other words.

What is a vector store?

A vector store in the context of machine learning is a storage system or database designed to handle vector data efficiently. Vector data is commonly used in fields like natural language processing and computer vision, where high-dimensional vectors are used to represent complex data like words, sentences, or images. Vector stores are often optimized for operations that are common in machine learning, like nearest neighbor search, which involves finding the vectors in the store that are closest to a given vector. This is particularly useful in tasks like recommendation systems, where you might want to find the items that are most similar to a given item.
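
A miniature vector store can be sketched as a dictionary of embeddings plus a brute-force nearest-neighbor search; real stores index millions of high-dimensional vectors with approximate algorithms, but the query operation is conceptually the same:

```python
# Miniature "vector store": toy 2-D embeddings with brute-force
# nearest-neighbor search over Euclidean distance.
import math

store = {
    "cat": (0.9, 0.1),  # toy 2-D embeddings; real ones have hundreds of dims
    "dog": (0.8, 0.2),
    "car": (0.1, 0.9),
}

def nearest(query_vec, k=1):
    """Return the k stored keys whose vectors are closest to query_vec."""
    return sorted(store, key=lambda w: math.dist(query_vec, store[w]))[:k]

print(nearest((0.88, 0.12)))  # closest stored vector is "cat"
```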

What is a multimodal model?

A multimodal model in the field of artificial intelligence is a model that can handle and integrate data from multiple different modalities, or types, of input. These types of inputs can include text, images, audio, video, and more. The main advantage of multimodal models is that they can leverage the strengths of different data types to make better predictions. For example, a model that takes both text and image data as input might be able to understand the context better than a model that only uses one or the other.

What is the memory of an LLM?

A Large Language Model (LLM) with memory can generate text based on what it has seen before. The term "memory" in this context refers to how much of the previous text the model can consider when producing new text. Memory is a different concept from the training set used to train the model: the model can answer questions based on what it learned during training, but when you chat with ChatGPT, it also responds to your queries considering the most recent queries and responses. This "memory" is crucial when dealing with long pieces of text or conversations, as it determines how much of the previous context the model can use to generate accurate and coherent responses.
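
One practical consequence of a finite context window is that chat applications trim conversation history before each request. A simplified sketch, using word count as a stand-in for token count:

```python
# Sketch of context-window trimming: keep only the most recent messages
# whose combined "token" cost fits the model's budget.
def fit_history(messages: list[str], budget: int) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):       # walk backwards from the newest turn
        cost = len(msg.split())          # word count approximates token count
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["hello there", "how can I help", "summarize our chat please"]
print(fit_history(history, budget=8))
```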

Audio Input Node

The Audio Node allows you to upload or record an audio clip as input. The audio is converted to text (using an audio-to-text LLM) and passed to your model.

Providers

The Audio Node enables you to choose from two providers that will transcribe your audio:

  • deepgram: Uses Deepgram's API for audio transcription. Supports multiple models and submodels.

  • whisper-1: Uses OpenAI's Whisper v1 model. Does not support model or submodel selection (uses a default configuration).

Model

Available only when using the deepgram provider. Defines the main model used for transcription.

  • nova: Legacy model, fast and lightweight.

  • nova-2: Latest generation with improved accuracy and speed.

  • enhanced: Optimized for high-quality audio and complex content.

  • base: Baseline transcription model with balanced performance.

This field is disabled for whisper-1.

Submodel

Further refines transcription behavior. Available only with deepgram.

  • general: Default submodel for general-purpose transcription.

  • Other submodels exist depending on the selected Deepgram model.

This field is disabled for whisper-1.

Audio Node Settings

If you're using your own audio-to-text model, here you can add your own API key to use it.

How to use it

  1. Add an Audio to Text node to your flow.

  2. Connect the Audio to Text node to an LLM node.

  3. Mention the Audio to Text node in the LLM node by pressing "/" and selecting the Audio to Text node.

  4. Add an Output node to your flow.

  5. Connect the Output node to the LLM node.

Expose the Audio to Text node to your users

  1. Go to the Export tab.

  2. Enable the audio node in the Inputs section.

  3. Press Save Interface to save your changes.

  4. Your users should now see an upload button in the interface.

Output Node

What is an Output Node?

An Output node displays the results generated by other nodes in your workflow. It acts as the final delivery point in your data pipeline, where outputs from processing nodes (like LLMs or data processors) are shown to your users.

The Output node is typically connected to nodes that generate textual or structured responses, such as:

  1. LLM nodes (to display answers or messages).

  2. Knowledge Base nodes (to show retrieved documents or summaries).

Key benefits of the Output node:

  • Provides a clear way to present results to end-users.

  • Enables visibility into the results of data transformations or AI completions.

  • Supports dynamic interfaces when exposed via the Export tab.

How to expose Outputs externally?

To allow users to see the Output node results:

  1. Go to the Export tab.

  2. Enable the Output node in the Outputs section under Fields.

  3. Click Save Interface.

  4. The Output node’s results will now appear in your external interface when the workflow is triggered.

What to expose in the Output node?

With the Output node, you can display the result coming out of any other node.

Here are a few quick facts:

  • Markdown: The Output node uses Markdown to format the text. You can use it to add links, images, headings, and more.

  • Length: While output can have any length, you should be mindful that LLM prompts have a limit on how many tokens they can return.

  • Intermediate Outputs: Use intermediate Output nodes to debug your workflow: for example, connect it to a vector store to see what chunks of text are returned.

Action Node

What is an Action Node?

An Action node allows your workflow to interact with external systems. You can use it to send data to other apps, update databases, trigger web searches, or automate other tasks across services.

This node is typically used after collecting and processing data through Input or LLM nodes.

Common uses include:

  • Sending rows to Airtable or Excel

  • Updating documents in Notion or MongoDB

  • Querying or writing to PostgreSQL

  • Triggering a Web Search and retrieving results

  • Sending Emails using Gmail or Outlook

These nodes help turn your workflows into automated agents that don’t just compute — they also take action.

How to use the Action node

To use the Action node:

  1. Click the node.

  2. In the right panel, search for and select the desired action.

Depending on the Action selected:

  • Input: Requires structured data (usually JSON or plain text) from a previous node.

  • Output: Sends data to an external service. The result can optionally be passed to an Output node or another processing node like an LLM.

Not all actions produce user-facing output. Some simply perform the task in the background — such as logging, sending an email, or updating a database.

Audio Node

What is an Audio Node?

The Audio node lets you generate audio from text using high-quality voice synthesis models. It's ideal for turning responses from LLMs or static text into spoken audio.

This node is commonly used in voice interfaces, accessibility workflows, or any experience where you want to deliver output via sound.

It supports popular text-to-speech engines and customizable voices to match your tone and use case.

Key capabilities include:

  • Supports multilingual audio synthesis.

  • Choose from multiple voice models and accents.

  • Play back audio directly in the interface with Test Output.

  • Optionally use your own API key to connect with external TTS providers.

How to use it?

To use the Audio node:

  • Input: Accepts a text string (e.g., from an LLM or Input node).

  • Output: Returns a playable audio file that can be previewed.

After receiving text input, the Audio node displays a Test Output section with a play button, allowing you to listen to the generated audio.

Settings

Configuration Options

  • Model: Choose the TTS engine, such as eleven_multilingual_v2.

  • Voice: Select from available voice profiles (e.g., Sarah, Chris).

  • API Key: Optional field for providing your own TTS provider credentials.

How to expose Audio externally?

To make audio results available in your external interface:

  1. Go to the Export tab.

  2. Enable the Audio node in the Outputs section under Fields.

  3. Click Save Interface.

  4. When triggered, users will be able to hear the generated audio directly in the interface.

LLMs

LLMs Hosted on Azure & AWS Bedrock

Microsoft Azure and AWS Bedrock offer the ability to host foundation models, including OpenAI models on Azure, within a private cloud. You can add these models in Stack AI using an "Azure" or "Bedrock" node. Hosting models in Azure/Bedrock has several benefits:

Azure

  1. Lower and Consistent Latency: Cloud hosted models are not affected by public API traffic. This is a great option if latency is a real concern.

  2. Higher Rate Limits: Models in Azure offer higher rate limits of up to 240,000 tokens per minute and 1440 requests per minute.

  3. Data Privacy and Compliance: data sent to Azure stays within the private cloud and is not sent to OpenAI or any external service. These models are covered under Azure's Business Associate Agreement (BAA) and are HIPAA compliant.

AWS Bedrock

  1. AWS Security & Compliance: Bedrock leverages AWS’s security, IAM, and compliance features. You can use AWS IAM roles, VPC endpoints, and audit logging for enterprise-grade security.

  2. Data Privacy & Residency: Data processed through Bedrock stays within AWS infrastructure. You can choose the AWS region for data residency requirements.

How It Works

Microsoft Azure

  • Azure OpenAI Service: Azure provides access to OpenAI models (like GPT-3.5, GPT-4, etc.) through its Azure OpenAI Service.

  • How it works: You provision an Azure OpenAI resource, get an endpoint and API key, and can then use these credentials to access models via Azure’s API.
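
To make the Azure setup concrete, the sketch below assembles the request URL from the pieces you obtain when provisioning: the resource name, a deployment name, and an API version. All names are placeholders; consult Azure's documentation for current API versions.

```python
# Sketch of how Azure OpenAI credentials map to an API call: combine the
# resource endpoint, deployment name, and api-version into a URL, and
# authenticate with the resource's key. Names below are placeholders.
RESOURCE = "my-company-openai"   # your Azure OpenAI resource name
DEPLOYMENT = "gpt-4o"            # the model deployment you created
API_VERSION = "2024-02-01"       # check Azure docs for current versions

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)
headers = {"api-key": "YOUR_AZURE_OPENAI_KEY", "Content-Type": "application/json"}
print(url)
```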

AWS Bedrock

  • Amazon Bedrock: AWS offers access to multiple foundation models (including Anthropic Claude, AI21, Cohere, and Amazon’s own Titan models) through the Amazon Bedrock service.

  • How it works: You enable model access in Amazon Bedrock, obtain AWS credentials (typically via IAM), and can then use them to access the enabled models via AWS's API.

StackAI Academy

Learn how to build, deploy, and manage secure AI agents using StackAI, the orchestration platform built for modern teams. This Academy is designed for anyone using StackAI.

In this series, you’ll explore everything from connecting data and building workflows to customizing interfaces and applying advanced logic, all without needing to reinvent the wheel.

▶️ Start learning below.

Video modules include:

1. Platform Overview

2. Building Your First Workflow

3. Agent Builder

4. Deep Dive into UI

5. Connecting your Data to an AI Agent

6. Agentic Tools

7. Using LLMs

8. Logic Nodes

9. Developer Capabilities

10. Apps & Integrations

11. Governance

12. Handling Data

🔗 Try StackAI: https://stack-ai.com

Compliance Chatbot

Building a Compliance Chatbot in Stack AI

IT Support Chatbot

Build an IT support chatbot in Stack AI

Partnerships Agent

Build a strategic partnerships agent in Stack AI

Policy Chatbot

Build a policy chatbot in Stack AI

Web Research

Build a web research agent in Stack AI

LLM Node

An LLM Node is the heartbeat (or heartbeats!) of your project. StackAI is provider-agnostic: just choose your favorite provider and select the model you'd like to use in your project. If you change your mind and want to try a different provider or model, just make your selection in the dropdown menu.

To see an up-to-date list of our available providers and models, visit our LLM Leaderboard.

AI Governance

To manage AI deployments effectively, StackAI offers governance features like:

  • Role-Based Access Control (RBAC): Define user permissions at granular levels, including access to the knowledge base and connections.

  • Single Sign-On (SSO): Integrate with identity providers like Okta and Entra ID for user authentication and inheritance of groups and permissions.

  • Project Publishing Controls: Restrict project publishing capabilities to authorized personnel, ensuring oversight.

  • Centralized Monitoring: A unified dashboard allows administrators to monitor agent activities, usage metrics, and error logs in real-time.

Below is a comprehensive guide to StackAI’s governance model, designed for teams that need speed without losing control.

The StackAI Governance Model (8 Layers)

1) Role-Based Access Control (RBAC) and Groups

Admins can create groups (e.g., “Legal,” “HR,” “Capture Team”) and assign them to workspaces/projects for coarse-grained control.

2) Workspace and Folder Access (Scope Control)

Easily create private group folders with specific allowlists. Only assigned users or groups can see what’s inside—others see nothing.

Easily view project owners and editors as well, by hovering over a project or in a list view.

3) Project Controls (Edit, Lock, Versioning)

Creators can lock a project (only the owner edits; admins can override).

All changes made to projects are tracked with version control and diffs, so you can see exactly who changed what and when, and roll back if needed. You can view all previously published versions of a project, and versions can be tagged with a commit message to clarify what changes were made. Easily return to a previous version if desired.

4) Interface-Level Security (How You Publish)

When you export an agent (advanced form, chatbot, Slackbot, etc.), you can:

  • Enable one-click SSO on the interface editor

  • Set a password for external collaborators.

  • Restrict by allowed origins/URLs and even a user allowlist.

5) Global Governance and Admin Policy (Feature Access and Guardrails)

Org admins can set cross-cutting policy:

  • Require SSO on all interfaces.

  • Restrict who can publish, so that non-admins don't publish projects.

  • Enable an approval/feature-flag workflow for changes, wherein users request that their project be reviewed and published by an admin.

  • Allow/deny specific tools, connectors, and more through Feature Access (e.g., block Notion/Box across the org).

  • Set usage limits (e.g., token caps) as a security throttle.

  • Assign user roles.

  • Disable LLMs and add default connections (i.e., your company’s API keys).

You can also build policy by group (e.g., “only Legal can access the Legal agents”).

6) Connection and Knowledge-Base Permissions

Connections (SharePoint, Dropbox, ServiceNow, etc.) are owned by their creator, with private details and credentials encrypted and hidden from others. Owners and admins can share a connection org-wide or limit it to specific users or groups.

Knowledge bases support the same allowlisting, so only authorized teams can reference sensitive content.

7) Production Analytics and Auditing

Downloadable project analytics show who ran what, when, with which models, token counts, latency, and per-step traces (inputs, KB hits, outputs). Builders can mask or disable logs when required, or limit visibility to the owner. For certain cases of external security tooling, StackAI can deliver scheduled exports and can post to a customer webhook (e.g., daily digests) for alerting pipelines.

Below are some of the most widely used governance features, developed in close collaboration with our customers:

Learn more about Analytics.

8) Authentication and MFA

Organizations can use email/password (when enabled) or SSO (recommended). Enabling SSO means protecting any or all interfaces from access by members outside of your organization; further, SSO allows you to capture the email addresses of all users of your interfaces to easily keep track of who is using your workflows. You can also require SSO for all interfaces. By default, SSO users land as users until granted higher roles.

Salesforce

Build a Salesforce agent in Stack AI

Input Node

An Input Node allows you and your end-users to send text queries to any node that accepts text strings as input.

The most popular nodes accepting inputs are:

  1. The LLM nodes (adding the input as part of their prompt).

  2. The Knowledge Base nodes (they use the input as a prompt to retrieve information from their contents)

Here are a few quick facts:

  • Inputs can be text fields of any length and are passed to their connected node in the flow.

  • While inputs can be of any length, you should be mindful that LLM prompts have a limit on how long of an input they can process. To handle long inputs, consider using the Text Data Loader.

  • To expose an Input node to your users, you will need to set it up in the Export tab.

Exposing Inputs Externally

If you'd like an input to be automatically populated when the user opens your agent's interface, you can expose inputs externally. This is useful when you want to automatically capture the user's ID or other metadata, like their timezone. To do this, first publish your project and head to the Export View for detailed instructions.

URL Input Node

What is a URL Node?

The URL Node allows users to add a URL to the flow and scrape the HTML or Metadata of a website to use as an input to the LLM. If an LLM node returns a URL as its output, it can feed into the URL node to scrape a website in a more complex workflow. The entire output of the URL Node will be given to the LLM as context.

Mode

Defines what type of content to fetch from the provided URL.

  • Page HTML: Downloads the full HTML content of the page. Suitable for use cases like content parsing, summarization, or extraction of visible page elements.

  • Metadata only: Fetches only metadata (e.g., <title>, <meta> tags such as description and Open Graph data). Useful for lightweight previews or indexing.
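
As an illustration of what metadata-only extraction involves, the sketch below pulls the <title> and description <meta> tag from raw HTML using only the Python standard library (this is a conceptual sketch, not StackAI's actual scraper):

```python
# Extract a page's <title> and description <meta> tag from HTML using
# the standard-library HTMLParser.
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

html = ('<html><head><title>Acme Corp</title>'
        '<meta name="description" content="We make anvils."></head></html>')
parser = MetaExtractor()
parser.feed(html)
print(parser.title, "|", parser.description)
```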

Scrape Subpages

When enabled, the node will attempt to crawl and include linked subpages within the same domain.

Enable URL as Input

When checked, this enables dynamic input of URLs via upstream nodes or user input, rather than hardcoding a static value in the interface.

How to use the URL Node

  1. Add a URL node to your flow.

  2. Connect the URL node to an LLM node.

  3. Mention the URL node in the LLM node by pressing "/" and selecting the URL node.

  4. Add an Output node to your flow.

  5. Connect the Output node to the LLM node.

URL Node Settings

If you click the gear icon in the node, you will see the available settings.

Chunking Settings

  • Chunking Algorithm: Defines how the data is split (e.g., Sentence-based).

  • Chunk Overlap: The number of overlapping tokens between chunks.

  • Chunk Length: Max length of each chunk sent to the LLM.
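The interplay of chunk length and overlap can be illustrated with a short sketch. This toy version splits on sentence boundaries and counts characters rather than tokens for simplicity; the node's actual chunking algorithm is not shown here:

```python
import re

def chunk_sentences(text, chunk_len=100, overlap=20):
    # Naive sentence split: punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + 1 + len(s) > chunk_len:
            chunks.append(current)
            # Carry over the last `overlap` characters as shared context.
            current = current[-overlap:] + " " + s
        else:
            current = (current + " " + s).strip()
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_sentences(
    "One sentence here. Two sentence here. Three sentence here.",
    chunk_len=25, overlap=5)
```

Each chunk stays near the length limit, and the overlap means neighboring chunks share a little text so no context is lost at a boundary.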

Additional Features

  • Advanced Data Extraction: Enable more precise field-level parsing (toggle option).

  • Text in Images (OCR): Extract and include text from images on the page (toggle option).

How to Use Knowledge Bases

What is a knowledge base and how to set up a knowledge base from the workflow builder or for your organization.

What is a Knowledge Base?

A knowledge base is a centralized repository of information, documents, or data that can be searched and referenced to answer questions, solve problems, or provide context.

StackAI enables users to leverage a powerful and flexible RAG (Retrieval-Augmented Generation) system through a simple drag-and-drop interface. By connecting directly to their knowledge base, users can effortlessly incorporate contextual search capabilities.

To use a Knowledge Base in your project, drag and drop a Knowledge Base Node or select the Knowledge Base tool in your LLM. Knowledge Bases can be created and managed in the Knowledge Base Dashboard, or you can create one on the fly in your Knowledge Base Node.

The standalone Knowledge Base Node requires an input to function—usually a standard text input. This input serves as the query that the Knowledge Base uses to fetch relevant context. If there are multiple inputs connected, you can specify which one to use in the Knowledge Base settings.

If no input is connected or specified in the Knowledge Base Node, it will not return a result.

The Knowledge Base Node

The Knowledge Base node acts like a search engine over files, allowing LLMs to retrieve the precise context needed to perform any given task effectively. Indexing and storage in vector databases happen automatically, without any action from the user. Syncing also occurs automatically, if enabled, so that new files or changes are added to the knowledge base.

Import Knowledge From Any Source

StackAI allows you to create a knowledge base from uploaded documents, tables, Dropbox, Google Drive, Sharepoint, and many more! File metadata from platforms like SharePoint is also imported to enhance information retrieval, resulting in a 27% increase in accuracy for financial applications.

Advanced search controls within the knowledge base are designed with the right level of abstraction to ensure ease of use for end users.

Citations are clearly displayed in the user interface to enable response auditing. Users can view the original source file and the exact information chunks utilized by the LLM. Superscript references within the response allow users to easily trace and verify the underlying data.

Azure SQL

Learn how to use the Azure SQL node to query your Azure SQL database with natural language or SQL, including required inputs, configurations, and output details.

The Azure SQL Node is a workflow node that allows you to query an Azure SQL database using either plain English or SQL queries. This node is ideal for retrieving, analyzing, and interacting with your database data directly within your workflow, making data access seamless and user-friendly.


How to use it?

To use the Azure SQL node, you must provide two required inputs:

  1. Schema: Describe your database schema, including tables, columns, and data types.

  2. Query: Enter your question or request in plain English or as a SQL statement.

The node will process your input, convert plain English queries to SQL if needed, execute the query, and return the results.


Example of Usage

Suppose you have a table called Sales with columns CustomerName, OrderAmount, and OrderDate.

  • Schema (Required):

    TABLE Sales (
      CustomerName TEXT,
      OrderAmount REAL,
      OrderDate DATE
    );

  • Query (Required):

    What is the total order amount for 2024?

    or

    SELECT SUM(OrderAmount) FROM Sales WHERE YEAR(OrderDate) = 2024;

Outputs:

  • Query (Required): The SQL query that was executed (e.g., SELECT SUM(OrderAmount) FROM Sales WHERE YEAR(OrderDate) = 2024;)

  • Results (Required): The results of the query (e.g., {"SUM(OrderAmount)": 150000})


Available Actions

  • Query an Azure SQL database

    • Inputs:

      • Schema (Required): The structure of your database (tables, columns, types, etc.).

      • Query (Required): Your question in plain English or a SQL statement.

    • Configurations: None required beyond the schema and query.

    • Outputs:

      • Query: The SQL query that was executed.

      • Results: The results returned from the database.

Instruction vs Prompt

Master LLMs: Know the difference between instruction, system prompt, and user prompt to guide AI behavior and get top results. It's not just what you ask - it's how you ask it.

Instruction vs Prompt (System Prompt vs User Prompt)

In the world of Large Language Models (LLMs), understanding the difference between "Instruction" and "Prompt" (often referred to as "System Prompt" and "User Prompt") is key to getting the best results. It's not just about what you ask, but how you set up the AI's overall behavior and then provide specific tasks.

The System Prompt: The AI's Job Description

Think of the System Prompt as the AI's "job description" or its persistent identity. It's a high-level, foundational set of instructions that defines the AI's core behavior, its persona, and any rules it should always follow throughout an entire conversation or session.

What goes in a System Prompt?

  • Role Prompting: This sets the AI's persona, like "You are a seasoned data scientist" or "You are a helpful and informative AI assistant specializing in technology." This helps the AI interpret all subsequent requests through that specific lens.

  • Ethical Guidance and Constraints: This is where you establish non-negotiable rules, such as avoiding certain topics ("Avoid discussing political opinions") or refusing harmful requests.

  • Defining Scope: You can specify the areas of expertise the AI should draw from and what's out of bounds.

  • Tool-Use Instructions: For more advanced applications, you can define what external tools the AI has access to and when it should use them.

The System Prompt is stable and typically remains consistent across many interactions, providing a steady framework for the AI's operation.

The User Prompt: The Task-Specific Request

In contrast, the User Prompt is dynamic and task-oriented. It contains the specific question, command, or data for a single interaction within the conversation. It's the immediate "what" you want the AI to do right now.

What goes in a User Prompt?

  • Specific Questions and Commands: This is where you put your direct query, like "What are some eco-friendly travel destinations in South America?" or "Translate this text to French."

  • Task-Specific Context: Any details relevant only to the current turn of the conversation, such as "I'm planning a trip in June and prefer destinations with hikes."

  • Few-Shot Examples: If you need to show the AI examples of input-output pairs for a specific task, these are best placed in the User Prompt.

  • Response Formatting Instructions: While general style can be in the System Prompt, specific output formats for a single response (e.g., "Please provide the information in a list format," or "Answer in JSON format") are often more effective here.
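In chat-style APIs this separation maps directly onto message roles. A minimal sketch using the common messages-list shape (the role names follow the OpenAI Chat Completions convention; the content strings are illustrative):

```python
# System prompt: stable persona and rules, reused across the whole session.
system_prompt = (
    "You are a helpful and informative AI assistant specializing in technology. "
    "Avoid discussing political opinions."
)

# User prompt: the task-specific request for this single turn.
user_prompt = (
    "What are some eco-friendly travel destinations in South America? "
    "Please provide the information in a list format."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
```

On each new turn, only the user message changes; the system message stays in place, which is exactly the stability described above.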

Why the Separation Matters

This deliberate separation into System and User prompts is a critical architectural choice for building robust and scalable LLM applications.

  • Clarity and Maintainability: Separating concerns makes your prompts easier to read, debug, and update. Your core AI configuration (system prompt) is distinct from the varying user inputs.

  • Optimized Model Performance: LLMs are often specifically trained to handle these distinct roles. Adhering to this structure is believed to provide a performance benefit, leading to more accurate and reliable outputs.

  • Future-Proofing: As your application evolves, having a modular prompt structure allows for easier modifications and additions of new features without overhauling your entire prompting logic.

By understanding and effectively using both system and user prompts, you can better control LLMs, making your applications more predictable, reliable, and powerful for various tasks.

Image Node

What is an Image Node?

The Image node allows you to generate visual content from text prompts using AI image generation models such as OpenAI’s DALL·E 3 or Stable Diffusion.

Use this node to turn descriptions into visuals. Ideal for creative tools, content generation, or enhancing user engagement with dynamic imagery.

Common applications include:

  • Illustrating chatbot responses

  • Creating product mockups or concept art

  • Generating visual assets on-the-fly for user interfaces

How to use it?

To use the Image node:

  • Input: Accepts a text string (prompt), often from a user or LLM node.

  • Output: Returns a generated image that can be previewed or used downstream.

The model processes the prompt and returns a generated image in the specified size and style.

Settings

Configuration Options

  • Model: Choose between available image generation models:

    • OpenAI DALL·E 3

    • Stable Diffusion 3.5

  • Image size: Select the resolution for the generated image:

    • 1024×1024 (square)

    • 1024×1792 (portrait)

    • 1792×1024 (landscape)

  • API Key: (Optional) Provide your own key to use a custom instance or higher tier of the selected model.

By adjusting the model and size, you can tailor visual outputs to match your product’s design or artistic needs.

How to expose Images externally?

To allow users to see the generated images:

  1. Go to the Export tab.

  2. Enable the Image node in the Outputs section under Fields.

  3. Click Save Interface.

  4. The image result will now be rendered in your external interface when the flow is triggered.

How to Improve LLM Performance

This guide offers various strategies and techniques to improve the performance of your Language Model (LLM). You can experiment with these methods individually or in combination to achieve better results for your specific needs. Some strategies include:

1. Write Clear Instructions

Ensure that your instructions are concise and clear. If the outputs are too lengthy, request brief responses. If you need expert-level writing, specify that. Minimizing guesswork for the LLM increases the likelihood of receiving the desired output. Consider the following:

  • Include specific details in your query for more relevant answers.

  • Instruct the model to adopt a specific persona.

  • Use delimiters to indicate distinct parts of the input.

  • Specify the steps required to complete a task.

  • Provide examples.

  • Specify the desired length of the output.

  • Refer to Evaluation and the Description of available LLMs.

2. Provide Reference Text

To reduce the generation of fake answers, particularly on obscure topics, or to include citations and URLs, provide reference text that can assist the LLM. Here's what you can do:

  • Instruct the model to answer using reference text.

  • Instruct the model to answer with citations from reference text.

  • Refer to Offline Data Loaders.

3. Break Down Complex Tasks into Simpler Subtasks

Complex tasks tend to have higher error rates. To enhance performance, break down complex tasks into simpler subtasks. You can:

  • Use intent classification to identify the most relevant instructions for a user query.

  • For dialogue applications with long conversations, summarize or filter previous dialogue.

  • Summarize long documents piecewise and construct a full summary recursively.

  • Refer to the Description of available LLMs.

4. Allow LLMs Time to "Think"

LLMs may make more reasoning errors when rushed. Asking for a chain of reasoning before a response can help them reason their way to correct answers. Consider:

  • Instructing the model to work out its solution before rushing to a conclusion.

  • Using an inner monologue or a sequence of queries to hide the model's reasoning process.

  • Asking the model if it missed anything on previous passes.

  • Refer to the Description of available LLMs.

5. Utilize External Tools

Compensate for LLM weaknesses by using outputs from other tools. Text retrieval systems or code execution engines can be helpful. If a task can be done more reliably or efficiently with a tool, consider using it for better results:

  • Use embeddings-based search for efficient knowledge retrieval.

  • Use code execution for more accurate calculations or call external APIs.

  • Refer to Offline Data Loaders.

6. Test Changes Systematically

Measuring the impact of changes is essential for improvement. Define a comprehensive test suite (eval) to ensure that modifications yield a net positive performance:

  • Evaluate model outputs with gold-standard answers.

  • Refer to Evaluation.

Each of the strategies listed above can be implemented with specific tactics. These tactics provide ideas for experimentation and improvement. Feel free to explore creative ideas beyond what's listed here.

Knowledge Base

Learn how to use the Knowledge Base node in StackAI to search and retrieve information from your indexed documents, including input, configuration, and output details.

The Knowledge Base Node in StackAI allows you to search and retrieve information from your indexed documents. It is designed to help AI workflows access relevant content from your organization's knowledge repositories, making it easy to build intelligent document search, retrieval-augmented generation (RAG), and automated Q&A solutions.


How to use it?

To use the Knowledge Base node, connect it within your StackAI workflow where you want to enable document search or retrieval. The node can be configured to search specific knowledge bases, filter results, and control the number and type of results returned. It is typically used in conjunction with LLM nodes to provide context-aware answers or summaries based on your indexed content.


Example of Usage

Suppose you want to build a chatbot that answers user questions using your company’s internal documentation. You would add the Knowledge Base node to your workflow, configure it to search your indexed documents, and connect its output to an LLM node. When a user asks a question, the Knowledge Base node retrieves relevant document snippets, which the LLM then uses to generate a precise answer.


Available Actions

Below are the most commonly used actions for the Knowledge Base node:

1. Search Knowledge Base

Description: Searches your indexed knowledge base for relevant documents or content based on a query.

Inputs:

  • query (Required): The search string or question to look up in your knowledge base.

  • knowledge_base_id (Required): The unique identifier of the knowledge base to search.

  • filters (Optional): Metadata or tag filters to narrow down the search results.

  • top_k (Optional): The maximum number of results to return (default is 10).

  • advanced_rag (Optional): Boolean to enable advanced retrieval-augmented generation features.

Configurations:

  • connection_id (Required if your knowledge base requires authentication): The connection ID for accessing the knowledge base provider.

Outputs:

  • results (Required): An array of relevant document snippets or content chunks.

  • metadata (Optional): Additional information about each result, such as source, date, or tags.

Example:

  • Input:

    • query: "What is the company’s refund policy?"

    • knowledge_base_id: "630eed87-31bf-4e64-9399-a1d298ca8a45"

    • top_k: 5

  • Output:

    • results:

      • "Our refund policy allows returns within 30 days of purchase..."

      • "Refunds are processed within 5 business days after approval..."


How to Use in a Workflow

  1. Add the Knowledge Base node to your StackAI workflow.

  2. Set the required inputs, such as the query and knowledge base ID.

  3. (Optional) Add filters or adjust the number of results.

  4. Connect the output to an LLM node or Output node to display or process the retrieved information.


Best Practices

  • Always specify the correct knowledge base ID to ensure accurate search results.

  • Use filters to refine results for more targeted answers.

  • Combine with LLM nodes for context-aware, natural language responses.


Summary

The Knowledge Base node is a powerful tool in StackAI for searching and retrieving information from your indexed documents. By configuring its inputs and outputs, you can build intelligent workflows that leverage your organization’s knowledge for automation, support, and more.

Loom

Discover how to automate video workflows with the Loom node in StackAI. Learn about available actions, required inputs, configurations, and output examples.

What is Loom?

Loom is a video messaging tool that enables users to create, share, and manage videos quickly and efficiently. Integrating Loom with StackAI allows you to automate video-related tasks, such as transcribing videos, extracting insights, or managing video content, directly within your workflows.


Example of Usage

Suppose you want to automatically transcribe a Loom video and use the transcript in a report. You can set up a workflow where the Loom node retrieves the video transcript, which is then processed by an LLM node and formatted for output.


Available Actions

1. Loom Transcript

Description: Retrieve the transcript of a Loom video for further processing or analysis.

Inputs:

  • video_url (Required): The URL of the Loom video you want to transcribe. Example: "https://www.loom.com/share/your-video-id"

Configurations:

  • No additional configurations are required for this action.

Outputs:

  • transcript (Always returned): The full text transcript of the Loom video. Example:

    "Welcome to our product demo. In this video, we will walk through the main features..."

How to Use in a Workflow:

  1. Add the Loom node to your StackAI workflow.

  2. Select the "Loom Transcript" action.

  3. Enter the Loom video URL as the required input.

  4. Connect the output to downstream nodes (e.g., LLM, Template, Output) for further processing.


Summary Table

Action           | Required Inputs       | Configurations | Outputs
Loom Transcript  | video_url (Required)  | None           | transcript

Notion

Learn how to automate Notion page creation with StackAI: required inputs, configuration, and output details for seamless workflow integration.

What is Notion?

Notion is a collaborative workspace that helps you organize, manage, and share information. With StackAI, you can automate the creation of Notion pages directly from your workflows, making it easy to generate structured content and knowledge bases.

How to use it?

The Notion node in StackAI allows you to create new pages in your Notion workspace. You must provide the parent page, a title, and the content for the new page. This node is ideal for automating documentation, meeting notes, or any structured data entry into Notion.

Example of Usage

Suppose you want to automatically create a project summary page in Notion after a workflow completes. You would use the Notion node to send the project details as a new page under a specific parent page.


Available Actions

1. Create Page

Create a new page in your Notion workspace under a specified parent page.

Inputs (All Required):

  • Parent Page (parent_page_id): Select the parent page under which the new page will be created. Example: "d695667e-33c4-4b9d-9f93-8c01ec1d7b89"

  • Title (title): The title of the new Notion page. Example: "Weekly Project Update"

  • Content (content): The main content to be written as text blocks in the new page. Example: "This week, we completed the following milestones..."

Configurations: No additional configurations are required beyond the inputs above.

Outputs (All Always Provided):

  • Page ID (page_id): The unique identifier of the newly created Notion page. Example: "b1a2c3d4e5f6g7h8i9j0"

  • Page URL (page_url): The direct URL to access the new Notion page. Example: "https://www.notion.so/yourworkspace/b1a2c3d4e5f6g7h8i9j0"

  • Message (message): A status message indicating the result of the operation. Example: "Page created successfully."


Example of Usage

You want to automate meeting notes creation:

  • Set the Parent Page to your "Meetings" section.

  • Set the Title to "Team Sync - July 8, 2025".

  • Set the Content to the meeting summary generated by an LLM node.

After execution, the Notion node will return the new page's ID, URL, and a success message, allowing you to share or reference the page in further workflow steps.

Pinecone

The Pinecone node in your workflow is used to query a Pinecone vector database for similar vectors based on a text query.

The Pinecone Node allows you to search a Pinecone vector database for vectors that are most similar to a given text input. It returns a list of similar vectors along with their metadata.

Required Inputs for the Pinecone Node

To use the Pinecone node, you need to provide the following input parameters:

  1. Query (string, required): The text you want to search for similar vectors. For example, "AI marketing trends".

  2. Number of Results (top_k) (integer, required): How many similar vectors you want to retrieve. The default is 5.

  3. Index Name (string, required): The name of the Pinecone index you want to query.

  4. Namespace (string, optional): An optional namespace within the Pinecone index to scope your query.

Output

  • The node outputs a field called Results, which contains the similar vectors found in the database, along with their metadata.

Example Usage

  • If you want to find the top 5 most similar vectors to the phrase "StackAI product launch" in your "company-updates" index, you would set:

    • Query: "StackAI product launch"

    • Number of Results: 5

    • Index Name: "company-updates"

    • Namespace: (leave blank or specify if needed)

The Pinecone node will then return the most relevant vectors, which you can use for recommendations, search, or further processing in your workflow.
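Conceptually, a vector query embeds the text and ranks the stored vectors by similarity, returning the top_k matches with their metadata. A toy sketch of that ranking step in pure Python (Pinecone's actual index, embedding model, and API are not shown; the vectors and metadata below are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Tiny stand-in for an index: id -> (vector, metadata).
index = {
    "vec-1": ([1.0, 0.0], {"text": "StackAI product launch"}),
    "vec-2": ([0.5, 0.5], {"text": "company updates"}),
    "vec-3": ([0.0, 1.0], {"text": "unrelated topic"}),
}

def query(vector, top_k=2):
    # Score every stored vector against the query, highest first.
    scored = [(cosine(vector, v), vid, meta) for vid, (v, meta) in index.items()]
    scored.sort(reverse=True)
    return scored[:top_k]

results = query([1.0, 0.05], top_k=2)
```

The real service does this at scale with approximate nearest-neighbor search, but the contract is the same: a query vector in, the top_k most similar records (with metadata) out.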

LinkedIn

Comprehensive guide to the LinkedIn node in StackAI workflows, including available actions, input requirements, configurations, and output details.

The LinkedIn Node in StackAI enables seamless integration with LinkedIn, the leading professional networking platform. This node allows you to automate LinkedIn-related tasks, such as searching for profiles, extracting professional data, and more, directly within your workflow.


Example of Usage

Suppose you want to search for professionals in a specific industry and extract their public profile data. You would use the LinkedIn node, select the "Search" action, provide the search keywords as input, and configure any optional filters. The output will be a list of matching LinkedIn profiles with relevant details.


Available Actions

1. LinkedIn Search

Description: Search for professionals, companies, or jobs on LinkedIn based on keywords and filters.

Inputs:

  • Keywords (Required): The search terms to find relevant profiles or companies. Example: "Data Scientist San Francisco"

  • Filters (Optional): Additional filters such as location, industry, or company size. Example: {"location": "San Francisco", "industry": "Technology"}

Configurations:

  • Query Filter: which fields of the JSON to include in the output

  • Top K: how many results to return

  • LinkedInSearchType: search generally, or for jobs, companies, or content

  • CountryCode: the country to search in

Outputs:

  • Results (Always Provided): A list of LinkedIn profiles, companies, or jobs matching the search criteria. Example:

    [
      {
        "name": "Jane Doe",
        "title": "Senior Data Scientist",
        "company": "TechCorp",
        "location": "San Francisco"
      },
      ...
    ]

NetSuite

To use the NetSuite Node in StackAI, use the Custom SuiteQL action. This action allows you to query NetSuite and retrieve any data stored there with a SQL query.

Creating a Connection to NetSuite

StackAI recommends creating connections using a dedicated user account with permissions scoped for the task in question.

To create a connection, select + NetSuite (OAuth). Then, log in with the user account.

Here, you will be prompted to enter your Account ID, Client ID, and Client Secret.

Permissions

Give your service account's user role the permissions and access it needs for the tables you will be reading from or writing to. For access through StackAI, you may additionally need to give the service account "Analytics and REST" access to the table in question. For example, to access Messages, delegate permissions for Messages and also for Messages Analytics and REST.

Tips

Always list the fields explicitly in your query:

SELECT id, subject, author, recipient, activitydate
FROM message
WHERE activitydate >= '2025-01-01'

You may not be able to use SELECT * when querying NetSuite: your user may not have permissions on every field, which causes an error. NetSuite is also not a traditional relational database; some fields are dynamically defined.

Regex

The Regex Node in your workflow uses the Regex Extract action. This tool allows you to extract specific patterns or information from a block of text using regular expressions (regex).

Available Actions

Regex Extract

  • Purpose: Extracts content from text using a regular expression pattern you provide.

  • Provider: Regex

Inputs Required

  1. Content (string, required)

    • The text you want to extract information from.

    • Example: "Order number: 12345, Date: 2025-07-10"

  2. Expression (string, required)

    • The regular expression pattern to search for in the content.

    • Example: "Order number: (\d+)"

Output

  • Result (string)

    • The extracted value(s) from the content that match your regex pattern.

How to Use the Regex Node

  1. Connect the node: Pass the text you want to analyze (from an LLM, input, or another node) into the Regex node.

  2. Configure the action:

    • Set the "Content" field to the text you want to search.

    • Set the "Expression" field to your desired regex pattern.

  3. Use the output: The result will be available as {action-X.result} (where X is the node number) for downstream nodes.

Example Usage

Suppose you want to extract an order number from a message:

  • Content: "Order number: 12345, Date: 2025-07-10"

  • Expression: "Order number: (\d+)"

  • Result: "12345"

You can reference this result in other nodes using {action-3.result} if your Regex node is action-3.
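The same extraction in plain Python, using the standard re module:

```python
import re

content = "Order number: 12345, Date: 2025-07-10"
expression = r"Order number: (\d+)"

# Capture group 1 holds the digits matched by (\d+).
match = re.search(expression, content)
result = match.group(1) if match else None
# result == "12345"
```

The Regex node behaves the same way: the capture group in your Expression determines what ends up in the Result output.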

Prompt Engineering

Prompt Engineering and Context Management

  • Prompt Customization: Each LLM node can have its own prompt and system message. Tailor these to the specific sub-task each LLM is handling.

  • Context Passing: Use node references (e.g., {llm-0}) in prompts to pass context or results between LLMs.

Prompting Best Practices

1. Clarity, Specificity, and Explicitness

  • Minimize Ambiguity: Vague requests lead to off-target responses. Instead of "Summarize this document," specify "Summarize this document in 3 bullet points focusing on the main challenges discussed."

  • Use Imperative Language: Frame prompts as direct commands (e.g., "Generate," "Summarize," "Translate"). Avoid conversational phrases.

  • Define Constraints: Clearly state desired length ("Use a 3 to 5 sentence paragraph"), tone ("Use a friendly and conversational tone"), or style ("in the style of a {famous poet}").

  • Positive Framing: Instruct the model on what it should do, rather than what it should not do. For instance, instead of "DO NOT ASK for a username or password," state: "The agent will attempt to diagnose the problem... whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article [www.samplewebsite.com/help/faq](https://www.samplewebsite.com/help/faq)."

2. Structuring the Prompt for Optimal Parsing and Reliability

  • Use Delimiters: Employ characters or symbols (e.g., `"""`, `###`, `<tag>`) to clearly separate sections like instructions, context, and examples. This prevents information "bleeding."

  • Place Instructions at the Beginning: Start your prompt with instructions for clear parsing.

  • Request JSON Format for Machine-Readable Output: For structured data, explicitly ask for JSON. Provide a complete example with desired keys and value types. Consider using API's "JSON mode" if available.

  • Adopt JSON Output by Default: Always request JSON, even for single fields, to allow for easy expansion later.

  • Use Hierarchical Structures: For complex prompts, organize content with headings, subheadings, and bullet points.
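These structuring rules combine naturally. A sketch of a prompt that separates instructions from data with delimiters and requests JSON output (the task, delimiter choice, and field names are illustrative):

```python
instructions = "Summarize the review in 3 bullet points focusing on the main complaints."
review = "The battery dies fast, the screen is great, and support never replied."

# Delimiters (###, triple quotes) keep the instructions from "bleeding"
# into the data; the last line pins down a machine-readable JSON schema.
prompt = f"""{instructions}

### REVIEW ###
\"\"\"
{review}
\"\"\"

Respond in JSON with the keys "summary" (list of strings) and "sentiment" (string)."""
```

Because the output schema is stated explicitly, downstream nodes can parse the response instead of scraping free-form text.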

3. In-Context Learning (ICL): Guiding by Demonstration

  • Start with Zero-Shot Prompting: For simple tasks, provide only the task description without examples.

  • Use Few-Shot Prompting when Needed: If zero-shot isn't sufficient, include one or more input-output examples to demonstrate the desired output structure, style, or pattern.

  • Curate Diverse Examples: Ensure few-shot examples are representative and varied to avoid bias.

4. Strategic Allocation (System vs. User Prompts)

  • Utilize the System Prompt: Use for high-level, foundational instructions defining the AI's core behavior, persona, and persistent constraints (e.g., role-prompting, ethical guidance, tool-use instructions).

  • Use the User Prompt: This is for dynamic, task-oriented instructions, specific questions, task-specific context, few-shot examples relevant to the current task, and response formatting instructions for that single interaction.

  • Separate Concerns: This modular approach improves clarity, maintainability, and model performance.

Prompting with Tools

When using tools in an LLM Node, include them in your prompt with the @ sign.

XML Tags

For long prompts, or prompts that reference multiple inputs, group them with XML tags. Grouping signals to the LLM where certain associated blocks of information begin and end.
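For example, a prompt that references two separate inputs might wrap each in its own tag so the model can tell them apart (the tag names are arbitrary; choose ones that describe your inputs):

```python
email = "Hi team, the invoice total looks wrong on order 4417."
policy = "Refunds are approved only after an invoice audit."

# Each input gets its own opening/closing tag, so the model knows
# exactly where the email ends and the policy begins.
prompt = (
    "<instructions>Draft a reply to the email, following the policy.</instructions>\n"
    f"<email>{email}</email>\n"
    f"<policy>{policy}</policy>"
)
```

In StackAI, the same pattern applies when you reference multiple nodes in an LLM prompt: wrap each node reference in its own tag.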

Term Extraction

Build a term extraction agent in Stack AI

Files Node

The File Node allows users to upload a file to the flow and use it as an input to the LLM. The File Node is not a Knowledge Base like the Documents Node; RAG will not be performed over the files. Instead, the file will always be given directly to the LLM as context.

Expose as Input: This toggle displays the file node as an available input on the user-facing interface, allowing the user to upload their own file. Toggle this OFF to keep the files static. The workflow will then always use the files you uploaded, and the user won't be able to change this.

Files Node Settings

Click on the node to see the available settings.

Chunking Settings

  • Chunking Algorithm: Defines how the data is split (e.g., Sentence-based).

  • Chunk Overlap: The number of overlapping tokens between chunks.

  • Chunk Length: Max length of each chunk sent to the LLM.
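The interplay of chunk length and overlap can be sketched as follows, using whitespace-split words as a stand-in for the real tokenizer:

```python
# Rough sketch of how chunk length and overlap interact. Whitespace
# tokens stand in for model tokens; the settings map to Chunk Length
# and Chunk Overlap above.
def chunk_tokens(text: str, chunk_length: int, chunk_overlap: int):
    tokens = text.split()
    step = chunk_length - chunk_overlap  # how far the window advances
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_length]))
        if start + chunk_length >= len(tokens):
            break
    return chunks

text = " ".join(f"w{i}" for i in range(10))
print(chunk_tokens(text, chunk_length=4, chunk_overlap=1))
```

Each chunk repeats the last token of the previous one, which is what preserves continuity at chunk boundaries.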

Additional Features

  • Advanced Data Extraction: Enable more precise field-level parsing (toggle option).

  • Text in Images (OCR): Extract and include text from images (toggle option)

How to Use the File Node

  1. Add a File node to your flow.

  2. Connect the File node to an LLM node.

  3. Mention the File node in the LLM node by pressing "/" and selecting the File node.

    1. Files (documents) is a reference to the contents (text) of the file, split into chunks

    2. Files (raw_text) is a reference to the contents of the file, completely unchunked

    3. Files (file_urls) is a reference to the actual file, which is useful if you need to instruct the LLM to attach the file to an email or send the file using SFTP.

  4. Add an Output node to your flow.

  5. Connect the Output node to the LLM node.

How to Expose the File Node to your Users

  1. Go to the Export tab.

  2. Enable the file node in the Inputs section.

  3. Press Save Interface to save your changes.

  4. Your users should now see an upload button in the interface.

Node Specific Features

Data Node

When you create a Data node, you will see a button to upload data to the Vector Store API. You can find the API documentation here.

Google Drive Node

Authenticate through Google to give your project access to your Google Drive. Your end users will be able to ask an LLM questions based on the files you've uploaded.

SharePoint Node

The SharePoint Node allows you to index two types of media: documents stored in SharePoint, and SharePoint News, where everything on the page is indexed.

To authenticate with your SharePoint organization account, follow these steps:

  • Go to App Registrations in Azure: visit your Azure Portal and go to "App Registrations" here.

  • Create App: Click on "New Registration". Add a name to your app and select "Accounts in this organizational directory only (Default Directory only - Single tenant)".

  • Get Client ID and Tenant ID: copy your client and tenant IDs from the "Essentials" section. You will find them under "Application (client) ID" and "Directory (tenant) ID".

  • Create a client secret: Navigate to "Certificates & Secrets" then click on "New client secret". Give an expiration date to your secret. Finally, you will find the client secret in the "Value" field of the secret.

  • Add Scopes: Navigate to "App Permissions" and then click on "Add a permission". Click on "Microsoft Graph" and select "Application Permissions". Then select the following scopes: Sites.Read.All, Files.Read.All, BrowserSiteLists.Read.All. Then click on "Add Permissions". Finally, click on "Grant Admin Consent for Default Directory".

Now you can proceed to add your node in Stack AI and add the values of your client_id, client_secret, tenant_id, site_id, and folder path.

Table Node

Use a Table Node to create a Knowledge Base from a .csv file. StackAI creates a SQL database from your .csv file and performs semantic search, all under the hood. A generative model then decides which result, from the SQL query or from semantic search, is more informative, giving you powerful search over tables.
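The SQL half of this hybrid approach can be sketched as follows. The CSV data, table name, and query are illustrative; the semantic-search half and the model-based arbitration between the two results are omitted:

```python
import csv
import io
import sqlite3

# Simplified illustration: load a CSV into a SQL table and answer an
# exact aggregate question with a query. (StackAI also runs semantic
# search and lets a model pick the better result; that part is omitted.)
csv_data = "product,units\nwidget,120\ngadget,45\n"
rows = list(csv.DictReader(io.StringIO(csv_data)))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (product TEXT, units INTEGER)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(r["product"], int(r["units"])) for r in rows],
)

(total,) = con.execute("SELECT SUM(units) FROM sales").fetchone()
print(total)
```

Aggregations and exact filters like this are where SQL beats embedding search; fuzzy "which row is about X" questions are where semantic search wins.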

Websites Node

The Website Node allows you to create a Knowledge Base from a website URL. The URLs that you upload are indexed and stored in a vector database - we do this for you in the background! Indexing is done once, so you can later query this knowledge base and retrieve only the pieces of information most relevant to your query. It's the most efficient way to manage a long list of URLs without re-indexing them every time you run the workflow (i.e., embeddings are generated only once, when you upload the URLs).

Jira Node

The Jira Node allows you to create a knowledge base from Jira projects. To use this node, first establish a connection to Jira (see how to do so here). You will be able to select which projects to include in your knowledge base and perform RAG over those projects.

Prompting

In the Prompting section of the LLM Node, you will see two fields: Instructions (the system prompt) and Prompt (the user prompt).

System Prompt

The system prompt sets the overall behavior, tone, and role of the AI assistant for the entire conversation. It acts as a set of instructions or context that the model should always keep in mind when generating responses. This message is passed into the model's context each time you interact with it, so the model will always "remember" what you say here. Keep this part as short and informative as you can, and put only the most important information in this section. It's best used for setting rules, style, or persona (e.g., "You are a helpful tutor. Always explain things simply.").

User Prompt

The user prompt is the main message or question that the LLM will answer. It can include direct user input, references to other nodes, or additional context. This is the main content the LLM will respond to, after considering the system prompt.

Include placeholders in your user prompt if you'd like to import output from other nodes (e.g., user input). You can do this by typing "/" and then selecting the node whose output you'd like to include.


If you need help with your prompt, try our Magic Wand tool on the bottom right of the prompt box.

To learn more about prompting best practices, go here.

Local LLM

This guide will walk you through how to set a default connection for your preferred LLM provider. You'll also learn how to disable the use of StackAI API keys across providers, deactivate specific LLM providers, and manage other advanced configuration options.

Let’s start!


1. Creating a Default Connection

Navigate to Settings on the bottom left of your screen, underneath your avatar. Then, select Feature Access from the menu.

Under the LLMs section, search for Local LLMs—you can follow the same steps for any other provider as well. If needed, this is also where you can disable specific LLM providers to prevent your users from accessing them.

Once you're in the tab for a specific LLM provider, you'll see a toggle to enable or disable the provider, as well as a "New Connection" button under Default Connection to define it as your primary connection.

Once you click the button, you can create a connection that will serve as the default for all LLMs from that provider within your workspace. This is especially useful because your users won’t need to manually enter an API key to use those models.

2. Using a Local LLM

Once the connection is set up, you can create a new project, choose your LLM provider, select the specific model you want to use, and start interacting with it immediately.

You won’t need to manually add the connection—your default connection will appear automatically at the bottom. If you prefer to use a different connection, you can create it directly from that section and select it for your project.

3. Connections Manager

All your connections are stored in the Connections Manager tab. You can also create new connections from there, and easily select any of them when adding an LLM node to your project.

You can choose to make your connections either public or private, and you’re free to change the default connection at any time. Simply go to the Access Control tab in your Settings to update this preference.

4. Disabling All StackAI API Key Usage

If you'd like to prevent any unintended usage of LLMs without your API keys, you can disable all StackAI API Keys. This ensures that only LLMs configured to send data to your own servers are allowed. To do this, navigate to Settings > Feature Access > Other > General LLM Configuration.

LLM Provider Governance

Control which LLMs your organization has access to, and where information is sent & stored.

Local LLMs

Stack AI allows organizations to connect and use their own local LLMs (such as models hosted on private infrastructure or on-premise servers) instead of relying solely on cloud-based models like OpenAI or Bedrock.

Governance Benefits:

  • Data Privacy: Your data never leaves your infrastructure, ensuring compliance with strict privacy or regulatory requirements.

  • Custom Control: You can select, update, or fine-tune models as needed, and restrict which users or workflows can access them.

  • Auditability: All usage of the local LLM can be logged and monitored within your organization’s security perimeter.

How it works in Stack:

  • Admins can add a local LLM as a provider in the Stack AI admin console (see our guide).

  • Once added, the local LLM appears as an option in the workflow builder, just like any other provider.

  • You can set permissions to control which users or teams can access the local LLM.


Turning Off Stack API Keys

By default, Stack AI provides hosted API keys for popular providers (like OpenAI, Anthropic, etc.) so users can get started quickly. However, for governance and security, organizations may want to require the use of their own API keys or connections.

Governance Benefits:

  • Credential Control: Prevents users from accidentally or intentionally using Stack’s shared keys, ensuring all API usage is billed to and controlled by your organization.

  • Security: Reduces risk of data leakage or misuse of shared credentials.

  • Compliance: Ensures all API access is auditable and tied to your organization’s own accounts.

How it works in Stack:

  • Admins can disable Stack-provided API keys for any provider in the admin console.

  • Once disabled, users must add their own connection (API key) to use that provider in workflows.

  • This setting can be enforced globally or per-provider.


Deactivating Certain Providers

Stack AI supports a wide range of providers (OpenAI, Bedrock, Google, Slack, etc.). For governance, you may want to restrict which providers are available to your users.

Governance Benefits:

  • Risk Mitigation: Prevents use of unapproved or high-risk providers.

  • Simplified Compliance: Ensures only vetted and compliant services are available.

  • User Experience: Reduces clutter and confusion by hiding unused or irrelevant providers.

How it works in Stack:

  • Admins can deactivate (hide or block) any provider from the admin console.

  • Deactivated providers will not appear in the workflow builder or connection menus for end users.

  • This can be managed at the organization or workspace level.


Summary Table

| Feature | What It Does | Governance Benefit |
| --- | --- | --- |
| Add Local LLM | Use your own on-prem/private LLM | Data privacy, control, auditability |
| Turn Off Stack API Keys | Require org-owned API keys for providers | Credential control, security, compliance |
| Deactivate Certain Providers | Hide/block specific providers from user access | Risk mitigation, compliance, simplicity |
Algolia

Learn how to use the Algolia node in Stack AI to perform advanced semantic and natural language searches on your Algolia index, with detailed input, configuration, and output examples.

The Algolia Node in Stack AI allows you to search your Algolia index using natural language or semantic queries. This integration is ideal for retrieving relevant data, documents, or records from your Algolia-powered search infrastructure directly within your AI workflows.


How to use it?

Add the Algolia node to your Stack AI workflow to execute search queries against your Algolia index. Connect an input node or LLM node to provide the search query, and use the results in downstream nodes for further processing or display.


Example of Usage

Suppose you want to search for documentation related to "API authentication" in your Algolia index. You would connect an input node (where the user types their query) to the Algolia node, and the Algolia node will return the most relevant results.


Available Actions

1. Database Query (Algolia Search)

Description: Executes a search query against your Algolia index and returns matching results.


Inputs

| Name | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| Query | String | Yes | The search query to execute against the Algolia index. | "how to implement authentication", "database optimization", "api documentation" |

Example Input:

{
  "query": "api documentation"
}


Configurations

There are no additional configuration parameters required for the Algolia Database Query action. All you need is the search query input.


Outputs

| Name | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| Results | String | Yes | The search results from Algolia in JSON format. | '[{"title": "API Docs", "url": "https://..."}]' |

Example Output:

{
  "results": [
    {
      "title": "API Docs",
      "url": "https://docs.example.com/api"
    },
    {
      "title": "Authentication Guide",
      "url": "https://docs.example.com/auth"
    }
  ]
}


Summary Table

| Action Name | Required Inputs | Configurations | Outputs |
| --- | --- | --- | --- |
| Database Query | Query (string) | None | Results (string, JSON) |

ServiceNow

Discover how to automate ServiceNow tasks in StackAI. Learn about the most common ServiceNow actions, their required inputs, configurations, and outputs with clear examples.

Establishing a Connection

1. Connection Name

A user-friendly name for your connection. This helps you identify it among other connections in Stack AI.

2. Instance URL

Go to developer.servicenow.com and open your instance.

The URL in your browser’s address bar is your instance URL. It usually looks like: https://<instance>.service-now.com

3. Username

In that same card, copy the username that is underneath the URL.

4. Password

This is the password for the ServiceNow user account above. This is used together with the username for authentication.

Copy the password under "Current password."

5. Client ID

In the video below, you will find how to generate a Client ID and Client Secret by creating an OAuth API endpoint. To follow the steps in the video guide, go to the homepage of your ServiceNow instance: https://<instance>.service-now.com

6. Client Secret

To get the client secret, click on the resource you've created and then click on the copy button to the right, next to Client Secret.



Using Multiple LLMs

When using multiple LLMs in one project, there are important points to consider in order to ensure they work well together.


Clear Input/Output Flow

  • Explicit Connections: Each LLM node should have clearly defined input and output connections. Use Input nodes (in-0, in-1, etc.) to gather user data, and connect them to the relevant LLM nodes.

  • Output Handling: Route the output of each LLM node to Output nodes or downstream processing nodes (like Template or Python nodes) for further formatting or logic.

Sequential vs. Parallel LLMs

  • Sequential Orchestration: If the output of one LLM is needed as input for another, connect them in sequence (e.g., llm-0 → llm-1). This is useful for multi-step reasoning or refinement. Having initial LLMs give structured outputs to downstream LLMs can be helpful.

  • Parallel Orchestration: If you want to compare or aggregate results from multiple LLMs, connect the same input to several LLM nodes in parallel, then merge their outputs downstream using the Combine Node or a third LLM that summarizes and logically merges the outputs.
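Parallel orchestration can be sketched with stand-in model calls; in an actual project these would be two LLM nodes running side by side, followed by a merge step:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallel orchestration with stand-in model calls; in a
# real flow these would be two LLM nodes and a downstream merge node.
def llm_a(q: str) -> str:
    return f"A says: {q.upper()}"

def llm_b(q: str) -> str:
    return f"B says: {q[::-1]}"

query = "summarize the report"
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fn, query) for fn in (llm_a, llm_b)]
    answers = [f.result() for f in futures]

# A Combine node or a third LLM would merge these two outputs.
merged = "\n".join(answers)
print(merged)
```

Because both calls run concurrently, the total latency is roughly that of the slower model rather than the sum of both.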

Memory and State

  • Sliding Window Memory: Use the memory feature in LLM nodes to maintain context across turns or steps, especially in multi-turn workflows.

  • Stateful Processing: If you need to track or update state, consider using Python nodes between LLMs to manipulate or store intermediate results.

Error Handling and Fallbacks

  • On Failure Branches: Configure on_failure_branch and retry settings for each LLM node to handle errors gracefully.

  • Fallback LLMs: Use the fallback options to specify alternative models/providers if the primary LLM fails.

Data Formatting and Validation

  • Template Nodes: Use Template nodes to format or merge outputs from multiple LLMs before presenting to the user.

  • Output Validation: If LLMs are expected to return structured data (e.g., JSON), use the json_schema parameter to enforce output format and validate results.

Chaining with Other Nodes

  • Integration with Actions: LLM outputs can be passed to Action nodes (e.g., sending emails, updating databases) for real-world effects.

  • Custom Logic: Insert Python nodes between LLMs for custom logic, filtering, or aggregation.

Citations and Traceability

  • Citations: Enable citations in LLM nodes if you want to track sources or provide references in the output.

  • Auditability: Use Output nodes and logs to trace the flow of data and decisions across multiple LLMs.

Performance and Latency

  • Parallelization: Where possible, run LLMs in parallel to reduce overall latency.

  • Token and Cost Management: Set appropriate max_tokens and temperature settings to control cost and response quality.


Summary Table:

| Aspect | Best Practice |
| --- | --- |
| Input/Output Flow | Use explicit node connections and references |
| Orchestration Style | Choose sequential or parallel based on use case |
| Prompt Engineering | Customize prompts and use context passing |
| Memory/State | Use memory features and Python nodes for stateful logic |
| Error Handling | Configure retries, fallbacks, and failure branches |
| Data Formatting | Use Template nodes and output validation |
| Chaining/Integration | Connect to Action nodes and use Python for custom logic |
| Citations/Traceability | Enable citations and use Output nodes for auditability |
| Performance | Parallelize where possible, manage tokens and latency |

Chunking

Best practices for implementing document chunking for RAG

Chunking: Optimizing Data Retrieval in Stack AI Workflows

Chunking is a key technique in AI-powered document processing. In StackAI, using the right chunking strategy can greatly enhance how effectively machine learning models understand and extract data from documents.


What is Chunking in StackAI?

Chunking = Breaking large documents into smaller, manageable parts.

  • Used in StackAI’s "Files" and "Documents" nodes.

  • Ensures input fits within AI model token limits.

  • Can be configured via the gear icon in relevant nodes.


Chunking Methods

1. Naïve Chunking (Fixed-Length)

Splits text by character, word, or token count.

  • Pros:

    • Fast and simple to implement

    • Predictable processing time

  • Cons:

    • May break sentences or ideas

    • Can reduce AI comprehension


2. Sentence-Based Chunking

Splits text along natural sentence boundaries.

  • Pros:

    • Preserves meaning and structure

    • Enhances AI understanding

  • Cons:

    • More computationally intensive

    • Chunk sizes can vary


Optimizing Chunk Configuration

Chunk Size

  • Choose based on your model's capabilities.

  • Tradeoff:

    • Larger chunks = better context but risk hitting token limits.

    • Smaller chunks = faster, but may lose coherence.

  • Recommended: 200–1,000 tokens

Chunk Overlap

  • Adds continuity between chunks.

  • Suggested: 15–30% overlap
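A minimal sketch of sentence-based chunking with a size budget follows; characters stand in for tokens, and the splitting regex is a simplification:

```python
import re

# Sketch of sentence-based chunking: split on sentence boundaries and
# pack sentences into chunks under a size budget (characters here,
# tokens in practice).
def sentence_chunks(text: str, max_chars: int):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)  # budget exceeded: close this chunk
            current = s
        else:
            current = f"{current} {s}".strip() if current else s
    if current:
        chunks.append(current)
    return chunks

doc = "First point. Second point follows. A third, longer observation ends here."
print(sentence_chunks(doc, max_chars=40))
```

Note how no sentence is ever split mid-thought; chunk sizes vary instead, which is the tradeoff described above.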


Best Practices for Stack AI Users

  • Use sentence-based chunking for documents with rich content.

  • Tune chunk size to match your AI model's limits.

  • Experiment with overlap percentages to preserve context.

  • Iteratively test to ensure optimal results.


Technical Tips

  • Configure chunking inside "Files" and "Documents" nodes.

  • Continuously monitor model performance as you adjust settings.

  • Align your chunking strategy with your specific ML model needs.


Why It Matters

Mastering chunking helps:

  • Improve document comprehension for AI

  • Boost data extraction accuracy

  • Deliver better performance across document-based workflows in StackAI

Main Settings

Add Memory

Add memory to your LLMs in Stack AI. Improve user interaction by enabling models to remember previous conversations and provide more context-aware responses.

LLMs do not hold an internal state, and many applications require tracking previous interactions with the LLM as part of the interface (e.g. chatbots). To this end, you can add memory to an LLM node under the Stack AI tool by clicking on the gear icon of the LLM node.

Some quick facts:

  • All the LLM memory is encrypted end-to-end in the Stack AI database.

  • This data can be self-hosted under the Stack AI enterprise plan.

  • The LLM memory is user-dependent: each user gets their own instance of the LLM memory.

  • Once deployed as an API, you can specify the user_id for the LLM memory for each user (see Deployer Guide ).

  • By default, the “Sliding Window with Input” memory is selected when a new LLM node is added to the flow.

We offer three types of memory modalities:

  • Sliding Window

    • Stores all LLM prompts and completions.

    • This strategy may consume many tokens as the LLM prompts can often occupy thousands of tokens.

    • Loads a window of the previous prompts and completions as part of the LLM conversation memory, up to the number of messages in the window.

    • In non-chat models (e.g. Davinci), the memory is added as part of the prompt as a list of messages at the end of the prompt.

  • Sliding Window with Input

    • Stores one LLM input parameter (e.g. in-0) and all LLM completions, without storing the entire prompt from each turn.

    • This strategy is more token efficient and aligned with many applications (e.g. when only the user message from input is relevant)

    • Loads a window of the previous inputs and completions as part of the LLM conversation memory, up to the number of messages in the window. In non-chat models (e.g. davinci-003-text), the memory is added as part of the prompt as a list of messages at the end.

  • VectorDB

    • Stores all of the inputs and outputs to the LLM in a Vector Database and retrieves the most relevant messages to use as LLM memory.

    • This is especially useful if you expect some of the information to be needed at a later time but not in a sequential manner.

    • Allows the LLM to access older, contextually relevant interactions without the constraint of a fixed window size.
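The sliding-window idea above can be sketched in a few lines; the class and field names here are illustrative, not StackAI internals:

```python
from collections import deque

# Sketch of sliding-window memory: keep only the last N turns as
# context. Each turn is a prompt/completion pair, and N maps to the
# window-size setting on the LLM node.
class SlidingWindowMemory:
    def __init__(self, window: int):
        self.turns = deque(maxlen=window)  # old turns fall off the front

    def add_turn(self, prompt: str, completion: str):
        self.turns.append({"prompt": prompt, "completion": completion})

    def context(self):
        return list(self.turns)

memory = SlidingWindowMemory(window=2)
for i in range(4):
    memory.add_turn(f"question {i}", f"answer {i}")

print(memory.context())  # only the last two turns remain
```

"Sliding Window with Input" differs only in what gets stored per turn: one input parameter instead of the full prompt, which is why it consumes fewer tokens.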

Sliding Window

The sliding window allows you to set the number of turns you would like to be included in your context.

Input Id

If you choose 'Sliding Window with Input', then you can also select the ID of the input that you would like to be held in context. All other inputs will not be stored in context.

Citations

Turn on citations to allow the AI to provide citations (references) for the information it generates, especially when it uses external sources or uploaded documents.

Response Format

Text is the default response format. You can also choose to have the AI return a response formatted as a JSON object.

| Response Format | When to Use? |
| --- | --- |
| Text | The default option. Best for most conversational, summary, or narrative outputs. |
| JSON Object | Useful when you want structured data for further processing, such as extracting specific fields, integrating with APIs, or using the output in downstream nodes that expect JSON. |
| JSON Object with Schema | When you want to specify an exact JSON schema for the output so the AI outputs data in a very specific format for integration, automation, or validation. |

JSON Object with Schema

To have an LLM output a JSON object according to a provided schema, you must provide the schema in the following format. JSON schemas not formatted according to this specification may throw an error.

  • "strict": true if you want to enforce exactly this output schema

  • "description": a description of your schema

  • "schema": the actual schema of your ouput

    • "type": set to "object" if you would like to return a JSON object

    • "properties": the outputs you would like to return, include each outputs type, and an informative description for the LLM

{
    "strict": true,
    "name": "weather-schema",
    "description": "Schema for a weather API request",
    "schema": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The location to get the weather for"
            },
            "unit": {
                "type": "string",
                "description": "The unit to return the temperature in",
                "enum": ["F", "C"]
            }
        },
        "additionalProperties": false,
        "required": ["location", "unit"]
    }
}
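As a quick check of the idea, the sketch below validates a hypothetical model reply against the weather schema above using only the standard library (a full validator such as the jsonschema package would cover more cases):

```python
import json

# Sketch: checking a model reply against the weather schema above:
# required keys present, unit restricted to the enum, no extra keys
# (mirrors "additionalProperties": false).
schema_required = {"location", "unit"}
allowed_units = {"F", "C"}

reply = '{"location": "Boston", "unit": "F"}'  # hypothetical model output
data = json.loads(reply)

assert schema_required <= set(data), "missing required field"
assert set(data) <= schema_required, "additionalProperties violated"
assert data["unit"] in allowed_units, "unit not in enum"
print("reply matches schema")
```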

BigQuery

Comprehensive guide to the BigQuery node in StackAI: actions, inputs, configurations, outputs, and usage examples for seamless Google BigQuery integration.

What is BigQuery?

BigQuery is a fully managed, serverless data warehouse by Google Cloud that enables super-fast SQL queries using the processing power of Google's infrastructure. In StackAI workflows, the BigQuery node allows you to connect, query, and interact with your Google BigQuery datasets directly from your automated flows.


How to Use the BigQuery Node

The BigQuery node in StackAI is designed to execute SQL queries on your Google BigQuery database. It can be used to retrieve, analyze, and process large datasets as part of your workflow automation. You can connect this node to other nodes to automate data-driven tasks, reporting, or trigger downstream actions based on query results.


Example of Usage

Suppose you want to retrieve sales data for the last month from your BigQuery database and use it in a report. You would configure the BigQuery node with your SQL query, connect your Google Cloud credentials, and pass the results to a reporting or output node.


Available Actions for BigQuery

1. Database Query (BigQuery)

Description: Run a SQL query against your Google BigQuery database and retrieve the results for use in your workflow.

Inputs

| Input Name | Description | Required | Example |
| --- | --- | --- | --- |
| query | The SQL query to execute on your BigQuery database. | Yes | SELECT * FROM sales WHERE date > '2025-06-01' |
| parameters | Optional parameters for parameterized queries. | No | {"region": "US"} |
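As a sketch of a parameterized call, the payload below mirrors the input fields in the table above; the query text and parameter names are illustrative:

```python
# Illustrative node inputs for a parameterized BigQuery query: the
# query references placeholders, and "parameters" supplies the values.
payload = {
    "query": "SELECT * FROM sales WHERE region = @region AND date > @start",
    "parameters": {"region": "US", "start": "2025-06-01"},
}
print(payload["parameters"]["region"])  # US
```

Parameterizing values this way avoids string-concatenating user input into SQL.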

Configurations

| Configuration Name | Description | Required | Example |
| --- | --- | --- | --- |
| connection_id | The ID of your Google BigQuery connection (credentials). | Yes | bq-connection-12345 |
| project_id | The Google Cloud project ID where your BigQuery dataset resides. | Yes | my-gcp-project |
| dataset_id | The BigQuery dataset ID to query. | Yes | sales_data |
| location | The location of your BigQuery dataset (e.g., US, EU). | No | US |

Outputs

| Output Name | Description | Required | Example |
| --- | --- | --- | --- |
| results | The rows returned by your SQL query, as a list of records. | Yes | [{"date": "2025-06-01", "total": 1000}, ...] |
| row_count | The number of rows returned by the query. | Yes | 100 |
| schema | The schema of the returned data (column names and types). | Yes | [{"name": "date", "type": "DATE"}, ...] |


Example Configuration

{
  "action_id": "database_query_bigquery",
  "action_configurations": {
    "connection_id": "bq-connection-12345",
    "project_id": "my-gcp-project",
    "dataset_id": "sales_data",
    "location": "US"
  },
  "action_input_parameters": {
    "query": "SELECT * FROM sales WHERE date > '2025-06-01'",
    "parameters": {}
  }
}

Box

Comprehensive guide to the Box integration in StackAI: discover key actions, input/output details, and best practices for automating file and folder management.

What is Box?

Box is a leading cloud content management and file sharing service for businesses. The Box integration in StackAI allows you to automate, manage, and interact with your Box files, folders, metadata, and user permissions directly within your workflows. This enables seamless collaboration, secure file storage, and efficient document handling without leaving the StackAI platform.


How to use it?

To use the Box integration in StackAI, add the Box node to your workflow. You can then select from a wide range of actions to interact with your Box account, such as uploading files, managing folders, sharing documents, and handling metadata. Each action can be configured with specific inputs and settings to match your automation needs.


Example of Usage

Scenario: Automatically upload a file to a specific Box folder when a new document is generated in your workflow.

  1. Add the Box node to your StackAI workflow.

  2. Select the "Upload File" action.

  3. Configure the required inputs:

    • file (Required): The file to upload (e.g., from a previous node).

    • parent_folder_id (Required): The ID of the Box folder where the file will be uploaded.

  4. Optionally, set configurations such as connection ID for authentication.

  5. Use the output (uploaded file details) in downstream nodes for further processing or notifications.


Most Commonly Used Actions in Box

Below are the most frequently used Box actions in StackAI, along with their input, configuration, and output details:


1. Upload File

Description: Uploads a file to a specified folder in your Box account.

Inputs:

  • file (Required): The file object or path to upload.

  • parent_folder_id (Required): The ID of the destination folder in Box.

  • name (Optional): The name for the uploaded file.

Configurations:

  • connection_id (Required): Your Box connection for authentication.

Outputs:

  • id: The unique ID of the uploaded file.

  • name: The name of the uploaded file.

  • created_at: Timestamp of file creation.

  • modified_at: Timestamp of last modification.

  • size: File size in bytes.

Example:

{
  "file": "{doc-0}",
  "parent_folder_id": "123456789",
  "name": "report.pdf"
}

2. Download File

Description: Downloads a file from Box using its file ID.

Inputs:

  • file_id (Required): The ID of the file to download.

Configurations:

  • connection_id (Required): Your Box connection for authentication.

Outputs:

  • file_content: The binary content of the downloaded file.

Example:

{
  "file_id": "987654321"
}

3. Create Folder

Description: Creates a new folder in a specified parent folder.

Inputs:

  • name (Required): The name of the new folder.

  • parent_folder_id (Required): The ID of the parent folder.

Configurations:

  • connection_id (Required): Your Box connection for authentication.

Outputs:

  • id: The unique ID of the new folder.

  • name: The name of the new folder.

Example:

{
  "name": "Project Files",
  "parent_folder_id": "123456789"
}

4. List Folder Items

Description: Lists all items (files and folders) within a specified folder.

Inputs:

  • folder_id (Required): The ID of the folder to list items from.

Configurations:

  • connection_id (Required): Your Box connection for authentication.

Outputs:

  • entries: Array of items (files/folders) in the folder.

  • total_count: Total number of items.

Example:

{
  "folder_id": "123456789"
}

5. Share File or Folder (Get Shared Link)

Description: Generates a shared link for a file or folder.

Inputs:

  • file_id or folder_id (Required): The ID of the file or folder to share.

  • access (Optional): Access level (e.g., "open", "company", "collaborators").

Configurations:

  • connection_id (Required): Your Box connection for authentication.

Outputs:

  • shared_link: The generated URL for sharing.

Example:

{
  "file_id": "987654321",
  "access": "open"
}

Coda

Comprehensive guide to the Coda node in StackAI: discover top actions, input/output details, and practical usage examples for seamless Coda automation.

What is the Coda Node?

The Coda node in StackAI enables seamless integration with Coda, allowing you to automate document management, data updates, and workflow actions directly within your StackAI flows. With this node, you can create, update, and manage Coda docs, pages, tables, and rows, streamlining your business processes and boosting productivity.


How to Use the Coda Node?

To use the Coda node, simply add it to your StackAI workflow and select the desired action. Connect your Coda account using a valid connection ID, then configure the required inputs and settings for your chosen action. The node will execute the action and return the results, which can be used in downstream nodes for further automation.


Example of Usage

Suppose you want to add a new row to a Coda table whenever a new lead is captured. You would use the "Upsert Rows" action, provide the required Doc ID, Table ID, and row data, and connect the output to your reporting or notification nodes.


Most Commonly Used Actions in the Coda Node

Below are the most popular and useful Coda actions available in StackAI, along with detailed input, configuration, and output explanations:


1. List Docs

Description: Retrieve a list of all Coda docs accessible to your account.

  • Inputs:

    • None required.

  • Configurations:

    • connection_id (Required): Your Coda connection ID.

  • Outputs:

    • docs: Array of document objects, each containing doc ID, name, and other metadata.

Example:

{
  "action_configurations": { "connection_id": "<your-coda-connection-id>" },
  "action_input_parameters": {}
}

2. Create Doc

Description: Create a new Coda document.

  • Inputs:

    • title (Required): The name of the new document.

  • Configurations:

    • connection_id (Required): Your Coda connection ID.

  • Outputs:

    • doc_id: The unique ID of the created document.

    • name: The name of the document.

Example:

{
  "action_configurations": { "connection_id": "<your-coda-connection-id>" },
  "action_input_parameters": { "title": "Project Plan" }
}

3. List Tables

Description: List all tables in a specified Coda doc.

  • Inputs:

    • doc_id (Required): The ID of the Coda document.

  • Configurations:

    • connection_id (Required): Your Coda connection ID.

  • Outputs:

    • tables: Array of table objects with table IDs and names.

Example:

{
  "action_configurations": { "connection_id": "<your-coda-connection-id>" },
  "action_input_parameters": { "doc_id": "abc123" }
}

4. Upsert Rows

Description: Add or update rows in a Coda table.

  • Inputs:

    • doc_id (Required): The ID of the Coda document.

    • table_id (Required): The ID of the table within the document.

    • rows (Required): Array of row objects to insert or update.

  • Configurations:

    • connection_id (Required): Your Coda connection ID.

  • Outputs:

    • row_ids: Array of IDs for the upserted rows.

Example:

{
  "action_configurations": { "connection_id": "<your-coda-connection-id>" },
  "action_input_parameters": {
    "doc_id": "abc123",
    "table_id": "table456",
    "rows": [{ "Name": "John Doe", "Email": "[email protected]" }]
  }
}

5. Get Row

Description: Retrieve a specific row from a Coda table.

  • Inputs:

    • doc_id (Required): The ID of the Coda document.

    • table_id (Required): The ID of the table.

    • row_id (Required): The ID of the row to retrieve.

  • Configurations:

    • connection_id (Required): Your Coda connection ID.

  • Outputs:

    • row: Object containing the row data.

Example:

{
  "action_configurations": { "connection_id": "<your-coda-connection-id>" },
  "action_input_parameters": {
    "doc_id": "abc123",
    "table_id": "table456",
    "row_id": "row789"
  }
}
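The Coda actions above compose naturally: List Docs and List Tables supply the IDs that Upsert Rows needs. A minimal Python sketch assembling the Upsert Rows request body in the shape shown above (the IDs and connection value are placeholders, not real Coda identifiers):

```python
# Illustrative sketch: build the "Upsert Rows" request body from IDs
# returned by "List Docs" and "List Tables". All values are placeholders.

def build_upsert_request(connection_id, doc_id, table_id, rows):
    """Assemble the node payload in the shape the Coda node expects."""
    return {
        "action_configurations": {"connection_id": connection_id},
        "action_input_parameters": {
            "doc_id": doc_id,
            "table_id": table_id,
            "rows": rows,
        },
    }

payload = build_upsert_request(
    "<your-coda-connection-id>",
    "abc123",
    "table456",
    [{"Name": "John Doe", "Email": "john.doe@example.com"}],
)
```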

Crunchbase

Discover how to use the Crunchbase node in StackAI to search for company and investment data, including required inputs, configurations, and output examples.

What is Crunchbase?

The Crunchbase node in StackAI allows you to access and search company and investment data directly from Crunchbase. This integration is ideal for workflows that require up-to-date business intelligence, company profiles, or investment research.


How to use it?

To use the Crunchbase node, simply add it to your StackAI workflow and configure the search parameters. You can specify your search query, the number of results you want, and the country to focus your search. The node will return structured results with company or investment information.


Example of Usage

Suppose you want to find information about "OpenAI" in the United States and retrieve the top 5 results:

  • Query: "OpenAI" (required)

  • Top K: 5 (optional, default is 10)

  • Country: "US" (optional, default is "US")

The node will return a list of results, each containing a title and a text summary.
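Before running a search, it can help to validate the configuration values against the supported options. A hedged sketch (the country codes mirror the Country Options list under Configurations below; the defaults follow this page):

```python
# Sketch: validate Crunchbase node settings before a run.
# Country codes and defaults are taken from this documentation page.

SUPPORTED_COUNTRIES = {
    "AR", "AU", "AT", "BE", "BR", "CA", "CL", "CN", "CO", "CZ", "DK",
    "FI", "FR", "DE", "HK", "IN", "ID", "IT", "JP", "KR", "MY", "MX",
    "NL", "NZ", "NO", "PH", "PL", "PT", "RU", "SA", "SG", "ZA", "ES",
    "SE", "CH", "TW", "TH", "TR", "GB", "US",
}

def build_search(query, top_k=10, country="US"):
    """Return the search parameters, rejecting unsupported countries."""
    if not query:
        raise ValueError("query is required")
    if country not in SUPPORTED_COUNTRIES:
        raise ValueError(f"unsupported country: {country}")
    return {"query": query, "top_k": top_k, "country": country}

search = build_search("OpenAI", top_k=5)
```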


Available Actions

1. Crunchbase Search

Description: Searches Crunchbase for companies, investments, or related data based on your query.

Inputs

| Name  | Description             | Required | Example Value |
| ----- | ----------------------- | -------- | ------------- |
| Query | The query to search for | Yes      | "OpenAI"      |

Configurations

| Name    | Description                     | Required | Default | Example Value    |
| ------- | ------------------------------- | -------- | ------- | ---------------- |
| Top K   | The number of results to return | No       | 10      | 5                |
| Country | The country to search in        | No       | US      | "US", "GB", "IN" |

  • Country Options: AR, AU, AT, BE, BR, CA, CL, CN, CO, CZ, DK, FI, FR, DE, HK, IN, ID, IT, JP, KR, MY, MX, NL, NZ, NO, PH, PL, PT, RU, SA, SG, ZA, ES, SE, CH, TW, TH, TR, GB, US

Outputs

| Name           | Description                                  | Required | Example Value          |
| -------------- | -------------------------------------------- | -------- | ---------------------- |
| Query          | The query that was used to search Crunchbase | Yes      | "OpenAI"               |
| Search Results | The results of the Crunchbase search         | Yes      | List of result objects |

Each result object contains:

  • Title (required): The title of the Crunchbase result (e.g., "OpenAI, Inc.")

  • Text (required): The text content or summary of the result (e.g., "OpenAI is an AI research and deployment company...")


Example Output

{
  "query": "OpenAI",
  "search_results": [
    {
      "title": "OpenAI, Inc.",
      "text": "OpenAI is an AI research and deployment company based in San Francisco, CA..."
    },
    {
      "title": "OpenAI LP",
      "text": "OpenAI LP operates as a limited partnership for AI research and development..."
    }
    // ...more results
  ]
}

Excel

Comprehensive guide to the Excel node in StackAI: Learn how to automate spreadsheet tasks, write data, and integrate Excel with your workflows.

How to Use the Excel Node

To use the Excel node, simply add it to your StackAI workflow and configure it to perform actions such as writing data to a spreadsheet. You can connect it to other nodes (like LLMs, input nodes, or data sources) to automate the flow of information into your Excel files. The node supports secure authentication via connection IDs, ensuring your data remains protected.


Example of Usage

Suppose you want to automatically log survey responses from a form into an Excel spreadsheet stored in SharePoint. You can connect the input node (collecting responses) to the Excel node, which then writes each new entry into the designated spreadsheet, eliminating manual data entry.


Available Actions for the Excel Node

Below are the most commonly used actions for the Excel node in StackAI:

1. Write to Sheet

Description: Automatically writes data to a specified Excel sheet in SharePoint.

Inputs

| Name           | Description                                 | Required |
| -------------- | ------------------------------------------- | -------- |
| spreadsheet_id | The unique ID of the Excel spreadsheet      | Yes      |
| sheet_name     | The name of the sheet/tab to write to       | Yes      |
| data           | The data to write (array of objects/rows)   | Yes      |
| range          | The cell range to write data (e.g., A1:D10) | No       |

Example Input:

{
  "spreadsheet_id": "abc123",
  "sheet_name": "Responses",
  "data": [
    {"Name": "John Doe", "Email": "[email protected]", "Score": 95}
  ],
  "range": "A2:C2"
}

Configurations

| Name          | Description                            | Required |
| ------------- | -------------------------------------- | -------- |
| connection_id | The connection ID for SharePoint/Excel | Yes      |

Example Configuration:

{
  "connection_id": "your-connection-id-here"
}

Outputs

| Name          | Description                                    | Required |
| ------------- | ---------------------------------------------- | -------- |
| success       | Boolean indicating if the write was successful | Yes      |
| updated_range | The range that was updated                     | No       |
| error         | Error message if the operation failed          | No       |

Example Output:

{
  "success": true,
  "updated_range": "A2:C2"
}

Best Practices

  • Always ensure your connection ID is valid and has the necessary permissions to access the target Excel file.

  • Validate your data structure before writing to avoid errors.

  • Use dynamic node references (e.g., {in-0}) to automate data flow from other nodes.
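The second best practice, validating your data structure, can be sketched as a pre-flight check: every row should expose exactly the same columns before the Write to Sheet action runs. The column names here are illustrative:

```python
# Sketch: validate rows before handing them to "Write to Sheet".
# Column names are illustrative, not required by the node.

def validate_rows(rows, expected_columns):
    """Raise if any row is missing a column or carries an unexpected one."""
    expected = set(expected_columns)
    for i, row in enumerate(rows):
        if set(row) != expected:
            raise ValueError(
                f"row {i} columns {sorted(row)} != {sorted(expected)}"
            )
    return True

rows = [{"Name": "John Doe", "Email": "john.doe@example.com", "Score": 95}]
validate_rows(rows, ["Name", "Email", "Score"])
```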


Summary

The Excel node in StackAI is a powerful tool for automating spreadsheet operations in SharePoint. By leveraging its write capabilities, you can streamline data management, reduce manual work, and enhance collaboration across your team.

GDocs

Learn how to automate Google Docs creation in StackAI: create new documents, set file names, add content, and organize in folders with easy integration.

GDocs in StackAI allows you to automate the creation of Google Docs directly from your workflow. The GDocs Node integration streamlines document generation, making it easy to create, organize, and access Google Docs using dynamic data and AI-powered content.


How to use it?

To use the GDocs node, connect it in your StackAI workflow where you want to generate a new Google Doc. You can specify the document’s name, content (in Markdown), and optionally the folder where it should be stored in your Google Drive. The node will output the document’s unique ID and a direct link to open it.


Example of Usage

Suppose you want to generate a project report automatically after processing some data. You can use the GDocs node to create a new document titled "Project Report" with the body content generated by an LLM node, and save it in a specific Google Drive folder.


Available Actions

1. Create Google Doc

Description: Creates a new Google Doc with specified content and file name, optionally placing it in a chosen Google Drive folder.

Inputs:

  • File Name (Required):

    • Type: String

    • Description: Name of the Google Doc to create.

    • Example: "Project Report"

  • Content (Required):

    • Type: String (Markdown)

    • Description: Document body text in Markdown format.

    • Example: "## Executive Summary\nThis report covers..."

  • Folder ID (Optional):

    • Type: String

    • Description: Google Drive folder ID where the file will be created. Leave blank to create in My Drive.

    • Example: "1A2B3C4D5E6F"

Configurations: No additional configurations are required for this action.

Outputs:

  • File ID (Required):

    • Type: String

    • Description: Unique identifier of the created Google Doc.

    • Example: "1x2y3z4w5v6u"

  • File URL (Required):

    • Type: String

    • Description: URL to open the document in Google Drive.

    • Example: "https://docs.google.com/document/d/1x2y3z4w5v6u/edit"


Example Workflow Step

  • Set up an LLM node to generate a summary.

  • Connect the LLM output to the GDocs node’s Content input.

  • Set File Name to "Weekly Summary".

  • (Optional) Provide a Folder ID to organize the document.

  • The GDocs node will output a File ID and a File URL for immediate access.
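The steps above amount to assembling the node's inputs: the Markdown produced by the LLM step becomes the Content input, with Folder ID left out when saving to My Drive. A sketch (the summary text stands in for an upstream LLM output):

```python
# Sketch: assemble the "Create Google Doc" inputs. The summary string
# stands in for an upstream LLM output; Folder ID is optional.

def build_gdoc_inputs(file_name, markdown_content, folder_id=None):
    inputs = {"File Name": file_name, "Content": markdown_content}
    if folder_id:
        inputs["Folder ID"] = folder_id
    return inputs

summary = "## Executive Summary\nThis report covers..."
doc_inputs = build_gdoc_inputs("Weekly Summary", summary)
```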


Summary Table

| Input Name | Required | Type   | Description                       | Example                   |
| ---------- | -------- | ------ | --------------------------------- | ------------------------- |
| File Name  | Yes      | String | Name of the document              | "Project Report"          |
| Content    | Yes      | String | Document body in Markdown         | "## Executive Summary..." |
| Folder ID  | No       | String | Google Drive folder ID (optional) | "1A2B3C4D5E6F"            |

| Output Name | Required | Type   | Description                                 | Example                                                |
| ----------- | -------- | ------ | ------------------------------------------- | ------------------------------------------------------ |
| File ID     | Yes      | String | Unique identifier of the created Google Doc | "1x2y3z4w5v6u"                                         |
| File URL    | Yes      | String | URL to open the document in Google Drive    | "https://docs.google.com/document/d/1x2y3z4w5v6u/edit" |


Note: To use this node, ensure your StackAI workspace is connected to Google Drive with the necessary permissions. Use the Folder ID to organize documents as needed, or leave it blank to save in your main Drive.

Gmail

Comprehensive guide to using the Gmail node in StackAI workflows, including available actions, required inputs, configurations, and output examples.

The Gmail Node in StackAI enables you to automate sending and managing emails directly through your Gmail account. This integration streamlines communication tasks, allowing you to trigger email actions within your automated workflows.


How to use it?

To use the Gmail node, connect your Gmail account using a valid connection ID. Choose the desired action (such as sending an email), provide the required input fields, and configure any optional settings. The node will execute the action and return the relevant output, which can be used in downstream workflow steps.


Example of Usage

Suppose you want to automatically send a notification email when a new lead is added to your CRM. You can set up a workflow where the Gmail node is triggered with the lead’s details, and an email is sent to your sales team.


Available Actions

Below are the most commonly used actions for the Gmail node in StackAI:


1. Send Email

Description: Send an email from your connected Gmail account to one or more recipients.

Inputs:

  • to (Required): The recipient’s email address or a list of addresses. Example: "to": "recipient@example.com" or "to": ["user1@example.com", "user2@example.com"]

  • subject (Required): The subject line of the email. Example: "subject": "Welcome to Our Service"

  • body (Required): The main content of the email. Example: "body": "Thank you for signing up!"

  • cc (Optional): Email addresses to be copied on the email. Example: "cc": "manager@example.com"

  • bcc (Optional): Email addresses to be blind-copied on the email. Example: "bcc": "records@example.com"

  • attachments (Optional): List of file paths or file objects to attach. Example: "attachments": ["/path/to/file.pdf"]

Configurations:

  • connection_id (Required): The ID of your Gmail connection. Example: "connection_id": "5a56b86a-c7d3-4e6a-af3f-0969d00ff9f8"

Outputs:

  • message_id (Always returned): The unique ID of the sent email. Example: "message_id": "17c8b2e5e8b2c1a2"

  • status (Always returned): The status of the email send operation. Example: "status": "sent"


2. Search Emails

Description: Search for emails in your Gmail account using specific criteria.

Inputs:

  • query (Required): The search query string (Gmail search syntax). Example: "query": "from:boss@example.com is:unread"

  • max_results (Optional): Maximum number of emails to return. Example: "max_results": 10

Configurations:

  • connection_id (Required): The ID of your Gmail connection.

Outputs:

  • emails (Always returned): A list of email objects matching the search criteria. Example:

    "emails": [
      {
        "subject": "Meeting Reminder",
        "from": "[email protected]",
        "date": "2025-07-07T10:00:00Z",
        "snippet": "Don't forget our meeting at 2pm."
      }
    ]
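Gmail search queries are composed of space-separated operators (from:, is:, subject: are standard Gmail search operators). A small helper can build the query string from workflow values before passing it to the node; the field names here are illustrative:

```python
# Sketch: compose a Gmail search query from filters, using standard
# Gmail search operators (from:, is:, subject:).

def build_gmail_query(sender=None, unread=False, subject=None):
    parts = []
    if sender:
        parts.append(f"from:{sender}")
    if unread:
        parts.append("is:unread")
    if subject:
        parts.append(f'subject:"{subject}"')
    return " ".join(parts)

query = build_gmail_query(sender="boss@example.com", unread=True)
# query == "from:boss@example.com is:unread"
```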

3. (Optional) Additional Actions

Other actions may be available, such as managing labels or retrieving email details. For most workflow automations, "Send Email" and "Search Emails" are the primary actions used.


Best Practices

  • Always use a valid Gmail connection ID from your available connections.

  • Ensure all required fields are provided for each action.

  • Use dynamic references (e.g., from previous nodes) to personalize email content or search queries.

  • Attachments should be properly formatted and accessible by the workflow.


Summary Table

| Action        | Required Inputs   | Optional Inputs      | Required Configurations | Outputs            |
| ------------- | ----------------- | -------------------- | ----------------------- | ------------------ |
| Send Email    | to, subject, body | cc, bcc, attachments | connection_id           | message_id, status |
| Search Emails | query             | max_results          | connection_id           | emails             |


Use the Gmail node in StackAI to automate your email workflows, streamline communication, and enhance productivity with seamless Gmail integration.

GSheets

Learn how to automate writing data to Google Sheets using the GSheets node in StackAI. Step-by-step guide with input, configuration, and output details.

What is GSheets?

GSheets is a StackAI integration node that allows you to write data directly to Google Sheets. This node is ideal for automating data entry, updating records, or logging information in your spreadsheets as part of your workflow.


How to use it?

To use the GSheets node, you need to configure it with the required spreadsheet and sheet details, and provide the data you want to write. The node will then append or update the specified Google Sheet with your data and return the status of the operation.


Example of Usage

Suppose you want to log new user signups into a Google Sheet. You can connect the GSheets node in your StackAI workflow, configure it with your target spreadsheet and sheet, and pass the signup data as input.


Available Actions

1. Write to Google Sheet

Description: Appends or updates data in a specified Google Sheet.

Inputs

  • Spreadsheet (spreadsheet_id)

    • Required: Yes

    • Type: Select

    • Description: Select the Google Sheet to write to.

  • Sheet (sheet_name)

    • Required: Yes

    • Type: String

    • Description: Specify the sheet/tab within the spreadsheet.

  • Data (data)

    • Required: Yes

    • Type: Textarea

    • Description: The data to write. Can be a JSON object (preferred for structured data) or plain text.

Example Input:

{
  "spreadsheet_id": "1A2B3C4D5E6F7G8H9I0J",
  "sheet_name": "Signups",
  "data": {
    "Name": "Jane Doe",
    "Email": "[email protected]",
    "Signup Date": "2025-07-07"
  }
}

Configurations

  • No additional configurations are required beyond the inputs above.

Outputs

  • Status (status)

    • Required: Yes

    • Type: String

    • Description: Indicates if the operation was successful (e.g., "success" or "failure").

  • Updated Range (updated_range)

    • Required: Yes

    • Type: String

    • Description: The cell range in the sheet that was updated (e.g., "Signups!A2:C2").

  • Message (message)

    • Required: Yes

    • Type: String

    • Description: A message describing the outcome of the operation.

Example Output:

{
  "status": "success",
  "updated_range": "Signups!A2:C2",
  "message": "Data written successfully."
}

Summary Table

| Input Name     | Required | Type   | Description                        |
| -------------- | -------- | ------ | ---------------------------------- |
| spreadsheet_id | Yes      | Select | Google Sheet to write to           |
| sheet_name     | Yes      | String | Sheet/tab within the spreadsheet   |
| data           | Yes      | Text   | Data to write (JSON or plain text) |

| Output Name   | Required | Type   | Description                               |
| ------------- | -------- | ------ | ----------------------------------------- |
| status        | Yes      | String | Status of the operation                   |
| updated_range | Yes      | String | The updated cell range in the spreadsheet |
| message       | Yes      | String | Message about the operation               |


Best Practices

  • Always use structured JSON for the data input for best results.

  • Ensure you have the correct permissions to access and write to the selected Google Sheet.

  • Use clear sheet and spreadsheet names to avoid confusion in multi-sheet documents.
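The first best practice above, preferring structured JSON over plain text, can be enforced with a small check before the node runs. A sketch (the key names are illustrative):

```python
import json

# Sketch: ensure the GSheets "data" input is structured JSON
# (a flat object of column -> value), per the best practice above.

def normalize_data(data):
    """Accept a dict or a JSON string; return a flat dict or raise."""
    if isinstance(data, str):
        data = json.loads(data)
    if not isinstance(data, dict):
        raise TypeError("data must be a JSON object of column -> value")
    return data

row = normalize_data('{"Name": "Jane Doe", "Email": "jane.doe@example.com"}')
```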


Automate your data workflows with the GSheets node in StackAI for seamless Google Sheets integration.

Make

Learn how to use the Make (Integromat) node in StackAI to trigger webhooks and automate workflows with detailed input, configuration, and output examples.

What is Make?

Make (formerly Integromat) is a powerful automation platform that enables you to connect various apps and automate workflows. In StackAI, the Make node allows you to trigger webhooks, sending data to your Make scenarios and integrating StackAI with thousands of other services.


How to use it?

The Make node in StackAI is primarily used to trigger a webhook in your Make scenario. This is useful for sending data from StackAI to Make, where you can further process, route, or automate actions across your connected apps.


Example of Usage

Suppose you want to send user data from StackAI to a Make scenario for further processing. You would use the Make node to trigger a webhook with the relevant data payload.


Available Actions

1. Trigger Webhook

Description: Send a POST request to a Make webhook URL, optionally including a JSON payload. This action is commonly used to start a Make scenario from StackAI.

Inputs

  • Webhook URL (Required)

    • The URL of your Make webhook.

    • Example: https://hook.us2.make.com/qee6xwvm63a8jpctrgdnxgaj6hdd3v7q

  • Body (Optional)

    • The JSON data payload to send with the webhook request.

    • Example:

      {
        "user_id": "12345",
        "event": "signup",
        "email": "[email protected]"
      }
  • Description (Optional)

    • A human-readable description of the action being triggered.

    • Example: "Send new user signup data to Make"

Configurations

  • Webhook URL (Required)

    • This is the only required configuration. It must be a valid Make webhook URL.

  • Description (Optional)

    • Used for documentation or clarity within your workflow.

Outputs

  • Status (Required)

    • Indicates if the webhook call was successful or failed.

    • Example: "success"

  • Message (Required)

    • A descriptive message about the result of the webhook call.

    • Example: "Webhook triggered successfully"

  • Status Code (Required)

    • The HTTP status code returned by the webhook endpoint.

    • Example: 200

  • Response (Optional)

    • The JSON response data returned by the webhook, if any.

    • Example:

      {
        "result": "ok"
      }

Step-by-Step Setup

  1. Obtain your Make webhook URL from your Make scenario.

  2. Add the Make node to your StackAI workflow.

  3. Enter the webhook URL in the configuration.

  4. (Optional) Add a JSON body to send data.

  5. (Optional) Add a description for clarity.

  6. Connect the node to trigger the webhook as part of your workflow.
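Under the hood, triggering the webhook is a plain HTTP POST with a JSON body. The equivalent call with Python's standard library looks like this; the URL and payload come from the example on this page, and the request is only sent if you uncomment the final line against a live webhook:

```python
import json
import urllib.request

# Sketch: what "Trigger Webhook" does — a POST with a JSON body to the
# Make webhook URL. The URL below is this page's example; use your own.

WEBHOOK_URL = "https://hook.us2.make.com/qee6xwvm63a8jpctrgdnxgaj6hdd3v7q"

def build_request(url, body):
    """Build a POST request with a JSON payload."""
    data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(WEBHOOK_URL, {"user_id": "12345", "event": "signup"})
# urllib.request.urlopen(req)  # uncomment to actually fire the webhook
```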


Worked Example

  • Scenario: Send new user signup data to Make for further automation.

  • Webhook URL: https://hook.us2.make.com/qee6xwvm63a8jpctrgdnxgaj6hdd3v7q

  • Body:

    {
      "user_id": "12345",
      "event": "signup",
      "email": "[email protected]"
    }
  • Description: "Send new user signup data to Make"

Expected Output:

  • Status: "success"

  • Message: "Webhook triggered successfully"

  • Status Code: 200

  • Response: (any data returned by your Make scenario)


Use the Make node in StackAI to seamlessly connect your AI workflows with the vast automation capabilities of Make, enabling powerful integrations and streamlined processes.

Oracle

Learn how to use the Oracle node in StackAI to query Oracle databases using natural language or SQL, with clear input and output examples.


What is Oracle?

The Oracle node in StackAI allows you to query Oracle databases directly from your workflow. You can use plain English or SQL queries to retrieve data, making it easy to access and analyze your database information without writing complex code.

How to use it?

To use the Oracle node, you need to provide two required inputs:

  1. Schema: Describe your database schema, including tables, columns, and data types. This helps the node understand the structure of your database.

  2. Query: Enter your question or request in plain English or as a SQL statement. The node will interpret your input, generate the appropriate SQL if needed, execute it, and return the results.

Example of Usage

Suppose you have a table called Employees with columns Name (TEXT), Department (TEXT), and Salary (REAL).

  • Schema (Required):

    TABLE Employees (
      Name TEXT,
      Department TEXT,
      Salary REAL
    );
  • Query (Required):

    What is the average salary in the Sales department?

Inputs

| Name   | Description                                                          | Required | Example                                                    |
| ------ | -------------------------------------------------------------------- | -------- | ---------------------------------------------------------- |
| Schema | Database Schema (tables, columns, types, etc.)                       | Yes      | TABLE Employees (Name TEXT, Department TEXT, Salary REAL); |
| Query  | Your query, in plain English or SQL, to execute against the database | Yes      | Show me all employees in the Engineering department        |

Configurations

  • No additional configurations are required for the Oracle node. All you need is the schema and the query.

Outputs

| Name    | Description                     | Required | Example                                                       |
| ------- | ------------------------------- | -------- | ------------------------------------------------------------- |
| Query   | The SQL query that was executed | Yes      | SELECT AVG(Salary) FROM Employees WHERE Department = 'Sales'; |
| Results | The results of the query        | Yes      | [{"AVG(Salary)": 85000}]                                      |

Available Actions

  • Query an Oracle Database: Execute a query (in plain English or SQL) against your Oracle database and retrieve the results.

Summary Table

| Action Name              | Description                         | Required Inputs | Outputs        |
| ------------------------ | ----------------------------------- | --------------- | -------------- |
| Query an Oracle Database | Run queries on your Oracle database | Schema, Query   | Query, Results |

How to use it in StackAI

  1. Add the Oracle node to your workflow.

  2. Fill in the Schema with your database structure.

  3. Enter your query in the Query field.

  4. Connect the node to downstream nodes to use the results.

Example Output

  • Query: SELECT AVG(Salary) FROM Employees WHERE Department = 'Sales';

  • Results: [{"AVG(Salary)": 85000}]

This makes it easy to integrate Oracle database queries into your StackAI workflows for reporting, analytics, and automation.

Reducto

Reducto is an integration that provides advanced document and data processing capabilities. It is often used for parsing, extracting, editing, and splitting documents or data files, as well as managing asynchronous jobs related to these operations. Reducto is especially useful for workflows that require automated document understanding, transformation, or extraction.


Available Actions

| Action Name        | Description (Summary)                                                            |
| ------------------ | -------------------------------------------------------------------------------- |
| Parse              | Parses a document or file to extract structured data or text.                    |
| Async Parse        | Submits a document for parsing and returns a job ID for later retrieval.         |
| Extract            | Extracts specific information or sections from a document.                       |
| Async Extract      | Submits an extraction job and returns a job ID for asynchronous processing.      |
| Edit               | Applies edits or transformations to a document.                                  |
| Async Edit         | Submits an edit job for asynchronous processing.                                 |
| Split              | Splits a document into multiple parts based on rules or structure.               |
| Retrieve Parse Job | Retrieves the result of a previously submitted async parse job using its job ID. |
| Cancel Job         | Cancels an ongoing asynchronous job by job ID.                                   |
| Configure Webhook  | Sets up a webhook to receive notifications about job status or completion.       |


How to Use the Reducto Node

  1. Choose the Action:

    • Select the Reducto node in your workflow and pick the action that matches your use case (e.g., Parse, Extract, Edit, Split).

  2. Provide Inputs:

    • For most actions, you will need to provide a file, document, or text input. Some actions may require additional parameters (such as extraction rules, edit instructions, or split criteria).

  3. Asynchronous Jobs:

    • For async actions, you will receive a job ID. Use the "Retrieve Parse Job" or similar action to check the status or get the result when processing is complete.

  4. Webhooks (Optional):

    • If you want to be notified when a job is done, use the "Configure Webhook" action to set up a callback URL.

  5. Connect to Output or Downstream Nodes:

    • The results from Reducto actions can be sent to Output nodes, LLM nodes, or further processing nodes in your workflow.
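The async flow in steps 3 and 4 is a generic submit-then-poll pattern. A sketch of that pattern follows; the fetch_status callable stands in for the "Retrieve Parse Job" action and is not a real Reducto API:

```python
import time

# Generic submit-then-poll sketch for async Reducto jobs. fetch_status
# stands in for the "Retrieve Parse Job" action; it is NOT a real API.

def wait_for_job(job_id, fetch_status, poll_seconds=0, max_polls=10):
    """Poll until the job reports 'completed' or 'failed'."""
    for _ in range(max_polls):
        job = fetch_status(job_id)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job_id} still running after {max_polls} polls")

# Simulated status source for illustration: done on the third poll.
responses = iter([
    {"status": "pending"},
    {"status": "pending"},
    {"status": "completed", "result": {"pages": 3}},
])
result = wait_for_job("job-123", lambda _id: next(responses))
```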


Example Use Cases

  • Document Parsing: Automatically extract tables, text, or metadata from uploaded PDFs or Word documents.

  • Data Extraction: Pull out specific fields (like names, dates, or totals) from invoices or forms.

  • Document Splitting: Break up large documents into smaller, manageable sections for further analysis.

  • Automated Editing: Redact sensitive information or reformat documents before sharing.

PostgreSQL

The PostgreSQL Node allows you to query a PostgreSQL database directly from your workflow. It is designed to take a user’s question (in plain English or SQL), convert it into a SQL query if needed, execute it against your connected PostgreSQL database, and return the results.


Required Inputs

  1. Query (string, required)

    • This is the main input. You can enter your question in plain English (e.g., "Show me the top 10 customers by revenue") or provide a raw SQL query.

    • The node will interpret the plain English query and generate the appropriate SQL, or execute your SQL directly.

  2. Schema (array of strings, optional but recommended)

    • This is where you can provide the database schema (tables, columns, types, etc.) to help the node understand your database structure.

    • Example:

      TABLE Customers (Name TEXT, Email TEXT, Revenue REAL);
      TABLE Orders (OrderID INT, CustomerID INT, Amount REAL);
    • Providing the schema is especially helpful if you want the node to convert plain English to SQL accurately.

Output

  • Query: The actual SQL query that was executed.

  • Results: The results of the query, returned as an array of objects (rows).


Example Usage

  • Plain English Query:
    Input: "What is the total revenue for the year 2024?"
    Output: The node will generate the SQL, run it, and return the result.

  • SQL Query:
    Input: SELECT SUM(revenue) FROM sales WHERE year = 2024;
    Output: The node will execute this SQL and return the result.


How to Connect

  • If you have a PostgreSQL connection ID, you can add it to the node’s configuration to connect to your specific database.

  • You can connect the output of this node to downstream nodes (like an LLM for analysis, or an Output node for display).


Summary Table

| Input Name | Type   | Required | Description                                    |
| ---------- | ------ | -------- | ---------------------------------------------- |
| query      | string | Yes      | The question or SQL to execute                 |
| sql_schema | array  | No       | Database schema (tables, columns, types, etc.) |

| Output Name | Type   | Description                     |
| ----------- | ------ | ------------------------------- |
| query       | string | The SQL query that was executed |
| results     | array  | The results of the query        |


If you want to use this node, just provide your question or SQL, and (optionally) the schema. The node will handle the rest—querying your PostgreSQL database and returning the results for use elsewhere in your workflow!

Get Started with Workflow Builder

A guide to the Workflow View

Our no-code approach is anchored in a visual workflow builder that prioritizes ease of use. This is achieved through an intuitive drag-and-drop interface, built-in chatbot assistance, and a level of abstraction that caters to both technical and non-technical teams. Search for nodes in the menu on the left, drop them onto the canvas, select parts of your workflow, and even copy/paste nodes across projects!

The Workflow Builder View

Technical teams can extend capabilities even further by using a custom “Code” node (e.g. Python), a custom “API” node, or by building their own tools to orchestrate LLM-driven actions.

Our users can integrate with a broad ecosystem of applications, enabling support for the most common use cases across departments.

| Department / Use Case         | Integrations                                                                                 |
| ----------------------------- | -------------------------------------------------------------------------------------------- |
| Data & Analytics              | Power BI, BigQuery, Databricks, Snowflake, Fred, Excel (Sharepoint), GSheets, Typeform, etc. |
| Engineering & Dev Tools       | Github, Regex, SerpAPI, Weaviate, etc.                                                       |
| AI & Machine Learning         | E2B, Pinecone, Wolfram Alpha, HyperBrowser, Reducto, VLM                                     |
| CRM & Sales                   | Salesforce, HubSpot, LinkedIn, PitchBook, Yahoo Finance                                      |
| Marketing                     | HubSpot, LinkedIn, Gmail, Outlook, YouTube                                                   |
| Project & Task Management     | Asana, Clickup, Jira, Notion, Make, Coda, Miro                                               |
| Collaboration & Communication | Slack, Loom, Gmail, Outlook, GDocs, Knowledge Base                                           |
| ERP & Business Operations     | Oracle, NetSuite, Workday, Veeva                                                             |
| Storage & File Systems        | Google Drive, Dropbox, OneDrive, SharePoint, SharePoint (NTLM), Azure Blob Storage, AWS S3   |
| Finance & Reporting           | Excel, Airtable, Power BI, Yahoo Finance                                                     |
| Forms & Surveys               | Typeform, GSheets                                                                            |
| HR & People Ops               | Workday, Outlook, LinkedIn                                                                   |
| Automation & Integration      | Hightouch, Make, Slack                                                                       |
| Web & Social Monitoring       | YouTube, Firecrawl, Exa AI                                                                   |

Our users can connect to a wide range of AI models:

| AI Model     | Integrations                                                                                                                                                                          |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| LLMs         | OpenAI, Anthropic, Google, Meta, xAI, Mistral, Perplexity, TogetherAI, Cerebras, Replicate, Groq, Azure, Bedrock, any local LLM via end-point, StackAI’s fine-tuned text-to-SQL model |
| Voice models | Deepgram, Elevenlabs                                                                                                                                                                  |
| Image models | Stable Diffusion, Flux                                                                                                                                                                |


StackAI also supports connections with MCP servers, allowing your workflows to use not only the integrations developed by StackAI’s team but also tools served by third parties over the MCP protocol.

Making LLMs take autonomous actions is very easy. Users can add ‘tools’ (e.g., function calling) directly in the LLM node by selecting the one they want from a long list of possible actions. More advanced users can also add their own custom tools.


Ask Workflow

Users can interact with the assistant directly within the workflow builder to ask questions, get suggestions for improving their project, and easily access documentation for specific features.


Trigger Node

The Trigger Node allows you to automatically run your workflow based on certain triggers. The currently available triggers are:

  • GitHub

  • Gmail

  • Outlook

  • StackAI

  • Stripe

  • Time

  • Typeform

  • Zendesk

A Trigger Node will start your workflow when a certain event occurs, such as an email being received in your Gmail account or a pull request being created on GitHub. You can also use a Trigger Node to start your project at the same time every day, or when another StackAI project completes a run.

Email triggers (Gmail or Outlook) monitor your inbox and activate your workflow whenever a new email arrives. They extract key information from the email, such as the sender, subject, body content, thread ID, and any attachments, and make this data available for downstream nodes to process.

Some Trigger Nodes like Gmail, Typeform, or Zendesk may ask you to establish a new connection if you are using them for the first time. This allows them to access your personal account on those platforms.

Outputs

When a trigger occurs, the Trigger Node may provide outputs, depending on the trigger. To see these outputs, click on the Trigger Node, then select the trigger you would like to add. Hover over the trigger to see its output fields.

  • Sender (string): The email address of the sender of the email

  • Subject (string): The subject line of the email

  • Body (string): The full text content of the email body

  • Thread ID (string): The thread ID of the email, which can be used to reply to the email in a SendEmail action

  • Attachments (files): Email attachments stored in the data pool for use by other nodes
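Taken together, the output fields above amount to a small structured payload for downstream nodes. A minimal sketch in Python (the payload values and the routing rule are invented for illustration):

```python
# Hypothetical payload shaped like the Email Trigger outputs listed above.
email = {
    "sender": "customer@example.com",
    "subject": "Invoice #1042 overdue",
    "body": "Hello, our invoice from March is still unpaid...",
    "thread_id": "thread-8f3a",   # usable by a SendEmail action to reply in-thread
    "attachments": ["invoice_1042.pdf"],
}

def categorize(email: dict) -> str:
    """Toy downstream rule: route billing-related emails to finance."""
    subject = email["subject"].lower()
    if "invoice" in subject or "payment" in subject:
        return "finance"
    return "general"

print(categorize(email))  # -> finance
```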

How to set up the Email Trigger Node

  1. Add an Email Trigger node to your workflow

  2. Connect your Gmail or Outlook account:

    • Click "New Connection" if no connection exists

    • Select an existing connection from the dropdown if available

  3. Configure any test values (these are only used in the workflow builder)

  4. Connect the Email Trigger node to downstream nodes in your workflow

  5. Publish your workflow to activate the trigger

Important Notes

  • The trigger will not work until you publish the workflow

  • You must configure a Gmail or Outlook connection before the trigger can be used

  • The Email Trigger node requires permission to access your Gmail or Outlook inbox

  • Test values are only used during workflow design and testing; they don't affect the actual trigger configuration

Using Email Data in Your Workflow

You can reference the email data in downstream nodes by:

  1. Selecting the Email Trigger node as an input source

  2. Using specific email fields (Sender, Subject, Body, Thread ID, Attachments) in your workflow logic

  3. Processing email content with AI nodes or other actions

Common Use Cases

Email triggers:

  • Auto-reply to specific types of emails

  • Extract and process information from structured emails

  • Create tasks or tickets based on email content

  • Filter and categorize incoming emails by sender or subject

  • Process email attachments

Form triggers (e.g., Typeform):

  • Automatically process and route new leads from contact forms

  • Create support tickets from feedback or help request forms

  • Process event signups and send confirmation emails

  • Analyze survey responses and generate insights

  • Add new contacts to your CRM system automatically

  • Handle product orders or service requests

  • Collect and categorize customer feedback

  • Review and route job applications or membership requests

  • Store form responses in databases or spreadsheets

Time triggers:

  • Daily Reports: Generate and send daily, weekly, or monthly reports

  • Data Backups: Automatically backup databases or files at regular intervals

  • System Maintenance: Run cleanup tasks, cache clearing, or system health checks

  • Content Publishing: Schedule blog posts, social media updates, or newsletters

  • Monitoring and Alerts: Check system status and send alerts if issues are detected

Troubleshooting

  • Ensure your workflow is published for the trigger to be active

  • Verify that your connection has the necessary permissions (if applicable)

  • Confirm that webhooks are properly configured (usually handled automatically)

  • Monitor workflow execution logs for any connection or processing errors

  • Test with sample data first before relying on live form submissions

  • Ensure your account has webhook capabilities (if applicable)

Image Input Node

The Image Input node allows you to analyze and process images using advanced AI vision models. It can describe image content, extract information, answer questions about images, and perform various computer vision tasks by processing images from uploaded files.

To use the image node, upload a file or multiple files and connect the node to your input.

OCR

OCR is OFF by default. Turn it ON to transform the image to text before it is passed to the model. A model of your choice performs the transformation, guided by a prompt you provide.

Available Models

Select the AI vision model to use for image analysis

  • gpt-4o: Fastest option

  • gpt-4.1: Balanced option offering good performance with faster processing

  • flux-kontext-pro: Advanced model for detailed image understanding and complex analysis

OCR prompt: Describe what you want the AI to do with the image

  • Be specific about what information you need extracted

  • Examples: "Describe the content of this image in detail", "Count the number of people in this photo", "What text is visible in this image?"

Outputs

The Image Input node provides processed information based on your prompt and the selected model's analysis of the image.

Common Use Cases

  • Content Moderation: Automatically detect inappropriate or unsafe content in images

  • Product Cataloging: Extract product details, descriptions, and features from product photos

  • Document Processing: Extract text and data from scanned documents, receipts, or forms

  • Quality Control: Analyze product images for defects or compliance issues

  • Social Media Management: Generate captions and descriptions for social media posts

  • Accessibility: Create alt text descriptions for web images

  • Inventory Management: Count items or identify products in warehouse photos

  • Medical Imaging: Analyze medical images for preliminary screening (with appropriate oversight)

  • Real Estate: Generate property descriptions from listing photos

  • Education: Create study materials by analyzing diagrams, charts, or textbook images

Prompt Examples

  • General Description: "Describe everything you see in this image in detail"

  • Text Extraction: "Extract all visible text from this image and format it as plain text"

  • Object Counting: "Count how many [specific objects] are visible in this image"

  • Color Analysis: "What are the dominant colors in this image?"

  • Scene Understanding: "What is the setting or location shown in this image?"

  • Safety Assessment: "Identify any potential safety hazards visible in this workplace image"

  • Product Information: "List all the product features and specifications visible on this packaging"

Best Practices

  • Image Quality: Use high-resolution, clear images for better analysis results

  • Specific Prompts: Be precise about what information you need from the image

  • Model Selection: Choose the appropriate model based on complexity requirements

  • URL Accessibility: Ensure image URLs are publicly accessible and don't require authentication

  • File Formats: Use standard image formats (JPG, PNG) for best compatibility

  • Privacy Considerations: Be mindful of privacy when processing images containing personal information
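To follow the file-format advice programmatically, a format can be verified from a file's leading bytes instead of trusting its extension. A small sketch (the helper name is ours; only the PNG and JPEG signatures are checked):

```python
def sniff_image_format(data: bytes) -> str:
    """Identify common image formats by their magic bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):   # PNG signature
        return "png"
    if data.startswith(b"\xff\xd8\xff"):        # JPEG SOI marker
        return "jpeg"
    return "unknown"

print(sniff_image_format(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # -> png
```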

Troubleshooting

  • Image Not Loading: Verify the image URL is correct and publicly accessible

  • Poor Analysis Results: Try using a more detailed or specific prompt

  • Model Errors: Switch to a different model if you encounter processing issues

  • Slow Processing: Consider using gpt-4o, the fastest of the available models, for simple tasks

  • Format Issues: Ensure your image is in a supported format and not corrupted

Creating a Knowledge Base

The Knowledge Base Dashboard

It's easy to create a Knowledge Base on the fly with the KB Node, but the Knowledge Base Dashboard will be your centralized space to create, manage, and share knowledge bases. It enables you to upload files, import data from external connections, customize upload settings, and manage access permissions. This guide will walk you through the main features.


Your First Knowledge Base

The Knowledge Base Dashboard allows you to create and manage your first knowledge base seamlessly. After creating a knowledge base, you can:

  1. Add Files: Upload files directly by dragging and dropping or selecting files manually.

  2. Organize Content: Group related files for easy navigation.

  3. Edit Descriptions: Add or modify descriptions to reflect the purpose of the knowledge base.


Upload Files

The most straightforward way to create a knowledge base is by uploading files.

Example File Types:

  • Word Documents: .doc, .docx

  • PDF Files: .pdf

  • PowerPoint Presentations: .ppt, .pptx

  • Excel Spreadsheets: .xls, .xlsx


Import from Connection

Easily import files from external connections such as Dropbox, Google Drive, Notion, and SharePoint. Follow these steps:

  1. Navigate to Import from Connection in the dashboard.

  2. Select the external connection (e.g., Dropbox).

  3. Browse and pick the files or folders you want to import.

  4. Click Import selected files to transfer them to your knowledge base.

Be aware that the more files you import, the longer it will take to process them.


Auto-Sync Files

If you are importing from a connection, you may want your files to automatically sync with your connection so that if a file is updated, it is also updated in your StackAI Knowledge Base.

To do this, go to the Knowledge Base Dashboard and open the knowledge base you want to sync, then toggle ON auto-sync. Here you can also re-sync the files by clicking "Sync Files." This is useful if you have added a new file and would like it reflected in your StackAI KB immediately.


Advanced: Upload Settings

Customize how files are processed and indexed in your knowledge base using Upload Settings:

  1. Chunking Algorithm: Choose how files are broken down for indexing (e.g., sentence-based chunking, which splits files into sentence-level chunks for more granular retrieval).

  2. Chunk Length: Define the maximum size of each chunk in characters.

  3. Chunk Overlap: Specify how much content should overlap between chunks. Some overlap can improve retrieval, as it avoids fragmentation.

  4. OCR (Optical Character Recognition): Enable to extract text from images.

  5. Advanced Data Extraction: Activate for enhanced processing of complex files.

  6. Embedding Model: Select the AI model for embedding and indexing files.

Example:

  • Chunking Algorithm: Sentence-based

  • Chunk Length: 2,500 characters

  • Embedding Model: text-embedding-3-large
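To make Chunk Length and Chunk Overlap concrete, here is a toy character-based chunker; StackAI's sentence-based algorithm is more sophisticated, but the overlap mechanics are the same idea:

```python
def chunk_text(text: str, chunk_len: int, overlap: int) -> list[str]:
    """Split text into fixed-size character chunks, repeating `overlap`
    trailing characters at the start of each following chunk."""
    if overlap >= chunk_len:
        raise ValueError("overlap must be smaller than chunk length")
    step = chunk_len - overlap
    return [text[i:i + chunk_len] for i in range(0, len(text), step)]

print(chunk_text("abcdefghij", chunk_len=4, overlap=2))
# -> ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Note how each chunk repeats the last two characters of its predecessor, which is what prevents a sentence straddling a chunk boundary from being lost at retrieval time.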


Successfully Synced Files

When uploading files to a knowledge base, the dashboard displays a status bar indicating progress. Once completed, a confirmation message appears: "Successfully synced [X] files."


Role-Based Access Control (RBAC)

Control who can view, edit, or manage your knowledge bases with RBAC. You can assign roles and permissions to individuals or groups:

  1. Admin: Full control over the knowledge base, including editing and sharing.

  2. Viewer: Read-only access.

  3. Groups: Share with predefined groups for streamlined collaboration.

Example:

  • Admin: Kai Henthorn-Iwane ([email protected])

  • Viewer: Bernard Aceituno ([email protected])

  • Group Viewer: group1


Conclusion

The Knowledge Base Dashboard simplifies the process of creating and managing knowledge bases, whether for personal use or organizational collaboration. From file uploads and import options to advanced settings and access controls, this tool provides all the features needed to maintain an efficient and accessible repository of information.

For further assistance, refer to the Help & More section in the sidebar or contact your administrator.

Databricks

Learn how to use the Databricks node in StackAI to run analytics and machine learning queries. See required inputs, configurations, and output examples.

What is Databricks?

The Databricks node in StackAI allows you to query a Databricks workspace for data analytics and machine learning. It translates plain English or SQL queries into actionable database operations, returning both the executed SQL and the results.


How to make a Databricks Connection?

  • Log in to Databricks. Your Workspace URL is the domain of your workspace; in this example, dbc-003abf48-15.cloud.databricks.com is the Workspace URL.

  • To get your Personal Access Token:

    1. Click on your Profile in the upper right

    2. Then click on Settings

    3. Under Users, click on Developer

    4. In the Developer menu, click Manage (next to Access tokens) to create a token


How to use it?

  1. Add the Databricks node to your StackAI workflow.

  2. Provide the required database schema and your query (in plain English or SQL).

  3. Connect the node to downstream nodes to process or display the results.


Example of Usage

Suppose you want to find the total revenue for 2024 from your sales table:

  • Schema Example:

  • Query Example:
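The inline examples did not survive extraction, so here is a hedged sketch of what the two inputs could look like for the 2024-revenue question (the table and column names are invented):

```python
# Hypothetical schema entries, one table per line, as the node's
# Schema input (an array of strings) expects.
schema = [
    "sales(id INT, amount DOUBLE, sale_date DATE)",
]

# The query can be written in plain English...
query = "What was the total revenue in 2024?"

# ...and the node returns the SQL it actually ran, e.g. something like:
expected_sql = "SELECT SUM(amount) FROM sales WHERE YEAR(sale_date) = 2024"
```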


Available Actions

1. Query a Databricks Workspace

Description: Run analytics or machine learning queries on your Databricks database using natural language or SQL.

Inputs

  • Schema (sql_schema)

    • Type: Array of strings (textarea)

    • Required: Yes

    • Description: The database schema, including tables, columns, and types.

    • Example:

  • Query (query)

    • Type: String

    • Required: Yes

    • Description: The question or command you want to run, in plain English or SQL.

    • Example:

Configurations

  • No additional configurations are required for this action.

Outputs

  • Query (sql_query)

    • Type: String

    • Required: Yes

    • Description: The SQL query that was executed.

    • Example:

  • Results (results)

    • Type: Array of objects

    • Required: Yes

    • Description: The results returned from the Databricks query.

    • Example:



Best Practices

  • Always provide a clear and complete schema for accurate query translation.

  • Use natural language for ease, or SQL for precision.

  • Review the returned SQL to ensure it matches your intent.


Use the Databricks node in StackAI to seamlessly integrate advanced analytics and machine learning queries into your automated workflows.

E2B

Learn how to use the E2B node in StackAI to execute code in a secure, sandboxed environment. Discover available actions, input/output details, and best practices.

What is E2B?

The E2B node in StackAI allows you to execute code in a secure, sandboxed environment. This is ideal for running scripts, automating tasks, or processing data without exposing your main system to risk. E2B is designed for flexibility, supporting a wide range of coding and automation scenarios.


How to Use the E2B Node

To use the E2B node, add it to your StackAI workflow and configure it to execute your desired code. You can pass inputs from other nodes, specify code to run, and retrieve outputs for further processing. E2B is especially useful for custom logic, data transformation, or integrating with APIs not natively supported by StackAI.


Example of Usage

Suppose you want to process user input, run a Python script, and return the result:

  1. Add an Input node to collect user data.

  2. Connect the Input node to the E2B node.

  3. In the E2B node, specify the code you want to execute, referencing the input as needed.

  4. Connect the E2B node to an Output node to display the result.


Available Actions for E2B

Below are the most commonly used actions for the E2B node:

1. Run Code (run_code_e2b)

Description: Executes custom code in a sandboxed environment and returns the output.


Example

Input:

Output:
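Conceptually, Run Code takes source code in and hands back the captured output plus any error. The sketch below imitates that contract locally with Python's exec; the real E2B sandbox is an isolated remote environment, and the field names here are illustrative:

```python
import contextlib
import io
import traceback

def run_code(code: str) -> dict:
    """Toy stand-in for a sandboxed run: capture stdout and any error."""
    out, err = io.StringIO(), ""
    try:
        with contextlib.redirect_stdout(out):
            exec(code, {})  # NOTE: not actually sandboxed; illustration only
    except Exception:
        err = traceback.format_exc()
    return {"stdout": out.getvalue(), "error": err}

print(run_code("print(sum(range(5)))"))  # -> {'stdout': '10\n', 'error': ''}
```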


Best Practices

  • Always validate your code before running to avoid errors.

  • Use input variables to make your code reusable and dynamic.

  • Check the output and error fields to handle execution results gracefully.

  • For sensitive or resource-intensive tasks, ensure your code is optimized and secure.


Summary

The E2B node in StackAI is a powerful tool for executing custom code securely within your workflows. By leveraging its flexible input and output options, you can automate complex tasks, process data, and extend StackAI’s capabilities to fit your unique needs.

Hightouch

Comprehensive guide to the Hightouch node in StackAI, including top actions, input and output details, and practical usage examples.

What is Hightouch?

Hightouch is a powerful integration node in StackAI that enables seamless data synchronization between your data warehouse and various business applications. With Hightouch, you can automate workflows, trigger actions, and keep your business tools up-to-date with the latest data.

How to use it?

To use the Hightouch node in StackAI, add it to your workflow and select the desired action. Configure the required inputs and connection settings, then connect it to other nodes to automate data-driven processes. Hightouch supports a variety of actions, allowing you to query sources and destinations, and orchestrate data syncs.

Example of Usage

Suppose you want to list all available sources in your Hightouch account. You would add the Hightouch node, select the "List Source" action, provide your connection ID, and connect the output to a display or processing node.


Available Actions in Hightouch

Below are the most commonly used Hightouch actions in StackAI:


1. List Source

Description: Retrieves a list of all data sources connected to your Hightouch account.

Inputs:

  • None required.

Configurations:

  • connection_id (Required): The unique identifier for your Hightouch connection.

Outputs:

  • sources (Required): An array of source objects, each containing details such as source name, type, and status.

Example:


2. List Destination

Description: Fetches all destinations configured in your Hightouch account.

Inputs:

  • None required.

Configurations:

  • connection_id (Required): The unique identifier for your Hightouch connection.

Outputs:

  • destinations (Required): An array of destination objects, each with properties like destination name, type, and status.

Example:


How to Use These Actions in StackAI

  1. Add the Hightouch Node: Drag the Hightouch node into your workflow.

  2. Select an Action: Choose "List Source" or "List Destination" based on your needs.

  3. Configure the Node: Enter your Hightouch connection ID in the configuration.

  4. Connect to Other Nodes: Use the output in downstream nodes for further processing or display.


Example Workflow

  • Add Hightouch node → Select "List Source" → Set connection_id → Connect output to a Template or Output node to display the list of sources.
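Downstream of the node, the sources output behaves as an array of objects with fields such as name, type, and status. A small sketch of processing it (the sample data is invented):

```python
from collections import Counter

# Invented sample shaped like the `sources` output described above.
sources = [
    {"name": "warehouse", "type": "snowflake", "status": "active"},
    {"name": "events", "type": "bigquery", "status": "disabled"},
    {"name": "crm", "type": "postgres", "status": "active"},
]

# e.g. tally sources by status before passing them on
by_status = Counter(s["status"] for s in sources)
print(dict(by_status))  # -> {'active': 2, 'disabled': 1}
```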



Use the Hightouch node in StackAI to automate and streamline your data operations, ensuring your business tools always have the most up-to-date information.

MongoDB

Learn how to use the MongoDB node in StackAI to query your MongoDB database, including required inputs, configuration, and output examples.

The MongoDB Node in StackAI allows you to query a MongoDB database directly from your workflow. This node is ideal for retrieving data from your collections using flexible filters, making it easy to integrate database results into your AI-powered automations.

How to use it?

To use the MongoDB node, you need to provide the following required inputs and configurations:

  • Database Name (Required Configuration)

    • Description: The name of the MongoDB database you want to query.

    • Example: "customer_data"

  • Collection Name (Required Configuration)

    • Description: The name of the collection within the database to query.

    • Example: "orders"

  • Filter (Required Input)

    • Description: The MongoDB query filter in JSON format. This determines which documents are returned.

    • Example:

      • To find all orders for a specific user: {"user_id": "499484bb-fd54-4fb0-93e5-4ad98bdcdf94"}

      • To find users aged 18 or older: {"age": {"$gte": 18}}

      • To find orders with status "active" or "pending": {"status": {"$in": ["active", "pending"]}}

      • To find documents created after a certain date: {"created_at": {"$gte": "2024-01-01T00:00:00Z"}}
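To show exactly which documents those filters select, here is a tiny matcher implementing just the operators used above (equality, $gte, and $in); real MongoDB supports many more:

```python
def matches(doc: dict, flt: dict) -> bool:
    """Minimal filter evaluation: equality, $gte, and $in only."""
    for field, cond in flt.items():
        value = doc.get(field)
        if isinstance(cond, dict):
            if "$gte" in cond and not (value is not None and value >= cond["$gte"]):
                return False
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

docs = [{"status": "active", "age": 20}, {"status": "closed", "age": 17}]
print([d for d in docs if matches(d, {"status": {"$in": ["active", "pending"]}})])
# -> [{'status': 'active', 'age': 20}]
```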

Example of Usage

Suppose you want to retrieve all active orders from the "orders" collection in the "customer_data" database:

  • Database Name: "customer_data" (Required)

  • Collection Name: "orders" (Required)

  • Filter: {"status": "active"} (Required)

Expected Output

  • Results (Required Output)

    • Description: An array of objects, each representing a document from the collection that matches the filter.

    • Example:

Available Actions

  • Query MongoDB

    • Description: Retrieve documents from a specified database and collection using a JSON filter.

    • Required Inputs:

      • Database Name (string, required)

      • Collection Name (string, required)

      • Filter (string, required; JSON format)

    • Output:

      • Results (array of objects, required): The documents matching your query.

Summary

The MongoDB node in StackAI is a powerful tool for querying your MongoDB collections. By specifying the database, collection, and filter, you can retrieve exactly the data you need and use it in your automated workflows.

MySQL

Learn how to use the MySQL node in StackAI to run database queries, including required inputs, configurations, and output details with practical examples.

The MySQL Node in StackAI allows you to query a MySQL database directly from your workflow. It is designed to execute queries—either in plain English or SQL format—against your database and return structured results.

How to use it?

To use the MySQL node, you need to provide two required inputs:

  1. Schema (Required):

    • Description: The full database schema, including tables, columns, and data types.

    • Example:

  2. Query (Required):

    • Description: The question or command you want to run. You can write this in plain English or as a SQL statement.

    • Example:

      • "Show me all customers from Canada."

      • "SELECT * FROM Orders WHERE Amount > 1000;"

Configurations There are no additional configuration parameters required for this node. All you need is the schema and the query.

Outputs The MySQL node provides two outputs:

  1. Query (Required):

    • Description: The actual SQL query that was executed, even if you provided a plain English question.

  2. Results (Required):

    • Description: The results of the query, returned as an array of objects (rows).

Example of Usage

Suppose you want to find all orders above $500:

  • Schema:

  • Query: "Show me all orders where the amount is greater than 500."

Expected Output:

  • Query: SELECT * FROM Orders WHERE Amount > 500;

  • Results:
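The same behavior can be reproduced end-to-end with an in-memory database. SQLite is used here purely for illustration (the node itself targets MySQL; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER, Customer TEXT, Amount REAL)")
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?, ?)",
    [(1, "Ada", 1200.0), (2, "Bob", 300.0), (3, "Cy", 750.0)],
)

# The SQL the node would derive from
# "Show me all orders where the amount is greater than 500."
rows = conn.execute("SELECT * FROM Orders WHERE Amount > 500").fetchall()
print(rows)  # -> [(1, 'Ada', 1200.0), (3, 'Cy', 750.0)]
```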

Available Actions

  • database_query_mysql:

    • Description: Run a query (in plain English or SQL) against your MySQL database and get structured results.

Inputs for database_query_mysql:

  • sql_schema (Required): The database schema (tables, columns, types, etc.).

  • query (Required): The query in plain English or SQL.

Outputs for database_query_mysql:

  • query (Required): The SQL query that was executed.

  • results (Required): The results of the query as an array of objects.

Use the MySQL node in StackAI to seamlessly integrate database queries into your automated workflows, making data retrieval and analysis easy and efficient.

Pitchbook

The PitchBook Node allows you to access private market intelligence, company, and deal data through PitchBook’s search capabilities. It is designed to help you find information about companies, deals, investors, and more from the PitchBook database.


Available Actions

PitchBook Search

  • This is the main action available for the PitchBook node.

  • It lets you search the PitchBook database using a text query and returns relevant results.

Required Inputs

  • Query (string, required): The search term or phrase you want to look up in PitchBook. Example: "StackAI funding rounds"

  • Top K (integer, optional, default 10): The number of results to return (up to 100). Example: 5

Output

  • Query: The query you used.

  • Search Results: An array of results, where each result contains:

    • Title: The title of the PitchBook result.

    • Text: The text content or summary of the result.


Example Usage

Suppose you want to find recent deals involving "StackAI":

  • Set the Query to "StackAI deals".

  • Optionally set Top K to 5 if you only want the top 5 results.

The node will return a list of results, each with a title and a summary text, which you can then use in downstream nodes (like an LLM for summarization or an Output node for display).
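Since each result carries a Title and Text, a downstream Template or LLM node effectively receives a list like the following (sample values invented), which can be flattened into a digest:

```python
# Invented sample shaped like the PitchBook Search Results output.
results = [
    {"title": "StackAI raises Series A", "text": "Summary of the round..."},
    {"title": "StackAI deal profile", "text": "Deal terms overview..."},
]

# e.g. flatten into a bullet list for a Template node
digest = "\n".join(f"- {r['title']}: {r['text']}" for r in results)
print(digest)
```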


How to Connect and Use in Your Workflow

  1. Connect an Input Node: Pass a user’s search term or a dynamic query from another node to the PitchBook node’s query input.

  2. Configure Top K: Set how many results you want (optional).

  3. Use the Output: The results can be sent to an Output node, a Template node for formatting, or an LLM node for further analysis.



To use this node, provide a search query (and, optionally, the number of results), then connect the output to wherever you want to use the PitchBook data in your workflow.

MCP

Learn how to use the MCP node in StackAI to connect and interact with Model Context Protocol servers, including input, configuration, and output details.

To use the MCP Node, you configure it to call a specific tool on an MCP server and pass any required parameters. The node will then execute the tool and return the results, which can be used in downstream nodes in your workflow.

Connecting to public servers is easy! Choose the server you'd like to connect to, then create a new connection with the server's URL. You can then choose from a dropdown of available actions.


Example of Usage

Suppose you want to invoke a tool named "web_search" on your Exa AI MCP server and pass a search query. You would configure the MCP node as follows:

  • Tool Name: "web_search"

  • Parameters: { "query": "StackAI workflow automation platform" }

The node will return the search results in the output.


Call MCP Server

This is the primary action available for the MCP node.

Description

Invoke a tool hosted on an MCP server by specifying the tool name and any parameters required by that tool.

Inputs

  • Tool Name (tool_name)

    • Type: String

    • Required: No

    • Description: The name of the tool to invoke on the MCP server.

    • Example: "summarize_text"

  • Parameters (parameters)

    • Type: Object

    • Required: No

    • Description: The parameters to pass to the specified tool. The structure depends on the tool being called.

    • Example: { "text": "Your input text here" }

Configurations

  • No additional configurations are required for this action.

Outputs

  • Result (result)

    • Type: Object

    • Required: Yes

    • Description: The result returned from the tool invocation. The structure of this object depends on the tool you called.

    • Example: { "summary": "StackAI automates workflows efficiently." }
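Under the hood, Tool Name and Parameters map onto a tools/call request in the MCP protocol's JSON-RPC 2.0 format. A sketch of composing that payload (the tool name and arguments are illustrative):

```python
import json

def build_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Compose an MCP `tools/call` JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

payload = build_tools_call("web_search", {"query": "StackAI workflow automation"})
print(payload)
```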



Best Practices

  • Always check the documentation of the specific tool you are invoking for required parameters.

  • Use the output of the MCP node as input for downstream nodes to build powerful, automated workflows.

  • Advanced users can run their own MCP server locally and expose it to the web using a tool like ngrok. Entering your ngrok URL when creating a connection will let you connect to the local server in StackAI!

SerpAPI

SerpAPI node enables real-time web search, news, and job search capabilities within StackAI workflows, providing structured and actionable search results.

What is SerpAPI?

SerpAPI is a powerful node in StackAI that allows you to perform real-time web searches, news searches, job searches, and website-to-markdown conversion directly within your workflow. It leverages the SerpAPI platform to fetch up-to-date information from the internet, making it ideal for research, content generation, and data enrichment tasks.


Example of Usage

  • Connect a Text Input node to SerpAPI.

  • Set the action to "Web Search".

  • Enter a search query like "latest AI trends".

  • The node returns a list of relevant web results, which can be displayed or processed further.


Available Actions

1. Web Search

Description: Performs a real-time search on the web and returns a list of relevant results.

Inputs:

  • Query (string, required): The search term or phrase to look up (e.g., "StackAI features").

  • Num (number, optional): How many search results to retrieve (default: 5); must be at least 1.

Configurations:

  • Device (dropdown, optional): Choose whether to simulate search results as seen on desktop or mobile devices. Default is desktop.

  • CountryCode (dropdown, optional): Select which country's search engine to use for results.

  • LanguageCode (dropdown, optional): Choose the language for search results and interface.

Outputs:

  • Web Search Results (object_array): A collection of web search results containing URLs, titles, and content snippets.

Example: Input:

Output:
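The Web Search Results output is an object array of URL/title/snippet entries, and the Num input must be at least 1. A sketch of consuming that shape (the helper name and sample entries are ours):

```python
def take_results(results: list[dict], num: int = 5) -> list[dict]:
    """Honor the Num input: the request must ask for at least 1 result."""
    if num < 1:
        raise ValueError("Num must be at least 1")
    return results[:num]

# Invented sample shaped like the Web Search Results output.
web_results = [
    {"url": "https://example.com/a", "title": "Latest AI trends", "content": "snippet"},
    {"url": "https://example.com/b", "title": "AI in 2025", "content": "snippet"},
]

print([r["title"] for r in take_results(web_results, num=1)])
# -> ['Latest AI trends']
```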

2. News Search

Description: Fetches the latest news articles related to a specific query.

Inputs:

  • Query (string, required): The search term or phrase to look up (e.g., "AI news").

Configurations:

  • Device (dropdown, optional): Choose whether to simulate search results as seen on desktop or mobile devices. Default is desktop.

  • CountryCode (dropdown, optional): Select which country's search engine to use for results.

  • LanguageCode (dropdown, optional): Choose the language for search results and interface.

Outputs:

  • Result (string): The result of the news search

Example: Input:

Output:

3. Job Search

Description: Searches for job postings based on a given query and location.

Inputs:

  • Query (string, required): The search term or phrase to look up (e.g., "Data Scientist").

  • Num (number, optional): How many search results to retrieve (default: 5); must be at least 1.

Configurations:

  • CountryCode (dropdown, optional): Select which country's search engine to use for results.

  • LanguageCode (dropdown, optional): Choose the language for search results and interface.

Outputs:

  • Jobs (string): List of jobs found.

Example: Input:

Output:

4. Website to Markdown

Description: This action takes a website URL and converts the entire page into a markdown-formatted document. It’s useful for extracting readable, structured content from any public web page.

Inputs:

  • Url (string, required): The URL of the website you want to convert to markdown.

Configurations:

  • Location (string, optional): The geographic location from which to perform the conversion (affects region-specific content). Default is US.

Outputs:

  • Markdown (string): The markdown representation of the website.

Example: Input:

{
  "url": "https://en.wikipedia.org/wiki/Markdown"
}

Output:

{
  "markdown": "# Markdown\nMarkdown is a lightweight markup language..."
}

Power BI

The Power BI Node allows you to interact with your Power BI workspace directly from your workflow. You can retrieve information about datasets and reports, list all available datasets/reports, and get detailed metadata for each. This is useful for integrating business analytics, dashboards, and data insights into your automated processes.


Available Actions

1. List Datasets

  • Purpose: Retrieve a list of all datasets in your Power BI workspace.

  • Inputs: None required.

  • Outputs:

    • datasets: Array of datasets, each with:

      • id: Dataset ID

      • name: Dataset name

      • is_refreshable: Whether the dataset can be refreshed

      • configured_by: Who configured it

      • is_on_prem_gateway_required: If an on-premises gateway is needed

      • web_url: Link to the dataset

    • count: Total number of datasets


2. Get Dataset

  • Purpose: Retrieve detailed information about a specific dataset.

  • Inputs:

    • dataset_id (string, required): The ID of the dataset you want details for.

  • Outputs:

    • dataset: Object with:

      • id, name, web_url, datasource_type, configured_by, created_date


3. List Reports

  • Purpose: Retrieve a list of all reports in your Power BI workspace.

  • Inputs: None required.

  • Outputs:

    • reports: Array of reports, each with:

      • id: Report ID

      • name: Report name

      • web_url: Link to view the report

      • embed_url: URL for embedding

      • dataset_id: Associated dataset

      • report_type: Type (PaginatedReport or PowerBIReport)

      • is_owned_by_me: If you can modify/copy it

    • count: Total number of reports


4. Get Report

  • Purpose: Retrieve detailed information about a specific report.

  • Inputs:

    • report_id (string, required): The ID of the report you want details for.

  • Outputs:

    • report: Object with:

      • id, name, web_url, embed_url, dataset_id, report_type, description


How to Use the Power BI Node

  1. List Datasets/Reports:

    • Add the Power BI node and select the "List Datasets" or "List Reports" action.

    • No input is needed; the node will output a list of all datasets or reports in your workspace.

  2. Get Dataset/Report Details:

    • First, use "List Datasets" or "List Reports" to get the IDs.

    • Add another Power BI node and select "Get Dataset" or "Get Report".

    • Provide the dataset_id or report_id as input (can be referenced from the output of the previous node).

  3. Connect to Output or LLM:

    • You can send the results to an Output node for display, or to an LLM node for further analysis or summarization.

  4. Use a Connection ID:

    • If you have a Power BI connection, add the connection ID in the node’s configuration to access your specific workspace.


Example Workflow

  • Step 1: Use "List Reports" to get all reports.

  • Step 2: Use "Get Report" with a selected report ID to get detailed info.

  • Step 3: Send the report details to an Output node or summarize with an LLM.
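
Step 2 of this workflow can be sketched as a node configuration. The wrapper keys follow the action_input_parameters / action_configurations format used in the other node examples in these docs, and both IDs are placeholders:

{
  "action_input_parameters": {
    "report_id": "<report-id-from-list-reports>"
  },
  "action_configurations": {
    "connection_id": "<your-powerbi-connection-id>"
  }
}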


Summary Table

Action        | Input(s)   | Output(s)       | Description
List Datasets | None       | datasets, count | List all datasets in workspace
Get Dataset   | dataset_id | dataset         | Get details for a specific dataset
List Reports  | None       | reports, count  | List all reports in workspace
Get Report    | report_id  | report          | Get details for a specific report

Code Execution (E2B) Node

Input         | Description                         | Required | Example
code          | The code to execute (string)        | Yes      | print("Hello, World!")
language      | Programming language (string)       | Yes      | python
inputs        | Input variables (object/dictionary) | No       | {"x": 5, "y": 10}
connection_id | Connection ID for E2B (if required) | No       |

Output | Description                        | Required | Example
result | Output of the executed code        | Yes      | Hello, World!
logs   | Execution logs (if any)            | No       | ...
error  | Error message (if execution fails) | No       | SyntaxError...

Example: Input:

{
  "code": "return x + y",
  "language": "python",
  "inputs": {"x": 5, "y": 10}
}

Output:

{
  "result": 15,
  "logs": "",
  "error": null
}
Hightouch connection configuration:

{
  "action_configurations": {
    "connection_id": "<your-hightouch-connection-id>"
  }
}

Action           | Input(s) | Configuration | Output(s)
List Source      | None     | connection_id | sources
List Destination | None     | connection_id | destinations

{
  "results": [
    {
      "order_id": "12345",
      "user_id": "abcde",
      "status": "active",
      "amount": 99.99
    },
    {
      "order_id": "12346",
      "user_id": "fghij",
      "status": "active",
      "amount": 49.99
    }
  ]
}
TABLE Customers (
  CustomerID INT,
  Name TEXT,
  Email TEXT,
  Country TEXT
);
TABLE Orders (
  OrderID INT,
  CustomerID INT,
  Amount DECIMAL,
  OrderDate DATE
);
[
  { "OrderID": 101, "CustomerID": 1, "Amount": 750, "OrderDate": "2024-06-01" },
  { "OrderID": 102, "CustomerID": 3, "Amount": 1200, "OrderDate": "2024-06-02" }
]

PitchBook Search

Input | Type    | Required | Description
query | string  | Yes      | The search term for PitchBook
top_k | integer | No       | Number of results to return (max 100, default 10)

Output         | Type   | Description
query          | string | The query that was used
search_results | array  | List of results (each with title and text)



Tool invocation parameters:

Input      | Type   | Required | Description                    | Example
Tool Name  | String | No       | Name of the tool to invoke     | "summarize_text"
Parameters | Object | No       | Parameters to pass to the tool | { "text": "Your input text here" }

Output | Type   | Required | Description                     | Example
Result | Object | Yes      | Result from the tool invocation | { "summary": "..." }

TABLE Sales (OrderID INT, Customer STRING, Revenue DOUBLE, Year INT);
What is the total revenue for the year 2024?
SELECT SUM(Revenue) FROM Sales WHERE Year = 2024;
[
  { "SUM(Revenue)": 1250000 }
]

Databricks SQL Node

Input      | Type             | Required | Description                                    | Example
sql_schema | Array of strings | Yes      | Database schema (tables, columns, types, etc.) | TABLE Sales (OrderID INT, Revenue DOUBLE);
query      | String           | Yes      | Query in plain English or SQL                  | What is the total revenue for 2024?

Output    | Type             | Required | Description                     | Example
sql_query | String           | Yes      | The SQL query that was executed | SELECT SUM(Revenue) FROM Sales WHERE ...
results   | Array of objects | Yes      | Results of the Databricks query | [{ "SUM(Revenue)": 1250000 }]

Asana

Master Asana integration in StackAI: discover top actions, required inputs, configurations, and outputs with clear examples for seamless workflow automation.

The Asana Node in StackAI is a powerful integration that allows you to automate and manage tasks, projects, and team collaboration directly from your AI workflows. With Asana actions, you can create, update, retrieve, and list tasks and projects, streamlining your project management processes.


How to use it?

To use Asana in your StackAI workflow:

  1. Add the Asana node to your workflow.

  2. Select the desired action (e.g., Create Task, Update Task).

  3. Fill in the required input fields and configurations, including your Asana connection ID.

  4. Connect the node to other workflow components as needed.

  5. Run the workflow to automate your Asana operations.


Example of Usage

Suppose you want to automatically create a new Asana task when a form is submitted:

  • Add an Input node to collect form data.

  • Add an Asana node and select the "Create Task" action.

  • Map the form fields to the Asana task inputs (e.g., task name, notes).

  • Provide your Asana connection ID in the configuration.

  • Connect the nodes and execute the workflow.


Most Commonly Used Asana Actions

Below are the most frequently used Asana actions in StackAI, with detailed input, configuration, and output requirements.


1. Create Task

Description: Create a new task in a specified Asana project or workspace.

Inputs:

  • name (string, required): The name/title of the task. Example: "Design Homepage"

  • notes (string, optional): Additional details or description for the task. Example: "Create initial wireframes and upload to Figma."

  • assignee (string, optional): The user ID or email to assign the task to. Example: "[email protected]"

  • projects (array of strings, optional): List of project IDs to add the task to. Example: ["1234567890"]

  • due_on (string, optional): Due date in YYYY-MM-DD format. Example: "2025-07-10"

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • task_id (string, required): The unique ID of the created task.

  • task_url (string, required): Direct URL to the new task in Asana.

  • status (string, required): Status of the operation (e.g., "success").
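
A sample Create Task payload, built from the example values listed above. The action_input_parameters / action_configurations wrapper follows the format used by the other node examples in these docs, and the connection ID is a placeholder:

{
  "action_input_parameters": {
    "name": "Design Homepage",
    "notes": "Create initial wireframes and upload to Figma.",
    "projects": ["1234567890"],
    "due_on": "2025-07-10"
  },
  "action_configurations": {
    "connection_id": "<your_asana_connection_id>"
  }
}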


2. Update Task

Description: Update details of an existing Asana task.

Inputs:

  • task_id (string, required): The ID of the task to update.

  • name (string, optional): New name for the task.

  • notes (string, optional): Updated notes.

  • completed (boolean, optional): Mark the task as completed or not.

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • task_id (string, required): The ID of the updated task.

  • status (string, required): Status of the operation.
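
A sample Update Task payload that marks a task as completed (same assumed wrapper format; the IDs are placeholders):

{
  "action_input_parameters": {
    "task_id": "<task-id>",
    "completed": true
  },
  "action_configurations": {
    "connection_id": "<your_asana_connection_id>"
  }
}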


3. Get Task

Description: Retrieve details of a specific Asana task.

Inputs:

  • task_id (string, required): The ID of the task to retrieve.

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • task (object, required): Full details of the task, including name, notes, assignee, status, etc.


4. Create Project

Description: Create a new project in Asana.

Inputs:

  • name (string, required): Name of the project.

  • team (string, optional): Team ID to associate the project with.

  • notes (string, optional): Project description.

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • project_id (string, required): The unique ID of the created project.

  • project_url (string, required): Direct URL to the new project.


5. List Projects

Description: List all projects in a workspace or team.

Inputs:

  • workspace (string, optional): Workspace ID to filter projects.

  • team (string, optional): Team ID to filter projects.

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • projects (array, required): List of project objects.


6. Create Comment

Description: Add a comment to a specific Asana task.

Inputs:

  • task_id (string, required): The ID of the task to comment on.

  • text (string, required): The comment text.

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • comment_id (string, required): The unique ID of the created comment.

  • status (string, required): Status of the operation.
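
A sample Create Comment payload (same assumed wrapper format; the task ID and connection ID are placeholders):

{
  "action_input_parameters": {
    "task_id": "<task-id>",
    "text": "Please review the latest updates."
  },
  "action_configurations": {
    "connection_id": "<your_asana_connection_id>"
  }
}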


7. List Tasks

Description: Retrieve a list of tasks from a project, section, or workspace.

Inputs:

  • project (string, optional): Project ID to filter tasks.

  • section (string, optional): Section ID to filter tasks.

  • workspace (string, optional): Workspace ID to filter tasks.

Configurations:

  • connection_id (string, required): Your Asana connection ID.

Outputs:

  • tasks (array, required): List of task objects.
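
A sample List Tasks payload filtering by project (same assumed wrapper format; the IDs are placeholders):

{
  "action_input_parameters": {
    "project": "<project-id>"
  },
  "action_configurations": {
    "connection_id": "<your_asana_connection_id>"
  }
}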


Clickup

Comprehensive guide to the Clickup node in StackAI: discover top actions, input/output details, and practical usage examples for seamless Clickup automation.

What is Clickup?

The Clickup node in StackAI enables seamless integration with your Clickup workspace, allowing you to automate project management tasks such as creating tasks, updating lists, managing comments, and more. This node connects StackAI workflows directly to Clickup, streamlining your productivity and collaboration.


How to use it?

To use the Clickup node, add it to your StackAI workflow and select the desired action. Configure the required inputs and connection settings, referencing other nodes as needed. You can automate task creation, update project details, manage comments, and more, all within your workflow.


Example of Usage

Suppose you want to automatically create a new task in Clickup when a form is submitted. Connect the form input node to the Clickup node, select the "Create Task" action, and map the form fields to the required Clickup task fields. When the workflow runs, a new task will be created in your Clickup workspace with the provided details.


Most Commonly Used Actions in Clickup

Below are the most popular and practical Clickup actions available in StackAI, along with their input, configuration, and output details:


1. Create Task

Description: Automatically create a new task in a specified Clickup list.

Inputs:

  • list_id (Required): The ID of the Clickup list where the task will be created.

  • name (Required): The name/title of the task.

  • description (Optional): Detailed description of the task.

  • assignees (Optional): Array of user IDs to assign the task to.

  • status (Optional): The status of the task (e.g., "to do", "in progress").

  • priority (Optional): Priority level (1-4).

  • due_date (Optional): Due date in timestamp format.

Configurations:

  • connection_id (Required): Your Clickup connection ID for authentication.

Outputs:

  • task_id: The unique ID of the created task.

  • task_url: Direct URL to the new task.

  • status: Confirmation of task creation.

Example:

{
  "action_input_parameters": {
    "list_id": "123456",
    "name": "Review Project Proposal",
    "description": "Review the new project proposal submitted by the team.",
    "assignees": ["78910"],
    "status": "to do",
    "priority": 2
  },
  "action_configurations": {
    "connection_id": "<your_clickup_connection_id>"
  }
}

2. Update Task

Description: Update details of an existing task in Clickup.

Inputs:

  • task_id (Required): The ID of the task to update.

  • name (Optional): New name/title for the task.

  • description (Optional): Updated description.

  • status (Optional): New status.

  • priority (Optional): Updated priority.

  • assignees (Optional): Updated list of assignees.

Configurations:

  • connection_id (Required): Your Clickup connection ID.

Outputs:

  • task_id: The ID of the updated task.

  • status: Confirmation of update.

Example:

{
  "action_input_parameters": {
    "task_id": "654321",
    "status": "in progress"
  },
  "action_configurations": {
    "connection_id": "<your_clickup_connection_id>"
  }
}

3. Get Task

Description: Retrieve details of a specific task from Clickup.

Inputs:

  • task_id (Required): The ID of the task to retrieve.

Configurations:

  • connection_id (Required): Your Clickup connection ID.

Outputs:

  • task: Full details of the task (name, description, status, assignees, etc.).

Example:

{
  "action_input_parameters": {
    "task_id": "654321"
  },
  "action_configurations": {
    "connection_id": "<your_clickup_connection_id>"
  }
}

4. Create Task Comment

Description: Add a comment to a specific task in Clickup.

Inputs:

  • task_id (Required): The ID of the task to comment on.

  • comment_text (Required): The content of the comment.

Configurations:

  • connection_id (Required): Your Clickup connection ID.

Outputs:

  • comment_id: The ID of the created comment.

  • status: Confirmation of comment creation.

Example:

{
  "action_input_parameters": {
    "task_id": "654321",
    "comment_text": "Please review the latest updates."
  },
  "action_configurations": {
    "connection_id": "<your_clickup_connection_id>"
  }
}

5. Get Lists

Description: Retrieve all lists within a specified Clickup folder or space.

Inputs:

  • folder_id (Optional): The ID of the folder to retrieve lists from.

  • space_id (Optional): The ID of the space to retrieve lists from.

Configurations:

  • connection_id (Required): Your Clickup connection ID.

Outputs:

  • lists: Array of lists with details (list_id, name, status, etc.).

Example:

{
  "action_input_parameters": {
    "space_id": "987654"
  },
  "action_configurations": {
    "connection_id": "<your_clickup_connection_id>"
  }
}

Exa AI

Comprehensive guide to the Exa AI node in StackAI: Learn how to use Exa AI for internet-scale search, including action details, input/output parameters, and practical examples.

What is Exa AI?

Exa AI is a powerful node in StackAI that enables users to perform advanced, internet-scale searches using both embeddings-based and traditional search methods. It allows you to query a wide variety of sources, retrieve relevant results, and integrate real-time web data into your AI workflows.


How to use it?

To use the Exa AI node in StackAI, simply add the node to your workflow and select the desired action. Configure the required input parameters to define your search query and any additional options. The node will return structured search results that can be used in downstream nodes for further processing, analysis, or display.


Example of Usage

Suppose you want to perform a web search for the latest AI research papers. You would select the "Web Search" action, provide your query (e.g., "latest AI research papers"), and configure any optional parameters such as the number of results. The node will return a list of relevant web pages, including titles, URLs, and summaries.


Available Actions in Exa AI

Below are the most commonly used actions available in the Exa AI node:


1. Web Search

Description: Performs a real-time web search using advanced algorithms to retrieve the most relevant results from the internet.

Inputs:

  • query (string, required): The search term or question you want to look up.

    • Example: "latest AI research papers"

  • num_results (integer, optional): Number of search results to return.

    • Example: 5

Configurations: No additional configurations are required for this action.

Outputs:

  • results (array, required): List of search results, each containing:

    • title (string): Title of the web page.

    • url (string): Direct link to the web page.

    • snippet (string): Short summary or excerpt from the page.

Example:

{
  "results": [
    {
      "title": "Recent Advances in AI Research",
      "url": "https://example.com/ai-research",
      "snippet": "This article discusses the latest breakthroughs in artificial intelligence..."
    }
  ]
}

2. Deep Research

Description: Conducts an in-depth search and analysis on a given topic, aggregating information from multiple sources for comprehensive insights.

Inputs:

  • query (string, required): The topic or question for deep research.

    • Example: "impact of AI on healthcare"

  • num_results (integer, optional): Number of sources to aggregate.

    • Example: 3

Configurations: No additional configurations are required.

Outputs:

  • summary (string, required): A synthesized summary of findings.

  • sources (array, required): List of source URLs and brief descriptions.

Example:

{
  "summary": "AI is transforming healthcare by improving diagnostics, patient care, and operational efficiency...",
  "sources": [
    {
      "url": "https://example.com/ai-healthcare",
      "description": "Overview of AI applications in healthcare."
    }
  ]
}

3. Find Similar

Description: Finds web pages or documents similar to a provided URL or text snippet.

Inputs:

  • url (string, required): The URL of the reference page.

    • Example: "https://example.com/ai-overview"

  • num_results (integer, optional): Number of similar results to return.

    • Example: 5

Configurations: No additional configurations are required.

Outputs:

  • similar_results (array, required): List of similar web pages with titles, URLs, and similarity scores.

Example:

{
  "similar_results": [
    {
      "title": "Understanding AI",
      "url": "https://example.com/understanding-ai",
      "score": 0.92
    }
  ]
}

4. Get Contents

Description: Retrieves the full content of a web page or document from a given URL.

Inputs:

  • url (string, required): The URL of the page to extract content from.

    • Example: "https://example.com/ai-article"

Configurations: No additional configurations are required.

Outputs:

  • content (string, required): The extracted text content of the page.

Example:

{
  "content": "Artificial intelligence (AI) is a rapidly evolving field..."
}

5. Answer

Description: Provides a direct answer to a question by searching and synthesizing information from the web.

Inputs:

  • question (string, required): The question you want answered.

    • Example: "What is generative AI?"

Configurations: No additional configurations are required.

Outputs:

  • answer (string, required): The synthesized answer.

  • sources (array, required): List of source URLs used to generate the answer.

Example:

{
  "answer": "Generative AI refers to artificial intelligence systems that can create new content...",
  "sources": [
    "https://example.com/generative-ai"
  ]
}

Best Practices for Using Exa AI in StackAI

  • Always provide clear and specific queries for the best results.

  • Use the "num_results" parameter to control the amount of data returned.

  • Integrate Exa AI outputs with downstream nodes for advanced processing, such as summarization or visualization.

  • Review the sources and content for accuracy, especially when using results in critical applications.


Summary

The Exa AI node in StackAI is a versatile tool for integrating real-time, internet-scale search and research capabilities into your workflows. By leveraging its powerful actions, you can access, analyze, and utilize web data efficiently and effectively.

Github

Comprehensive guide to the Github node in StackAI workflows, including key actions, input requirements, configurations, and output examples.

The Github Node in StackAI enables seamless integration with Github, allowing you to automate, query, and manage repositories, workflows, and project data directly within your workflow. This node connects your StackAI automation to Github’s powerful API, making it easy to interact with repositories, pull requests, workflows, and more.


How to Use It?

To use the Github node, add it to your StackAI workflow and select the desired action. Connect your Github account using a valid connection ID if required. Configure the action by providing the necessary input parameters and configurations. The node will execute the selected action and return the output, which can be used in downstream nodes.


Example of Usage

Suppose you want to list all branches in a repository. You would select the "List Branches" action, provide the repository owner and name as required inputs, and the node will return a list of branches.


Commonly Used Actions in the Github Node

Below are some of the most commonly used Github actions available in StackAI workflows. For each action, you’ll find a description, required inputs, configurations, and example outputs.


1. List Branches

Description: Retrieve a list of branches for a specified Github repository.

Inputs:

  • owner (Required): The username or organization name that owns the repository. Example: "octocat"

  • repo (Required): The name of the repository. Example: "Hello-World"

Configurations:

  • connection_id (Required): The Github connection ID for authentication.

Outputs:

  • branches (Required): An array of branch objects, each containing branch name and commit details.

Example: Input:

{
  "owner": "octocat",
  "repo": "Hello-World"
}

Output:

{
  "branches": [
    { "name": "main", "commit": { "sha": "abc123", ... } },
    { "name": "dev", "commit": { "sha": "def456", ... } }
  ]
}

2. List Commits

Description: Fetch a list of commits from a repository.

Inputs:

  • owner (Required): Repository owner.

  • repo (Required): Repository name.

  • sha (Optional): SHA or branch to start listing commits from.

Configurations:

  • connection_id (Required): Github connection ID.

Outputs:

  • commits (Required): Array of commit objects with details like SHA, author, and message.

Example: Input:

{
  "owner": "octocat",
  "repo": "Hello-World"
}

Output:

{
  "commits": [
    { "sha": "abc123", "commit": { "message": "Initial commit", ... } },
    { "sha": "def456", "commit": { "message": "Update README", ... } }
  ]
}

3. List Pull Requests

Description: Retrieve all pull requests for a repository.

Inputs:

  • owner (Required): Repository owner.

  • repo (Required): Repository name.

  • state (Optional): Filter by state (open, closed, all).

Configurations:

  • connection_id (Required): Github connection ID.

Outputs:

  • pull_requests (Required): Array of pull request objects with title, state, and author.

Example: Input:

{
  "owner": "octocat",
  "repo": "Hello-World",
  "state": "open"
}

Output:

{
  "pull_requests": [
    { "number": 1, "title": "Add new feature", "state": "open", ... }
  ]
}

4. Get Repository Details

Description: Fetch metadata and details about a specific repository.

Inputs:

  • owner (Required): Repository owner.

  • repo (Required): Repository name.

Configurations:

  • connection_id (Required): Github connection ID.

Outputs:

  • repository (Required): Object containing repository details such as description, stars, forks, and more.

Example: Input:

{
  "owner": "octocat",
  "repo": "Hello-World"
}

Output:

{
  "repository": {
    "name": "Hello-World",
    "description": "This is your first repository",
    "stargazers_count": 42,
    "forks_count": 10,
    ...
  }
}

5. List Releases

Description: Get a list of releases published in a repository.

Inputs:

  • owner (Required): Repository owner.

  • repo (Required): Repository name.

Configurations:

  • connection_id (Required): Github connection ID.

Outputs:

  • releases (Required): Array of release objects with tag name, release notes, and published date.

Example: Input:

{
  "owner": "octocat",
  "repo": "Hello-World"
}

Output:

{
  "releases": [
    { "tag_name": "v1.0.0", "name": "First Release", ... }
  ]
}

Best Practices:

  • Always provide the required owner, repo, and connection_id for all actions.

  • Use the output of the Github node as input for downstream nodes, such as LLMs or output nodes, to automate reporting or notifications.

  • For advanced use cases, chain multiple Github actions to build complex automations.


Summary

The Github node in StackAI empowers you to automate and manage Github repositories, branches, commits, pull requests, and more. By configuring the right action and providing the necessary inputs, you can streamline your development workflows and integrate Github data into your automation pipelines.

Miro

Comprehensive guide to the Miro node in StackAI: discover key actions, input requirements, configurations, and output examples for seamless Miro integration.

What is Miro?

The Miro node in StackAI enables seamless integration with the Miro collaborative whiteboard platform. It allows you to automate board management, content creation, and team collaboration directly from your StackAI workflows.


How to use it?

To use the Miro node, select the desired action, provide the required inputs and configurations, and connect your Miro account using a valid connection ID. The node can be used to automate board creation, manage content, and interact with Miro items programmatically.


Example of Usage

Suppose you want to automatically create a new Miro board for every new project. You would use the "Create Board" action, provide the board name as input, and optionally set a description or team ID. The output will include the board's unique ID and URL, which you can use in subsequent workflow steps.


Available Actions

Below are the most commonly used Miro actions in StackAI, along with their input, configuration, and output details:


1. Create Board

Description: Create a new Miro board in your workspace.

Inputs:

  • name (Required): The name of the new board. Example: "Project Kickoff Board"

  • description (Optional): A description for the board. Example: "Board for the new project kickoff meeting."

  • team_id (Optional): The ID of the team to assign the board to. Example: "345678"

Configurations:

  • connection_id (Required): Your Miro connection ID for authentication.

Outputs:

  • id (Always returned): The unique identifier of the created board.

  • name: The name of the board.

  • description: The board description.

  • viewLink: The URL to access the board.
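
A sample Create Board payload, built from the example values listed above (the action_input_parameters / action_configurations wrapper follows the format used by the other node examples in these docs; the connection ID is a placeholder):

{
  "action_input_parameters": {
    "name": "Project Kickoff Board",
    "description": "Board for the new project kickoff meeting.",
    "team_id": "345678"
  },
  "action_configurations": {
    "connection_id": "<your_miro_connection_id>"
  }
}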


2. Get Boards

Description: Retrieve a list of all boards accessible to your account.

Inputs:

  • team_id (Optional): Filter boards by team ID.

Configurations:

  • connection_id (Required): Your Miro connection ID.

Outputs:

  • boards: An array of board objects, each containing:

    • id: Board ID

    • name: Board name

    • description: Board description

    • viewLink: Board URL


3. Create Sticky Note Item

Description: Add a sticky note to a specific Miro board.

Inputs:

  • board_id (Required): The ID of the board to add the sticky note to.

  • data (Required): The content/text of the sticky note. Example: "Discuss project milestones"

Configurations:

  • connection_id (Required): Your Miro connection ID.

Outputs:

  • id: The unique identifier of the sticky note.

  • data: The content of the sticky note.

  • position: The coordinates of the sticky note on the board.
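
A sample Create Sticky Note Item payload (same assumed wrapper format; the board ID and connection ID are placeholders):

{
  "action_input_parameters": {
    "board_id": "<board-id>",
    "data": "Discuss project milestones"
  },
  "action_configurations": {
    "connection_id": "<your_miro_connection_id>"
  }
}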


4. Get Board

Description: Retrieve details of a specific Miro board.

Inputs:

  • board_id (Required): The ID of the board.

Configurations:

  • connection_id (Required): Your Miro connection ID.

Outputs:

  • id: Board ID

  • name: Board name

  • description: Board description

  • viewLink: Board URL


5. Create Shape Item

Description: Add a shape (rectangle, circle, etc.) to a Miro board.

Inputs:

  • board_id (Required): The ID of the board.

  • shape (Required): The type of shape (e.g., "rectangle", "circle").

  • text (Optional): Text to display inside the shape.

Configurations:

  • connection_id (Required): Your Miro connection ID.

Outputs:

  • id: Shape item ID

  • shape: Type of shape

  • text: Text inside the shape

  • position: Coordinates on the board


Best Practices:

  • Always provide a valid connection ID for authentication.

  • Use required inputs as specified for each action to avoid errors.

  • Use the output IDs to reference created items in subsequent workflow steps.


Summary Table

Action
Required Inputs
Optional Inputs
Output Highlights

Create Board

name

description, team_id

id, name, viewLink

Get Boards

—

team_id

boards[]

Create Sticky Note Item

board_id, data

—

id, data, position

Get Board

board_id

—

id, name, viewLink

Create Shape Item

board_id, shape

text

id, shape, position


Use the Miro node in StackAI to automate and streamline your collaborative workflows, making project management and team collaboration more efficient.

Fred

Learn how to use the Fred node in StackAI workflows. Discover key actions, input requirements, configurations, and output examples for seamless economic data integration.

The Fred Node is a powerful integration node in StackAI that connects your workflow to the Federal Reserve Economic Data (FRED) API. This enables you to access, search, and retrieve a wide range of economic data, including categories, series, releases, and tags, directly within your automated processes.


How to use it?

To use the Fred node, simply add it to your StackAI workflow and select the desired action. Each action allows you to interact with different aspects of the FRED database, such as fetching economic series, searching for categories, or retrieving release information. Configure the required inputs and parameters for each action to tailor the data retrieval to your needs.


Example of Usage

Suppose you want to retrieve a list of economic data series for a specific category. You would select the "Get Category Series" action, provide the required category ID, and optionally set filters like limit or order. The node will output a structured list of series matching your criteria.


Available Actions in Fred

Below are the most commonly used actions in the Fred integration, along with their input, configuration, and output details:


1. Get Category

Description: Retrieve information about a specific FRED category.

Inputs:

  • category_id (Required): The unique ID of the FRED category.

Configurations:

  • None required.

Outputs:

  • Category details (Required): Returns the category’s ID, name, parent ID, and notes.

Example: Retrieve details for category_id "125".
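A response for this request has roughly the following shape (values shown here are illustrative; consult the FRED API documentation for the exact fields):

```json
{
  "categories": [
    {
      "id": 125,
      "name": "Trade Balance",
      "parent_id": 13,
      "notes": ""
    }
  ]
}
```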


2. Get Category Children

Description: List all subcategories under a specific FRED category.

Inputs:

  • category_id (Required): The unique ID of the parent category.

Configurations:

  • None required.

Outputs:

  • List of child categories (Required): Each with ID, name, and parent ID.

Example: List children for category_id "125".


3. Get Category Series

Description: Retrieve all economic data series within a specific category.

Inputs:

  • category_id (Required): The unique ID of the category.

  • limit (Optional): Maximum number of series to return.

  • order_by (Optional): Field to order results by (e.g., "popularity").

  • sort_order (Optional): "asc" or "desc".

Configurations:

  • None required.

Outputs:

  • List of series (Required): Each with ID, title, frequency, units, and more.

Example: Get up to 10 series in category_id "125", ordered by popularity.
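The example above corresponds to an input configuration like this:

```json
{
  "category_id": "125",
  "limit": 10,
  "order_by": "popularity",
  "sort_order": "desc"
}
```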


4. Get Releases

Description: Retrieve a list of all FRED data releases.

Inputs:

  • limit (Optional): Maximum number of releases to return.

  • order_by (Optional): Field to order results by.

  • sort_order (Optional): "asc" or "desc".

Configurations:

  • None required.

Outputs:

  • List of releases (Required): Each with ID, name, and release dates.

Example: Get the 5 most recent releases.


5. Get Release Info

Description: Retrieve detailed information about a specific FRED release.

Inputs:

  • release_id (Required): The unique ID of the release.

Configurations:

  • None required.

Outputs:

  • Release details (Required): Includes name, press release, and notes.

Example: Get info for release_id "53".


6. Get Release Series

Description: List all data series associated with a specific release.

Inputs:

  • release_id (Required): The unique ID of the release.

  • limit (Optional): Maximum number of series to return.

Configurations:

  • None required.

Outputs:

  • List of series (Required): Each with ID, title, and frequency.

Example: Get up to 10 series for release_id "53".


7. Get Release Dates

Description: Retrieve all release dates for a specific FRED release.

Inputs:

  • release_id (Required): The unique ID of the release.

Configurations:

  • None required.

Outputs:

  • List of release dates (Required): Each with a date and status.

Example: Get all dates for release_id "53".


8. Get Category Tags

Description: List all tags associated with a specific FRED category.

Inputs:

  • category_id (Required): The unique ID of the category.

Configurations:

  • None required.

Outputs:

  • List of tags (Required): Each with name, group ID, and notes.

Example: Get tags for category_id "125".


9. Get Releases Sources

Description: List all sources for a specific FRED release.

Inputs:

  • release_id (Required): The unique ID of the release.

Configurations:

  • None required.

Outputs:

  • List of sources (Required): Each with ID and name.

Example: Get sources for release_id "53".


10. Get Release Tables

Description: Retrieve all tables associated with a specific FRED release.

Inputs:

  • release_id (Required): The unique ID of the release.

Configurations:

  • None required.

Outputs:

  • List of tables (Required): Each with ID, name, and description.

Example: Get tables for release_id "53".


Summary

The Fred node in StackAI provides seamless access to economic data from the FRED database. By configuring the appropriate action and supplying the required inputs, you can automate the retrieval and analysis of economic indicators, categories, releases, and more within your workflows.


Firecrawl

Comprehensive guide to the Firecrawl node in StackAI: discover its most common actions, input requirements, configurations, and output examples for seamless web data extraction.

What is Firecrawl?

Firecrawl is a powerful integration within StackAI that enables automated web data extraction, web scraping, and content retrieval from websites. It is designed to help users gather structured or unstructured data from web pages, making it ideal for research, monitoring, and automation workflows.


How to use it?

To use the Firecrawl node in StackAI, simply add the node to your workflow and select the desired action. Configure the required inputs and settings based on your use case. Firecrawl supports a variety of actions, from scraping a single URL to crawling entire websites or searching for specific content. Connect the node to downstream nodes to process or analyze the extracted data.


Example of Usage

Suppose you want to extract the main content from a specific web page. You would use the "Scrape from URL" action, provide the target URL as input, and receive the extracted text and metadata as output. This data can then be used for further analysis, summarization, or storage.


Firecrawl: Most Common Actions

Below are the most commonly used Firecrawl actions in StackAI, along with detailed explanations, input requirements, configurations, and output examples.


1. Scrape from URL

Description: Extracts the main content, metadata, and structure from a single web page.

Inputs:

  • url (Required): The full URL of the web page to scrape. Example: "https://example.com/article"

Configurations:

  • None required for basic usage.

Outputs:

  • content (Always returned): The main text content of the page.

  • metadata (Always returned): Information such as title, description, and author.

  • structure (Optional): Structured representation of the page (e.g., headings, sections).

Example:

{
  "content": "This is the main article text...",
  "metadata": {
    "title": "Example Article",
    "description": "A sample article for demonstration.",
    "author": "Jane Doe"
  },
  "structure": {
    "headings": ["Introduction", "Main Content", "Conclusion"]
  }
}


2. Web Scrape

Description: Performs advanced scraping with options for custom selectors, extracting specific elements or data points from a web page.

Inputs:

  • url (Required): The target web page URL.

  • selectors (Optional): CSS selectors or XPath expressions to target specific elements. Example: [".article-title", ".author-name"]

Configurations:

  • None required for basic usage.

Outputs:

  • results (Always returned): An array of extracted elements or data points.

Example:

{
  "results": [
    {"selector": ".article-title", "value": "Example Article"},
    {"selector": ".author-name", "value": "Jane Doe"}
  ]
}


3. Batch Scrape

Description: Scrapes multiple URLs in a single request, ideal for bulk data extraction.

Inputs:

  • urls (Required): An array of URLs to scrape. Example: ["https://site1.com", "https://site2.com"]

Configurations:

  • None required for basic usage.

Outputs:

  • results (Always returned): An array of objects, each containing the content and metadata for a URL.

Example:

{
  "results": [
    {
      "url": "https://site1.com",
      "content": "Content from site 1...",
      "metadata": {"title": "Site 1"}
    },
    {
      "url": "https://site2.com",
      "content": "Content from site 2...",
      "metadata": {"title": "Site 2"}
    }
  ]
}


4. Crawl Website

Description: Automatically crawls a website, following links to extract content from multiple pages.

Inputs:

  • start_url (Required): The starting URL for the crawl.

  • max_depth (Optional): How many link levels deep to crawl (default is 1). Example: 2

Configurations:

  • None required for basic usage.

Outputs:

  • pages (Always returned): An array of page objects, each with content and metadata.

Example:

{
  "pages": [
    {
      "url": "https://example.com/page1",
      "content": "Page 1 content...",
      "metadata": {"title": "Page 1"}
    },
    {
      "url": "https://example.com/page2",
      "content": "Page 2 content...",
      "metadata": {"title": "Page 2"}
    }
  ]
}


5. Search

Description: Searches a website or a set of pages for specific keywords or patterns.

Inputs:

  • url (Required): The base URL to search.

  • query (Required): The keyword or pattern to search for. Example: "AI automation"

Configurations:

  • None required for basic usage.

Outputs:

  • matches (Always returned): An array of search results with context.

Example:

{
  "matches": [
    {
      "url": "https://example.com/page1",
      "snippet": "AI automation is transforming industries..."
    }
  ]
}


Summary Table: Firecrawl Actions

| Action | Required Inputs | Configurations | Outputs |
| --- | --- | --- | --- |
| Scrape from URL | url | None | content, metadata, structure |
| Web Scrape | url, selectors (opt.) | None | results |
| Batch Scrape | urls | None | results |
| Crawl Website | start_url, max_depth (opt.) | None | pages |
| Search | url, query | None | matches |

HyperBrowser

Learn how to automate browser tasks with the HyperBrowser node in StackAI. Discover available actions, required inputs, configurations, and output details.

HyperBrowser is a StackAI integration that enables you to automate and control browser-based tasks programmatically. It is ideal for scenarios such as web automation, scraping, testing, and simulating user interactions on websites.


How to use HyperBrowser

First, create a connection to HyperBrowser if you are using it for the first time. You must have a HyperBrowser account.

In HyperBrowser, select Settings and then select API Keys. Create a new key and copy/paste the key into the new connection window in your StackAI workflow. Your connection will now be saved and can be reused across different workflows.

There are two ways to include HyperBrowser in your workflow:

  1. As a tool: Allow your LLM to decide when a query necessitates searching the web. Click on your LLM, choose "Add Tool", and select HyperBrowser. Make sure to reference this tool in your prompt for best results.

  2. As a node: Enforce HyperBrowser usage on every run. To set up HyperBrowser as a node, make sure to reference all the HyperBrowser actions in your LLM prompt.

You can monitor the job status and retrieve results or errors upon completion.


Available Actions

1. Use Browser

Automate a browser session to perform a specific task.

Inputs

  • Task (Required)

    • Description: The task to perform in the browser (e.g., "Log in to example.com and scrape the dashboard data").

    • Example: "Log in to https://example.com with username 'user' and password 'pass', then extract the text from the dashboard."

  • Session ID (Optional)

    • Description: The ID of an existing browser session to use.

    • Example: "session_abc123"

  • Max Failures (Optional, default: 1)

    • Description: The maximum number of failures allowed before stopping the browser.

    • Example: 2

  • Max Steps (Optional, default: 25)

    • Description: The maximum number of steps to allow before stopping the browser.

    • Example: 10

  • Keep Browser Open (Optional, default: false)

    • Description: Whether to keep the browser open after the task is completed.

    • Example: true

Configurations

All configuration options are provided as part of the input parameters above. No additional required configurations.
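Put together, a complete Use Browser input might look like this (the snake_case key names and session ID are assumptions for illustration):

```json
{
  "task": "Log in to example.com and scrape the dashboard data",
  "session_id": "session_abc123",
  "max_failures": 1,
  "max_steps": 25,
  "keep_browser_open": false
}
```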

Outputs

  • Job ID (Required)

    • Description: The unique identifier for the browser automation job.

    • Example: "job_456def"

  • Live URL (Required)

    • Description: The URL to view the live browser session.

    • Example: "https://hyperbrowser.live/session/job_456def"

  • Status (Required)

    • Description: The current status of the job. Possible values: pending, running, completed, failed, stopped.

    • Example: "completed"

  • Final Result (Optional)

    • Description: The final result or output of the browser task.

    • Example: "Dashboard data: { ... }"

  • Error (Optional)

    • Description: Error message if the task failed.

    • Example: "Login failed: Invalid credentials"

  • Session Stopped (Optional, default: false)

    • Description: Indicates if the browser session was stopped after completion.

    • Example: true


Summary Table

Inputs:

| Input Name | Required | Description | Example |
| --- | --- | --- | --- |
| Task | Yes | The browser task to perform | "Log in and scrape dashboard" |
| Session ID | No | Use an existing browser session | "session_abc123" |
| Max Failures | No | Max failures before stopping (default: 1) | 2 |
| Max Steps | No | Max steps before stopping (default: 25) | 10 |
| Keep Browser Open | No | Keep browser open after task (default: false) | true |

Outputs:

| Output Name | Required | Description | Example |
| --- | --- | --- | --- |
| Job ID | Yes | Unique job identifier | "job_456def" |
| Live URL | Yes | URL to view live browser session | "https://hyperbrowser.live/session/job_456def" |
| Status | Yes | Job status (pending, running, etc.) | "completed" |
| Final Result | No | Final result of the task | "Dashboard data: { ... }" |
| Error | No | Error message if failed | "Login failed: Invalid credentials" |
| Session Stopped | No | Was the session stopped after completion? | true |

Best Practices

  • Always provide a clear and specific task description.

  • Use session management for multi-step or persistent workflows.

  • Monitor the job status and handle errors gracefully in your workflow.


HyperBrowser in StackAI empowers you to automate and control browser tasks efficiently, making it a powerful tool for web automation and data extraction.

Outlook

Learn how to automate Outlook email tasks in StackAI. Discover available actions, required inputs, configurations, and output examples for seamless workflow integration.

What is Outlook?

Outlook in StackAI is a powerful integration node that enables you to automate sending and managing emails directly through your Outlook account. This node streamlines communication workflows, allowing you to trigger email actions as part of your automated processes.

How to use it?

To use the Outlook node in StackAI, simply add it to your workflow and configure the desired action. Connect it to other nodes to dynamically generate email content, recipients, or attachments. You can use this node to send emails, search your mailbox, or automate other Outlook-related tasks.

Example of Usage

Suppose you want to automatically send a summary report to your team every week. You can connect a report-generating node to the Outlook node, configure the email details, and automate the process without manual intervention.


Available Actions

Below are the most commonly used Outlook actions in StackAI:

1. Send Email

Description: Send an email from your Outlook account to one or more recipients.

Inputs:

  • to (Required): The recipient's email address(es). Accepts a single email or a list of emails.

    • Example: "to": "recipient@example.com" or "to": ["recipient1@example.com", "recipient2@example.com"]

  • subject (Required): The subject line of the email.

    • Example: "subject": "Weekly Report"

  • body (Required): The main content of the email. Supports plain text or HTML.

    • Example: "body": "Please find the attached weekly report."

  • cc (Optional): Email address(es) to be copied.

    • Example: "cc": "manager@example.com"

  • bcc (Optional): Email address(es) to be blind copied.

    • Example: "bcc": ["audit@example.com"]

  • attachments (Optional): List of files to attach. Provide file paths or references from previous nodes.

    • Example: "attachments": ["{doc-0}"]

Configurations:

  • connection_id (Required if you have multiple Outlook accounts): Specify the Outlook connection to use.

    • Example: "connection_id": "your-connection-id"
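Combining the fields above, a full Send Email input could look like this (the addresses are placeholders, and the {doc-0} attachment reference assumes an upstream Files node):

```json
{
  "to": ["recipient1@example.com", "recipient2@example.com"],
  "subject": "Weekly Report",
  "body": "Please find the attached weekly report.",
  "cc": "manager@example.com",
  "attachments": ["{doc-0}"],
  "connection_id": "your-connection-id"
}
```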

Outputs:

  • message_id (Always returned): The unique ID of the sent email.

  • status (Always returned): Confirmation of successful delivery or error details.

    • Example:

      {
        "message_id": "abc123",
        "status": "sent"
      }


2. Search Emails

Description: Search your Outlook mailbox for emails matching specific criteria.

Inputs:

  • query (Required): The search query string (e.g., keywords, sender, date).

    • Example: "query": "from:billing@example.com subject:invoice"

  • folder (Optional): Specify the folder to search in (e.g., Inbox, Sent).

    • Example: "folder": "Inbox"

  • max_results (Optional): Limit the number of results.

    • Example: "max_results": 10

Configurations:

  • connection_id (Required if you have multiple Outlook accounts): Specify the Outlook connection to use.

Outputs:

  • emails (Always returned): List of matching emails with details such as subject, sender, date, and body.

    • Example:

      {
        "emails": [
          {
            "subject": "Invoice Due",
            "from": "billing@example.com",
            "date": "2025-07-01",
            "body": "Please see the attached invoice."
          }
        ]
      }


Best Practices

  • Always ensure required fields are filled to avoid errors.

  • Use dynamic references (e.g., {llm-0} or {doc-0}) to personalize emails or attach generated content.

  • For attachments, connect a Files node or other relevant node to provide file paths.


Summary Table

| Action | Required Inputs | Optional Inputs | Outputs |
| --- | --- | --- | --- |
| Send Email | to, subject, body | cc, bcc, attachments | message_id, status |
| Search Emails | query | folder, max_results | emails |

Automate your Outlook email workflows in StackAI to save time, reduce manual effort, and ensure consistent communication.

RunwayML

Discover how to use the RunwayML node in StackAI to generate AI-powered images and videos with customizable prompts, models, and output settings.

What is RunwayML?

RunwayML is an AI-powered node in StackAI that enables users to generate high-quality images and videos using advanced generative models. It supports both text-to-image and image-to-video workflows, making it ideal for creative projects, content generation, and multimedia automation.


How to use it?

The RunwayML node offers two main actions:

  1. Generate Video from Image

  2. Generate Image from Text

Each action has its own set of inputs, configurations, and outputs. Below are detailed explanations and examples for each.


1. Generate Video from Image

Create a video from an initial image, with optional text prompt guidance and customizable settings.

Inputs:

  • Image URL (string, required): The publicly accessible URL of the image to use as the first frame of the video. Example: "https://example.com/image.png"

  • Prompt (string, optional): A text prompt to guide the video generation. Example: "A futuristic cityscape at sunset"

Configurations:

  • Model (select, optional, default: Gen4 Turbo): The model variant for video generation. Options: "gen4_turbo"

  • Duration (select, optional, default: 5): The length of the generated video in seconds. Options: 5 or 10

  • Aspect Ratio (select, optional, default: 1280:720): The output video’s aspect ratio. Options:

    • 1280:720

    • 720:1280

    • 1104:832

    • 832:1104

    • 960:960

    • 1584:672

  • Seed (number, optional): Random seed for reproducible results (0-4294967295). Using the same seed will produce the same video if all inputs and configurations are the same. Example: 123456

  • Watermark (boolean, optional, default: true): Whether to add a RunwayML watermark to the video. Options: true or false

Outputs:

  • Video URL (string): The URL to download or view the generated video.

  • Task ID (string): The unique identifier for the video generation task.

Example of Usage:

{
  "image_url": "https://example.com/image.png",
  "prompt": "A futuristic cityscape at sunset",
  "model": "gen4_turbo",
  "duration": 10,
  "aspect_ratio": "1280:720",
  "seed": 123456,
  "watermark": false
}

Output:

{
  "video_url": "https://runwayml.com/generated/video123.mp4",
  "task_id": "task_abc123"
}


2. Generate Image from Text

Create an image from a detailed text prompt, with options for model, resolution, and more.

Inputs:

  • Prompt (string, required): A detailed text description of the image to generate. Example: "A serene mountain landscape with a clear blue lake"

Configurations:

  • Model (select, optional, default: Gen4 Image): The model to use for image generation. Options: "gen4_image"

  • Resolution (select, optional, default: 1024:1024): The output image’s resolution/aspect ratio. Options include:

    • 1920:1080 (16:9)

    • 1080:1920 (9:16)

    • 1024:1024 (Square)

    • 1360:768 (Landscape)

    • 1080:1080 (Square)

    • 1168:880 (4:3)

    • 1440:1080 (4:3)

    • 1080:1440 (3:4)

    • 1808:768 (Wide)

    • 2112:912 (Ultra Wide)

    • 1280:720 (HD)

    • 720:1280 (Portrait HD)

    • 720:720 (Square HD)

    • 960:720 (4:3 HD)

    • 720:960 (3:4 HD)

    • 1680:720 (Cinematic)

  • Seed (number, optional): Random seed for reproducible results (0-4294967295). Using the same seed will produce the same image if all inputs and configurations are the same. Example: 78910

  • Public Figure Threshold (select, optional, default: auto): Content moderation strictness for public figures. Options: "auto", "low"

  • Reference Images (string array, optional): Array of image URLs to use as references for generation. Can help augment the style of the image created. Example: ["https://example.com/ref1.jpg", "https://example.com/ref2.jpg"]

Outputs:

  • Image URL (string): The URL to download or view the generated image.

  • Task ID (string): The unique identifier for the image generation task.

Example of Usage:

{
  "prompt": "A serene mountain landscape with a clear blue lake",
  "model": "gen4_image",
  "resolution": "1920:1080",
  "seed": 78910,
  "public_figure_threshold": "auto",
  "reference_images": ["https://example.com/ref1.jpg"]
}

Output:

{
  "image_url": "https://runwayml.com/generated/image456.png",
  "task_id": "task_def456"
}


Summary Table

| Action | Required Inputs | Optional Configurations | Outputs (Required) |
| --- | --- | --- | --- |
| Generate Video | image_url | prompt, model, duration, aspect_ratio, seed, watermark | video_url, task_id |
| Generate Image | prompt | model, resolution, seed, public_figure_threshold, reference_images | image_url, task_id |

Advanced Settings

  • Retry on Failure: Enable retrying when the node execution fails

  • Fallback Branch: Create a separate branch that executes when this node fails, allowing you to handle errors gracefully









Outreach

Learn how to use the Outreach integration node in StackAI, including available actions, input requirements, configuration, and output details.

Outreach is a leading sales engagement platform. The Outreach Node allows you to automate and streamline sales operations, such as managing prospects, sequences, and tasks, directly from your StackAI workflows.

How to use it?

To use the Outreach node, select an action that matches your sales process needs (such as managing prospects or sequences). Configure the required connection and input parameters, then connect the node to other workflow steps to automate Outreach operations.

Inputs

  1. Actionparams (string): a stringified JSON object that specifies the details required to add a prospect to a specific sequence in Outreach.

    At a minimum, it typically defines:

    • The sequence ID (which Outreach sequence to add the prospect to)

    • The mailbox ID (which Outreach user account will send the sequence)

    • Optionally, other fields like starting step, custom fields, or override settings

    • Example: "{\"sequence_id\":123456,\"mailbox_id\":987654}"

  2. Id (integer): This is the unique Prospect ID assigned by Outreach to the prospect you want to delete.

    • In Outreach, if you view a prospect's profile, the URL will look like: https://app.outreach.io/prospects/123456. The number at the end (123456) is the Prospect ID.

    • Examples: 123456, 987321, 456789
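For clarity, the stringified Actionparams example above corresponds to this underlying JSON object (IDs are illustrative). Remember that the node expects it as a single escaped string, not as raw JSON:

```json
{
  "sequence_id": 123456,
  "mailbox_id": 987654
}
```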


Summary Table

All actions return the same outputs: Status code (integer), Headers (dict), and Body (object).

| Action Name | Inputs |
| --- | --- |
| Prospect Add to Sequence | Actionparams (string) |
| Delete An Existing Prospect By Id | Id (integer) |
| Get A Prospect By Id | Id (integer) |
| Get A Collection Of Prospects | — |
| Create A New Prospect | — |
| Activate | Id (integer) |
| Deactivate | Id (integer) |
| Delete An Existing Sequence By Id | Id (integer) |
| Get A Sequence By Id | Id (integer) |
| Update A Sequence | Id (integer) |
| Get A Collection Of Sequences | — |
| Create A New Sequence | — |
| Delete An Existing Task By Id | Id (integer) |
| Update A Task | Id (integer) |
| Get A Collection Of Tasks | — |
| Create A New Task | — |
| Get A User By Id | Id (integer) |
| Update A User | Id (integer) |
| Get A Collection Of Users | — |
| Create A New User | — |


Best Practices:

  • Always ensure you provide the correct connection ID for authentication.

  • Required fields must be filled for the action to execute successfully.

  • Use outputs from previous nodes (e.g., new lead data) as dynamic inputs for Outreach actions to automate your sales workflow.

Advanced Settings

Stream Data

Default - ON. When ON, output from the LLM is shown word-by-word, as it is produced. If you'd prefer the output to appear all at once when the LLM is done thinking, turn this feature OFF.

Safe Context Window

Default - OFF. When ON, context will be automatically reduced to the model's maximum context size. If OFF, you will get an error message if your context gets too big. Be careful: turning this on means you might unexpectedly lose meaningful context without a warning!

Charts

To maintain consistency, chart generation is now handled exclusively by the dedicated StackAI analysis tool.

Date & Time

Default - ON. When ON, the LLM will be aware of the current date & time.

Guardrails

Default - OFF. Turn ON guardrails to screen for toxic content, legal advice, or suicidal thoughts.

PII Compliance

Default - OFF. Turn ON PII Compliance in order to hide certain types of PII from the LLM if entered by a user.

Temperature

Default - 0. Increase temperature in order to increase randomness in the output, making the model less deterministic.

Max Output Length

Default - 3000. Increasing this slider will allow the model to give more verbose answers.

Retry on Failure

Default - OFF. Turn ON to enable retrying when execution of a node fails. This can help make your project more robust.

LLM Fallback Mode

Default - OFF. Turn ON to make your project more robust to model failure. This allows you to select a backup model and provider to use in case something goes wrong.

Fallback Branch

Default - OFF. When turned ON, you can specify a different flow to follow in case this LLM node's execution fails.

Knowledge Base Nodes

A knowledge base node performs a search or retrieval operation on a knowledge base of your choice, returning relevant document chunks or information in response to a query. The files uploaded to the Knowledge Base node can be reused across multiple flows. All created knowledge bases are automatically synced to the Knowledge Base Dashboard.


Node Settings & Search Parameters

Click on the node to change its settings.

At the top of the window, you will find a drop-down menu to select a Knowledge Base or choose documents to form a new Knowledge Base. You can select or de-select individual documents that you'd like to include.

Settings + Search Parameters

Below that, you will see the configurations for Settings and Search Parameters:

  • Output Format: Choose between chunks, pages, and docs.

  • Metadata Filter Strategy: Choose between Strict Filter, Loose Filter, and No Filter.

  • Query Strategy: Choose between Semantic, Keyword, and Hybrid.

  • Top Results: Number of search results ranked by relevance.

  • Max Characters: Limits the number of characters sent to the LLM.

  • Answer Multiple Questions: Get the answers from multiple questions in parallel.

  • Advanced Q&A: Handle questions that compare or summarize documents. When enabled, the knowledge base search automatically uses "retrieval utilities" to select the best mechanism for answering the user's question, depending on whether it aims to retrieve a fact, compare a set of documents, or summarize a document inside the knowledge base.

  • Rerank: Get more precise information retrieval. The knowledge base re-scores results with a sophisticated ranking algorithm and keeps only the most relevant half, which also reduces token usage.

  • Query Transformation: Get more precise information retrieval. Forces the knowledge base to rewrite the user message as a better question. This increases the quality of the search results for the language model.
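As a sketch, a fact-retrieval setup might combine the settings above as follows (the key names here are assumptions for illustration, not StackAI's exact internal identifiers):

```json
{
  "output_format": "chunks",
  "metadata_filter_strategy": "No Filter",
  "query_strategy": "Hybrid",
  "top_results": 5,
  "max_characters": 8000,
  "rerank": true,
  "query_transformation": true
}
```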

Advanced Upload Parameters

Some nodes, like the Websites Node and Google Drive Node, will also allow you to specify Advanced Upload Parameters, settings for how you'd like your documents to be ingested. You can also control these parameters from the KB Dashboard.

  • Model for Embeddings: by default, the text-embedding-3-large model from OpenAI is selected. However, you also have the option to select others, including azure-text-embedding-ada-002, bert-base-cased, all-mpnet-base, palm2, and more.

  • Chunking algorithm: by default, the system uses sentence. You can also choose naive.

  • Chunk overlap: by default, the system uses 500. You can set any value up to 4500 by clicking the number and editing it.

  • Chunk length: by default, the system uses 2500. You can set any value up to 4500 by clicking the number and editing it.

  • Advanced Data Extraction: for complex data like tables, images, and charts. Enable it if you want to extract content from these elements in your documents. By default, this option is deselected, since it will increase the latency of your workflow (i.e., it will run slower).

  • Text in images (OCR): by default, this option is deselected. Enable it if you want to extract text from images that are present in your documents.

  • Embeddings API key: by default, this field is empty and Stack AI's API key is used. If you would like to use your own, enter your API key in this field.

Chunking Algorithms

StackAI implements both sentence and naive chunking algorithms:

Naive Algorithms are typically simpler and less sophisticated. They often rely on basic methods like searching for specific keywords or phrases.

  • Lack of Context Understanding: they usually don't understand the context or the structure of the language. For example, a naive algorithm might count the frequency of words without understanding their meaning or part of speech.

  • Speed and Efficiency: due to their simplicity, these algorithms can be faster and more efficient, especially for straightforward tasks.

  • Limitations: naive algorithms are generally less accurate in complex language processing tasks. They might miss nuances, sarcasm, or idiomatic expressions.

Sentence Chunking Algorithms are more sophisticated. They break text down into syntactically correlated groups of words, such as noun phrases and verb phrases.

  • Context and Structure Understanding: sentence chunking algorithms understand the structure of a sentence. They analyze parts of speech and how words relate to each other in a sentence.

  • Accuracy: they are more accurate in understanding the meaning and context of sentences. This makes them suitable for complex tasks like sentiment analysis, information extraction, and language translation.

  • Resource Intensity: these algorithms are usually more resource-intensive due to their complexity. They might require more computational power and time to process text.
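The contrast can be sketched with a toy example: naive chunking slices on a fixed character budget (and may cut mid-sentence), while sentence chunking accumulates whole sentences up to the budget. This is an illustrative sketch, not StackAI's internal implementation; the parameter values simply mirror the defaults described above (chunk length 2500, overlap 500).

```python
import re

def naive_chunks(text, length=2500, overlap=500):
    """Fixed-size windows; may cut words or sentences mid-way."""
    step = length - overlap
    return [text[i:i + length] for i in range(0, len(text), step)]

def sentence_chunks(text, length=2500):
    """Accumulate whole sentences until the character budget is reached."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > length:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

doc = "First sentence. Second sentence! Third one? " * 100
print(len(naive_chunks(doc)))     # fixed 2500-char windows, 2000-char step
print(len(sentence_chunks(doc)))  # chunks that end on sentence boundaries
```

Note that every sentence-based chunk ends at a sentence boundary, which is what makes this mode friendlier to downstream retrieval, at the cost of extra processing.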

To learn more about best practices with regard to chunking, see our guide to chunking.

File Status

You will see a label for each document that you upload with the following icons:

  • Pending: the document is being processed and indexed.

  • ✅: the document was successfully indexed.

  • Error: the document could not be indexed (e.g., due to a formatting issue).

Metadata Filter Strategy

Metadata filtering helps narrow down the documents retrieved from a vector store. The filters operate on metadata associated with each document — like date, source, topic, etc.

If you want to surface as much information as possible, it is best to use no filter. In this case, no metadata constraints are applied, and the system retrieves the top-k most relevant documents based on your search algorithm. This strategy works best when your knowledge base is not very large, because without filters the likelihood of retrieving irrelevant documents grows with the size of the knowledge base.

With loose filtering, metadata is used as a soft constraint — the system prefers documents matching the filter, but still considers other documents. This gives you the best of both worlds and is best used when metadata is helpful but not critical.

Strict filtering should be used for situations where you have many similar documents in your KB that you need to distinguish between. With a strict filter, metadata constraints are hard requirements — only documents matching the specified metadata are eligible for retrieval. When results must meet certain conditions — e.g. regulatory compliance, user access control, project-specific scopes — strict filtering is the way to go.
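The three strategies can be sketched as a small post-processing step over scored search results. The data model below (score/metadata tuples, a fixed boost for loose matches) is a hypothetical illustration, not StackAI's retrieval code.

```python
# Toy sketch of no / loose / strict metadata filtering over scored
# search results. Hypothetical data model -- not StackAI's internals.

def apply_filter(results, required_meta, strategy="none"):
    """results: list of (score, metadata) tuples, higher score is better."""
    def matches(meta):
        return all(meta.get(k) == v for k, v in required_meta.items())

    if strategy == "strict":
        # Hard requirement: non-matching documents are dropped entirely.
        results = [r for r in results if matches(r[1])]
    elif strategy == "loose":
        # Soft preference: boost matching documents, but keep the rest.
        results = [(score + (0.5 if matches(meta) else 0.0), meta)
                   for score, meta in results]
    # "none": no metadata constraint at all.
    return sorted(results, key=lambda r: r[0], reverse=True)

docs = [(0.9, {"topic": "hr"}), (0.8, {"topic": "legal"}),
        (0.7, {"topic": "legal"})]
print(apply_filter(docs, {"topic": "legal"}, "strict"))  # legal docs only
print(apply_filter(docs, {"topic": "legal"}, "loose"))   # legal docs rank first
```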

Query Strategy

In Retrieval-Augmented Generation (RAG), keyword-based search relies on traditional information retrieval techniques that match exact or fuzzy terms within documents. This approach works best when the query and content use consistent vocabulary, such as in legal, technical, or structured domains where terminology is predictable. It's especially useful when you know the precise terms you're looking for.

Semantic querying, on the other hand, uses vector embeddings to represent the meaning of both queries and documents. It enables retrieval based on conceptual similarity, rather than exact keyword matches. This makes it well-suited for natural language questions, varied phrasing, and content where language is less standardized—such as customer support, internal knowledge bases, or conversational search. By focusing on meaning, semantic search improves recall, but may miss documents with exact keyword relevance.

Hybrid querying combines both strategies—typically by blending keyword and semantic relevance scores or performing multi-stage retrieval. This approach provides the benefits of both precision and flexibility, making it ideal for general-purpose RAG systems that must handle a variety of user intents and content types. While slightly more complex to implement, hybrid search often yields the most balanced and robust retrieval performance in production applications.
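A common way to blend the two signals is a weighted sum of the keyword and semantic relevance scores. The sketch below is illustrative only: term overlap stands in for a keyword ranker such as BM25, and the semantic similarities are hard-coded stand-ins for embedding cosine similarity.

```python
# Minimal sketch of hybrid retrieval scoring. The weighting scheme and
# stand-in scores are assumptions for illustration, not StackAI's
# actual ranking implementation.

def keyword_score(query, doc):
    """Fraction of query terms that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(query, doc, semantic, alpha=0.5):
    """alpha weights semantic similarity against keyword overlap."""
    return alpha * semantic + (1 - alpha) * keyword_score(query, doc)

# Documents with stand-in semantic similarities to the query.
docs = {
    "refund policy for enterprise customers": 0.82,
    "quarterly revenue report":               0.31,
}
query = "enterprise refund policy"
ranked = sorted(docs, key=lambda d: hybrid_score(query, d, docs[d]),
                reverse=True)
print(ranked[0])  # "refund policy for enterprise customers"
```

Tuning `alpha` toward 1.0 favors conceptual similarity; toward 0.0 it favors exact term matches.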

Typical Workflow Structure

A common pattern is:

  1. User Input (Input Node): The user provides a question or prompt.

  2. Knowledge Base Node: Receives the user’s query (directly or via an LLM node) and retrieves relevant information from the knowledge base.

  3. LLM Node: Uses both the user’s input and the retrieved knowledge base content to generate a final, context-rich answer.
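The three steps above amount to retrieval followed by generation. In the sketch below, `search_kb` and `call_llm` are hypothetical stand-ins for the Knowledge Base and LLM nodes; the point is how the retrieved chunks are spliced into the LLM prompt.

```python
# Sketch of the Input -> Knowledge Base -> LLM pattern.
# search_kb and call_llm are hypothetical stand-ins for node behavior.

def search_kb(query):
    # A real Knowledge Base node returns the top-k relevant chunks.
    corpus = {"pto": "Employees accrue 1.5 PTO days per month.",
              "wfh": "Remote work requires manager approval."}
    return [text for key, text in corpus.items() if key in query.lower()]

def call_llm(prompt):
    # Stand-in for the LLM node; echoes the grounded prompt.
    return f"[answer grounded in]: {prompt}"

def run_workflow(user_input):
    chunks = search_kb(user_input)                    # Knowledge Base node
    prompt = ("Context:\n" + "\n".join(chunks) +
              f"\n\nQuestion: {user_input}")          # LLM node prompt
    return call_llm(prompt)                           # Output node

print(run_workflow("What is the pto accrual rate?"))
```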


How the Interface Works

A. Data Flow

  • The Knowledge Base node typically takes the user’s input as its query, searches the knowledge base, and outputs relevant text chunks.

  • The LLM node can reference the output of the Knowledge Base node in its prompt using the node’s ID.

B. Connections (Edges)

  • The Input node is connected to the Knowledge Base node (for the query).

  • The Knowledge Base node is connected to the LLM node (providing retrieved content).

  • The LLM node is connected to the Output node (displaying the answer).

C. Execution Order

  1. The user submits a question.

  2. The Knowledge Base node receives the question and retrieves relevant information.

  3. The LLM node receives both the user’s question and the retrieved information, then generates a response.

  4. The Output node displays the LLM’s answer.


Why Use This Pattern?

  • Retrieval-Augmented Generation (RAG): This approach allows the LLM to ground its answers in specific, up-to-date, or proprietary knowledge, improving accuracy and relevance.

  • Separation of Concerns: The Knowledge Base node handles retrieval, while the LLM node handles synthesis and reasoning.


Key Points

  • The LLM node does not “search” the knowledge base directly; it relies on the Knowledge Base node to do the retrieval.

  • The LLM node’s prompt must reference the Knowledge Base node’s output to use the retrieved information.

  • All node references must match actual node IDs in the workflow.


Available Knowledge Bases

  • Documents

  • Websites

  • Tables

  • Data

  • Sharepoint

  • Google Drive

  • OneDrive

  • Dropbox

  • Azure Blob Storage

  • AWS S3

  • Notion

  • Confluence

  • Veeva

  • ServiceNow

  • Jira

Tools

Add tools directly to your LLM Node to let the LLM decide when to use them. The LLM intelligently determines when and how to call these tools based on the context of the conversation and user inputs. Unlike an outside node, whose input is always passed in to the LLM, a tool is integrated into the LLM itself. This approach works best when:

  • You don't need the tool to be accessed at every query, it's okay for the LLM to autonomously decide when to use the tool.

  • You would like the LLM to have access to multiple tools at once.

LLMs with Tools

Only certain models are able to handle tools. If you don't see the option for tool calling in your LLM Node, it means that model is not built to handle tool calls.

Tool Provider

Before using a tool, it's important to understand how tools are organized in Stack AI. Tools are grouped under "Providers" - these are the main services or systems that contain related functionality. Think of a Provider as a container for multiple related tools.

For example:

  • Salesforce (Provider)

    • Create Lead (Tool)

    • Update Contact (Tool)

    • Search Records (Tool)

This organization makes it easy to find and use related tools. Additionally, providers share authentication headers and common access methods, allowing tools within the same provider to seamlessly utilize the same authentication and connection details when performing their actions.

LLM tool to execute another StackAI project

How It Works

  • The LLM analyzes the user's request or query to understand what action needs to be taken

  • It identifies which tool (API endpoint) is most appropriate for fulfilling that request

  • It automatically constructs the API request by filling in:

    • Query parameters

    • Body parameters

    • Path parameters

    • Headers

    • Any other required request data

Tools vs. Separate Node

When should you use a tool, and when should you use a separate node? This depends on what you want to accomplish. If you want to enforce using the app at every invocation, use a separate node: the LLM will have to use the app every time. If you only want the app to be used when necessary, and you want the LLM to decide, use a tool!

Tools are also a great choice if you want the LLM to have options. For example, if you want it to search LinkedIn, the Web, and your own knowledge base, you can add those tools to the same LLM and it may use one, two, or all of the options to answer your query.

On the other hand, if you want to make sure that a search is carried out across LinkedIn, the Web, and your KB, then it's better to have three separate nodes delivering their output to the LLM. In this case, be careful: concatenating inputs could exceed your chosen model's context window.

Prompt Optimization with Tools

If you'd like to reference the tool directly in your user prompt, type @ and then select the tool.

When using custom tools with an LLM node, it's important to provide clear prompting to help the LLM understand how and when to use your tools effectively:

  1. Describe the Tool's Purpose: Include a clear description of what the tool does and when it should be used in your system prompt. For example: "Use the addPet tool to add a new pet to the store database."

  2. Provide Usage Examples: Give examples of proper tool usage in your prompts to demonstrate the expected input/output patterns. For example: "addPet(name='Max', category='dog', status='available')"

  3. Set Clear Instructions: Specify any requirements or constraints for using the tool in your prompts. For example: "When using addPet, ensure all required fields (name, category, status) are provided."

  4. Handle Errors: Include guidance on how to handle potential errors or edge cases when using the tool. For example: "If addPet returns an error, verify the input data and try again with corrected values."

Example system prompt:

When the user wants to include a new pet, follow these steps:

1. Ask for the name of the pet
2. Use the listPets tool to check if the name already exists. If it does, ask the user for a different name that is not in the list.
3. If the pet name is unique, collect all required information for addPet.
   3.1. If any information is missing, ask the user for it.
4. Use the addPet tool to create the new pet entry
5. Use the getPetById tool to retrieve the newly created pet
6. Provide a summary confirming the successful pet addition with the key details

Custom Tools

Custom Tools enable AI agents to execute custom actions by integrating with your API systems and services. When you define API endpoints in your custom tools, each endpoint becomes a distinct tool that the LLM can utilize.

A custom tool represents a specific API endpoint and its functionality. Each tool has several key components:

  • Name: A unique identifier for the tool that can be referenced in LLM prompts. For example, if you name a tool addPet, you would reference it as "addPet" when instructing the LLM to use it.

  • Description: A clear explanation of what the tool does. This helps the LLM understand when and how to use the tool appropriately.

  • Path: The API endpoint path that the tool will call (e.g., /api/v1/pets)

  • Method: The HTTP method to use (GET, POST, PUT, DELETE, etc.)

When the LLM needs to create a new pet, it can reference the addPet tool by name and provide the necessary parameters based on the tool's description and requirements.

Create a Custom Tool

Custom tools are defined through API services, allowing you to integrate external functionality into your LLM. When you create a custom tool, you'll describe your API endpoints and their capabilities.

Each API endpoint becomes a distinct tool that represents a specific action or operation in your system. The LLM will automatically understand how to use these endpoints and fill in the required parameters (like body and query parameters) based on the context and user input.

For example, if you have an e-commerce API:

  • The /products endpoint becomes a tool for retrieving product information, where the LLM can fill search parameters

  • The /orders/create endpoint becomes a tool for placing new orders, with the LLM providing order details in the request body

  • The /inventory/update endpoint becomes a tool for managing stock levels, where the LLM determines the updated quantities

This approach lets you transform your existing APIs into reusable tools that can be easily incorporated into any LLM, making your external services and systems accessible to AI agents. The LLM handles the complexity of constructing proper API requests by intelligently filling parameters based on the conversation context. Custom tools help you build more maintainable and scalable flows by promoting code reuse and modular design.

To create a custom tool:

  1. Navigate to an LLM Node that supports Tools (like GPT-4 or Claude)

  2. Click the "Tools" button in the Tools section

  3. Select the "Custom tools" tab where your custom tools will appear. Click the "Add Custom Tool" button.

This will open the custom tool creation interface where you can define your tool's functionality.

Adding Tool Information

To create a custom tool, you need to include:

  1. Tool Provider Name: Give your tool provider a descriptive name that represents the service or system

  2. OpenAPI Schema: Provide the OpenAPI specification that defines your API endpoints. The schema must include:

    • Server URLs for the API endpoints

    • Complete endpoint definitions with:

      • Important! Clear descriptions explaining what each endpoint does and its purpose to help the LLM understand how to use them correctly

      • HTTP methods (GET, POST, PUT, etc.)

      • Path parameters

      • Query parameters for GET requests

      • Detailed request body schemas for POST/PUT requests

      • Response schemas

      • Required headers specific to endpoints

  3. Common Headers (Optional): Define headers that should be applied across all endpoints, such as:

    • Authentication headers (e.g. API keys)

    • Custom headers required by your API

Each API endpoint defined in your OpenAPI schema will be automatically transformed into an individual tool that you can use in your LLMs. Taking time to properly configure these settings will make your tools more user-friendly and reliable.
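As a concrete illustration, here is a minimal OpenAPI 3.0 schema for the hypothetical pet-store API used in the examples above, expressed as a Python dict. The server URL and endpoint names are assumptions for illustration; the point is the structure StackAI expects: a server URL, and for each path/method pair (which becomes one tool, here "addPet") an operation ID, a clear description, and a request body schema.

```python
# A minimal OpenAPI 3.0 schema for a hypothetical pet-store API.
# The POST /pets endpoint would surface as one tool ("addPet").
import json

schema = {
    "openapi": "3.0.0",
    "info": {"title": "Pet Store", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com/api/v1"}],
    "paths": {
        "/pets": {
            "post": {
                "operationId": "addPet",
                "description": "Add a new pet to the store database.",
                "requestBody": {
                    "required": True,
                    "content": {"application/json": {"schema": {
                        "type": "object",
                        "required": ["name", "category", "status"],
                        "properties": {
                            "name": {"type": "string"},
                            "category": {"type": "string"},
                            "status": {"type": "string",
                                       "enum": ["available", "sold"]},
                        },
                    }}},
                },
                "responses": {"200": {"description": "Pet created"}},
            }
        }
    },
}
print(json.dumps(schema, indent=2)[:60])  # paste the JSON form into StackAI
```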

Your custom tool will now appear in the tools panel and can be used in any LLM!

HubSpot

Comprehensive guide to using the HubSpot node in StackAI workflows, including key actions, input requirements, configurations, and output examples.

What is HubSpot?

The HubSpot node in StackAI enables seamless integration with HubSpot CRM, allowing you to automate, query, and manage your sales, marketing, and customer data directly within your workflow. This node connects to HubSpot’s powerful CRM features, making it easy to retrieve, create, or update contacts, deals, pipelines, and more.


How to use it?

To use the HubSpot node, add it to your StackAI workflow and select the desired action. Each action corresponds to a specific HubSpot operation, such as searching for deals, retrieving contact information, or creating new records. Configure the required inputs and connection settings, then connect the node to other workflow components to automate your business processes.


Example of Usage

Suppose you want to search for deals in your HubSpot CRM based on specific criteria. You would select the "Search Deals" action, provide the necessary search parameters, and connect the output to a reporting or notification node.


Most Commonly Used Actions in HubSpot

Below are the most frequently used actions available in the HubSpot node, along with their input, configuration, and output details:


1. Search Deals

Description: Retrieve a list of deals from HubSpot CRM based on search criteria.

  • Inputs:

    • filters (Required): Array of filter objects specifying search conditions (e.g., propertyName, operator, value).

    • sorts (Optional): Array of property names to sort results.

    • limit (Optional): Maximum number of results to return.

    • after (Optional): Pagination cursor for additional results.

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • results: Array of deal objects matching the search criteria.

    • total: Total number of deals found.



2. Get Full Deal Data

Description: Retrieve detailed information for a specific deal.

  • Inputs:

    • deal_id (Required): The unique ID of the deal.

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • deal: Full deal object with all properties.



3. Get Contact with History

Description: Retrieve a contact’s details along with their historical changes.

  • Inputs:

    • contact_id (Required): The unique ID of the contact.

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • contact: Contact object with history.



4. Create Deal

Description: Create a new deal in HubSpot CRM.

  • Inputs:

    • properties (Required): Object containing deal properties (e.g., dealname, amount, pipeline, dealstage).

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • deal: The newly created deal object.



5. Create Contact

Description: Add a new contact to HubSpot CRM.

  • Inputs:

    • properties (Required): Object containing contact properties (e.g., email, firstname, lastname).

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • contact: The newly created contact object.



6. Get Pipeline

Description: Retrieve information about a specific pipeline.

  • Inputs:

    • pipeline_id (Required): The unique ID of the pipeline.

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • pipeline: Pipeline object with details.



7. List Pipelines

Description: List all pipelines in your HubSpot account.

  • Inputs: None.

  • Configurations:

    • connection_id (Required): Your HubSpot connection ID.

  • Outputs:

    • pipelines: Array of pipeline objects.


Summary Table

| Action | Required Inputs | Required Configurations | Outputs |
| --- | --- | --- | --- |
| Search Deals | filters | connection_id | results, total |
| Get Full Deal Data | deal_id | connection_id | deal |
| Get Contact with History | contact_id | connection_id | contact |
| Create Deal | properties | connection_id | deal |
| Create Contact | properties | connection_id | contact |
| Get Pipeline | pipeline_id | connection_id | pipeline |
| List Pipelines | None | connection_id | pipelines |


Example Inputs

Search Deals:

{
  "filters": [
    { "propertyName": "dealstage", "operator": "EQ", "value": "appointmentscheduled" }
  ],
  "limit": 5
}

Get Full Deal Data:

{
  "deal_id": "123456789"
}

Get Contact with History:

{
  "contact_id": "987654321"
}

Create Deal:

{
  "properties": {
    "dealname": "New Enterprise Deal",
    "amount": "50000",
    "pipeline": "default",
    "dealstage": "appointmentscheduled"
  }
}

Create Contact:

{
  "properties": {
    "email": "[email protected]",
    "firstname": "Jane",
    "lastname": "Doe"
  }
}

Get Pipeline:

{
  "pipeline_id": "default"
}


Best Practices

  • Always provide the required connection_id to authenticate with your HubSpot account.

  • Ensure all required input fields are filled to avoid errors.

  • Use the outputs to connect to downstream nodes for further automation, such as notifications, reporting, or data enrichment.


Example of Usage

To automatically add a new contact when a user submits a form, use the "Create Contact" action, map the form fields to the required properties, and connect the output to a notification or CRM update node.


This guide helps you leverage the HubSpot node in StackAI to automate and streamline your CRM workflows efficiently.

SAP

Comprehensive guide to the SAP node in StackAI: actions, required inputs, configurations, and outputs with clear examples.

The SAP Node in StackAI enables seamless integration with your SAP system, allowing you to automate, retrieve, and manage business data and processes directly within your workflows. It supports a wide range of actions for financial, project, and resource management.


How to use it?

  1. Add the SAP node to your StackAI workflow.

  2. Select the desired action (e.g. 'Get Entities From Financial Item Consumption').

  3. Establish a connection to your SAP account.

  4. Provide the required input parameters and configurations.

  5. Connect the output to downstream nodes for further processing or reporting.


Establishing a Connection

You must already have access to a valid SAP account with the necessary permissions.

  • Connection Name (string, required)

    • What it is: A label you assign to your SAP connection. It helps you identify the connection in Stack AI, especially if you have multiple SAP environments (e.g., production, staging, test).

    • Example:

      • Sam's Connection

      • Test Connection

  • SAP Host (string, required)

    • What it is: The hostname or IP address of your SAP server. This tells Stack AI where to send API requests.

    • Example:

      • sap.company.com

      • 192.168.1.100

      • sap-prod.internal.corp

  • SAP Port (string, required)

    • What it is: The network port on which your SAP server is listening for API or OData requests. Common SAP ports are 443 (HTTPS) or 50000+ for custom SAP services.

    • Example:

      • 443 (for secure HTTPS)

      • 50000 (for SAP NetWeaver Gateway)

      • 8000 (for HTTP, not recommended for production)

  • SAP Token URL (string, required)

    • What it is: The URL endpoint used to obtain OAuth2 tokens for authenticating with SAP. This is required if your SAP system uses OAuth2 for secure API access.

    • Example:

      • https://sap.company.com/oauth2/token

      • https://sap-prod.internal.corp:443/oauth/token

      • https://192.168.1.100:50000/sap/bc/sec/oauth2/token

  • SAP Client (string, required)

    • What it is: The SAP client number is a three-digit code that identifies a logical partition within your SAP system. Each client is like a separate environment (e.g., 100 for production, 200 for test).

    • Example:

      • 100 (Production)

      • 200 (Test)

      • 300 (Development)


Action Summary

| Name | Required Inputs | Optional Inputs | Output |
| --- | --- | --- | --- |
| Get Entities From Financial Planning Consumption Type | None | $Top, $Skip, $Filter, $Inlinecount, $Orderby, $Select | Status Code, Headers, Body |
| Get Entities From Financial Item Consumption | None | $Top, $Skip, $Filter, $Inlinecount, $Orderby, $Select | Status Code, Headers, Body |
| Get Entities From Team Project Consumption | None | $Top, $Skip, $Filter, $Inlinecount, $Orderby, $Select | Status Code, Headers, Body |
| Get Entities From Work Package Consumption | None | $Top, $Skip, $Filter, $Inlinecount, $Orderby, $Select | Status Code, Headers, Body |
| Get Entity From Financial Item Consumption By Key | Guid | SAP Client, $Select | Status Code, Headers, Body |
| Get Entity From Financial Planning Consumption Type By Key | Guid | $Select | Status Code, Headers, Body |
| Get Entity From Team Project Consumption By Key | Projectguid, Hierarchyguid, Appobjectguid, Taskguid, Entityguid | $Select | Status Code, Headers, Body |
| Get Entity From Work Package Consumption By Key | Guid | $Select | Status Code, Headers, Body |

  • $Top (integer)

    • A data service URI with the $top system query option returns the first N entities from the collection identified by the URI’s resource path. N must be an integer ≥ 0; otherwise, the URI is considered malformed.

  • $Skip (integer)

    • A data service URI with the $skip query option returns entities starting from position N + 1 in the collection identified by the URI’s resource path. N must be an integer ≥ 0; otherwise, the URI is considered malformed.

  • $Filter (string)

    • A data service URI with the $filter query option returns entities from the specified EntitySet that satisfy a given boolean expression.

      • Syntax: $filter=<bool expression>

      • Example: /Orders?$filter=ShipCountry eq 'France' — returns orders shipped to France. /Orders?$filter=Customers/ContactName ne 'Fred' — returns orders where the customer’s contact name is not Fred.

  • $Inlinecount (string)

    • The $inlinecount query option includes the total count of entities (after applying $filter) in the response.

      • Syntax: $inlinecount=allpages (include count) or $inlinecount=none (exclude count). Counting behavior is implementation-specific.

  • $Orderby (string_array)

    • The $orderby query option specifies how to sort entities in the EntitySet identified by the URI’s resource path.

      • Syntax: $orderby=<expression> [asc|desc], <expression> [asc|desc], ... Default sort order is ascending (asc) if not specified.

  • $Select (string_array)

    • The $select query option returns the same entities as without it, but limits the response to only the specified properties. The service may still include additional properties in the response.

  • SAP Client (string)

    • In SAP, a client number is a unique, three-digit identifier that distinguishes different business entities or organizational units within a single SAP system. It allows for the separation of data and configurations for various companies or projects, even within the same physical system. Client numbers typically range from 000 to 999, allowing for up to 1000 clients within a single SAP system.

  • Guid (string)

    • In SAP, a GUID (Globally Unique Identifier) is a unique key used to identify objects or components. It's a string of characters, often 32 hexadecimal characters long, that ensures uniqueness across different systems and databases.

    • To find the SAP GUID (Globally Unique Identifier) for an application object, you'll typically need to navigate to the specific transaction or object within the SAP system and access its properties or details. The exact location and method will depend on the specific SAP object you are working with.

  • Projectguid (string)

    • In SAP systems, a Project GUID (Globally Unique Identifier) is a unique identifier used to distinguish projects, activities, and other related elements within the system. It is a 16-byte (128-bit) identifier that is automatically generated by the SAP system.

  • Hierarchyguid (string)

    • In SAP, a "Hierarchy GUID" typically refers to a Globally Unique Identifier (GUID) used to uniquely identify a hierarchy within the system. This GUID acts as a primary key, ensuring that each hierarchy can be distinguished from others. It's often used in various applications and is a crucial element in data modeling and reporting within SAP.

  • Appobjectguid (string)

    • The Application Object GUID specifically refers to the unique identifier for an "application object"—which could be a project, task, document, or any business object managed within SAP.

  • Taskguid (string)

    • Taskguid is a 128-bit Globally Unique Identifier (GUID) that uniquely identifies a task object in SAP's project system.

    • In SAP, projects are often broken down into multiple tasks or work packages. Each of these tasks is assigned a unique GUID to distinguish it from other tasks, even across different projects.

  • Entityguid (string)

    • In SAP, an "entity" in this context typically refers to a specific record or object within the team project consumption module—such as a cost item, resource, or any other sub-object related to a project or task.

    • The Entityguid is a 128-bit unique identifier (GUID) that ensures you are referencing exactly the right entity, even if there are many similar records in the system.
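The OData system query options described above are combined into a single request URL as query parameters. The sketch below shows one way to compose them; the host, service path, and entity set name are illustrative placeholders, not a real SAP endpoint.

```python
# Sketch of composing OData query options ($top, $skip, $filter,
# $orderby, $inlinecount) into a request URL. Host and service path
# are placeholder assumptions.
from urllib.parse import urlencode

def odata_url(host, entity_set, top=None, skip=None, filter_=None,
              orderby=None, select=None, inlinecount=None):
    params = {}
    if top is not None:
        params["$top"] = top
    if skip is not None:
        params["$skip"] = skip
    if filter_:
        params["$filter"] = filter_
    if orderby:
        params["$orderby"] = ", ".join(orderby)
    if select:
        params["$select"] = ", ".join(select)
    if inlinecount:
        params["$inlinecount"] = inlinecount
    # safe="$" keeps the option names readable in the encoded URL
    return f"https://{host}/sap/odata/{entity_set}?{urlencode(params, safe='$')}"

url = odata_url("sap.company.com", "FinancialItemConsumption",
                top=10, skip=20,
                filter_="ShipCountry eq 'France'",
                orderby=["CreatedAt desc"],
                inlinecount="allpages")
print(url)
```

With `top=10, skip=20`, the service would return results 21 through 30, along with a total count because `$inlinecount=allpages` is set.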

SharePoint

Discover how to search and retrieve files from SharePoint using StackAI, including required inputs, configurations, and output details.

The SharePoint Node in StackAI allows you to search for files stored in your SharePoint environment. You can specify search queries, filter by file types, and control the number of results returned. This is ideal for automating document retrieval, content management, and knowledge discovery. The node can search both files stored in SharePoint and SharePoint News.

Example of Usage

Suppose you want to find all PDF reports related to "Quarterly Sales" in your SharePoint. You would set up the node as follows:

  • Input Example:

    • Search Query (string, required): "Quarterly Sales"

    • File Types (string array, optional): ["pdf"]

    • Max Results (integer, optional): 10

The node will return a list of matching files, including their names, URLs, types, modification dates, and more.


Setting Up A Connection

To use the SharePoint node, you must connect your SharePoint account to Stack AI using OAuth2. This connection requires several key pieces of information, which are typically provided by your Microsoft Azure/SharePoint administrator:

1. Client ID (string, required)

  • What it is: A unique identifier for your Azure AD application (also called "Application (client) ID").

  • Where to find it: In the Azure portal, under "Azure Active Directory" > Under "Manage" select "App registrations" > [Your App] > "Overview".

  • Example: b1a7c8e2-1234-4f56-9abc-1234567890ab

2. Client Secret (string, required)

  • What it is: A password-like value generated for your Azure AD application, used to authenticate your app.

  • Where to find it: In the Azure portal, under "Azure Active Directory" > "App registrations" > [Your App] > "Certificates & secrets". You must create a new client secret and copy the value.

  • Example: wJ8Q~abc1234efgh5678ijklmnop9qrstuvwx

3. Tenant ID (string, required)

  • What it is: The unique identifier for your Microsoft 365 tenant (organization).

  • Where to find it: In the Azure portal, under "Azure Active Directory" > "Overview" > "Tenant ID".

  • Example: 72f988bf-86f1-41af-91ab-2d7cd011db47

4. SharePoint Site ID (string, optional)

  • What it is: The unique identifier for the SharePoint site you want to access. This is not the site URL, but an internal ID.

  • Where to find it: You can get this via the Microsoft Graph API or from your SharePoint admin. Sometimes, it is in the format: contoso.sharepoint.com,12345678-90ab-cdef-1234-567890abcdef,abcdef12-3456-7890-abcd-ef1234567890

  • Example: contoso.sharepoint.com,12345678-90ab-cdef-1234-567890abcdef,abcdef12-3456-7890-abcd-ef1234567890


Available Actions

1. Search Files

Search for files and documents in SharePoint using a query and optional filters.

Inputs

| Name | Description | Example | Required |
| --- | --- | --- | --- |
| Search Query | The search string to find files and documents | "Quarterly Sales" | Yes |
| File Types | List of file types to filter by | ["pdf", "docx"] | No |
| Max Results | Maximum number of results to return | 10 | No |

  • Search Query (string, required): The keywords or phrase to search for in file names and content.

  • File Types (array of strings, optional): Filter results by file extensions (e.g., "pdf", "docx", "xlsx").

    • You can include any file extension that is supported by your SharePoint environment.

    • Common examples include:

      • "pdf" (PDF documents)

      • "docx" (Word documents)

      • "xlsx" (Excel spreadsheets)

      • "pptx" (PowerPoint presentations)

      • "txt" (Text files)

      • "csv" (Comma-separated values)

      • "jpg", "png", "gif" (Image files)

      • "zip" (Compressed archives)

      • ...and any other file extension that your SharePoint instance stores

  • Max Results (integer, optional, default: 20): Limit the number of files returned.

Outputs

Each file in the output includes:

  • File ID (string): Unique identifier for the file.

  • File Name (string): Name of the file.

  • File URL (string): Direct link to access the file.

  • File Type (string): File extension/type (e.g., "pdf").

  • Modified Date (string): Last modified date.

  • Size (integer): File size in bytes.

  • Author (string): File author or creator.

Example Output:

{
  "files": [
    {
      "file_id": "12345",
      "file_name": "Quarterly_Report_Q1.pdf",
      "file_url": "https://contoso.sharepoint.com/sites/finance/Shared%20Documents/Quarterly_Report_Q1.pdf",
      "file_type": "pdf",
      "modified_date": "2025-06-15T10:23:45Z",
      "size": 1048576,
      "author": "Jane Doe"
    },
    {
      "file_id": "67890",
      "file_name": "Budget_2025.pdf",
      "file_url": "https://contoso.sharepoint.com/sites/finance/Shared%20Documents/Budget_2025.pdf",
      "file_type": "pdf",
      "modified_date": "2025-07-01T14:05:12Z",
      "size": 2097152,
      "author": "John Smith"
    }
  ],
  "total_count": 2
}
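As a usage sketch, the output above can be post-processed in a downstream Python step, for example to rank files by recency. Field names follow the example output; the helper below is illustrative, not part of the node.

```python
def newest_first(result: dict) -> list[dict]:
    # ISO 8601 timestamps sort correctly as plain strings,
    # so a lexicographic sort gives newest-first ordering.
    ranked = sorted(result["files"], key=lambda f: f["modified_date"], reverse=True)
    # Convert size from bytes to megabytes for readability.
    return [{"name": f["file_name"], "mb": f["size"] / 1_048_576, "url": f["file_url"]}
            for f in ranked]
```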

2. Create Word Document

This action creates a new Microsoft Word (DOCX) file in a specified SharePoint site and folder, using the content you provide in Markdown format. It is typically used to automate report generation, meeting notes, or any other document creation directly in your SharePoint library.

Inputs

| Name | Description | Example | Required |
|------|-------------|---------|----------|
| File Name | Name of the Word document to create (must end in .docx). | Report.docx | Yes |
| Hostname | SharePoint site hostname (e.g., your company’s SharePoint domain). | contoso.sharepoint.com | No |
| Site Name | SharePoint site name (e.g., sites/YourSiteName). | sites/Finance | No |
| Content | Document body text in Markdown format. | # Q2 Report\nSummary... | Yes |
| Folder Path | Destination folder path within the site. If omitted, saves to the root folder. | /Shared Documents/Reports | No |

Outputs

On success, the action returns:

  • File ID (string): Unique identifier of the created file.

  • File URL (string): URL to open the document in SharePoint/OneDrive.
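For context, a sketch of where the created file plausibly lands: Microsoft Graph's simple upload route for a site drive. The endpoint shape is Graph's documented PUT .../drive/root:/{path}:/content; the Markdown-to-DOCX conversion itself happens inside StackAI and is not shown here.

```python
from urllib.parse import quote

def upload_url(site_id: str, folder_path: str, file_name: str) -> str:
    # The action requires a .docx file name; fail fast otherwise.
    if not file_name.endswith(".docx"):
        raise ValueError("file name must end in .docx")
    path = f"{folder_path.strip('/')}/{file_name}"
    # quote() keeps "/" intact but percent-encodes spaces and other
    # characters in folder names like "Shared Documents".
    return (f"https://graph.microsoft.com/v1.0/sites/{site_id}"
            f"/drive/root:/{quote(path)}:/content")
```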


Advanced Settings

  1. Retry on Failure: Enable retrying when the node execution fails

  2. Fallback Branch (integer): Create a separate branch that executes when this node fails, allowing you to handle errors gracefully


Summary Table

| Action Name | Description | Required Inputs | Optional Inputs | Outputs |
|-------------|-------------|-----------------|-----------------|---------|
| Search Files | Search for files in SharePoint | Search Query | File Types, Max Results | Files, Total Count |
| Create Word Document | Creates a Word document in SharePoint | File Name, Content | Hostname, Site Name, Folder Path | File ID, File URL |


Best Practices

  • Always provide a clear and specific search query for best results.

  • Use file type filters to narrow down results when searching for specific document formats.

  • Adjust the max results parameter to control the volume of data returned.

Slack

The Slack Node allows you to connect with your Slack account and retrieve data from specific channels through StackAI's App for Slack. You can also deploy a custom bot for full control over branding and configuration.

Please review our privacy policy before you install the application.

Disclaimer: StackAI uses advanced generative AI technology to assist users directly within Slack by answering questions, summarizing discussions, and generating helpful content. Please note that AI-generated responses may occasionally be inaccurate or incomplete. StackAI does not use Slack data to train its models, and all user data is handled in accordance with our data retention and security policies.

Adding StackAI's App for Slack

The StackAI App for Slack can be added to both private and shared channels in your Slack workspace. Below, we'll guide you through the steps required:

Step 1: Select Slack (OAuth2) as a New Connection

Step 2: Choose a Channel and Allow Access

You will see a pop-up window with StackAI requesting access.

Select a channel to validate the connection. This is the channel your app will be able to post to. Your app will also inherit your user permissions and be able to read from all the channels you have access to.

Step 3: Ensure the Connection is Selected

You can test the connection to make sure you are connected using the "Test" button.

Step 4: Use Your Slack App

Select the actions you would like to use in Slack. To run the actions, make sure you have the parameters filled out.


Available Actions

1. Search Slack Messages (slack_search_messages)

  • Description: Search for messages across the Slack workspace using query parameters with optional channel filtering.

  • Inputs:

    • query (string, required): Search query to find messages in the Slack workspace. Supports advanced search syntax such as 'from:@user', 'in:#channel', and date ranges. Example: "project update from:@John"

    • channels (array of strings, optional): List of channel IDs to filter search results. A multi-selector in the node automatically retrieves all available channels. If empty, all accessible channels are searched

    • count (number, optional): Maximum number of search results to return. Default: 20, Maximum: 100

    • sort (string, optional): Sort results by timestamp or relevance score. Default: "timestamp"

    • sort_dir (string, optional): Sort direction. Options: "asc" or "desc". Default: "desc"

    • highlight (boolean, optional): Whether to highlight matching terms in results. Default: true

  • Outputs:

    • query (string): The search query that was executed (with channel filters applied)

    • total_matches (number): Total number of messages matching the search query

    • messages (array): List of messages matching the search query. Each message contains: text, user, username, timestamp, channel_id, channel_name, permalink, is_thread_reply, parent_timestamp

    • has_more (boolean): Whether there are more results available

2. Send a Message (slack_message)

  • Description: Send a message to a Slack channel.

  • Inputs:

    • message (string, required): Content/body of the message you want to send

    • channel_id (string, required): The Slack channel to send the message to

    • Dynamic options populated from available channels

  • Outputs:

    • channel_id (string): The Slack channel ID where the message was sent

    • results (string): The status of the message sent (confirmation or error)

    • message_ts (string): Slack message timestamp (unique identifier)

3. Send Message and Wait for Response (slack_send_and_wait_block_kit)

  • Description: Send a Slack message that collects an approval or free-text response using modern Block Kit.

  • Inputs:

    • message (string, required): Content/body of the interactive message

    • channel_id (string, required): The Slack channel to send the interactive message to

    • response_type (string, required): Choose response type. Options: "approval" (buttons) or "free_text" (opens modal to collect text). Default: "approval"

    • button_text (string, optional): Text displayed on the primary interactive button. Default: "Approve"

    • include_disapprove (boolean, optional): If true, include a Disapprove button. Default: true

    • disapprove_button_text (string, optional): Text displayed on the disapprove button. Default: "Disapprove"

    • free_text_placeholder (string, optional): Placeholder for the modal's text input. Used when response_type is "free_text"

    • block_id (string, optional): Unique identifier for the top section block

    • button_style (string, optional): Visual style. Options: "primary", "danger" or default

  • Outputs:

    • channel_id (string): The Slack channel ID where the interactive message was sent

    • results (string): The status of the interactive message sent

    • message_ts (string): Slack message timestamp (unique identifier)

    • resume_output (object, optional): User interaction data when workflow resumes

Note: If you are using your own connection, remember to enable Interactivity and include the https://api.stack-ai.com/workflow/v1/resume URL, as explained in the connection setup. This is critical; without it, the workflow cannot be resumed.
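For reference, here is a sketch of the Block Kit payload such an approval message plausibly uses. The block types ("section", "actions", "button") and styles are standard Block Kit; the action_id values are illustrative assumptions, not the node's actual identifiers.

```python
def approval_blocks(message: str, button_text: str = "Approve",
                    include_disapprove: bool = True,
                    disapprove_button_text: str = "Disapprove") -> list[dict]:
    buttons = [{
        "type": "button",
        "text": {"type": "plain_text", "text": button_text},
        "style": "primary",
        "action_id": "approve",  # hypothetical action_id
    }]
    if include_disapprove:
        buttons.append({
            "type": "button",
            "text": {"type": "plain_text", "text": disapprove_button_text},
            "style": "danger",
            "action_id": "disapprove",  # hypothetical action_id
        })
    return [
        # The message body, followed by an actions block with the buttons.
        {"type": "section", "text": {"type": "mrkdwn", "text": message}},
        {"type": "actions", "elements": buttons},
    ]
```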

4. Upload File (slack_upload_file)

  • Description: Upload files to Slack using modern external upload workflow with multiple source options.

  • Inputs:

    • source_url (string, optional): URL to download file from.

    • bytes_b64 (string, optional): Base64 encoded file content.

    • text (string, optional): Raw text content to upload as a file.

    • filename (string, optional): Name of the file. Required for bytes_b64 and text modes. Optional for source_url (inferred from URL).

    • title (string, optional): Title for the file (defaults to filename).

    • content_type (string, optional): MIME type of the file. Auto-set to 'text/plain' for text mode.

    • channel_id (string, optional): Channel to share the file. Leave empty to keep file private.

    • thread_ts (string, optional): Timestamp of thread to reply in.

    • initial_comment (string, optional): Initial comment with the file.

    • unfurl_links (boolean, optional): Auto-unfurl links in comment. Default: true.

    • unfurl_media (boolean, optional): Auto-unfurl media in comment. Default: true.

  • Outputs:

    • success (boolean): Whether the file was uploaded successfully.

    • file_id (string): Unique ID of the uploaded file.

    • file_url (string): Private URL to access the file.

    • permalink (string): Permanent link to the file.

    • sharing_summary (string): Summary of where the file was shared.

    • upload_method (string): Method used for upload.
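The three source options are mutually exclusive, which can be sketched as a small resolution helper. The rules mirror the inputs above: exactly one source must be provided, filename is required for bytes_b64 and text modes, and for source_url it is inferred from the URL path.

```python
import base64
from urllib.parse import urlparse

def resolve_file_source(source_url=None, bytes_b64=None, text=None, filename=None):
    provided = [s for s in (source_url, bytes_b64, text) if s is not None]
    if len(provided) != 1:
        raise ValueError("provide exactly one of source_url, bytes_b64, or text")
    if text is not None:
        if not filename:
            raise ValueError("filename is required for text mode")
        # content_type is auto-set to text/plain for text mode, per the docs.
        return filename, text.encode(), "text/plain"
    if bytes_b64 is not None:
        if not filename:
            raise ValueError("filename is required for bytes_b64 mode")
        return filename, base64.b64decode(bytes_b64), None
    # source_url mode: infer the filename from the URL path when not given.
    name = filename or urlparse(source_url).path.rsplit("/", 1)[-1]
    return name, None, None  # bytes are fetched later by the upload workflow
```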

5. Delete Slack File (slack_delete_file)

  • Description: Delete a file from the Slack workspace permanently.

  • Inputs:

    • file_id (string, required): The unique ID of the file to delete.

  • Outputs:

    • success (boolean): Whether the file was successfully deleted.

    • file_id (string): The ID of the file that was deleted.

    • message (string): Message describing the deletion result.

6. Get Slack File Info (slack_get_file_info)

  • Description: Get detailed information about a specific Slack file.

  • Inputs:

    • file_id (string, required): The unique ID of the file to get information about.

  • Outputs:

    • file (object): Detailed information about the file.

    • Includes: id, name, title, mimetype, filetype, size, created, user, urls, etc.

    • access_urls (object): URLs for accessing the file (require Authorization header).

    • sharing_info (object): Information about where the file is shared.

7. List Files (slack_list_files)

  • Description: List files in the Slack workspace with filtering options.

  • Inputs:

    • user (string, optional): Filter files by specific user ID.

    • channel (string, optional): Filter files by specific channel.

    • ts_from (string, optional): Filter files created after this timestamp (Unix timestamp).

    • ts_to (string, optional): Filter files created before this timestamp (Unix timestamp).

    • types (string, optional): Filter by file types. Options: "all", "spaces", "snippets", "images", "gdocs", "zips", "pdfs".

    • count (number, optional): Maximum number of files to return. Default: 20, Maximum: 1000.

    • show_files_hidden_by_limit (boolean, optional): Show files hidden by 5GB limit in free workspaces. Default: false.

  • Outputs:

    • files (array): List of files matching the criteria. Each file contains: id, name, title, mimetype, size, created, user, etc.

    • total_count (number): Total number of files found.

    • has_more (boolean): Whether there are more files available.

    • paging (object): Pagination information.

8. Query Slack Channel (slack_query)

  • Description: Query and retrieve messages from a pre-configured Slack channel.

  • Inputs:

    • No required input parameters (the channel is typically configured in the connection or node settings).

  • Outputs:

    • channel_id: The Slack channel ID that was queried.

    • results: The messages retrieved from the Slack channel. Includes main messages and thread replies. Each message contains: id, text, timestamp, is_thread_reply, parent_ts.

Airtable

Query and manage your Airtable bases directly from StackAI workflows.

The Airtable node in StackAI enables seamless integration with your Airtable workspaces. You can perform database queries, retrieve records, and automate data management tasks as part of your workflow automation.

Usage overview

  1. Choose the action that you'd like to perform in Airtable (query or write).

  2. Establish a connection to Airtable by either signing in via the New Connection button or selecting an existing connection from the dropdown. The Airtable node requires a valid API key or OAuth connection to access your bases. Validate the connection is 'Healthy' with Test.

  3. Fill out the parameters for the action that you'd like to take in Airtable by setting up the Inputs and Configurations (more details in dedicated sections below).

  4. The node can be connected to input nodes (for dynamic queries), LLM nodes (for natural language queries), or other action nodes for advanced automation.

Available Actions and Triggers

1. Query Airtable

Description: Query an Airtable base using a structured query or natural language.

Inputs:

  • Query (string, required):

    • The query is a plain English question or instruction that describes what data you want from the Airtable table. For example, you might write:

      • "Employees in the Marketing department"

      • "Employees hired after January 2024"

      • "Employees with the title 'Manager'"

Configurations:

  • Base_id (string, required): The unique identifier of your Airtable base.

    • Example: app1234567890

    • You can locate this in the URL on your Airtable browser: https://airtable.com/app1234567890/tbl...

  • Table (string, required): The name of the table to query.

    • Open your Airtable base.

    • Look at the tabs along the top (or left, depending on your layout). Each tab represents a table.

    • The table name is the label shown on each tab, e.g. Employees

  • View (string, optional): Name of the view to use for filtering/sorting.

    • Open your Airtable base.

    • Navigate to the specific table you're working with.

    • At the top left of the table, next to the table name, you’ll see a dropdown with the current view name (e.g., Grid view, Kanban, Calendar).

    • Click the dropdown to see all available views. The view you’re currently in is highlighted.

  • AirtableSearchModeEnum (select, optional):

    • Determines how your query is interpreted and executed.

    • Formula mode ("formula"): Translates your natural language query into Airtable formula syntax—ideal for precise, rule-based filtering (e.g., your query may be “Find all employees in the Sales department”).

    • Semantic mode ("semantic"): Uses semantic search to find records by meaning, not exact wording—useful for broader, context-driven queries (e.g., your query may be “Show me people who work with customers”.)

  • Max Records (integer, optional): Maximum number of records to return.

    • Example: 100

  • connection_id (string, required): The connection ID for your Airtable account.

    • Example: "your-connection-id"

Outputs:

  • records (array, required): List of records matching the query.

    • Each record includes field values and record ID.

    • Example:

      [
        {
          "id": "rec1234567890",
          "fields": {
            "Name": "Task 1",
            "Status": "Open",
            "Due Date": "2025-07-10"
          }
        }
      ]

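For context, Formula mode plausibly resolves to Airtable's documented list-records endpoint with a filterByFormula parameter. The sketch below only builds the request URL; the formula shown is one possible translation of the natural-language query "Employees in the Marketing department", and the field name {Department} is an assumption about the table.

```python
from urllib.parse import quote, urlencode

def list_records_url(base_id: str, table: str, formula: str,
                     max_records: int = 100) -> str:
    # Table names may contain spaces, so percent-encode them in the path.
    params = urlencode({"filterByFormula": formula, "maxRecords": max_records})
    return f"https://api.airtable.com/v0/{base_id}/{quote(table)}?{params}"

# "Employees in the Marketing department" could translate to:
url = list_records_url("app1234567890", "Employees", "{Department} = 'Marketing'")
```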
2. Write to Airtable

Description:

Insert or update records in an Airtable base.

Inputs:

  • Data (string, required): Your intent in plain English, describing what record you want to create and what values to set for each field.

    • Examples:

      • Add an employee named Alice Johnson, email [email protected], department Engineering, and start date July 15, 2025.

      • Create a new product with name 'Widget X', price $99.99, and category 'Electronics'.

    • Tips

      • Be as specific as possible about the fields and values you want to set.

      • Use the field names as they appear in your Airtable table for best results.

      • You can add multiple fields in one sentence.

Configurations:

  • Base Id (string, required): The unique identifier of your Airtable base.

    • Example: app1234567890

    • You can locate this in the URL on your Airtable browser: https://airtable.com/app1234567890/tbl...

  • Table (string, required): The name of the table to write to.

    • Open your Airtable base.

    • Look at the tabs along the top (or left, depending on your layout). Each tab represents a table.

    • The table name is the label shown on each tab, e.g. Employees

  • View (string, optional): Expects either the name or the ID of a view in your Airtable table.

    • Open your Airtable base.

    • Navigate to the specific table you're working with.

    • At the top left of the table, next to the table name, you’ll see a dropdown with the current view name (e.g., Grid view, Kanban, Calendar).

    • Click the dropdown to see all available views. The view you’re currently in is highlighted.

    • If you leave it blank, the default view (usually "Grid view") will be used.

    • Specifying a view can be useful if you want to restrict the write operation to records visible in that view

Outputs:

  • records (array, required): List of records that were created or updated.

    • Each record includes field values and record ID.

    • Example:

      [
        {
          "id": "rec0987654321",
          "fields": {
            "Name": "New Task",
            "Status": "In Progress"
          }
        }
      ]

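A sketch of the create-records payload such a write plausibly sends to Airtable's v0 REST API once the plain-English intent has been parsed. Field names must match the table's columns exactly, as the tips above note; the email value below is a hypothetical placeholder.

```python
import json

def create_records_payload(fields: dict) -> str:
    # Airtable's v0 API accepts a batch of records; one record shown here.
    return json.dumps({"records": [{"fields": fields}]})

# Intent: "Add an employee named Alice Johnson, department Engineering,
# start date July 15, 2025" could become:
payload = create_records_payload({
    "Name": "Alice Johnson",
    "Email": "alice@example.com",  # hypothetical placeholder address
    "Department": "Engineering",
    "Start Date": "2025-07-15",
})
```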
3. Update Airtable Record

Description:

Updates the values of an existing record on Airtable.

Inputs:

  • Record Id (string, required): The unique identification number of the record you would like to modify.

    • Retrieve Record ID from Airtable

      1. Using the Query Airtable Node

        • You can retrieve a record by using the Query Airtable node.

        • This method automatically returns the record ID as a parameter.

      2. Using a Formula Field in Airtable

        • Add a new Formula field to your table.

        • Enter the formula: RECORD_ID()

        • The field will display the record ID for each record.

        • Copy the ID from the relevant cell as needed.

  • Data (string, required): Your intent in plain English, describing what part of the record you would like to update.

    • Examples:

      • Change 'X' column's entry to 100

Configurations:

  • Base Id (string, required): The unique identifier of your Airtable base.

    • Example: app1234567890

    • You can locate this in the URL on your Airtable browser: https://airtable.com/app1234567890/tbl...

  • Table (string, required): The name of the table containing the record to update.

    • Open your Airtable base.

    • Look at the tabs along the top (or left, depending on your layout). Each tab represents a table.

    • The table name is the label shown on each tab, e.g. Employees


4. Delete Airtable Record

Description:

Delete an existing record on Airtable.

Inputs:

  • Record Id (string, required): The unique identification number of the record you would like to delete.

    • Retrieve Record ID from Airtable

      1. Using the Query Airtable Node

        • You can retrieve a record by using the Query Airtable node.

        • This method automatically returns the record ID as a parameter.

      2. Using a Formula Field in Airtable

        • Add a new Formula field to your table.

        • Enter the formula: RECORD_ID()

        • The field will display the record ID for each record.

        • Copy the ID from the relevant cell as needed.

Configurations:

  • Base Id (string, required): The unique identifier of your Airtable base.

    • Example: app1234567890

    • You can locate this in the URL on your Airtable browser: https://airtable.com/app1234567890/tbl...

  • Table (string, required): The name of the table containing the record to delete.

    • Open your Airtable base.

    • Look at the tabs along the top (or left, depending on your layout). Each tab represents a table.

    • The table name is the label shown on each tab, e.g. Employees


Summary Table of Actions

| Action Name | Description | Required Inputs | Required Configurations | Outputs |
|-------------|-------------|-----------------|-------------------------|---------|
| Query Airtable | Query records from a base/table | base_id, table_name | connection_id | query, results |
| Write to Airtable | Insert/update records in a table | base_id, table_name, records | connection_id | record_id, fields |
| Update Airtable Record | Update an existing record in a table | record_id, data | base_id, table_name | record_id, fields |
| Delete Airtable Record | Delete a record in an existing table | record_id | base_id, table_name | record_id, deleted |


Jira

Comprehensive guide to using the Jira node in StackAI workflows, including top actions, input requirements, configurations, and output details.

What is Jira?

Jira is a powerful project management tool designed for issue and ticket tracking. The Jira node in StackAI allows you to automate the creation of Jira issues directly from your workflow, streamlining project management and team collaboration.


Establishing A Connection

  1. Click 'Create Connection' and give it a name you’ll recognize later (e.g., Sam's Connection).

  2. You’ll be redirected to the Atlassian website — sign in using your existing Jira account.

  3. Accept the requested permissions.

  4. Once redirected back to StackAI, open the dropdown menu under 'Select Connection' and select your newly created connection.

  5. Click the 'Test' button to verify the connection status is Healthy.


Action Summary Table

| Action | Description | Inputs | Outputs |
|--------|-------------|--------|---------|
| Create Issue | Automatically create a new issue (ticket) in your Jira project. | Project ID, Issue Type ID, Summary, Description, Assignee ID, Parent Key, Priority ID, Labels, Due Date, Component, Custom Fields | Issue Key, Issue ID, Issue URL, Summary, Status, Message |
| Add Issue Attachment | Add one or more file attachments to a Jira issue. | Jira Issue ID or Key, File Path | Issue ID or Key, Issue, Message |
| Add Issue Comment | Add a comment to an existing Jira issue. | Issue ID, Comment Text | Jira Comment ID, Jira Issue URL, Message |
| Link Jira Issues | Create a link between two Jira issues with an optional comment. | Outward Issue Id Or Key, Inward Issue Id Or Key, Link Type, Comment Text | Outward Jira Issue, Inward Jira Issue, Jira Link Type, Operation Message |
| Get Issue | Retrieve details of a specific Jira issue. | Issue ID or Key, Fields, Expand, Update History, Custom Field IDs, Include All Custom Fields | Issue ID or Key, Issue |
| Get Issue Comments | Retrieve comments for a Jira issue. | Jira Issue ID or Key, Start At, Max Results, Order By, Expand | Issue ID or Key, Comments |
| Get Project | Get a specific Jira project by its ID or key. | Project ID or Key, Expand | Project ID, Project Key, Project Name, Project Category, Project Description, Project Lead, Project Issue Types, Project URL, Project Keys |
| List Projects | Retrieve a list of all Jira projects accessible to your account. | Start At, Max Results, Order By, Query, Type Key, Status, Expand | Total Projects, Start At, Max Results, Is Last, Projects |
| Update Issue | Modify details of an existing Jira issue. | Issue ID or Key, Summary, Description, Labels Add, Labels Remove, Assignee, Priority, Custom Fields, Notify Users, Return Issue | Issue ID, Issue, Message |

Input Breakdowns

  • Assignee (ID) (string)

    • Description: Account ID of the user to assign the issue to. Use null or an empty string to unassign.

    • Where to find it: In Jira Cloud, the Account ID is a unique identifier for each user. You can usually find it by:

      1. Going to the user's profile in Jira (the URL will contain the accountId parameter).

      2. Using the Jira API to list users, which will return their accountId.

      3. Sometimes, when assigning users in the Jira UI, you can inspect the network requests to see the accountId.

    • Example: 5b10a2844c20165700ede21g

  • Comment Text (string)

    • Description: The text content of the comment that will be added to the Jira issue.

    • Example: Linking this issue to track its dependency on the target issue

  • Component (IDs) (string)

    • Description: Components are sub-sections or parts of a Jira project. They are used to group issues within a project into smaller parts, such as features, teams, modules, or functional areas. Each component has a unique component ID within the project.

    • Where to find it: In the Jira UI

      • Go to your Jira project.

      • In the left sidebar, look for "Project settings" (or "Settings").

      • Click on "Components."

      • Here, you will see a list of all components for the project, along with their names and IDs (the ID is often visible in the URL when you click on a component, or you can get it via the API).

    • Example: ["10001", "10003"]

  • Custom Field IDs (string)

    • Description: A comma-separated list of specific custom field IDs to extract from the Jira issue. Only these fields will be included in the custom_fields response.

    • Where to find it: You can find custom field IDs in your Jira instance by navigating to Jira Administration > Issues > Custom Fields. The field ID is usually shown in the URL when you edit a custom field (e.g., .../customfields/customfield_12345). You can also use the Jira REST API to list all custom fields and their IDs.

    • Example: customfield_18165, customfield_12345

  • Custom Fields (string)

    • Description: A JSON object containing custom field key-value pairs i.e. additional fields that your organization has configured to capture information beyond the standard fields (like summary, description, priority, etc.). These fields can be of various types (text, number, date, dropdown, user picker, etc.) and are used to tailor Jira issues to your team's specific needs.

    • Example:

      "custom_fields": {
        "customfield_10010": "Affects all users in Europe",
        "customfield_10011": "High"
      }
  • Description (string)

    • Description: A detailed description of the issue (supports Atlassian Document Format)

    • Example: Steps to reproduce the bug: 1. Log in to the app. 2. Click on the dashboard. 3. Observe the error message. Expected: Dashboard loads successfully. Actual: Error 500 is shown.

  • Due Date (string)

    • Description: Allows you to specify the deadline for the issue in the format YYYY-MM-DD. This field is optional—if provided, it sets when the issue should be completed; if left blank, no due date will be assigned.

    • Example: 2025-08-15

  • Expand (string)

    • Description: The expand parameter allows you to request additional information in the response.

    • Example: renderedBody

  • Fields (string)

    • Description: The fields parameter lets you specify which fields to include in the response when retrieving a Jira issue. You can use it to limit the output to only the fields you care about (like summary, status, or custom fields), or use special keywords to include all or only navigable fields.

    • Where to find it: You can find details about the fields parameter in the Jira Cloud REST API documentation for "Get issue" by searching for "fields parameter Jira API".

    • Example: summary,comment

  • File Path (string)

    • Description: The file path parameter specifies the location of the file you want to attach to a Jira issue. It must be a valid path to the file on your system or accessible storage.

    • Where to find it: You can find details about this parameter in the Jira Cloud REST API documentation for adding attachments or by searching for "Jira API add attachment file path".

    • Example: /Users/alex/Documents/screenshot.png

  • Include All Custom Fields (boolean)

    • Description: Whether to include all custom fields for exploration (WARNING: creates very long output).

  • Inward Issue Id Or Key (string)

    • Description: This parameter specifies the ID or key of the target (inward) Jira issue that you want to link to. It identifies the issue that will be on the receiving end of the link relationship.

    • Where to find it: You can find the issue key or ID in the Jira issue’s URL or at the top of the issue page. For more details, search for "Jira issue key" or see Atlassian’s documentation on Finding an issue key in Jira.

    • Example: PROJ-123, 10002

  • Issue ID (string)

    • Description: The Issue Id parameter specifies the unique identifier of the Jira issue where you want to add a comment. This is required to ensure the comment is attached to the correct issue.

    • Where to find it: You can find the Issue Id in the Jira issue’s URL (e.g., the part like "PROJ-123" in https://yourcompany.atlassian.net/browse/PROJ-123) or at the top of the issue page. It is also sometimes referred to as the "issue key."

    • Example: PROJ-123

  • Issue Type ID (string)

    • Description: Specifies what kind of issue you are creating in Jira. Common issue types include "Bug", "Task", "Story", "Epic", etc. Each type has its own workflow, fields, and purpose within your Jira project.

    • Where to Find It:

      • Jira Web Interface:

        1. Go to your Jira project.

        2. Click “Create” to open the new issue dialog.

        3. In the “Issue Type” dropdown, you’ll see the available types (e.g., Bug, Task, Story).

  • Labels (Add/Remove) (string)

    • Description: Labels are keywords or tags you can attach to a Jira issue to help categorize, filter, and search for related issues. They are optional and can be used for custom organization or reporting.

    • Where to find it: In the Jira issue creation or edit screen, there is a "Labels" field where you can add one or more labels. You can also see existing labels on the issue view page, usually near the bottom or in the details section.

    • Example: customer-request, backend, sprint-12

  • Link Type (string)

    • Description: Specifies the relationship between two Jira issues, such as whether one issue blocks another, duplicates it, or is simply related. This determines how the issues are visually and logically connected in Jira.

    • Where to find it: In Jira, when you manually link issues, you select the link type from a dropdown menu in the "Link" dialog (e.g., "Blocks", "Relates to", "Duplicate"). You can see available link types in your Jira instance by starting to link an issue or by asking your Jira admin for the configured link types.

    • Example: Blocks, Relates to, Duplicate

  • Max Results (number)

    • Description: The "max results" parameter controls the maximum number of comments to return per request when retrieving comments for a Jira issue.

    • Example: 10, 50

  • Notify Users (boolean)

    • Description: Whether to send email notifications about the update.

  • Order By (string)

    • Description: The "Order By" parameter determines the sort order of the returned comments, such as by creation date in ascending or descending order.

    • Example:

      • created - no sorting

      • -created - descending order by creation date

      • +created - ascending order by creation date

  • Outward Issue Id Or Key (string)

    • Description: This is the ID or key of the source Jira issue from which the link originates (the "outward" issue in the relationship). It identifies the issue that will be linked to another issue.

    • Where to find it: You can find the issue key or ID in the Jira web interface—it's usually displayed at the top of the issue page (e.g., "PROJ-123"). You can also copy it from the issue's URL or from search results in Jira.

    • Example: PROJ-123
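Link Type and the issue keys above together form the body of a Jira issue-link request. A minimal sketch of assembling that payload in Python (the shape follows Jira Cloud's REST API issueLink endpoint; the issue keys here are placeholders, not real issues):

```python
def build_issue_link_payload(link_type: str, outward_key: str, inward_key: str) -> dict:
    """Assemble an issue-link request body from a link type and two issue keys."""
    return {
        "type": {"name": link_type},           # e.g. "Blocks", "Relates to", "Duplicate"
        "outwardIssue": {"key": outward_key},  # the issue the link originates from
        "inwardIssue": {"key": inward_key},    # the issue being linked to
    }

payload = build_issue_link_payload("Blocks", "PROJ-123", "PROJ-456")
```

The same dictionary could be passed to downstream nodes or inspected before sending, which makes it easier to debug link-type mismatches.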

  • Parent Key (string)

    • Description: Specifies the key of the parent issue when you are creating a subtask in Jira.

      • It links the new subtask to an existing parent issue (like a Story, Task, or Bug).

      • For regular issues (not subtasks), you should leave this field blank.

    • Where to find it:

      • The parent key is the unique identifier of the parent issue, visible in Jira as the issue key (e.g., PROJ-123).

      • You can find it in the Jira issue list, in the issue’s URL, or at the top of the issue detail page.

    • Example: ENG-456

  • Priority (ID) (string)

    • Description: Sets the priority level of the Jira issue you are creating (such as "High", "Medium", or "Low"). The value must be the ID of the priority, not its name or label.

    • Where to find the priority ID:

      • In Jira, go to Issues → Priorities (admin section).

      • Each priority (like "High", "Medium", "Low") has a unique ID (e.g., "1", "2", "3", or sometimes a UUID).

      • You can also get the priority ID using the Jira API or by inspecting the page URL when editing a priority.

    • Example: 1

  • Project ID (string)

    • Description: A unique identifier for a Jira project.

    • Where to Find It:

      • Jira Web Interface:

        1. Go to your Jira dashboard.

        2. Click on "Projects" in the top menu and select your project.

        3. The project key is usually shown in the project’s URL and in the project header (e.g., "ABC" in "ABC-123").

        4. The project id (a numeric value) is not always visible in the UI, but the project key (a short code like "ABC") is commonly used and accepted in most API calls and integrations.

      • Jira URL Example:

        • If your project’s issues look like: https://yourcompany.atlassian.net/browse/ABC-123 Then "ABC" is the project key.
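As the URL example shows, the project key is simply the prefix of the issue key. A small illustrative helper (not part of the node) that extracts it from a browse URL:

```python
from urllib.parse import urlparse

def project_key_from_issue_url(url: str) -> str:
    """Extract the project key (e.g. "ABC") from a Jira browse URL
    such as https://yourcompany.atlassian.net/browse/ABC-123."""
    issue_key = urlparse(url).path.rsplit("/", 1)[-1]  # "ABC-123"
    return issue_key.split("-", 1)[0]                  # "ABC"

key = project_key_from_issue_url("https://yourcompany.atlassian.net/browse/ABC-123")
```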

  • Status (string)

    • Description: The status input parameter allows you to filter Jira projects by their status. You can specify one or more statuses (comma-separated) such as live, archived, or deleted (for projects in the recycle bin). By default, only live projects are returned.

    • Example: To list only archived projects, set status to archived.

  • Type Key (string)

    • Description: The "type key" parameter lets you filter Jira projects by their type. You can specify one or more types (comma-separated), such as business, service_desk, or software, to only return projects of those types.

    • Example: To list only software projects, set type key to software.

  • Query (string)

    • Description: The query parameter allows you to filter Jira projects by a search string. It returns projects whose key or name matches the provided text (case insensitive), making it easy to find specific projects by name or key.

    • Example: To find all projects with "marketing" in their name or key, set query to marketing.

  • Return Issue (boolean)

    • Description: Whether to return the full updated issue in the response.

  • Start At (number)

    • Description: The index of the first comment to return (pagination).

    • Example: 10
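Start At works together with Max Results to page through comments. A quick sketch of the offset arithmetic, assuming you know the total comment count up front:

```python
def page_offsets(total: int, max_results: int) -> list[int]:
    """Return the startAt value for each request needed to fetch `total`
    comments at `max_results` per page."""
    return list(range(0, total, max_results))

# 25 comments, 10 per page -> three requests at startAt 0, 10, and 20
offsets = page_offsets(25, 10)
```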

  • Summary (string)

    • Description: A concise summary or title for the Jira issue.

    • Example: Add dark mode support to dashboard

  • Update History (boolean)

    • Description: Whether to update the user's recently viewed project list.


Best Practices:

  • Always use the correct connection ID for your Jira account.

  • Ensure required fields are provided for each action.

  • Use outputs to connect Jira actions to downstream workflow nodes for further automation.


Advanced Settings

  • Retry on Failure: Enable retrying when the node execution fails

  • Fallback Branch: Create a separate branch that executes when this node fails, allowing you to handle errors gracefully

Salesforce

Integrate Stack AI with Salesforce for smarter sales workflows. Automate tasks, capture leads, and enhance CRM functionality with AI-powered insights.

The Salesforce Node allows you to connect directly to your Salesforce CRM account and query structured sales and customer data. It supports SQL queries to retrieve contact, lead, opportunity, and account information, making it easier to automate workflows, analyze performance, or power downstream nodes like LLMs or VectorDBs.

Once connected, Stack AI can use this data to enhance sales processes, generate insights, and improve decision-making.

Step-by-step guide for Salesforce connection credentials:

  1. Obtain your login credentials:

    • Username: your Salesforce login name.

    • Password: your Salesforce account password.

  2. After logging in, click on your profile picture in the upper right.

  3. In the dropdown menu that appears, click Settings.

  4. On the Settings page, in the left navigation, select Reset My Security Token.

  5. Click Reset Security Token.

  6. A new security token will be sent to the email address associated with your Salesforce account.

  7. Copy the security token from that email; it is what you will use for the connection.

How to connect it?

Steps to connect:

  1. Add the Salesforce node from the Apps section into your Stack AI project.

  2. Select "Query Salesforce" as the Action.

  3. Click on "Select a connection" or create a new connection with your credentials.

  4. Choose the appropriate domain for your Salesforce environment:

    • "login" or left blank: connecting to production or developer orgs

    • "test": connecting to a sandbox instance

    • mycustomdomain: using a custom domain (recommended for SSO or OAuth setups)

Note that the Username for your API is different from the email address you used to sign up for Salesforce.

  • Enabling the "sandbox" toggle tells Stack AI to connect to your Salesforce sandbox environment instead of your main (production) Salesforce account.

  • Disabling the "sandbox" toggle means Stack AI will connect to your production Salesforce environment.

  5. Add your Salesforce Schema in the node configuration to define which tables you want to extract.

    • For example: TABLE Contact (Id TEXT, Name TEXT, Industry TEXT);

    • To list tables from your Salesforce account, you can run the included Python script below on your local machine.

list_salesforce_schema_with_filter.py
from simple_salesforce import Salesforce

# Salesforce credentials (replace with environment variables in production)
username = ""
password = ""
security_token = ""

# 🔍 Specify object names to include (warning: leaving the list empty will include ALL objects)
included_objects = ["Contact", "Account", "Opportunity"]  # Case-sensitive

# Connect to Salesforce
sf = Salesforce(username=username, password=password, security_token=security_token)

# Mapping Salesforce field types to SQL types
def map_salesforce_type(sf_type):
    mapping = {
        "string": "TEXT",
        "textarea": "TEXT",
        "picklist": "TEXT",
        "id": "VARCHAR(18)",
        "boolean": "BOOLEAN",
        "int": "INTEGER",
        "double": "FLOAT",
        "currency": "DECIMAL(18,2)",
        "percent": "DECIMAL(5,2)",
        "date": "DATE",
        "datetime": "TIMESTAMP",
        "email": "TEXT",
        "phone": "TEXT",
        "url": "TEXT",
        "reference": "VARCHAR(18)",
        "base64": "BYTEA",
        "location": "TEXT",
    }
    return mapping.get(sf_type, "TEXT")


# Get all object descriptions
objects = sf.describe()["sobjects"]

# Open file for writing schema output
with open("salesforce_schema.txt", "w") as file:
    for obj in objects:
        obj_name = obj["name"]
        
        # Apply filter if specified
        if included_objects and obj_name not in included_objects:
            continue
        
        try:
            obj_details = getattr(sf, obj_name).describe()
            field_defs = []

            for field in obj_details["fields"]:
                field_name = field["name"]
                field_type = map_salesforce_type(field["type"])
                field_defs.append(f"{field_name} {field_type}")

            field_list = ", ".join(field_defs)
            table_def = f"TABLE {obj_name} ({field_list});"
            print(table_def)
            file.write(table_def + "\n")

        except Exception as e:
            error_msg = f"-- Could not process {obj_name}: {e}"
            print(error_msg)
            file.write(error_msg + "\n")

print("\nFiltered schema has been saved to 'salesforce_schema.txt'")

  6. Connect the input. It can be natural language or a SOQL query from an Input node or an LLM.

  7. Connect the output to a downstream node, such as an Output node, or directly to an LLM.

Visual overview

Below are some examples of valid SOQL queries for your Salesforce database, provided the corresponding tables have been included in the Schema:

SELECT Name, Email FROM Contact
SELECT Id, Email FROM Contact WHERE Income > USD10000
SELECT Id, Email FROM Contact WHERE LastName = 'Aceituno'
SELECT Id, Email, Name, Account.Name FROM Contact
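When a filter value comes from an upstream node, single quotes in it should be escaped before being interpolated into the query (SOQL escapes them with a backslash). A minimal sketch:

```python
def soql_quote(value: str) -> str:
    """Escape backslashes and single quotes for use inside a SOQL string literal."""
    return "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"

name = "O'Brien"  # e.g. a value passed in from an Input node
query = "SELECT Id, Email FROM Contact WHERE LastName = " + soql_quote(name)
```

Building the literal this way avoids malformed queries when names contain apostrophes.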

Available Actions

1. Salesforce Query

  • Purpose: Query Salesforce using plain English or SOQL.

  • Inputs:

    • sql_schema (required): The database schema (tables, columns, types, etc.).

    • query (required): Your question in plain English or a SOQL query.


2. Create Case

  • Purpose: Create a new case in Salesforce.

  • Inputs:

    • subject (required): The subject/title of the case.

    • status (optional): Status of the case (e.g., New, Working, Closed).

    • priority (optional): Priority level (e.g., High, Medium, Low).

    • origin (optional): Source of the case (e.g., Web, Phone, Email).

    • description (optional): Detailed description.

    • account_id (optional): Salesforce Account ID to associate.

    • contact_id (optional): Salesforce Contact ID to associate.
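The optional inputs above only need to be included when they are set. A hedged sketch of assembling a Create Case payload that drops unset fields (the field names follow Salesforce's standard Case object; the actual create call would go through the node):

```python
def build_case_payload(subject, **optional):
    """Build a Case creation payload, keeping only the optional
    inputs (status, priority, origin, ...) that were provided."""
    field_map = {  # node input -> Salesforce Case field
        "status": "Status",
        "priority": "Priority",
        "origin": "Origin",
        "description": "Description",
        "account_id": "AccountId",
        "contact_id": "ContactId",
    }
    payload = {"Subject": subject}
    for key, value in optional.items():
        if value is not None:
            payload[field_map[key]] = value
    return payload

payload = build_case_payload("Login fails", priority="High", origin=None)
```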


3. Update Case

  • Purpose: Update an existing case.

  • Inputs:

    • case_id (required): Salesforce Case ID to update.

    • subject (optional): New subject/title.

    • status (optional): New status.

    • priority (optional): New priority.

    • origin (optional): New origin/source.

    • description (optional): New description.

    • account_id (optional): New Account ID.

    • contact_id (optional): New Contact ID.


4. Delete Case

  • Purpose: Delete a case.

  • Inputs:

    • case_id (required): Salesforce Case ID to delete.


5. Create Opportunity

  • Purpose: Create a new opportunity.

  • Inputs:

    • name (required): Name of the opportunity.

    • stage_name (required): Sales stage (e.g., Prospecting, Closed Won).

    • close_date (required): Expected close date (YYYY-MM-DD).

    • amount (optional): Opportunity amount.

    • account_id (optional): Account ID to associate.

    • description (optional): Description.


6. Update Opportunity

  • Purpose: Update an existing opportunity.

  • Inputs:

    • opportunity_id (required): Salesforce Opportunity ID to update.

    • name (optional): New name.

    • stage_name (optional): New stage.

    • close_date (optional): New close date.

    • amount (optional): New amount.

    • description (optional): New description.


7. Delete Opportunity

  • Purpose: Delete an opportunity.

  • Inputs:

    • opportunity_id (required): Salesforce Opportunity ID to delete.


8. Create Contact

  • Purpose: Create a new contact.

  • Inputs:

    • last_name (required): Last name of the contact.

    • first_name (optional): First name.

    • email (optional): Email address.

    • phone (optional): Phone number.

    • account_id (optional): Account ID to associate.

    • title (optional): Job title.


9. Update Contact

  • Purpose: Update an existing contact.

  • Inputs:

    • contact_id (required): Salesforce Contact ID to update.

    • last_name (optional): New last name.

    • first_name (optional): New first name.

    • email (optional): New email.

    • phone (optional): New phone.

    • title (optional): New job title.


10. Delete Contact

  • Purpose: Delete a contact.

  • Inputs:

    • contact_id (required): Salesforce Contact ID to delete.



11. Create Comment

  • Purpose: Add a new comment to a Salesforce record (such as a Case, Opportunity, etc.).

  • Inputs:

    • parent_id (required): The Salesforce ID of the parent record to associate the comment with (e.g., Case ID, Opportunity ID).

    • comment_body (required): The text content of the comment.

    • is_published (optional): Boolean indicating if the comment should be published and visible to customers (default: true).


12. Update Comment

  • Purpose: Update the content or visibility of an existing Salesforce comment.

  • Inputs:

    • comment_id (required): The Salesforce ID of the comment to update.

    • comment_body (optional): The updated text content of the comment.

    • is_published (optional): Boolean indicating if the comment should be published and visible to customers.


13. Delete Comment

  • Purpose: Remove a comment from Salesforce.

  • Inputs:

    • comment_id (required): The Salesforce ID of the comment to delete.