Knowledge Base Nodes
A Knowledge Base node performs a search or retrieval operation on a connected knowledge base, returning relevant document chunks or information in response to a query. Files uploaded to a Knowledge Base node can be reused across multiple flows, and all created knowledge bases are automatically synced to the Knowledge Base Dashboard.
Node Settings & Search Parameters
Click on the node to change its settings.
At the top of the window, you will find a drop-down menu to select a Knowledge Base or choose documents to form a new Knowledge Base.
Below that, you will see the configuration options for Settings and Search Parameters; a configuration sketch follows the list.
Output Format: Choose between chunks, pages, and docs.
Metadata Filter Strategy: Choose between Strict Filter, Loose Filter, and No Filter.
Query Strategy: Choose between Semantic, Keyword, and Hybrid.
Top Results: The number of search results to return, ranked by relevance.
Max Characters: Limits the number of characters sent to the LLM.
Answer Multiple Questions: Get answers to multiple questions in parallel.
Advanced Q&A: Handle questions that compare or summarize documents.
Rerank: Reorder retrieved results by relevance for more precise information retrieval.
Query Transformation: Rewrite the query before searching for more precise information retrieval.
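To make these options concrete, here is a minimal sketch of the search parameters as a typed configuration object. All names here (`KnowledgeBaseSearchParams`, the key names, and the option values) are hypothetical illustrations, not the product's actual schema:

```typescript
// Hypothetical shape of a Knowledge Base node's search parameters.
// Key names and option values are assumptions for illustration only.
interface KnowledgeBaseSearchParams {
  outputFormat: "chunks" | "pages" | "docs";
  metadataFilterStrategy: "strict" | "loose" | "none";
  queryStrategy: "semantic" | "keyword" | "hybrid";
  topResults: number;        // results to return, ranked by relevance
  maxCharacters: number;     // cap on characters sent to the LLM
  answerMultipleQuestions: boolean;
  advancedQA: boolean;
  rerank: boolean;
  queryTransformation: boolean;
}

// Example: hybrid search returning the five most relevant chunks.
const searchParams: KnowledgeBaseSearchParams = {
  outputFormat: "chunks",
  metadataFilterStrategy: "loose",
  queryStrategy: "hybrid",
  topResults: 5,
  maxCharacters: 8000,
  answerMultipleQuestions: false,
  advancedQA: false,
  rerank: true,
  queryTransformation: true,
};
```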
File Status
Each document that you upload displays a status label with one of the following indicators:
Pending: the document is being processed and indexed.
Indexed (✅): the document was successfully indexed.
Error: the document could not be indexed (e.g., due to a formatting issue).
Typical Workflow Structure
A common pattern (sketched in code after this list) is:
User Input (Input Node): The user provides a question or prompt.
Knowledge Base Node: Receives the user’s query (directly or via an LLM node) and retrieves relevant information from the knowledge base.
LLM Node: Uses both the user’s input and the retrieved knowledge base content to generate a final, context-rich answer.
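In code, the pattern might look like the following. The node IDs (`input_1`, `kb_1`, `llm_1`, `output_1`), the field names, and the `{{nodeId.output}}` reference syntax are all assumptions for illustration, not the product's actual workflow format:

```typescript
// Hypothetical node definitions for the Input → Knowledge Base → LLM pattern.
// IDs, field names, and the {{...}} reference syntax are illustrative only.
const nodes = [
  { id: "input_1",  type: "input",         label: "User question" },
  { id: "kb_1",     type: "knowledgeBase", knowledgeBase: "product-docs" },
  {
    id: "llm_1",
    type: "llm",
    // The prompt references upstream nodes by their IDs.
    prompt: "Answer {{input_1.text}} using only this context:\n{{kb_1.output}}",
  },
  { id: "output_1", type: "output" },
];
```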
How the Interface Works
A. Data Flow
The Knowledge Base node typically takes the user’s input as its query, searches the knowledge base, and outputs relevant text chunks (the sketch below shows a typical shape for this output).
The LLM node can reference the output of the Knowledge Base node in its prompt using the node’s ID.
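The retrieved output is usually a list of scored text chunks. Here is a sketch of what that output might look like; the field names (`text`, `score`, `source`) are assumptions, not the product's documented schema:

```typescript
// Assumed shape of a Knowledge Base node's output when Output Format is "chunks".
interface RetrievedChunk {
  text: string;   // chunk content that will be passed to the LLM
  score: number;  // relevance score assigned by the search
  source: string; // document the chunk came from
}

const kbOutput: RetrievedChunk[] = [
  { text: "Refunds are processed within 5 business days.", score: 0.92, source: "faq.pdf" },
  { text: "Refund requests require an order number.", score: 0.87, source: "policy.docx" },
];
```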
B. Connections (Edges)
The Input node is connected to the Knowledge Base node (for the query).
The Knowledge Base node is connected to the LLM node (providing retrieved content).
The LLM node is connected to the Output node (displaying the answer); an edge-list sketch of these connections follows.
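Continuing the node sketch above (the format is again a hypothetical illustration):

```typescript
// Hypothetical edge list wiring the nodes defined earlier.
const edges = [
  { source: "input_1", target: "kb_1" },    // the query flows to retrieval
  { source: "kb_1", target: "llm_1" },      // retrieved chunks flow to the LLM
  { source: "llm_1", target: "output_1" },  // the answer flows to the output
];
// Note: the LLM node also reads input_1 through the {{input_1.text}}
// reference in its prompt, so it sees both the question and the context.
```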
C. Execution Order
The user submits a question.
The Knowledge Base node receives the question and retrieves relevant information.
The LLM node receives both the user’s question and the retrieved information, then generates a response.
The Output node displays the LLM’s answer. A code sketch of this sequence follows.
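In the sketch below, `searchKnowledgeBase` and `generateAnswer` are hypothetical stand-ins for the retrieval and LLM calls, not real API functions:

```typescript
// Hypothetical stand-ins for the retrieval and LLM calls.
declare function searchKnowledgeBase(query: string): Promise<string[]>;
declare function generateAnswer(question: string, context: string[]): Promise<string>;

// Illustrative execution order for the Input → KB → LLM → Output flow.
async function runFlow(question: string): Promise<string> {
  // Steps 1 and 2: the Knowledge Base node retrieves chunks relevant to the question.
  const chunks = await searchKnowledgeBase(question);

  // Step 3: the LLM node receives both the question and the retrieved chunks.
  const answer = await generateAnswer(question, chunks);

  // Step 4: the Output node displays the LLM's answer.
  return answer;
}
```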
Why Use This Pattern?
Retrieval-Augmented Generation (RAG): This approach allows the LLM to ground its answers in specific, up-to-date, or proprietary knowledge, improving accuracy and relevance.
Separation of Concerns: The Knowledge Base node handles retrieval, while the LLM node handles synthesis and reasoning.
Key Points
The LLM node does not “search” the knowledge base directly; it relies on the Knowledge Base node to do the retrieval.
The LLM node’s prompt must reference the Knowledge Base node’s output to use the retrieved information.
All node references must match actual node IDs in the workflow (illustrated in the sketch below).
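The sketch below uses an assumed `{{nodeId.output}}` placeholder syntax to show how a reference that matches no node ID fails to resolve:

```typescript
// Minimal sketch: fill {{nodeId.output}} placeholders with node outputs.
// The placeholder syntax is an assumption for illustration only.
function renderPrompt(template: string, outputs: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\.output\}\}/g, (_, nodeId: string) => {
    if (!(nodeId in outputs)) {
      // A reference that matches no node ID in the workflow cannot resolve.
      throw new Error(`No node with ID "${nodeId}" in this workflow`);
    }
    return outputs[nodeId];
  });
}

// Resolves because "kb_1" exists; "{{kb_9.output}}" would throw instead.
renderPrompt("Context: {{kb_1.output}}", { kb_1: "retrieved chunks" });
```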
Available Knowledge Bases
Documents
Websites
Tables
Data
SharePoint
Google Drive
OneDrive
Dropbox
Azure Blob Storage
AWS S3
Notion
Confluence
Veeva
ServiceNow