Vector Databases
Vector databases help you retrieve the most important segments of a document for an LLM.
Vector databases are central components of most LLM applications. They are specialized databases designed to store vectorized data (such as a document along with its embeddings) and to quickly retrieve the documents most relevant to a query by finding the embeddings most similar to the query's embedding.
In short, a vector DB lets you store a document (from a data source), send a query (from user input), and get back the few segments that are most relevant to the LLM.
Vector databases work as follows:
Receive a document or text segment from a data loader.
Receive a query as text (coming from user input or an LLM).
Embed the text segments and the query using OpenAI's text-embedding-ada-002 embeddings.
Find the stored embeddings most similar to the query's embedding.
Return the most relevant document segments to be sent to an LLM or output node (see the sketch after this list).
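For concreteness, here is a minimal in-memory sketch of that loop in Python. It is an illustration, not the platform's implementation: the OpenAI client usage, the model name, and the `embed`/`most_relevant` helper names are assumptions, and cosine similarity stands in for whatever ranking the database actually uses.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of text segments with an OpenAI embedding model."""
    response = client.embeddings.create(
        model="text-embedding-ada-002",  # assumed model; swap in your own
        input=texts,
    )
    return np.array([item.embedding for item in response.data])

def most_relevant(segments: list[str], query: str, k: int = 3) -> list[str]:
    """Return the k segments whose embeddings are most similar to the query."""
    seg_vecs = embed(segments)       # embed the document segments...
    query_vec = embed([query])[0]    # ...and the query text
    # Rank segments by cosine similarity to the query embedding.
    sims = seg_vecs @ query_vec / (
        np.linalg.norm(seg_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    top = np.argsort(sims)[::-1][:k]
    # Return the most relevant segments for the LLM prompt.
    return [segments[i] for i in top]
```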
Vector DB nodes have the following structure:
Inputs:
Data loader: the document or text segments to search over.
Query: a string of text coming from user input or an LLM completion.
Outputs:
Result: the most relevant segments from the data loader, returned as a text string. It can be sent to an LLM or an output node.
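Put together, the node behaves like a single function from those inputs to that output. A hypothetical signature, reusing the `most_relevant` helper sketched above (none of these names come from the platform):

```python
def vector_db_node(data_loader_segments: list[str], query: str) -> str:
    """Inputs: text segments from a data loader and a query string.
    Output: the most relevant segments joined into one text string,
    ready to be sent to an LLM or an output node."""
    return "\n\n".join(most_relevant(data_loader_segments, query))
```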
If you are loading data online in every flow run (e.g. from a Google search, a database, or a table), we support the following vector databases:
Basic DB: A vector database that runs in memory without sending the data source to an external tool.
The Basic DB does not store embeddings after the flow stops running and therefore recomputes them on every run.
Chroma: Another vector database that runs in memory without sending the data source to an external tool.
Like the Basic DB, Chroma does not persist embeddings after the flow stops and recomputes them on every run.
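To illustrate the in-memory behavior, here is a short sketch using Chroma's ephemeral Python client: nothing is persisted, so the collection and its embeddings are rebuilt (and embeddings recomputed) on every run.

```python
import chromadb

client = chromadb.Client()  # ephemeral, in-memory client: nothing is persisted
collection = client.create_collection("docs")

# Embeddings are computed when documents are added -- and again on every run,
# because the in-memory store is discarded once the process (the flow) stops.
collection.add(
    ids=["seg-1", "seg-2"],
    documents=["First document segment.", "Second document segment."],
)

# Query returns the most similar stored segments.
results = collection.query(query_texts=["my question"], n_results=1)
print(results["documents"])  # most relevant segment(s) for the LLM
```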
If you want to upload documents and embeddings offline, you can use the following nodes: