Large Language Models (LLMs)
Last updated
LLMs are the core of any AI flow. An LLM can receive inputs, use them as parameters in its prompt, and generate a completion.
You can add inputs to the LLM prompt by enclosing their IDs in curly braces.
LLM prompts are interpreted like Python f-strings.
The prompt editor highlights valid input IDs in blue and invalid ones in red.
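As a hedged illustration, the behavior described above can be sketched in plain Python. The `render_prompt` helper and the input IDs are hypothetical, not the tool's actual API; they only show how f-string-style placeholders map to input values, and how an unknown ID (shown in red in the editor) would surface as an error.

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Fill {input-id} placeholders the way a Python f-string would."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in inputs:
            # An unknown ID corresponds to an input the editor flags in red.
            raise KeyError(f"Unknown input ID: {key}")
        return str(inputs[key])
    return re.sub(r"\{([^{}]+)\}", substitute, template)

# Fill a prompt template with one input, as an LLM node would at run time.
prompt = render_prompt(
    "Summarize the following text in one sentence:\n{doc}",
    {"doc": "LLMs generate completions from prompts."},
)
```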
You can use LLMs from a few different providers:
OpenAI: call models such as ada, babbage, curie, davinci, gpt-3.5-turbo, and gpt-4. You can also run your own fine-tuned models (see Fine-Tuner).
Anthropic: call models such as claude-v1, claude-v1-100k, instant-claude-v1, and instant-claude-v1-100k.
Replicate: call open-source models hosted on Replicate, such as llama, dolly, or stablelm. You can also use fine-tuned versions of these models (coming soon).
Google: call models such as PaLM.
Hugging Face Hub (Coming Soon): call models hosted on the Hugging Face Hub.
You can further configure an LLM by clicking the gear icon ⚙️, which lets you adjust parameters such as temperature, completion length, and streaming.
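To make these settings concrete, here is a minimal sketch of how they might map onto the keyword arguments most chat-completion APIs accept. The `LLMSettings` class and its defaults are illustrative assumptions; the parameter names follow the OpenAI Python SDK's chat completions interface.

```python
from dataclasses import dataclass

@dataclass
class LLMSettings:
    model: str = "gpt-3.5-turbo"
    temperature: float = 0.7   # higher values produce more varied completions
    max_tokens: int = 256      # upper bound on completion length
    stream: bool = False       # stream tokens as they are generated

def as_request_kwargs(settings: LLMSettings, prompt: str) -> dict:
    """Translate gear-icon settings into keyword arguments for an API call."""
    return {
        "model": settings.model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": settings.temperature,
        "max_tokens": settings.max_tokens,
        "stream": settings.stream,
    }
```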
Every LLM prompt and completion is recorded for Evaluation and Data Preparation.
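One common way to keep such records is an append-only JSONL log of prompt/completion pairs, which later feeds evaluation or fine-tuning datasets. The `record_run` helper below is a hypothetical sketch, not the platform's actual storage format.

```python
import json
import time

def record_run(log_path: str, prompt: str, completion: str) -> None:
    """Append one prompt/completion pair as a JSON line for later evaluation."""
    entry = {
        "timestamp": time.time(),
        "prompt": prompt,
        "completion": completion,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```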