Large Language Models (LLMs)

LLMs are the core of any AI flow. An LLM can receive inputs, use them as parameters in its prompt, and generate a completion.

  • You can add inputs to the LLM prompt by enclosing their IDs in curly brackets.

    • LLM prompts are interpreted as Python f-strings.

    • The LLM prompt will highlight correct inputs in blue and incorrect inputs in red.
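The bullets above can be sketched in plain Python. This is a minimal illustration (not the tool's actual implementation) of how an input ID enclosed in curly brackets is substituted into a prompt, f-string style; the input ID `article` and the prompt text are hypothetical.

```python
# Hypothetical prompt template: "article" is an input ID enclosed in curly brackets.
prompt_template = "Summarize the following article in one sentence:\n{article}"

# Inputs arriving from upstream steps of the flow, keyed by their IDs.
inputs = {"article": "LLMs generate text completions from prompts."}

# str.format applies the same substitution rules as a Python f-string.
prompt = prompt_template.format(**inputs)
print(prompt)
```

An ID that does not match any available input (the "red" case in the editor) would raise a `KeyError` here, which mirrors why the prompt editor flags such IDs.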

You can use LLMs from a few different providers:

  1. OpenAI: call models such as ada, babbage, curie, davinci, gpt-3.5-turbo, and gpt-4. You can also run your own fine-tuned models (see Fine-Tuner).

  2. Anthropic: call models such as claude-v1, claude-v1-100k, instant-claude-v1, and instant-claude-v1-100k.

  3. Replicate: call open-source models hosted on Replicate, such as llama, dolly, or stablelm. You can also use fine-tuned versions of these models (coming soon).

  4. Google: call models such as PaLM.

  5. Hugging Face Hub (coming soon): call models hosted on the Hugging Face Hub.

You can further configure your LLM by clicking the gear icon ⚙️. This lets you adjust parameters such as temperature, completion length, and streaming.

Every LLM prompt and completion is recorded for Evaluation and Data Preparation.
