Advanced Settings
Stream Data
Default - ON. When ON, output from the LLM is shown word by word as it is produced. If you'd prefer the output to appear all at once after the LLM finishes, turn this feature OFF.
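The difference between the two modes can be sketched as follows. This is an illustrative toy, not the platform's implementation: `fake_llm_tokens` stands in for a real model's token stream.

```python
# Hypothetical sketch of streamed vs. buffered output.
def fake_llm_tokens():
    """Stand-in for a real model's incremental token stream."""
    for token in ["Streaming", " shows", " text", " incrementally."]:
        yield token

def collect_streamed(tokens):
    """Stream Data ON: each token is rendered as soon as it arrives."""
    rendered = []
    for token in tokens:
        rendered.append(token)  # in a real UI, this would update the screen
    return rendered

def collect_buffered(tokens):
    """Stream Data OFF: nothing is shown until the full response is ready."""
    return ["".join(tokens)]    # one final chunk, displayed all at once
```

Either way the final text is identical; only when the user sees it differs.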
Safe Context Window
Default - OFF. When ON, the context is automatically trimmed to fit within the model's maximum context size. When OFF, you will get an error message if your context grows too large. Be careful: turning this ON means you may silently lose meaningful context, with no warning.
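A rough sketch of what automatic reduction can look like. The word-count "tokenizer", the tiny limit, and the drop-oldest-first policy are all illustrative assumptions, not the platform's exact algorithm:

```python
MAX_CONTEXT_TOKENS = 8  # tiny limit for illustration only

def count_tokens(message):
    return len(message.split())  # crude word-count stand-in for a tokenizer

def fit_context(messages, limit=MAX_CONTEXT_TOKENS):
    """Safe Context Window ON: silently drop the oldest messages until the
    remaining history fits. Note there is no warning when messages are lost."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > limit:
        kept.pop(0)  # oldest message is discarded first
    return kept
```

This is why the warning above matters: the earliest turns of a conversation can vanish without any visible error.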
Charts
Default - OFF. Turn this ON if you'd like the LLM to generate charts in its output.
Date & Time
Default - ON. When ON, the LLM will be aware of the current date & time.
Guardrails
Default - OFF. Turn ON guardrails to screen for toxic content, legal advice, or suicidal thoughts.
PII Compliance
Default - OFF. Turn ON PII Compliance to hide certain types of PII from the LLM when entered by a user.
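PII hiding is typically done by redacting matches before the text ever reaches the model. The sketch below is a minimal illustration; the two patterns (email, US-style phone number) are assumptions for the example, and a real compliance filter covers many more PII types:

```python
import re

# Illustrative PII patterns; a production filter would be far more thorough.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text):
    """Replace detected PII with a placeholder so the LLM never sees it."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

The LLM then receives only the placeholders, e.g. `[EMAIL REDACTED]`, instead of the raw value.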
Temperature
Default - 0. Increase the temperature to add randomness to the output, making the model less deterministic.
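Under the hood, temperature conventionally rescales the model's next-token scores (logits) before they are turned into probabilities. The toy logits below are made up for illustration; real models work over huge vocabularies:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Higher temperature flattens the distribution (more randomness);
    temperature near 0 concentrates mass on the top token (deterministic)."""
    t = max(temperature, 1e-6)           # avoid division by zero at T = 0
    scaled = [x / t for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

At a low temperature nearly all probability lands on the highest-scoring token; at a high temperature the alternatives become comparably likely, so sampled outputs vary more.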
Max Output Length
Default - 3000. Increase this value to allow the model to give longer, more verbose answers.
Retry on Failure
Default - OFF. Turn ON to enable retrying when execution of a node fails. This can help make your project more robust.
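The idea can be sketched as a simple retry loop. The attempt count and backoff schedule below are illustrative assumptions, not the platform's actual values:

```python
import time

def run_with_retry(node, max_attempts=3, base_delay=0.0):
    """Re-run a failing node up to max_attempts times before giving up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return node()
        except Exception:
            if attempt == max_attempts:
                raise                         # out of retries: surface the error
            time.sleep(base_delay * attempt)  # simple linear backoff
```

Transient failures (a timeout, a momentary provider outage) succeed on a later attempt, while persistent failures still surface as errors.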
LLM Fallback Mode
Default - OFF. Turn ON to make your project more robust to model failure. This allows you to select a backup model and provider to use in case something goes wrong.
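The fallback pattern amounts to trying the primary model and, if it raises, calling the backup instead. This is a hypothetical sketch of the pattern, not the platform's implementation; `primary` and `backup` stand in for calls to the configured models:

```python
def call_with_fallback(prompt, primary, backup):
    """Return the primary model's answer, falling back on any failure."""
    try:
        return primary(prompt)
    except Exception:
        return backup(prompt)  # primary failed: use the backup model/provider
```

The user still gets an answer even when the primary provider is down, at the cost of the backup model's possibly different quality.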
Fallback Branch
Default - OFF. When turned ON, you can specify a different flow to follow in case this LLM node's execution fails.