Prompt Engineering

Prompt Engineering and Context Management

  • Prompt Customization: Each LLM node can have its own prompt and system message. Tailor these to the specific sub-task each LLM is handling.

  • Context Passing: Use node references (e.g., {llm-0}) in prompts to pass context or results between LLMs.
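As a minimal sketch of how such node references can be resolved (the `resolve_refs` helper is hypothetical, for illustration only — your workflow tool performs this substitution for you):

```python
import re

def resolve_refs(prompt: str, outputs: dict) -> str:
    """Replace {node-id} placeholders with that node's earlier output."""
    return re.sub(
        r"\{([\w-]+)\}",
        lambda m: outputs.get(m.group(1), m.group(0)),
        prompt,
    )

prompt = "Summarize the findings in {llm-0} for a general audience."
resolved = resolve_refs(prompt, {"llm-0": "Q3 revenue grew 12% year over year."})
```

Unmatched references are left intact, which makes missing upstream outputs easy to spot during debugging.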

Prompting Best Practices

1. Clarity, Specificity, and Explicitness


  • Minimize Ambiguity: Vague requests lead to off-target responses. Instead of "Summarize this document," specify "Summarize this document in 3 bullet points focusing on the main challenges discussed."

  • Use Imperative Language: Frame prompts as direct commands (e.g., "Generate," "Summarize," "Translate"). Avoid conversational phrases.

  • Define Constraints: Clearly state desired length ("Use a 3 to 5 sentence paragraph"), tone ("Use a friendly and conversational tone"), or style ("in the style of a {famous poet}").

  • Positive Framing: Instruct the model on what it should do, rather than what it should not do. For instance, instead of "DO NOT ASK for a username or password," state: "The agent will attempt to diagnose the problem... whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article [www.samplewebsite.com/help/faq](https://www.samplewebsite.com/help/faq)."
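The practices above can be sketched as a small prompt builder that pairs an imperative task with explicit constraints (the `build_prompt` helper is illustrative, not part of any tool's API):

```python
def build_prompt(task: str, constraints: list[str]) -> str:
    """Compose an imperative task statement with bulleted constraints."""
    lines = [task] + [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    "Summarize this document in 3 bullet points focusing on the main challenges discussed.",
    [
        "Use a friendly and conversational tone.",
        "Keep each bullet under 20 words.",
    ],
)
```

Keeping constraints as a list makes them easy to audit and extend without rewriting the task sentence.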

2. Structuring the Prompt for Optimal Parsing and Reliability

  • Use Delimiters: Employ characters or symbols (e.g., `"""`, `###`, `<tag>`) to clearly separate sections like instructions, context, and examples. This prevents information "bleeding."

  • Place Instructions at the Beginning: Put the instructions before the context or data they apply to, so the model parses them reliably.

  • Request JSON Format for Machine-Readable Output: For structured data, explicitly ask for JSON. Provide a complete example with the desired keys and value types. Consider using the API's "JSON mode" if available.

  • Adopt JSON Output by Default: Always request JSON, even for single fields, to allow for easy expansion later.

  • Use Hierarchical Structures: For complex prompts, organize content with headings, subheadings, and bullet points.
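A sketch combining two of the practices above: `###` delimiters to separate instructions from the document, and a complete JSON example to pin down the output shape (the schema keys here are illustrative assumptions):

```python
import json

# Example output schema shown to the model: keys and value types, not real data.
schema_example = {
    "title": "string",
    "challenges": ["string"],
    "sentiment": "positive | neutral | negative",
}

prompt = (
    "Summarize the document below. Respond only with JSON matching this example:\n"
    + json.dumps(schema_example, indent=2)
    + "\n\n### DOCUMENT\n"
    + "...document text goes here...\n"
    + "### END DOCUMENT"
)
```

The delimiters prevent document text from "bleeding" into the instructions, and the JSON example doubles as documentation of the expected keys.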

3. In-Context Learning (ICL): Guiding by Demonstration

  • Start with Zero-Shot Prompting: For simple tasks, provide only the task description without examples.

  • Use Few-Shot Prompting when Needed: If zero-shot isn't sufficient, include one or more input-output examples to demonstrate the desired output structure, style, or pattern.

  • Curate Diverse Examples: Ensure few-shot examples are representative and varied to avoid bias.
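A minimal few-shot sketch for a sentiment task: two diverse input-output demonstrations precede the real input, so the model infers the pattern and completes the final label (the reviews are invented for illustration):

```python
# Diverse demonstrations: one negative, one positive.
examples = [
    ("The shipment arrived two days late.", "negative"),
    ("Setup took five minutes and just worked.", "positive"),
]

parts = ["Classify the sentiment of each review as positive or negative.\n"]
for text, label in examples:
    parts.append(f"Review: {text}\nSentiment: {label}\n")

# The real input ends at "Sentiment:" so the model supplies the label.
parts.append("Review: The manual was confusing but support helped.\nSentiment:")
prompt = "\n".join(parts)
```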

4. Strategic Allocation: System vs. User Prompts

  • Utilize the System Prompt: Use for high-level, foundational instructions defining the AI's core behavior, persona, and persistent constraints (e.g., role-prompting, ethical guidance, tool-use instructions).

  • Use the User Prompt: This is for dynamic, task-oriented instructions, specific questions, task-specific context, few-shot examples relevant to the current task, and response formatting instructions for that single interaction.

  • Separate Concerns: This modular approach improves clarity, maintainability, and model performance.
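In the common chat-messages format, this separation looks like the sketch below: persistent persona and constraints go in the system message, the single task in the user message (the support-agent scenario is illustrative):

```python
messages = [
    {
        # System: foundational behavior, persona, persistent constraints.
        "role": "system",
        "content": (
            "You are a support agent. Never ask for PII such as usernames "
            "or passwords; refer users to the help article instead."
        ),
    },
    {
        # User: the dynamic, task-specific instruction for this interaction.
        "role": "user",
        "content": "Summarize the attached ticket in 3 bullet points as JSON.",
    },
]
```

Because the system message stays fixed across turns, only the user message needs to change per task, which keeps prompts maintainable.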

Prompting with Tools

When using tools in an LLM Node, include them in your prompt with the @ sign.
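For example, assuming a tool named `web_search` is attached to the node (the tool name is an assumption for illustration):

```python
# An LLM Node prompt referencing an attached tool by name with the @ sign.
prompt = (
    "Use @web_search to find the three most recent articles about the topic "
    "provided in {llm-0}, then summarize each in one sentence."
)
```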

XML Tags

For long prompts, or prompts that reference multiple inputs, group them with XML tags. Grouping signals to the LLM where certain associated blocks of information begin and end.
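A sketch of this pattern, grouping two upstream node outputs with XML tags so the model can tell where each block begins and ends (the tag names are illustrative):

```python
# Each input is wrapped in its own tag pair; node references fill the bodies.
prompt = (
    "Compare the two reports below and list the key differences.\n\n"
    "<report_a>\n{llm-0}\n</report_a>\n\n"
    "<report_b>\n{llm-1}\n</report_b>"
)
```

Descriptive tag names (`report_a`, `report_b`) also let later instructions refer to each block unambiguously.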
