
LLM Model

The LLM Model node is a core component for interacting with a Large Language Model (LLM). It sends a prompt to a selected provider and returns the generated response, enabling tasks like answering questions, summarizing text, or engaging in conversation.
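Conceptually, the node performs a single prompt-to-response round trip against the selected provider. The minimal sketch below illustrates that flow using the OpenAI Python SDK purely as an example backend; the actual provider, model, and credentials are whatever you have configured in your workspace.

```python
# Illustrative only: send a prompt to a provider and return the generated text.
# The OpenAI SDK and model name here are assumptions for the sketch, not the
# node's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the following text: ..."}],
)
print(response.choices[0].message.content)
```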

Configuring the node

  1. Click on the node’s header.
  2. The Configure LLM Model dialog will appear.
  3. Select a preconfigured LLM Provider.
  4. Enter a System Prompt to define the model’s persona or provide high-level instructions.
  5. Enter a User Prompt containing the message, question, or task you want the model to respond to.
  6. Select an Agent Type (either Base or Chain-of-Thought).
  7. Toggle Enable Memory on to allow the model to remember the context of the current conversation. A sketch of how these settings combine into a request follows this list.
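To make the dialog fields concrete, the sketch below shows how a System Prompt, User Prompt, and Enable Memory might map onto a chat-style request. The provider, model name, and variable names are assumptions for illustration; the node assembles an equivalent request from your configuration.

```python
# Illustrative mapping of the Configure LLM Model fields onto a chat request.
# Provider and model are placeholders, not the node's guaranteed backend.
from openai import OpenAI

client = OpenAI()

system_prompt = "You are a concise technical assistant."          # System Prompt
user_prompt = "Explain what an LLM provider is in one sentence."  # User Prompt

# With Enable Memory on, earlier turns are kept and resent with each request
# so the model can use the context of the current conversation.
history = [{"role": "system", "content": system_prompt}]
history.append({"role": "user", "content": user_prompt})

response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
reply = response.choices[0].message.content
history.append({"role": "assistant", "content": reply})  # remembered for the next turn
print(reply)
```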