# maxSteps config for chains and agents #779

Merged 1 commit on Jan 17, 2025
13 changes: 11 additions & 2 deletions docs/guides/prompt-manager/configuration.mdx
@@ -13,7 +13,8 @@ This section is enclosed between three dashes (`---`) and written in YAML format

```yaml
---
-model: gpt-4o
+provider: OpenAI
+model: gpt-4o-mini
temperature: 0.6
top_p: 0.9
---
```

@@ -73,9 +74,17 @@ It is recommended to set either temperature or topP, but not both.
<Note>
Not configuring temperature, or setting it to 0, caches the response for the prompt; the same response is returned when the prompt is called again with the same parameters.

[Learn more about caching in Latitude](/guides/prompt-manager/cache)

</Note>
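For example, a minimal configuration that relies on this caching behavior (the provider and model values are illustrative):

```yaml
---
provider: OpenAI
model: gpt-4o-mini
temperature: 0
---
```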

#### **`maxSteps`**

Maximum number of steps to execute.

When using [Chains](/promptl/advanced/chains), you can cap the number of steps a chain executes. Reaching this limit stops the chain and returns an error.
It defaults to 20 and has a maximum value of 150.
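For example, to raise the limit for a longer chain (values here are illustrative):

```yaml
---
provider: OpenAI
model: gpt-4o-mini
maxSteps: 50
---
```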

#### **`maxTokens`**

Maximum number of tokens to generate.
35 changes: 35 additions & 0 deletions docs/promptl/advanced/chains.mdx
@@ -8,6 +8,7 @@ description: Chains and Steps are used to create multi-step prompts
Chains in PromptL allow you to break complex workflows into smaller, manageable steps. Each step generates a response, which can then be used in subsequent steps. This approach improves the model's ability to perform complex tasks and provides greater control over dynamic conversations.

With Chains, you can:

- Process tasks incrementally to guide the model step-by-step.
- Store and reuse intermediate results dynamically.
- Customize step-specific configurations for more efficient execution.
@@ -20,6 +21,7 @@
Define steps in your prompt using the `<step>` tag. The engine pauses after each step, waits for the model's response, and adds it as an assistant message before continuing.

### Basic Syntax

```xml
<step>
Analyze the following text and identify the main idea:
```

@@ -35,7 +37,9 @@

### Step with Custom Configuration

Override the default configuration by adding attributes to the `<step>` tag:

```tsx
<step model="gpt-3.5-turbo" temperature={{0.5}}>
Rewrite the following paragraph in simpler language:
```

@@ -51,6 +55,7 @@
## Advanced Features

### Storing Step Responses

You can store the text response of a step in a variable using the `as` attribute. This allows you to reuse the response in later steps or logic blocks.

@@ -67,6 +72,7 @@

#### Parse Step Responses as JSON

The response of a step will be automatically parsed as JSON if the JSON output schema is defined.

@@ -85,6 +91,7 @@
[Learn more about JSON Output](/guides/prompt-manager/json-output).

### Storing Raw Messages

Some providers return additional metadata along with the response. To store the entire message object (role, content, and any additional metadata) instead of just the text response, use the `raw` attribute:

@@ -100,6 +107,7 @@ The `content` attribute will always be defined as an array of content objects
---

### Isolating Steps

Use the `isolated` attribute to prevent a step from inheriting context from previous steps. This can reduce unnecessary costs or confusion for the model.

@@ -124,10 +132,34 @@ In this example, the final step does not need to consider the full texts

---

### Limiting the Number of Steps

<Note>This feature is only available on the Latitude platform.</Note>

You can limit the number of steps executed in a chain by setting the `maxSteps` attribute in the main configuration section. This helps prevent infinite loops or runaway processing when a complex workflow places steps inside loops.

```xml
---
maxSteps: 5
---

{{ for item in list }}
<step>
...
</step>
{{ endfor }}
```

Read more about this configuration in the [Latitude Prompt Configuration](/guides/prompt-manager/configuration#maxsteps) guide.

---

## Real-World Use Cases

### Multi-Step Workflow

Chains are ideal for breaking down tasks like:

1. Analyzing data.
2. Generating intermediate results.
3. Combining results for a final output.
@@ -146,7 +178,9 @@

### Decision Trees

Use logic to adapt workflows based on intermediate results:

```xml
<step as="classification">
Classify this document into one of the following categories: A, B, or C.
```

@@ -176,6 +210,7 @@ To execute chains, use the `Chain` class. The chain evaluates the prompt step-by-step.
To run a step, execute the `step` method of the chain instance. The first time `step` is called, it should not include any arguments. Subsequent calls must always pass the model response message from the previous step.

### Example: Using the Chain Class

```javascript
import { Chain } from 'promptl-ai';
import OpenAI from 'openai';
```
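A minimal sketch of the loop described above, with OpenAI as the provider. The `Chain` constructor options, the exact shape returned by `step` (assumed here to be `{ completed, messages, config }`), and the message format are assumptions to verify against the promptl-ai documentation:

```javascript
import { Chain } from 'promptl-ai';
import OpenAI from 'openai';

const client = new OpenAI();

// A simple two-step PromptL prompt (source as a plain string).
const prompt = `
<step>
  Suggest a title for a blog post about {{ topic }}.
</step>
<step>
  Write a short outline for the post.
</step>
`;

const chain = new Chain({ prompt, parameters: { topic: 'chess openings' } });

// The first call to `step` takes no arguments.
let { completed, messages, config } = await chain.step();

while (!completed) {
  // Generate a model response for the conversation so far.
  const completion = await client.chat.completions.create({
    model: config?.model ?? 'gpt-4o-mini',
    messages,
  });

  // Subsequent calls must pass the previous model response message.
  ({ completed, messages, config } = await chain.step(
    completion.choices[0].message,
  ));
}
```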
1 change: 1 addition & 0 deletions packages/constants/src/errors.ts
@@ -24,6 +24,7 @@ export enum RunErrorCodes {
EvaluationRunMissingWorkspaceError = 'ev_run_missing_workspace_error',
EvaluationRunUnsupportedResultTypeError = 'ev_run_unsupported_result_type_error',
EvaluationRunResponseJsonFormatError = 'ev_run_response_json_format_error',
+  MaxStepCountExceededError = 'max_step_count_exceeded_error',
}
// NOTE: If you add a new error code, please add it to the pg enum in models/runErrors.ts

1 change: 1 addition & 0 deletions packages/core/drizzle/0117_max_steps_config.sql
@@ -0,0 +1 @@
+ALTER TYPE "latitude"."run_error_code_enum" ADD VALUE 'max_step_count_exceeded_error';