
[ready to merge] Blog edited: prompt engineering techniques #3137

Merged
10 commits merged on Jan 30, 2025
8 changes: 4 additions & 4 deletions bifrost/app/blog/blogs/chain-of-thought-prompting/src.mdx
@@ -292,11 +292,11 @@ Standard prompting methods mainly focus on getting the right answer from the LLM

### CoT vs. Few-Shot Prompting

Both techniques use examples, but they differ in an important way. Few-shot prompting gives the model just a few examples that are similar to what you want it to do. For instance, it might show a few solutions to problems that are alike.
Both techniques use examples, but they differ in an important way. <a href="https://docs.helicone.ai/guides/prompt-engineering/implement-few-shot-learning" target="_blank">Few-shot prompting</a> gives the model just a few examples that are similar to what you want it to do. For instance, it might show a few solutions to problems that are alike.
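
To make the contrast concrete, here is a minimal sketch (not from the blog post) of how a few-shot prompt might be assembled before being sent to a model. The `build_few_shot_prompt` helper and the arithmetic examples are assumptions for illustration only:

```python
def build_few_shot_prompt(examples, query):
    """Concatenate worked examples before the new query so the model
    can infer the task format from the demonstrations.

    Hypothetical helper for illustration; not part of any specific API.
    """
    parts = []
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}")
    # The final, unanswered question the model is asked to complete.
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

examples = [
    ("What is 2 + 3?", "5"),
    ("What is 7 + 6?", "13"),
]
prompt = build_few_shot_prompt(examples, "What is 4 + 9?")
print(prompt)
```

In a chain-of-thought variant, the example answers would additionally spell out the intermediate reasoning steps rather than just the final result.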

### CoT vs. Tree-of-Thought Prompting

Both Chain-of-Thought (CoT) and Tree-of-Thought (ToT) are prompting strategies that can be applied to large language models to improve their reasoning capabilities. CoT follows a linear approach where each step builds directly on previous reasoning. ToT uses a tree-like structure where multiple reasoning paths can be explored simultaneously.
Both Chain-of-Thought (CoT) and <a href="https://www.helicone.ai/blog/tree-of-thought-prompting" target="_blank">Tree-of-Thought (ToT)</a> are prompting strategies that can be applied to large language models to improve their reasoning capabilities. CoT follows a linear approach where each step builds directly on previous reasoning. ToT uses a tree-like structure where multiple reasoning paths can be explored simultaneously.

While both enhance problem-solving, they differ in complexity. CoT uses straightforward sequential reasoning, while ToT employs search algorithms like breadth-first search (BFS) and depth-first search (DFS) to systematically explore multiple solution paths. This makes ToT more computationally intensive but potentially more thorough for complex problems.
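
The structural difference can be sketched in code. The toy `tree_of_thought_bfs` function below is an illustrative assumption (not the paper's or Helicone's implementation): it expands several candidate "thoughts" per step, scores each partial path, and keeps only the most promising few, a beam-style BFS, whereas CoT would commit to a single path:

```python
from collections import deque

def tree_of_thought_bfs(start, target, moves, max_depth=4, beam=3):
    """Toy ToT-style search: expand several candidate 'thoughts' per
    step (BFS), score them, and keep only the most promising ones."""
    frontier = deque([(start, [])])
    for _ in range(max_depth):
        candidates = []
        for value, path in frontier:
            for m in moves:
                candidates.append((value + m, path + [m]))
        # Self-evaluation step: rank partial solutions by distance to target.
        candidates.sort(key=lambda c: abs(target - c[0]))
        frontier = deque(candidates[:beam])
        if frontier[0][0] == target:
            return frontier[0][1]
    return None  # no path found within the depth budget

path = tree_of_thought_bfs(start=0, target=10, moves=[2, 3, 5])
print(path)  # → [5, 5]
```

Here the scoring function is a simple numeric distance; in a real ToT setup the LLM itself evaluates how promising each intermediate thought is.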

@@ -340,8 +340,8 @@ Chain-of-Thought prompting fundamentally changes how we interact with LLMs. By b

### You might find these useful:

- **<a href="https://docs.helicone.ai/guides/prompt-engineering/overview" target="_blank">Other Prompt Engineering Techniques</a>**
- **<a href="https://www.helicone.ai/blog/prompt-engineering-tools" target="_blank">Prompt Engineering Overview: All You Need to Know</a>**
- **<a href="https://www.helicone.ai/blog/prompt-engineering-tools" target="_blank">Prompt Engineering Tools & Techniques</a>**
- **<a href="https://www.helicone.ai/blog/test-your-llm-prompts" target="_blank">How to Test Your LLM Prompts</a>**
- **<a href="https://www.helicone.ai/blog/prompt-management" target="_blank">What is Prompt Management?</a>**

<Questions />
@@ -6,6 +6,6 @@
"images": "/static/blog/product-hunt-experience/PH-cover.webp",
"time": "6 minute read",
"author": "Lina Lam",
"date": "Aug 26, 2024",
"date": "August 26, 2024",
"badge": "insight"
}
10 changes: 5 additions & 5 deletions bifrost/app/blog/blogs/prompt-engineering-tools/metadata.json
@@ -1,10 +1,10 @@
{
"title": "Prompt Engineering: Tools, Best Practices, and Techniques",
"title1": "Prompt Engineering: Tools, Best Practices, and Techniques",
"title2": "Prompt Engineering: Tools, Best Practices, and Techniques",
"title": "Prompt Engineering Tools & Techniques [Updated Jan 2025]",
"title1": "Prompt Engineering Tools & Techniques [Updated Jan 2025]",
"title2": "Prompt Engineering Tools & Techniques [Updated Jan 2025]",
"description": "Writing effective prompts is a crucial skill for developers working with large language models (LLMs). Here are the essentials of prompt engineering and the best tools to optimize your prompts.",
"images": "/static/blog/prompt-engineering-tools/cover.webp",
"time": "4 minute read",
"images": "/static/blog/prompt-engineering-tools/prompt-engineering-cover.webp",
"time": "8 minute read",
"author": "Lina Lam",
"date": "September 12, 2024",
"badge": "guide"
223 changes: 94 additions & 129 deletions bifrost/app/blog/blogs/prompt-engineering-tools/src.mdx

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion bifrost/app/blog/blogs/prompt-management/metadata.json
@@ -6,6 +6,6 @@
"images": "/static/blog/prompt-management/cover.webp",
"time": "5 minute read",
"author": "Lina Lam",
"date": "Aug 1, 2024",
"date": "August 1, 2024",
"badge": "guide"
}
6 changes: 3 additions & 3 deletions bifrost/app/blog/blogs/prompt-management/src.mdx
@@ -15,7 +15,7 @@ In this blog, we will dive into the challenges of managing prompts and what to l

Large Language Models (LLMs) can be taught to perform new tasks using **<span style={{color: '#0ea5e9'}}>in-context learning</span>**, which involves feeding a prompt with instructions and/or examples. This method allows LLMs to perform tasks without needing additional training or parameter updates.

**Prompt engineering is crucial for optimizing the model outputs.** Anyone working with AI products can be involved in crafting prompts using various techniques -- both developers and non-technical users.
**<a href="https://www.helicone.ai/blog/prompt-engineering-tools" target="_blank" rel="noopener">Prompt engineering</a> is crucial for optimizing the model outputs.** Anyone working with AI products can be involved in crafting prompts using various techniques -- both developers and non-technical users.

## What is Prompt Management?

@@ -80,14 +80,14 @@ That's why Helicone was designed to provide full prompt ownership and the easies
target="_blank"
rel="noopener"
>
Manage Prompts in Helicone
Docs: Create Prompts in Helicone
</a>
- <a
href="https://docs.helicone.ai/features/experiments"
target="_blank"
rel="noopener"
>
Creating Prompt Experiments in Helicone
Docs: Run Prompt Experiments in Helicone
</a>
- <a
href="https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/"
2 changes: 1 addition & 1 deletion bifrost/app/blog/blogs/slash-llm-cost/metadata.json
@@ -6,6 +6,6 @@
"images": "/static/blog/slash-llm-cost/slash-cover.webp",
"time": "6 minute read",
"author": "Lina Lam",
"date": "Aug 30, 2024",
"date": "August 30, 2024",
"badge": "how-to"
}
20 changes: 11 additions & 9 deletions bifrost/app/blog/blogs/tree-of-thought-prompting/src.mdx
@@ -1,10 +1,10 @@
While the <a href="https://www.helicone.ai/blog/chain-of-thought-prompting" target="_blank">Chain of Thought (CoT)</a> prompting technique allows LLMs to excel at step-wise reasoning problems, its inability to strategically look ahead and weigh different alternatives makes it fall short on tasks requiring decision-making.
While the Chain of Thought (CoT) prompting technique allows LLMs to excel at step-wise reasoning problems, its inability to strategically look ahead and weigh different alternatives makes it fall short on tasks requiring decision-making.

The Tree of Thought (ToT) prompting technique unlocks new neural pathways for LLMs, encouraging them to explore multiple thoughts and self-evaluate at each step, even as they look ahead or backtrack to determine the next best move.

![Tree-of-Thought Prompting - Helicone](/static/blog/tree-of-thought/tree-of-thought-cover.webp)

According to <a href="https://arxiv.org/abs/2305.10601" target="_blank" rel="noopener">Yao et al. (2023)</a>, the Tree of Thought (ToT) is a prompting framework that generalizes over the Chain-of-Thought (CoT) technique, breaking the token-level, left-to-right decision-making barrier. This technique combines advanced search algorithms with the innate self-evaluative properties of LLMs to implement deliberate decision-making.
According to <a href="https://arxiv.org/abs/2305.10601" target="_blank" rel="noopener">Yao et al. (2023)</a>, the Tree of Thought (ToT) is a prompting framework that generalizes over the <a href="https://www.helicone.ai/blog/chain-of-thought-prompting" target="_blank">Chain-of-Thought (CoT)</a> technique, breaking the token-level, left-to-right decision-making barrier. This technique combines advanced search algorithms with the innate self-evaluative properties of LLMs to implement deliberate decision-making.

In this article, we will explain Tree of Thought prompting, examine how it works, and compare its performance to other prompting techniques.

@@ -20,7 +20,7 @@ In this article, we will explain Tree of Thought prompting, examine how it works

## What is Tree of Thought Prompting?

Tree of Thought (ToT) prompting is a new framework for language model inference proposed by <a href="https://arxiv.org/abs/2305.10601" target="_blank" rel="noopener">Yao et al. (2023)</a> and <a href="https://arxiv.org/abs/2305.08291" target="_blank" rel="noopener">Long (2023)</a>, amongst other researchers, in May 2023. The novel prompting technique uses intermediate reasoning steps to give LLMs complex, strategic reasoning power.
Tree of Thought (ToT) prompting is a new framework for language model inference proposed by Yao et al. (2023) and <a href="https://arxiv.org/abs/2305.08291" target="_blank" rel="noopener">Long (2023)</a>, amongst other researchers, in May 2023. The novel prompting technique uses intermediate reasoning steps to give LLMs complex, strategic reasoning power.

This technique takes on a human-like approach to problem-solving (trial and error), exhaustively working through every possible outcome in a problem/solution space. The computation progresses in a tree-like manner, following the most likely step at each turn, and backtracking when necessary until it finds the correct solution.
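
This trial-and-error traversal can be illustrated with a toy depth-first search. The `dfs_reach_target` function below is a hypothetical sketch (not from the paper): it combines numbers with `+`, `-`, and `*`, and backtracks whenever a branch cannot reach the target, loosely mirroring the Game of 24 task discussed later:

```python
def dfs_reach_target(numbers, target):
    """Toy depth-first search with backtracking: repeatedly combine two
    numbers with +, -, or * until a single value equals the target,
    retracing steps (backtracking) whenever a branch dead-ends.

    Illustrative sketch only; real ToT uses an LLM to propose and
    evaluate each intermediate step.
    """
    if len(numbers) == 1:
        return numbers[0] == target
    for i in range(len(numbers)):
        for j in range(len(numbers)):
            if i == j:
                continue
            rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
            for result in (numbers[i] + numbers[j],
                           numbers[i] - numbers[j],
                           numbers[i] * numbers[j]):
                if dfs_reach_target(rest + [result], target):
                    return True  # this branch leads to a solution
    return False  # dead end: backtrack and try another branch

print(dfs_reach_target([1, 2, 3, 4], 24))  # 1 * 2 * 3 * 4 = 24
```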

@@ -66,7 +66,7 @@ _Source: <a href="http://medium.com/@kapildevkhatik2" target="_blank" rel="noope

![Tree-of-Thought Prompting - How it compares with other prompting techniques](/static/blog/tree-of-thought/how-tot-compare.webp)

_Source: <a href="https://arxiv.org/abs/2305.10601" target="_blank" rel="noopener">Yao et al. (2023)</a>_
_Source: Yao et al. (2023)_

At a high level, ToT helps LLMs achieve deliberate reasoning by:

@@ -84,9 +84,9 @@ Here's a schematic demonstration of how the framework combines the DFS search st

![Tree-of-Thought Framework](/static/blog/tree-of-thought/tot-framework.webp)

_Source: <a href="https://arxiv.org/abs/2305.10601" target="_blank" rel="noopener">Yao et al. (2023)</a>_
_Source: Yao et al. (2023)_

**<a href="https://arxiv.org/abs/2305.08291" target="_blank" rel="noopener">Long (2023)</a>** augments LLMs with several modules, including a ToT controller, which enables them to solve more specific problems. More precisely, this setup combines reinforcement learning with ToT, encouraging the controller to self-learn as it consumes data sets over time.
Long (2023) augments LLMs with several modules, including a ToT controller, which enables them to solve more specific problems. More precisely, this setup combines reinforcement learning with ToT, encouraging the controller to self-learn as it consumes data sets over time.

In addition to the ToT controller, this souped-up LLM uses a memory module to track preceding token sequences. That way, it can easily retrace its steps and explore new directions. ToT-enabled LLMs can solve more complex puzzle games like Sudoku.

@@ -128,7 +128,7 @@ Instructing ChatGPT to provide diverse chains of thought that reach a consensus

Since the precision of an LLM output depends strongly on how well the prompt is constructed, using well-refined ToT prompts improves its performance.

**<a href="https://docs.helicone.ai/features/experiments" target="_blank" rel="noopener">Helicone's Experiments</a>** help users simplify the prompt creation and optimization process. This tool provides a playground where you can safely experiment with prompts and measure their success before pushing your code to a live environment.
Helicone's **<a href="https://docs.helicone.ai/features/experiments" target="_blank" rel="noopener">Experiments</a>** help users simplify the prompt creation and optimization process. This tool provides a playground where you can safely experiment with prompts and measure their success before pushing your code to a live environment.

![Tree-of-Thought Prompting - Running Experiments in Helicone](/static/blog/tree-of-thought/tot-experiments.webp)

@@ -178,9 +178,9 @@ You can always add more input rows or evaluators to your experiment for more com

Above is an example of forking an existing prompt to create a new variation.

## Tree of Thought Prompting Vs. Other Prompting Techniques (Benchmarks)
## Tree of Thought Prompting vs. Other Prompting Techniques

Compared to <a href="https://docs.helicone.ai/guides/prompt-engineering/overview" target="_blank" rel="noopener">preceding prompting techniques</a>, ToT shows massive improvement on planning and search problems such as the game of 24, crossword puzzles, and creative writing, according to Yao et al. (2023).
Compared to <a href="https://helicone.ai/blog/prompt-engineering-tools" target="_blank" rel="noopener">other prompting techniques</a>, ToT shows massive improvement on planning and search problems such as the game of 24, crossword puzzles, and creative writing, according to Yao et al. (2023).

![Tree-of-Thought Prompting - Benchmarks](/static/blog/tree-of-thought/tot-benchmarks.webp)

@@ -239,3 +239,5 @@ Based on Yao et al. (2023) benchmark tests, the Tree of Thought framework/prompt
>
Prompt Evaluation Explained: Random Sampling vs. Golden Datasets
</a>

<Questions />
18 changes: 9 additions & 9 deletions bifrost/app/blog/page.tsx
@@ -254,6 +254,11 @@ const blogContent: BlogStructure[] = [
folderName: "chain-of-thought-prompting",
},
},
{
dynmaicEntry: {
folderName: "llm-api-providers",
},
},
{
dynmaicEntry: {
folderName: "rag-chunking-strategies",
@@ -271,12 +276,12 @@
},
{
dynmaicEntry: {
folderName: "claude-3.5-sonnet-vs-openai-o1",
folderName: "crewai-vs-dify-ai",
},
},
{
dynmaicEntry: {
folderName: "crewai-vs-dify-ai",
folderName: "claude-3.5-sonnet-vs-openai-o1",
},
},
{
@@ -306,12 +311,12 @@
},
{
dynmaicEntry: {
folderName: "crewai-vs-autogen",
folderName: "prompt-evaluation-for-llms",
},
},
{
dynmaicEntry: {
folderName: "prompt-evaluation-for-llms",
folderName: "crewai-vs-autogen",
},
},
{
Expand All @@ -324,11 +329,6 @@ const blogContent: BlogStructure[] = [
folderName: "llamaindex-vs-langchain",
},
},
{
dynmaicEntry: {
folderName: "llm-api-providers",
},
},
{
dynmaicEntry: {
folderName: "when-to-finetune",
Binary file not shown.
Binary file not shown.