This repository contains sample graph data for developers. Samples for end users are here.
Source code: Interview
This sample graph was designed to simulate an interview with a famous person. The flow involves multiple agents working together to process user input, generate conversational context, and manage an iterative chat process.
- The `name` node prompts the user to input the name of a famous person.
- The `context` node generates the interview context using the input name.
- The `messages` node initializes the conversation with system and greeting messages.
- The `chat` node manages the conversation flow using a nested graph, iterating 6 times.
- Within the nested graph:
  - The `groq` node generates a response from the model.
  - The `output` node formats the response and prints it to the console.
  - The `reducer` node appends the new message to the messages array.
  - The `swappedContext` node swaps the roles of interviewer and interviewee.
  - The `swappedMessages` node updates the messages for the next iteration.
- The cycle repeats for 6 iterations, simulating a back-and-forth interview.
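The role-swapping loop above can be sketched in plain Python. This is an illustrative stand-in, not the actual GraphAI graph: the model call is stubbed, and the function and field names are assumptions chosen to mirror the node names in the list.

```python
# Illustrative sketch of the interview loop; `fake_model` stands in for the
# real LLM call made by the `groq` node.

def swap_context(context):
    """Swap interviewer and interviewee, as the `swappedContext` node does."""
    return {"interviewer": context["interviewee"],
            "interviewee": context["interviewer"]}

def fake_model(messages):
    # Stand-in for the LLM; returns a canned reply.
    return f"reply #{len(messages)}"

def interview(person, iterations=6):
    context = {"interviewer": "Host", "interviewee": person}
    messages = [{"role": "system", "content": f"You are {person}."},
                {"role": "user", "content": "Hello!"}]          # `messages` node
    for _ in range(iterations):
        reply = fake_model(messages)                            # `groq` node
        print(reply)                                            # `output` node
        messages.append({"role": "assistant", "content": reply})  # `reducer` node
        context = swap_context(context)                         # `swappedContext` node
    return context, messages

context, messages = interview("Ada Lovelace")
```

Note that after an even number of swaps the context returns to its starting roles, which is why the 6-iteration loop ends with the original interviewer/interviewee assignment.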
Source code: Interview JP
- The user inputs the name of the person to interview.
- The context for the interview is generated.
- Initial messages are set up.
- The nested graph executes 6 iterations, simulating a back-and-forth interview:
- The model generates a response.
- The response is translated to Japanese.
- The response is formatted and output.
- The messages are updated with the new response.
- The roles of interviewer and interviewee are swapped.
- The context and messages are updated.
- After the iterations, the final response is translated to Japanese and output.
Source code: Chat
This sample is an interactive chat loop, where the user can engage in a conversation with an AI model. The conversation continues in a loop until the user inputs "/bye".
- The loop starts with `continue` set to `true`.
- The `userInput` node prompts the user for input.
- The `checkInput` node checks if the user input is "/bye". If it is, `continue` is set to `false`, breaking the loop.
- The `userMessage` node formats the user input as a message.
- The `appendedMessages` node appends the user message to the conversation history.
- The `groq` node sends the updated conversation history to the AI model for a response.
- The `output` node formats and displays the AI's response.
- The `reducer` node appends the AI's response to the conversation history.
- The loop repeats until the user inputs "/bye".
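A minimal sketch of this control flow, with scripted input and a stubbed model in place of the `userInput` and `groq` nodes (names and message shapes are illustrative, not the sample's actual API):

```python
# Sketch of the chat loop: iterate until the user types "/bye".

def chat_loop(inputs, model):
    messages = []
    cont = True                      # `continue` starts as true
    it = iter(inputs)
    while cont:
        text = next(it)              # `userInput` node
        if text == "/bye":           # `checkInput` node
            cont = False
            continue
        messages.append({"role": "user", "content": text})        # `appendedMessages`
        reply = model(messages)                                   # `groq` node
        print(reply)                                              # `output` node
        messages.append({"role": "assistant", "content": reply})  # `reducer`
    return messages

history = chat_loop(["hi", "how are you?", "/bye"],
                    lambda ms: f"echo: {ms[-1]['content']}")
```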
Source code: Wikipedia
This sample is a Retrieval-Augmented Generation (RAG) application using GraphAI. This graph is designed to query information about Sam Bankman-Fried's final court sentence, retrieve relevant data from Wikipedia, process it, and finally generate a response using a Large Language Model (LLM) such as GPT-3.5.
- The `source` node provides the initial data.
- The `wikipedia` node retrieves content from Wikipedia using the name from the `source` node.
- The content is split into chunks (`chunks`), and embeddings are generated for these chunks (`embeddings`). Simultaneously, an embedding for the topic is generated (`topicEmbedding`).
- The `similarityCheck` node calculates the similarity between the text chunks and the topic embedding.
- The text chunks are sorted based on similarity (`sortedChunks`), and the most relevant chunks are concatenated up to a token limit (`referenceText`).
- A prompt is generated using the initial query and the reference text (`prompt`).
- The `RagQuery` node sends the prompt to the LLM. Additionally, a one-shot query is performed using the initial query (`OneShotQuery`).
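The retrieval steps (`similarityCheck`, `sortedChunks`, `referenceText`) can be sketched as cosine similarity against the topic embedding followed by concatenation up to a token budget. The embeddings below are toy vectors and the whitespace token count is a crude approximation; the real sample uses an embedding model and proper tokenization.

```python
# Hypothetical sketch of the RAG retrieval pipeline.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def build_reference_text(chunks, embeddings, topic_embedding, token_limit):
    # Sort chunks by similarity to the topic, best first (`sortedChunks`).
    scored = sorted(zip(chunks, embeddings),
                    key=lambda ce: cosine(ce[1], topic_embedding),
                    reverse=True)
    # Concatenate the most relevant chunks up to the limit (`referenceText`).
    out, used = [], 0
    for text, _ in scored:
        tokens = len(text.split())   # crude stand-in for a token count
        if used + tokens > token_limit:
            break
        out.append(text)
        used += tokens
    return " ".join(out)

chunks = ["court sentence details", "early life", "company history"]
embeds = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
ref = build_reference_text(chunks, embeds, [1.0, 0.1], token_limit=6)
```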
Source code: Research
In this sample, a user inputs a topic, the system detects the language of the input, retrieves relevant information from Wikipedia, summarizes it, and then translates it if necessary.
- The `topic` node prompts the user to input a topic.
- The `detector` node identifies the language and translates the topic to English if necessary.
- The `wikipedia` node fetches and summarizes relevant information from Wikipedia.
- The `translate` node ensures the summary is in the original language if the input was not in English.
- The result from the `translate` node is the final output of the workflow.
Source code: Reception
The goal of this application is to interactively gather specific information (name, date of birth, and gender) from a user through iterative conversations with a language model (LLM).
- The `system message` provides instructions to the LLM on the information to collect.
- The `userInput` node prompts the user for input.
- The `userMessage` node formats the user input.
- The `appendedMessages` node updates the conversation with the user's message.
- The `llm` node sends the conversation to the LLM for processing.
- The `argumentsParser` node extracts information if the LLM calls the `report` function.
- The `output` node displays the LLM's response.
- The `reducer` node updates the conversation with the LLM's response.
- The `continue` node checks if the loop should continue based on the LLM's response.
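Conceptually, the `argumentsParser` step pulls structured data out of the LLM's function call. The sketch below assumes an OpenAI-style `function_call` response shape with JSON-encoded arguments; the field names are illustrative, not the sample's exact schema.

```python
# Hedged sketch of extracting `report` arguments from an LLM response.
import json

def parse_report_arguments(llm_response):
    call = llm_response.get("function_call")
    if call and call.get("name") == "report":
        return json.loads(call["arguments"])   # all requested info collected
    return None                                # info incomplete; keep looping

resp = {"function_call": {
    "name": "report",
    "arguments": '{"name": "Alice", "date_of_birth": "1990-01-01", "gender": "female"}'}}
info = parse_report_arguments(resp)
```

When `parse_report_arguments` returns `None`, the loop continues and the user is prompted again, mirroring the `continue` node's check.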
Source code: Meta Chat
This sample aims to generate another GraphAI graph that collects user information (Name, Date of Birth, and Gender) using an LLM (GPT-4) and then executes the generated graph.
- The `graphGenerator` node prompts the GPT-4 model to generate a GraphAI graph based on the provided documentation and instructions.
- The `parser` node parses the JSON content generated by the GPT-4 model, which is the new GraphAI graph.
- The `executer` node executes the newly generated GraphAI graph by nesting it within the current graph.
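The `parser` step amounts to recovering a JSON object from LLM output that may be wrapped in a markdown code fence. This is an assumption about the model's output format, and the graph content below is a made-up fragment, not a real GraphAI definition:

```python
# Illustrative sketch of parsing a generated graph out of an LLM reply.
import json
import re

FENCE = "`" * 3  # literal triple backtick, built indirectly to keep this block fenceable

def parse_generated_graph(reply):
    # Accept either a fenced ```json block or bare JSON.
    pattern = FENCE + r"(?:json)?\s*(\{.*\})\s*" + FENCE
    match = re.search(pattern, reply, re.DOTALL)
    payload = match.group(1) if match else reply
    return json.loads(payload)

reply = FENCE + 'json\n{"version": 0.5, "nodes": {"name": {"agent": "textInputAgent"}}}\n' + FENCE
graph = parse_generated_graph(reply)
```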
Source code: Describe Graph
- The `describer` node asks GPT-4 to generate a description of the specified graph.
- The `description` node extracts the description from the generated message.
Source code: RSS Reader
This sample fetches and processes RSS feed data from a given URL, specifically from "https://www.theverge.com/microsoft/rss/index.xml". The flow extracts relevant content from the RSS feed, processes it through a series of agents, and translates the content into Japanese.
- The `url` node provides the RSS feed URL.
- The `rssFeed` node fetches the RSS feed data from the URL.
- The `entries` node filters the fetched data to include only the required properties (`title`, `link`, `content`).
- The `map` node processes each filtered entry concurrently (up to 4 at a time), executing the nested graph for each entry:
  - The `template` node formats the entry's `title` and `content`.
  - The `query` node translates the formatted string into Japanese.
  - The `extractor` node outputs the translated content.
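The concurrency-limited fan-out of the `map` node can be sketched with a thread pool capped at 4 workers. The translation call is stubbed; only the control flow (format each entry, process up to 4 concurrently, collect results in order) reflects the sample.

```python
# Sketch of concurrency-limited per-entry processing.
from concurrent.futures import ThreadPoolExecutor

def fake_translate(text):
    return f"[ja] {text}"                      # stand-in for the `query` node

def process_entry(entry):
    formatted = f"{entry['title']}\n{entry['content']}"   # `template` node
    return fake_translate(formatted)                      # `query` + `extractor`

def process_feed(entries, concurrency=4):
    # At most `concurrency` entries are in flight at once; results keep order.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(process_entry, entries))

entries = [{"title": f"Post {i}", "link": f"https://example.com/{i}",
            "content": f"body {i}"} for i in range(6)]
results = process_feed(entries)
```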
Source code: Weather
This sample simulates a conversation with a meteorologist. The system handles user inputs, queries a weather API, and manages the conversation iteratively based on user requests.
- The loop starts with the `continue` node set to `true`.
- The user is prompted to input a location.
- The `checkInput` node determines if the user wants to continue or exit.
- If the user input is valid, it is added to the conversation messages.
- The `llmCall` node processes the messages and calls the LLM.
- The `output` node prints the LLM's response if available.
- If the LLM requests a tool call, the nested graph `tool_calls` handles the request.
- The `reducer` node updates the conversation messages.
- The loop continues until the user inputs "/bye".