diff --git a/README.md b/README.md index 8215063..6d5f02b 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ Automatically test your Web Messenger Deployments Allows behaviour for Genesys Chatbots and Architect flows behind [Genesys' Web Messenger Deployments](https://help.mypurecloud.com/articles/web-messaging-overview/) to be automatically tested using: * **Scripted Dialogue** - I say "X" and expect "Y" in response ([example](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-scripted-tests/example-pass.yml)) -* **Generative AI** - Converse with my chatbot and fail the test if it doesn't do "X" ([example](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/example.yml)) +* **Generative AI** - Converse with my chatbot and fail the test if it doesn't do "X" ([examples](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/)) Why? Well it makes testing: @@ -104,6 +104,11 @@ web-messaging-tester scripted tests/example.yml ### Testing with AI +This tool supports two GenAI providers: +* ChatGPT (`gpt-3.5-turbo` model by default) +* Google Vertex AI ([PaLM 2 Chat Bison model](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/chat-bison)) + +#### Using ChatGPT Start by setting up an API key for ChatGPT: 1. [Create an API key for OpenAI](https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key) @@ -112,16 +117,19 @@ Start by setting up an API key for ChatGPT: Write a scenario file containing all the scenarios you wish to run along with the [ID and region of your Web Messenger Deployment](https://help.mypurecloud.com/articles/deploy-messenger/). -The scenarios are written as ChatGPT Prompts, these can take some fine-tuning to get -right ([see examples here](https://genesys-messenger-tester.makingchatbots.com/writing-tests/ai/example-prompts.html)). 
+The scenarios are written as prompts; these can take some fine-tuning to get right ([see examples here](https://genesys-messenger-tester.makingchatbots.com/writing-tests/ai/example-prompts.html)). The `terminatingPhrases` section defines the phrases you instruct ChatGPT to say to pass or fail a test. -> [examples/cli-ai-tests/example.yml](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/example.yml) +> [examples/cli-ai-tests/chatgpt-example.yml](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/chatgpt-example.yml) ```yaml config: deploymentId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx region: xxxx.pure.cloud + ai: + provider: chatgpt + config: + temperature: 1 scenarios: "Accept survey": setup: @@ -153,6 +161,57 @@ web-messaging-tester ai tests/example.yml For a slightly more detailed guide see: [Let's test a Genesys chatbot with AI](https://www.linkedin.com/pulse/lets-test-genesys-chatbot-ai-lucas-woodward-dvrpc). +#### Using Google Vertex AI + +1. Create a Google Cloud Platform (GCP) account and enable the Vertex AI API +2. Authenticate the machine running this testing tool with GCP + * The easiest way is [setting up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev) +3. Define a prompt to provide the model with context on how to behave during testing + * Learn more in [Google's Introduction to prompt design](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/introduction-prompt-design) + +The `terminatingPhrases` section defines the phrases you instruct PaLM 2 to say to pass or fail a test.
+ +> [examples/cli-ai-tests/google-vertex-ai-example.yml](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/google-vertex-ai-example.yml) + +```yaml +config: + deploymentId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx + region: xxxx.pure.cloud + ai: + provider: google-vertex-ai + config: + location: example-location + project: example-gcp-project + modelVersion: "002" +scenarios: + "Accept survey": + setup: + prompt: | + I want you to play the role of a customer talking to a company's online chatbot. You must not + break from this role, and all of your responses must be based on how a customer would realistically talk to a company's chatbot. + + To help you play the role of a customer consider the following points when writing a response: + * Respond to questions with as few words as possible + * Answer with the exact word when given options e.g. if asked to answer with either 'yes' or 'no' answer with either 'yes' or 'no' without punctuation, such as full stops + + As a customer you would like to leave feedback on a recent purchase of a light bulb, where a customer service + rep was very helpful in finding the bulb with the correct fitting. + + If at any point the company's chatbot repeats itself then say the word 'FAIL'. + + If you have understood your role and the purpose of your conversation with the company's chatbot then say the word 'Hello' + and nothing else.
+ terminatingPhrases: + pass: ["PASS"] + fail: ["FAIL"] +``` + +Then run the AI test by pointing to the scenario file in the terminal: + +```shell +web-messaging-tester ai tests/example.yml +``` + ## Example commands ``` @@ -191,6 +250,13 @@ Run 10 scenarios in parallel: web-messaging-tester scripted test-script.yaml --parallel 10 ``` +## Support + +If you have any questions then please feel free to: + +* Raise an issue on this [project's GitHub repository](https://github.com/ovotech/genesys-web-messaging-tester) +* [Drop me a message](https://www.linkedin.com/in/lucas-woodward-the-dev/) + ## Development * [Release Strategy](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/docs/release-strategy.md) diff --git a/examples/cli-ai-tests/README.md b/examples/cli-ai-tests/README.md new file mode 100644 index 0000000..b4e8dea --- /dev/null +++ b/examples/cli-ai-tests/README.md @@ -0,0 +1,27 @@ +# CLI AI Examples + +## ChatGPT + +```shell +# .env +export DEPLOYMENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx +export REGION=xxxx.pure.cloud +export OPENAI_API_KEY=xx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx +``` + +```shell +export $(cat .env | xargs) && ./chatgpt-run.sh +``` + +## Vertex AI + +```shell +# .env +export DEPLOYMENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx +export REGION=xxxx.pure.cloud +export GOOGLE_APPLICATION_CREDENTIALS=PATH/TO/JSON +``` + +```shell +export $(cat .env | xargs) && ./google-vertex-ai-run.sh +``` diff --git a/examples/cli-ai-tests/example.yml b/examples/cli-ai-tests/chatgpt-example.yml similarity index 95% rename from examples/cli-ai-tests/example.yml rename to examples/cli-ai-tests/chatgpt-example.yml index f0173a8..e4c20da 100644 --- a/examples/cli-ai-tests/example.yml +++ b/examples/cli-ai-tests/chatgpt-example.yml @@ -1,6 +1,10 @@ config: deploymentId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx region: xxxx.pure.cloud + ai: + provider: chatgpt + config: + temperature: 1 scenarios: "Accept survey": setup: diff --git 
a/examples/cli-ai-tests/run.sh b/examples/cli-ai-tests/chatgpt-run.sh similarity index 50% rename from examples/cli-ai-tests/run.sh rename to examples/cli-ai-tests/chatgpt-run.sh index a884e9b..8dac6d5 100755 --- a/examples/cli-ai-tests/run.sh +++ b/examples/cli-ai-tests/chatgpt-run.sh @@ -10,4 +10,11 @@ function cleanup { } trap cleanup EXIT -web-messaging-tester ai example.yml -id $DEPLOYMENT_ID -r $REGION +web-messaging-tester ai chatgpt-example.yml -id $DEPLOYMENT_ID -r $REGION || exit_code=$? +if [ "${exit_code:-0}" -ne 0 ] +then + echo "" + echo "=========" + echo "UNEXPECTED EXIT CODE ${exit_code} - Example did not result in Exit Code of 0" + exit 1 +fi diff --git a/examples/cli-ai-tests/google-vertex-ai-example.yml b/examples/cli-ai-tests/google-vertex-ai-example.yml new file mode 100644 index 0000000..511c356 --- /dev/null +++ b/examples/cli-ai-tests/google-vertex-ai-example.yml @@ -0,0 +1,33 @@ +config: + deploymentId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx + region: xxxx.pure.cloud + ai: + provider: google-vertex-ai + config: + location: example-location + project: example-gcp-project + modelVersion: "002" + examples: + - input: "What would you like to do today?" + output: "I would like to leave feedback, please" +scenarios: + "Accept survey": + setup: + prompt: | + I want you to play the role of a customer talking to a company's online chatbot. You must not + break from this role, and all of your responses must be based on how a customer would realistically talk to a company's chatbot. + + To help you play the role of a customer consider the following points when writing a response: + * Respond to questions with as few words as possible + * Answer with the exact word when given options e.g. if asked to answer with either 'yes' or 'no' answer with either 'yes' or 'no' without punctuation, such as full stops + + As a customer you would like to leave feedback on a recent purchase of a light bulb, where a customer service + rep was very helpful in finding the bulb with the correct fitting.
+ + If at any point the company's chatbot repeats itself then say the word 'FAIL'. + + If you have understood your role and the purpose of your conversation with the company's chatbot then say the word 'Hello' + and nothing else. + terminatingPhrases: + pass: ["PASS"] + fail: ["FAIL"] diff --git a/examples/cli-ai-tests/google-vertex-ai-run.sh b/examples/cli-ai-tests/google-vertex-ai-run.sh new file mode 100755 index 0000000..9a9bcd7 --- /dev/null +++ b/examples/cli-ai-tests/google-vertex-ai-run.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +# exit when any command fails +set -e + +npm install -g ../../packages/genesys-web-messaging-tester-cli + +function cleanup { + npm uninstall -g ../../packages/genesys-web-messaging-tester-cli +} +trap cleanup EXIT + +web-messaging-tester ai google-vertex-ai-example.yml -id $DEPLOYMENT_ID -r $REGION || exit_code=$? +if [ "${exit_code:-0}" -ne 0 ] +then + echo "" + echo "=========" + echo "UNEXPECTED EXIT CODE ${exit_code} - Example did not result in Exit Code of 0" + exit 1 +fi diff --git a/packages/genesys-web-messaging-tester-cli/README.md b/packages/genesys-web-messaging-tester-cli/README.md index 8215063..3866e2f 100644 --- a/packages/genesys-web-messaging-tester-cli/README.md +++ b/packages/genesys-web-messaging-tester-cli/README.md @@ -8,7 +8,7 @@ Automatically test your Web Messenger Deployments Allows behaviour for Genesys Chatbots and Architect flows behind [Genesys' Web Messenger Deployments](https://help.mypurecloud.com/articles/web-messaging-overview/) to be automatically tested using: * **Scripted Dialogue** - I say "X" and expect "Y" in response ([example](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-scripted-tests/example-pass.yml)) -* **Generative AI** - Converse with my chatbot and fail the test if it doesn't do "X" ([example](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/example.yml)) +* **Generative AI** - Converse with my chatbot and fail the test if it doesn't do "X"
([examples](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/)) Why? Well it makes testing: @@ -104,6 +104,11 @@ web-messaging-tester scripted tests/example.yml ### Testing with AI +This tool supports two GenAI providers: +* ChatGPT (`gpt-3.5-turbo` model by default) +* Google Vertex AI ([PaLM 2 Chat Bison model](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/chat-bison)) + +#### Using ChatGPT Start by setting up an API key for ChatGPT: 1. [Create an API key for OpenAI](https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key) @@ -112,16 +117,72 @@ Start by setting up an API key for ChatGPT: Write a scenario file containing all the scenarios you wish to run along with the [ID and region of your Web Messenger Deployment](https://help.mypurecloud.com/articles/deploy-messenger/). -The scenarios are written as ChatGPT Prompts, these can take some fine-tuning to get -right ([see examples here](https://genesys-messenger-tester.makingchatbots.com/writing-tests/ai/example-prompts.html)). +The scenarios are written as prompts; these can take some fine-tuning to get right ([see examples here](https://genesys-messenger-tester.makingchatbots.com/writing-tests/ai/example-prompts.html)). The `terminatingPhrases` section defines the phrases you instruct ChatGPT to say to pass or fail a test. -> [examples/cli-ai-tests/example.yml](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/example.yml) +> [examples/cli-ai-tests/chatgpt-example.yml](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/chatgpt-example.yml) ```yaml config: deploymentId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx region: xxxx.pure.cloud + ai: + provider: chatgpt + config: + temperature: 1 +scenarios: + "Accept survey": + setup: + prompt: | + I want you to play the role of a customer talking to a company's online chatbot.
You must not + break from this role, and all of your responses must be based on how a customer would realistically talk to a company's chatbot. + + To help you play the role of a customer consider the following points when writing a response: + * Respond to questions with as few words as possible + * Answer with the exact word when given options e.g. if asked to answer with either 'yes' or 'no' answer with either 'yes' or 'no' without punctuation, such as full stops + + As a customer you would like to leave feedback on a recent purchase of a light bulb, where a customer service + rep was very helpful in finding the bulb with the correct fitting. + + If at any point the company's chatbot repeats itself then say the word 'FAIL'. + + If you have understood your role and the purpose of your conversation with the company's chatbot then say the word 'Hello' + and nothing else. + terminatingPhrases: + pass: ["PASS"] + fail: ["FAIL"] +``` + +Then run the AI test by pointing to the scenario file in the terminal: + +```shell +web-messaging-tester ai tests/example.yml +``` + +For a slightly more detailed guide see: [Let's test a Genesys chatbot with AI](https://www.linkedin.com/pulse/lets-test-genesys-chatbot-ai-lucas-woodward-dvrpc). + +#### Using Google Vertex AI + +1. Create a Google Cloud Platform (GCP) account and enable the Vertex AI API +2. Authenticate the machine running this testing tool with GCP + * The easiest way is [setting up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev) +3. Define a prompt to provide the model with context on how to behave during testing + * Learn more in [Google's Introduction to prompt design](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/introduction-prompt-design) + +The `terminatingPhrases` section defines the phrases you instruct PaLM 2 to say to pass or fail a test.
+ +> [examples/cli-ai-tests/google-vertex-ai-example.yml](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/examples/cli-ai-tests/google-vertex-ai-example.yml) + +```yaml +config: + deploymentId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx + region: xxxx.pure.cloud + ai: + provider: google-vertex-ai + config: + location: example-location + project: example-gcp-project + modelVersion: "002" scenarios: "Accept survey": setup: @@ -191,6 +252,13 @@ Run 10 scenarios in parallel: web-messaging-tester scripted test-script.yaml --parallel 10 ``` +## Support + +If you have any questions then please feel free to: + +* Raise an issue on this [project's GitHub repository](https://github.com/ovotech/genesys-web-messaging-tester) +* [Drop me a message](https://www.linkedin.com/in/lucas-woodward-the-dev/) + ## Development * [Release Strategy](https://github.com/ovotech/genesys-web-messaging-tester/tree/main/docs/release-strategy.md) diff --git a/packages/genesys-web-messaging-tester-cli/__tests__/commands/ai/configSectionLoaded.spec.ts b/packages/genesys-web-messaging-tester-cli/__tests__/commands/ai/configSectionLoaded.spec.ts index a0f94db..f8389fa 100644 --- a/packages/genesys-web-messaging-tester-cli/__tests__/commands/ai/configSectionLoaded.spec.ts +++ b/packages/genesys-web-messaging-tester-cli/__tests__/commands/ai/configSectionLoaded.spec.ts @@ -2,7 +2,7 @@ import { readFileSync } from 'fs'; import { Command } from 'commander'; import { AiTestCommandDependencies } from '../../../src/commands/aiTest/createAiTestCommand'; import { createCli } from '../../../src/createCli'; -import { OpenAI } from 'openai'; +import { ChatCompletionClient } from '../../../src/commands/aiTest/chatCompletionClients/chatCompletionClient'; describe('Session Config', () => { let fsReadFileSync: jest.MockedFunction<typeof readFileSync>; let webMessengerSessionFactory: jest.Mocked< AiTestCommandDependencies['webMessengerSessionFactory'] >; let conversationFactory: jest.Mocked<AiTestCommandDependencies['conversationFactory']>; - let
mockOpenApiChatCompletions: jest.Mocked<Pick<OpenAI.Chat.Completions, 'create'>>; + let mockOpenAiChatCompletionClient: jest.Mocked<ChatCompletionClient>; + let mockGoogleAiChatCompletionClient: jest.Mocked<ChatCompletionClient>; let cli: Command; @@ -37,7 +38,8 @@ describe('Session Config', () => { fsReadFileSync, fsAccessSync: jest.fn(), webMessengerSessionFactory, - openAiChatCompletionFactory: () => mockOpenApiChatCompletions, + openAiCreateChatCompletionClient: () => mockOpenAiChatCompletionClient, + googleAiCreateChatCompletionClient: () => mockGoogleAiChatCompletionClient, conversationFactory, processEnv: { OPENAI_API_KEY: 'test' }, }); @@ -49,6 +51,8 @@ config: deploymentId: test-deployment-id-1 region: test-region-1 origin: test-origin-1 + ai: + provider: chatgpt scenarios: Test: setup: @@ -57,16 +61,12 @@ scenarios: pass: ["PASS"] fail: ["FAIL"] `); - const completion: OpenAI.Chat.ChatCompletion = { - choices: [{ message: { role: 'system', content: 'PASS' }, finish_reason: 'stop', index: 0 }], - created: 0, - id: '', - model: '', - object: '', + mockOpenAiChatCompletionClient = { + getProviderName: jest.fn().mockReturnValue('mock-chatgpt'), + predict: jest.fn().mockResolvedValue({ role: 'customer', content: 'PASS' }), + preflightCheck: jest.fn().mockResolvedValue({ ok: true }), }; - mockOpenApiChatCompletions = { create: jest.fn().mockResolvedValue(completion) }; - await cli.parseAsync([...['node', '/path/to/cli'], 'ai', ...['/test/path']]); expect(webMessengerSessionFactory).toHaveBeenCalledWith({ @@ -76,27 +76,30 @@ }); }); - test('session config not necessary if session config args provided', async () => { + test('session config only necessary for ai provider if session config args provided', async () => { fsReadFileSync.mockReturnValue(` - scenarios: - Test: - setup: - prompt: Test prompt - terminatingPhrases: - pass: ["PASS"] - fail: ["FAIL"] +config: + deploymentId: test-deployment-id-1 + region: test-region-1 + origin: test-origin-1 + ai: + provider: google-vertex-ai + config: + project: test-project + location:
test-location +scenarios: + Test: + setup: + prompt: Test prompt + terminatingPhrases: + pass: ["PASS"] + fail: ["FAIL"] `); - const completion: OpenAI.Chat.ChatCompletion = { - choices: [{ message: { role: 'system', content: 'PASS' }, finish_reason: 'stop', index: 0 }], - created: 0, - id: '', - model: '', - object: '', - }; - - mockOpenApiChatCompletions = { - create: jest.fn().mockResolvedValue(completion), + mockGoogleAiChatCompletionClient = { + getProviderName: jest.fn().mockReturnValue('mock-google-vertex-ai'), + predict: jest.fn().mockResolvedValue({ role: 'customer', content: 'PASS' }), + preflightCheck: jest.fn().mockResolvedValue({ ok: true }), }; await cli.parseAsync([ diff --git a/packages/genesys-web-messaging-tester-cli/package.json b/packages/genesys-web-messaging-tester-cli/package.json index cfa644c..8c996a7 100644 --- a/packages/genesys-web-messaging-tester-cli/package.json +++ b/packages/genesys-web-messaging-tester-cli/package.json @@ -1,6 +1,6 @@ { "name": "@ovotech/genesys-web-messaging-tester-cli", - "version": "2.0.3", + "version": "3.0.0", "main": "lib/index.js", "types": "lib/index.d.ts", "license": "Apache-2.0", @@ -22,7 +22,7 @@ "lint": "eslint 'src/**/*.ts'", "test:execute:scripted": "ts-node src/index.ts scripted ../../examples/cli-scripted-tests/example.yml -id $DEPLOYMENT_ID -r $REGION -p 10", "test:execute:scripted:help": "ts-node src/index.ts --help", - "test:execute:ai": "ts-node src/index.ts ai ../../examples/cli-ai-tests/example.yml -id $DEPLOYMENT_ID -r $REGION", + "test:execute:ai": "ts-node src/index.ts ai ../../examples/cli-ai-tests/chatgpt-example.yml -id $DEPLOYMENT_ID -r $REGION", "test:execute:ai:help": "ts-node src/index.ts ai --help", "test:execute:help": "ts-node src/index.ts --help" }, @@ -30,6 +30,7 @@ "web-messaging-tester": "lib/index.js" }, "dependencies": { + "@google-cloud/aiplatform": "3.10.0", "@ovotech/genesys-web-messaging-tester": "^2.0.7", "chalk": "^4.1.2", "ci-info": "^3.5.0", @@ -39,7 +40,7 @@ 
"js-yaml": "^4.1.0", "listr2": "^5.0.5", "openai": "4.0.0-beta.12", - "purecloud-platform-client-v2": "^153.0.0" + "purecloud-platform-client-v2": "^187.0.0" }, "devDependencies": { "@types/humanize-duration": "^3.27.1", diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatCompletionClient.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatCompletionClient.ts new file mode 100644 index 0000000..bcfc217 --- /dev/null +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatCompletionClient.ts @@ -0,0 +1,21 @@ +export interface PreflightError { + reasonForError: string; + ok: false; +} + +export interface PreflightSuccess { + ok: true; +} + +export type PreflightResult = PreflightError | PreflightSuccess; + +export interface Utterance { + role: 'customer' | 'bot'; + content: string; +} + +export interface ChatCompletionClient { + getProviderName(): string; + predict(context: string, conversationUtterances: Utterance[]): Promise<Utterance | null>; + preflightCheck(): Promise<PreflightResult>; +} diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatGpt/createChatCompletionClient.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatGpt/createChatCompletionClient.ts new file mode 100644 index 0000000..a44b151 --- /dev/null +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatGpt/createChatCompletionClient.ts @@ -0,0 +1,71 @@ +import { ChatCompletionClient, PreflightResult, Utterance } from '../chatCompletionClient'; +import { OpenAI } from 'openai'; +import { ChatGptConfig } from '../../testScript/modelTypes'; + +type ChatCompletionMessage = OpenAI.Chat.Completions.CreateChatCompletionRequestMessage; + +export function createChatCompletionClient( + { model = 'gpt-3.5-turbo', temperature }: ChatGptConfig, + apiKey: string, + maxRetries = 5, +):
ChatCompletionClient { + const chatCompletion = new OpenAI({ apiKey, maxRetries }).chat.completions; + + return { + getProviderName(): string { + return 'ChatGPT'; + }, + async preflightCheck(): Promise<PreflightResult> { + try { + await chatCompletion.create({ + model, + n: 1, // Number of choices + temperature: 0, + messages: [ + { + role: 'system', + content: 'You help people with math problems', + }, + { + role: 'user', + content: 'What is 1+1?', + }, + ], + }); + + return { ok: true }; + } catch (error) { + return { + ok: false, + reasonForError: error instanceof Error ? error.message : String(error), + }; + } + }, + + async predict(context: string, conversationUtterances: Utterance[]): Promise<Utterance | null> { + const messages: ChatCompletionMessage[] = [ + { + role: 'system', + content: context, + }, + ...conversationUtterances.map((u) => ({ + role: u.role === 'bot' ? 'user' : 'assistant', + content: u.content, + })), + ]; + + const { choices } = await chatCompletion.create({ + model, + n: 1, // Number of choices + temperature, + messages, + }); + + if (choices[0]?.message?.content) { + return { role: 'customer', content: choices[0].message.content }; + } else { + return null; + } + }, + }; +} diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/validateOpenAIEnvVariables.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatGpt/validateOpenAiEnvVariables.ts similarity index 100% rename from packages/genesys-web-messaging-tester-cli/src/commands/aiTest/validateOpenAIEnvVariables.ts rename to packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/chatGpt/validateOpenAiEnvVariables.ts diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/googleVertexAi/ModelRequest.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/googleVertexAi/ModelRequest.ts new file mode 100644 index 0000000..c295b79 --- /dev/null +++
b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/googleVertexAi/ModelRequest.ts @@ -0,0 +1,32 @@ +/** + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat#request_body + */ +export interface PromptRequest { + context: string; + examples?: { + input: { content: string }; + output: { content: string }; + }[]; + messages: { + author: string; + content: string; + }[]; +} + +/** + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat#request_body + */ +export interface ParameterRequest { + temperature?: number; + maxOutputTokens?: number; + topP?: number; + topK?: number; + groundingConfig?: string; + stopSequences?: string[]; + candidateCount?: number; + logprobs?: number; + presencePenalty?: number; + frequencyPenalty?: number; + logitBias?: string; + seed?: number; +} diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/googleVertexAi/createChatCompletionClient.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/googleVertexAi/createChatCompletionClient.ts new file mode 100644 index 0000000..3c4a3c2 --- /dev/null +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/chatCompletionClients/googleVertexAi/createChatCompletionClient.ts @@ -0,0 +1,107 @@ +import { helpers, protos, v1 } from '@google-cloud/aiplatform'; +import { ParameterRequest, PromptRequest } from './ModelRequest'; +import { ChatCompletionClient, PreflightResult, Utterance } from '../chatCompletionClient'; +import { GoogleVertexAiConfig } from '../../testScript/modelTypes'; + +const { PredictionServiceClient } = v1; + +export function createChatCompletionClient({ + location, + project, + temperature, + topK, + topP, + seed, + modelVersion, + examples, +}: GoogleVertexAiConfig): ChatCompletionClient { + const version = modelVersion ? 
`@${modelVersion}` : ''; + + const endpoint = `projects/${project}/locations/${location}/publishers/google/models/chat-bison${version}`; + + const predictionServiceClient = new PredictionServiceClient({ + apiEndpoint: `${location}-aiplatform.googleapis.com`, + }); + + const parameters = helpers.toValue({ + ...(temperature ? { temperature } : {}), + ...(topK ? { topK } : {}), + ...(topP ? { topP } : {}), + ...(seed ? { seed } : {}), + } as ParameterRequest); + + return { + getProviderName(): string { + return 'Google Vertex AI'; + }, + async preflightCheck(): Promise<PreflightResult> { + const prompt: PromptRequest = { + context: 'You help people with math problems', + messages: [{ author: 'student', content: 'What is 1+1?' }], + }; + + const instanceValue = helpers.toValue(prompt); + + const request: protos.google.cloud.aiplatform.v1.IPredictRequest = { + endpoint, + instances: [instanceValue!], + parameters, + }; + + try { + await predictionServiceClient.predict(request); + return { ok: true }; + } catch (error) { + return { + ok: false, + reasonForError: error instanceof Error ? error.message : String(error), + }; + } + }, + + async predict(context: string, conversationUtterances: Utterance[]): Promise<Utterance | null> { + const prompt: PromptRequest = { + context, + ...(examples + ? { + examples: examples.map(({ input, output }) => ({ + input: { content: input }, + output: { content: output }, + })), + } + : {}), + messages: [ + // Google requires at least one message. This message is hopefully innocuous enough not to lead to an unexpected result.
+ { content: '...', author: 'bot' }, + ...conversationUtterances.map((u) => ({ + author: u.role, + content: u.content, + })), + ], + }; + + const instanceValue = helpers.toValue(prompt); + + const request: protos.google.cloud.aiplatform.v1.IPredictRequest = { + endpoint, + instances: [instanceValue!], + parameters, + }; + + const [response] = await predictionServiceClient.predict(request); + + for (const prediction of response?.predictions || []) { + const candidates = prediction.structValue?.fields?.candidates; + for (const candidate of candidates?.listValue?.values || []) { + const content = candidate.structValue?.fields?.content?.stringValue; + // const author = candidate.structValue?.fields?.author?.stringValue; + if (content) { + return { content: content.trim(), role: 'customer' }; + } + } + } + + return null; + }, + }; +} diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/createAiTestCommand.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/createAiTestCommand.ts index bb35ef7..5803957 100644 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/createAiTestCommand.ts +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/createAiTestCommand.ts @@ -8,8 +8,6 @@ import { WebMessengerGuestSession, WebMessengerSession, } from '@ovotech/genesys-web-messaging-tester'; -import { ClientOptions, OpenAI } from 'openai'; -import { validateOpenAiEnvVariables } from './validateOpenAIEnvVariables'; import { Ui } from './ui'; import { validateSessionConfig } from './validateSessionConfig'; import { shouldEndConversation, ShouldEndConversationResult } from './prompt/shouldEndConversation'; @@ -17,41 +15,35 @@ import { readableFileValidator } from '../../fileSystem/readableFileValidator'; import { createYamlFileReader } from '../../fileSystem/yamlFileReader'; import { validatePromptScript } from './testScript/validatePromptScript'; import { CommandExpectedlyFailedError } from 
'../CommandExpectedlyFailedError'; - -/** - * This value can be between 0 and 1 and controls the randomness of ChatGPT's completions. - * 0 = Responses will be deterministic and repetitive - * ChatGPT will favour words (tokens) that have the highest probability of coming next in the text it is constructing - * 1 = Responses will include more variety and creativity - * ChatGPT will consider using words (tokens) that are less likely to come next in the text it is constructing - * - * @see https://platform.openai.com/docs/quickstart/adjust-your-settings - */ -const temperature = 0.6; +import { SupportedAiProviders } from './testScript/modelTypes'; +import * as googleAi from './chatCompletionClients/googleVertexAi/createChatCompletionClient'; +import * as openAi from './chatCompletionClients/chatGpt/createChatCompletionClient'; +import { ChatCompletionClient, Utterance } from './chatCompletionClients/chatCompletionClient'; +import { validateOpenAiEnvVariables } from './chatCompletionClients/chatGpt/validateOpenAiEnvVariables'; export interface AiTestCommandDependencies { command?: Command; ui?: Ui; - openAiChatCompletionFactory?: (config: ClientOptions) => Pick; + openAiCreateChatCompletionClient?: typeof openAi.createChatCompletionClient; + googleAiCreateChatCompletionClient?: typeof googleAi.createChatCompletionClient; webMessengerSessionFactory?: (sessionConfig: SessionConfig) => WebMessengerSession; conversationFactory?: (session: WebMessengerSession) => Conversation; processEnv?: NodeJS.ProcessEnv; fsReadFileSync?: typeof readFileSync; fsAccessSync?: typeof accessSync; - chatGptModel?: OpenAI.CompletionCreateParams['model']; } export function createAiTestCommand({ command = new Command(), ui = new Ui(), - openAiChatCompletionFactory = (config) => new OpenAI(config).chat.completions, + openAiCreateChatCompletionClient = openAi.createChatCompletionClient, + googleAiCreateChatCompletionClient = googleAi.createChatCompletionClient, webMessengerSessionFactory = 
(config) => new WebMessengerGuestSession(config, { IsAutomatedTest: 'true' }), conversationFactory = (session) => new Conversation(session), processEnv = process.env, fsReadFileSync = readFileSync, fsAccessSync = accessSync, - chatGptModel = 'gpt-3.5-turbo', }: AiTestCommandDependencies = {}): Command { const yamlFileReader = createYamlFileReader(fsReadFileSync); if (!ui) { @@ -82,14 +74,6 @@ export function createAiTestCommand({ throw new Error('No writeOut and/or writeErr'); } - const openAiEnvValidationResult = validateOpenAiEnvVariables(processEnv); - if (!openAiEnvValidationResult.openAikey) { - outputConfig.writeErr( - ui.validatingOpenAiEnvValidationFailed(openAiEnvValidationResult.error), - ); - throw new CommandExpectedlyFailedError(); - } - // 1. Read YAML file let testScriptFileContents: unknown; try { @@ -124,6 +108,34 @@ export function createAiTestCommand({ throw new CommandExpectedlyFailedError(); } + let chatCompletionClient: ChatCompletionClient | null = null; + if (validPromptScript.config.ai.provider === SupportedAiProviders.ChatGpt) { + const openAiEnvValidationResult = validateOpenAiEnvVariables(processEnv); + if (!openAiEnvValidationResult.openAikey) { + outputConfig.writeErr( + ui.validatingOpenAiEnvValidationFailed(openAiEnvValidationResult.error), + ); + throw new CommandExpectedlyFailedError(); + } + const chatGptConfig = validPromptScript.config.ai.config ?? {}; + chatCompletionClient = openAiCreateChatCompletionClient( + chatGptConfig, + openAiEnvValidationResult.openAikey, + ); + } else { + const googleAiConfig = validPromptScript.config.ai.config; + chatCompletionClient = googleAiCreateChatCompletionClient(googleAiConfig); + } + + // 5. 
Preflight check of AI library + const preflightCheckResult = await chatCompletionClient.preflightCheck(); + if (!preflightCheckResult.ok) { + outputConfig.writeErr( + ui.preflightCheckFailure(chatCompletionClient.getProviderName(), preflightCheckResult), + ); + throw new CommandExpectedlyFailedError(); + } + const totalScenarios = Object.keys(validPromptScript?.scenarios).length; if (totalScenarios > 1) { outputConfig.writeErr(ui.onlyOnePromptSupported(totalScenarios)); @@ -136,41 +148,24 @@ export function createAiTestCommand({ sessionConfigValidationResults.validSessionConfig, ); - const openaiChatCompletion = openAiChatCompletionFactory({ - apiKey: openAiEnvValidationResult.openAikey, - maxRetries: 5, - }); - - const transcript: TranscribedMessage[] = []; new SessionTranscriber(session).on('messageTranscribed', (msg: TranscribedMessage) => { ui.messageTranscribed(msg); - transcript.push(msg); }); const convo = conversationFactory(session); - const messages: OpenAI.Chat.Completions.CreateChatCompletionRequestMessage[] = [ - { - role: 'system', - content: scenario.setup.prompt, - }, - ]; + const messages: Utterance[] = []; let endConversation: ShouldEndConversationResult = { hasEnded: false, }; do { - const { choices } = await openaiChatCompletion.create({ - model: chatGptModel, - n: 1, // Number of choices - temperature, - messages, - }); + const utterance = await chatCompletionClient.predict(scenario.setup.prompt, messages); - if (choices[0].message?.content) { - messages.push({ role: 'assistant', content: choices[0].message.content }); - await convo.sendText(choices[0].message.content); + if (utterance) { + messages.push(utterance); + await convo.sendText(utterance.content); } else { - messages.push({ role: 'assistant', content: '' }); + messages.push({ role: 'customer', content: '' }); } endConversation = shouldEndConversation( @@ -180,8 +175,9 @@ export function createAiTestCommand({ ); if (!endConversation.hasEnded) { + // TODO Allow time to wait to be 
customised const chatBotResponses = await convo.waitForResponses(3000); - messages.push({ role: 'user', content: chatBotResponses.join('\n') }); + messages.push({ role: 'bot', content: chatBotResponses.join('\n') }); } endConversation = shouldEndConversation( @@ -189,6 +185,7 @@ export function createAiTestCommand({ scenario.setup.terminatingPhrases.fail, scenario.setup.terminatingPhrases.pass, ); + // TODO Handle bot ending conversation } while (!endConversation.hasEnded); outputConfig.writeOut(ui.testResult(endConversation)); diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.spec.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.spec.ts index f2f16cf..e17fc2f 100644 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.spec.ts +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.spec.ts @@ -5,19 +5,15 @@ test('conversation ended when ChatGPT says special word', () => { const shouldEnd = shouldEndConversation( [ { - role: 'system', - content: 'Say hello', - }, - { - role: 'assistant', + role: 'customer', content: 'Hello', }, { - role: 'user', + role: 'bot', content: 'Hi', }, { - role: 'assistant', + role: 'customer', content: `${wordToIndicateEnd} - This has gone wrong`, }, ], diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.ts index ad55e5e..fe262c6 100644 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.ts +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/prompt/shouldEndConversation.ts @@ -1,5 +1,5 @@ -import { OpenAI } from 'openai'; import { containsTerminatingPhrases } from './containsTerminatingPhrases'; +import { Utterance } from 
'../chatCompletionClients/chatCompletionClient'; interface Reason { type: 'fail' | 'pass'; @@ -20,27 +20,27 @@ export type ShouldEndConversationResult = | ShouldEndConversationNotEndedResult; export function shouldEndConversation( - messages: OpenAI.Chat.Completions.CreateChatCompletionRequestMessage[], + utterances: Utterance[], failPhrases: string[], passPhrases: string[], ): ShouldEndConversationResult { - if (messages.length === 0) { + if (utterances.length === 0) { return { hasEnded: false }; } - const lastMessage = messages.slice(-1); + const lastMessage = utterances.slice(-1); if (lastMessage.length === 1 && lastMessage[0].content === '') { - const who = lastMessage[0].role === 'assistant' ? 'ChatGPT' : 'Chatbot'; + const who = lastMessage[0].role === 'customer' ? 'AI' : 'Chatbot'; return { hasEnded: true, reason: { type: 'fail', description: `${who} didn't have a response` }, }; } - const lastChatGptMsg = messages.filter((m) => m.role === 'assistant').slice(-1); + const lastAiMsg = utterances.filter((m) => m.role === 'customer').slice(-1); - if (lastChatGptMsg[0]?.content) { - const phraseResult = containsTerminatingPhrases(lastChatGptMsg[0].content, { + if (lastAiMsg[0]?.content) { + const phraseResult = containsTerminatingPhrases(lastAiMsg[0].content, { pass: passPhrases, fail: failPhrases, }); @@ -50,7 +50,7 @@ export function shouldEndConversation( hasEnded: true, reason: { type: phraseResult.phraseIndicates, - description: `Terminating phrase found in response: '${lastChatGptMsg[0].content}'`, + description: `Terminating phrase found in response: '${lastAiMsg[0].content}'`, }, }; } @@ -70,7 +70,7 @@ export function shouldEndConversation( // } // } - const lastTwoChatBotMsgs = messages.filter((m) => m.role === 'user').slice(-2); + const lastTwoChatBotMsgs = utterances.filter((m) => m.role === 'bot').slice(-2); if (lastTwoChatBotMsgs.length === 2) { const areMessagesTheSame = lastTwoChatBotMsgs[0].content === lastTwoChatBotMsgs[1].content; if 
(areMessagesTheSame) { diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/modelSchema.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/modelSchema.ts new file mode 100644 index 0000000..99267b1 --- /dev/null +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/modelSchema.ts @@ -0,0 +1,61 @@ +import Joi from 'joi'; +import { SupportedAiProviders, TestPromptFile } from './modelTypes'; + +export const schema = Joi.object({ + config: Joi.object({ + deploymentId: Joi.string(), + region: Joi.string(), + origin: Joi.string(), + ai: Joi.object({ + provider: Joi.string() + .valid(SupportedAiProviders.GoogleVertexAi, SupportedAiProviders.ChatGpt) + .required(), + config: Joi.when('provider', { + is: SupportedAiProviders.GoogleVertexAi, + then: Joi.object({ + location: Joi.string().required(), + project: Joi.string().required(), + temperature: Joi.number(), + topK: Joi.number(), + topP: Joi.number(), + seed: Joi.number(), + modelVersion: Joi.string().regex(/\d{3}/), + examples: Joi.array().items( + Joi.object({ + input: Joi.string().required(), + output: Joi.string().required(), + }).required(), + ), + }).required(), + }).when('provider', { + is: SupportedAiProviders.ChatGpt, + then: Joi.object({ + model: Joi.string(), + temperature: Joi.number(), + }), + }), + }), + }).required(), + scenarios: Joi.object() + .min(1) + .pattern( + /./, + Joi.object({ + setup: Joi.object({ + prompt: Joi.string().required(), + terminatingPhrases: Joi.object({ + pass: Joi.array().items(Joi.string()).min(1).required(), + fail: Joi.array().items(Joi.string()).min(1).required(), + }).required(), + }).required(), + followUp: Joi.object({ + prompt: Joi.string().required(), + terminatingPhrases: Joi.object({ + pass: Joi.array().items(Joi.string()).min(1).required(), + fail: Joi.array().items(Joi.string()).min(1).required(), + }).required(), + }).optional(), + }).required(), + ) + .required(), +}); diff --git 
a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/modelTypes.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/modelTypes.ts new file mode 100644 index 0000000..5b7ed9b --- /dev/null +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/modelTypes.ts @@ -0,0 +1,108 @@ +export interface AiScenarioSetupSection { + readonly prompt: string; + readonly terminatingPhrases: { + readonly pass: string[]; + readonly fail: string[]; + }; +} + +export interface AiScenarioFollowUpSection { + readonly prompt: string; + readonly terminatingPhrases: { + readonly pass: string[]; + readonly fail: string[]; + }; +} + +export interface AiScenarioSection { + setup: AiScenarioSetupSection; + followUp?: AiScenarioFollowUpSection; +} + +export enum SupportedAiProviders { + ChatGpt = 'chatgpt', + GoogleVertexAi = 'google-vertex-ai', +} + +export interface GoogleVertexAiConfig { + location: string; + project: string; + /** + * The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. + * If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature. + * + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat#generative-ai-text-chat-nodejs:~:text=content%22%3A%20%22user%20message%22%0A%7D%5D-,temperature,-The%20temperature%20is + */ + temperature?: number; + /** + * Top-K changes how the model selects tokens for output. 
A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature. + * For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. + * + * Specify a lower value for less random responses and a higher value for more random responses. + * + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat#generative-ai-text-chat-nodejs:~:text=Default%3A%201024-,topK,-Top%2DK%20changes + */ + topK?: number; + /** + * Top-P changes how the model selects tokens for output. Tokens are selected from the most (see top-K) to least probable until the sum of their probabilities equals the top-P value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-P value is 0.5, then the model will select either A or B as the next token by using temperature and exclude C as a candidate. + * Specify a lower value for less random responses and a higher value for more random responses. + * + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat#generative-ai-text-chat-nodejs:~:text=Default%3A%2040-,topP,-Top%2DP%20changes + */ + topP?: number; + /** + * Decoder generates random noise with a pseudo-random number generator; temperature * noise is added to logits before sampling. The pseudo-random number generator (prng) takes a seed as input and generates the same output with the same seed. + * If seed is not set, the seed used in the decoder will not be deterministic, so the generated random noise will not be deterministic. If seed is set, the generated random noise will be deterministic.
+ * + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat#generative-ai-text-chat-nodejs:~:text=Maximum%20value%3A%20100-,seed,-Decoder%20generates%20random + */ + seed?: number; + modelVersion?: string; + + /** + * Examples for chat prompts are a list of input-output pairs that demonstrate exemplary model output for a given input. Use examples to customize how the model responds to certain questions. + * + * @see https://cloud.google.com/vertex-ai/docs/generative-ai/chat/chat-prompts#examples + */ + examples?: { input: string; output: string }[]; +} + +export interface ChatGptConfig { + model?: string; + /** + * This value can be between 0 and 1 and controls the randomness of ChatGPT's completions. + * 0 = Responses will be deterministic and repetitive + * ChatGPT will favour words (tokens) that have the highest probability of coming next in the text it is constructing + * 1 = Responses will include more variety and creativity + * ChatGPT will consider using words (tokens) that are less likely to come next in the text it is constructing + * + * @see https://platform.openai.com/docs/quickstart/adjust-your-settings + */ + temperature?: number; +} + +export interface TestPromptFileBase { + readonly config: { + readonly deploymentId?: string; + readonly region?: string; + readonly origin?: string; + readonly ai: AiProviderConfig; + }; + readonly scenarios: { + [name: string]: AiScenarioSection; + }; +} + +export interface GoogleVertexAiConfigSection { + readonly provider: SupportedAiProviders.GoogleVertexAi; + readonly config: GoogleVertexAiConfig; +} + +export interface ChatGptConfigSection { + readonly provider: SupportedAiProviders.ChatGpt; + readonly config?: ChatGptConfig; +} + +export type TestPromptFile = + | TestPromptFileBase + | TestPromptFileBase; diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/parsePromptScript.spec.ts 
b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/parsePromptScript.spec.ts deleted file mode 100644 index 557e85c..0000000 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/parsePromptScript.spec.ts +++ /dev/null @@ -1,76 +0,0 @@ -import { extractAiTestScenarios, TestPromptFile } from './parsePromptScript'; - -test('prompts converted', () => { - const testScriptFile: TestPromptFile = { - scenarios: { - 'test-name-of-test-1': { - setup: { - prompt: 'test-prompt-1', - terminatingPhrases: { - fail: ['test-failing-phrase-1'], - pass: ['test-passing-phrase-1'], - }, - }, - }, - 'test-name-of-test-2': { - setup: { - prompt: 'test-prompt-2', - terminatingPhrases: { - fail: ['test-failing-phrase-2'], - pass: ['test-passing-phrase-2'], - }, - }, - followUp: { - prompt: 'test-prompt-3', - terminatingPhrases: { - fail: ['test-failing-phrase-3'], - pass: ['test-passing-phrase-3'], - }, - }, - }, - }, - }; - - expect( - extractAiTestScenarios(testScriptFile, { - deploymentId: 'test-deployment-id', - region: 'test-region', - }), - ).toStrictEqual([ - { - sessionConfig: { - deploymentId: 'test-deployment-id', - region: 'test-region', - }, - name: 'test-name-of-test-1', - setup: { - prompt: 'test-prompt-1', - terminatingPhrases: { - fail: ['test-failing-phrase-1'], - pass: ['test-passing-phrase-1'], - }, - }, - }, - { - sessionConfig: { - deploymentId: 'test-deployment-id', - region: 'test-region', - }, - name: 'test-name-of-test-2', - setup: { - prompt: 'test-prompt-2', - terminatingPhrases: { - fail: ['test-failing-phrase-2'], - pass: ['test-passing-phrase-2'], - }, - }, - followUp: { - prompt: 'test-prompt-3', - terminatingPhrases: { - fail: ['test-failing-phrase-3'], - pass: ['test-passing-phrase-3'], - }, - }, - }, - ]); -}); diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/parsePromptScript.ts 
b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/parsePromptScript.ts deleted file mode 100644 index 7f53259..0000000 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/parsePromptScript.ts +++ /dev/null @@ -1,65 +0,0 @@ -import { SessionConfig } from '@ovotech/genesys-web-messaging-tester'; - -export interface AiScenarioSetupSection { - readonly prompt: string; - readonly terminatingPhrases: { - readonly pass: string[]; - readonly fail: string[]; - }; -} -export interface AiScenarioFollowUpSection { - readonly prompt: string; - readonly terminatingPhrases: { - readonly pass: string[]; - readonly fail: string[]; - }; -} - -export interface AiScenarioSection { - setup: AiScenarioSetupSection; - followUp?: AiScenarioFollowUpSection; -} - -export interface TestPromptFile { - readonly config?: { - readonly deploymentId: string; - readonly region: string; - readonly origin?: string; - }; - readonly scenarios: { - [name: string]: AiScenarioSection; - }; -} - -export interface AiTestScript extends AiScenarioSection { - readonly sessionConfig: SessionConfig; - readonly name: string; -} - -export function extractAiTestScenarios( - testScript: Exclude, - sessionConfig: SessionConfig, -): AiTestScript[] { - return Object.entries(testScript.scenarios ?? []).map(([scenarioName, scenario]) => ({ - sessionConfig, - name: scenarioName, - setup: { - prompt: scenario.setup.prompt, - terminatingPhrases: { - pass: scenario.setup.terminatingPhrases.pass ?? [], - fail: scenario.setup.terminatingPhrases.fail ?? [], - }, - }, - ...(scenario.followUp - ? { - followUp: { - prompt: scenario.followUp.prompt, - terminatingPhrases: { - pass: scenario.followUp.terminatingPhrases.pass ?? [], - fail: scenario.followUp.terminatingPhrases.fail ?? 
[], - }, - }, - } - : {}), - })); -} diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.spec.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.spec.ts index 736a9b6..66306eb 100644 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.spec.ts +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.spec.ts @@ -5,6 +5,14 @@ test('Valid', () => { config: { deploymentId: 'test-deployment-id', region: 'test-region', + ai: { + provider: 'google-vertex-ai', + config: { + project: 'test-project', + location: 'test-location', + examples: [{ input: 'Hi', output: 'Hello' }], + }, + }, }, scenarios: { 'test-name-of-test-1': { @@ -39,6 +47,19 @@ test('Valid', () => { config: { deploymentId: 'test-deployment-id', region: 'test-region', + ai: { + provider: 'google-vertex-ai', + config: { + location: 'test-location', + project: 'test-project', + examples: [ + { + input: 'Hi', + output: 'Hello', + }, + ], + }, + }, }, scenarios: { 'test-name-of-test-1': { diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.ts index 64b6568..c48e654 100644 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.ts +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/testScript/validatePromptScript.ts @@ -1,35 +1,6 @@ -import Joi, { ValidationError } from 'joi'; -import { TestPromptFile } from './parsePromptScript'; - -const schema = Joi.object({ - config: Joi.object({ - deploymentId: Joi.string(), - region: Joi.string(), - origin: Joi.string(), - }).optional(), - scenarios: Joi.object() - .min(1) - .pattern( - /./, - Joi.object({ - setup: Joi.object({ - prompt: Joi.string().required(), - 
terminatingPhrases: Joi.object({ - pass: Joi.array().items(Joi.string()).min(1).required(), - fail: Joi.array().items(Joi.string()).min(1).required(), - }).required(), - }).required(), - followUp: Joi.object({ - prompt: Joi.string().required(), - terminatingPhrases: Joi.object({ - pass: Joi.array().items(Joi.string()).min(1).required(), - fail: Joi.array().items(Joi.string()).min(1).required(), - }).required(), - }).optional(), - }).required(), - ) - .required(), -}); +import { ValidationError } from 'joi'; +import { TestPromptFile } from './modelTypes'; +import { schema } from './modelSchema'; interface ValidatePromptScriptValidResult { validPromptScript: TestPromptFile; diff --git a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/ui.ts b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/ui.ts index 3fdab30..63a393f 100644 --- a/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/ui.ts +++ b/packages/genesys-web-messaging-tester-cli/src/commands/aiTest/ui.ts @@ -3,6 +3,7 @@ import { ValidationError } from 'joi'; import { ShouldEndConversationEndedResult } from './prompt/shouldEndConversation'; import { TranscribedMessage } from '@ovotech/genesys-web-messaging-tester'; import { PhraseFound } from './prompt/containsTerminatingPhrases'; +import { PreflightError } from './chatCompletionClients/chatCompletionClient'; export class Ui { /** @@ -42,6 +43,14 @@ export class Ui { return Ui.trailingNewline(chalk.red(error?.message ?? 
'Failed to validate Prompt Script')); } + public preflightCheckFailure(aiProvider: string, error: PreflightError): string { + return Ui.trailingNewline( + chalk.red( + `The check to ensure ${aiProvider} can be used failed due to:\n${error.reasonForError}`, + ), + ); + } + public testResult(result: ShouldEndConversationEndedResult): string { const resultMessage = result.reason.type === 'pass' diff --git a/packages/genesys-web-messaging-tester/README.md b/packages/genesys-web-messaging-tester/README.md index d126521..77187ea 100644 --- a/packages/genesys-web-messaging-tester/README.md +++ b/packages/genesys-web-messaging-tester/README.md @@ -76,6 +76,13 @@ Finally, run the test by executing the script: node examples/api/src/js-script.js ``` +## Support + +If you have any questions, please feel free to: + +* Raise an issue on this [project's GitHub repository](https://github.com/ovotech/genesys-web-messaging-tester) +* [Drop me a message](https://www.linkedin.com/in/lucas-woodward-the-dev/) + ## Debugging Messages sent between the client and Genesys' server can be output by setting the environment variable: diff --git a/yarn.lock b/yarn.lock index a885bff..9571224 100644 --- a/yarn.lock +++ b/yarn.lock @@ -508,6 +508,40 @@ __metadata: languageName: unknown linkType: soft +"@google-cloud/aiplatform@npm:3.10.0": + version: 3.10.0 + resolution: "@google-cloud/aiplatform@npm:3.10.0" + dependencies: + google-gax: ^4.0.3 + protobuf.js: ^1.1.2 + checksum: ad053bfdbd242d77cc650cba8c5cc0516a87df6165182368479120dc54a502d212ae54d3961c20794c85b4b98be550aa1b31089e1775e9b29d410c8b4faa42dd + languageName: node + linkType: hard + +"@grpc/grpc-js@npm:~1.9.6": + version: 1.9.14 + resolution: "@grpc/grpc-js@npm:1.9.14" + dependencies: + "@grpc/proto-loader": ^0.7.8 + "@types/node": ">=12.12.47" + checksum: 1e0821876fc55fa1d71a674e65db6227ca398f6ff77735bd44d8d4a554fa97dcddd06e7844c3d7da37350feafd824ec88af04f0ab0e0c2e0bc8f753939935240 + languageName: node + linkType: hard + 
+"@grpc/proto-loader@npm:^0.7.0, @grpc/proto-loader@npm:^0.7.8": + version: 0.7.10 + resolution: "@grpc/proto-loader@npm:0.7.10" + dependencies: + lodash.camelcase: ^4.3.0 + long: ^5.0.0 + protobufjs: ^7.2.4 + yargs: ^17.7.2 + bin: + proto-loader-gen-types: build/bin/proto-loader-gen-types.js + checksum: 4987e23b57942c2363b6a6a106e63efae636666cefa348778dfafef2ff72da7343c8587667521cb1d52482827bcd001dd535bdc27065110af56d9c7c176334c9 + languageName: node + linkType: hard + "@hapi/hoek@npm:^9.0.0": version: 9.3.0 resolution: "@hapi/hoek@npm:9.3.0" @@ -920,6 +954,7 @@ __metadata: version: 0.0.0-use.local resolution: "@ovotech/genesys-web-messaging-tester-cli@workspace:packages/genesys-web-messaging-tester-cli" dependencies: + "@google-cloud/aiplatform": 3.10.0 "@ovotech/genesys-web-messaging-tester": ^2.0.7 "@types/humanize-duration": ^3.27.1 "@types/jest": ^29.0.3 @@ -944,7 +979,7 @@ __metadata: listr2: ^5.0.5 openai: 4.0.0-beta.12 prettier: ^3.1.1 - purecloud-platform-client-v2: ^153.0.0 + purecloud-platform-client-v2: ^187.0.0 strip-ansi: ^6.0.1 ts-jest: ^29.1.1 ts-node: ^10.9.1 @@ -1008,6 +1043,79 @@ __metadata: languageName: node linkType: hard +"@protobufjs/aspromise@npm:^1.1.1, @protobufjs/aspromise@npm:^1.1.2": + version: 1.1.2 + resolution: "@protobufjs/aspromise@npm:1.1.2" + checksum: 011fe7ef0826b0fd1a95935a033a3c0fd08483903e1aa8f8b4e0704e3233406abb9ee25350ec0c20bbecb2aad8da0dcea58b392bbd77d6690736f02c143865d2 + languageName: node + linkType: hard + +"@protobufjs/base64@npm:^1.1.2": + version: 1.1.2 + resolution: "@protobufjs/base64@npm:1.1.2" + checksum: 67173ac34de1e242c55da52c2f5bdc65505d82453893f9b51dc74af9fe4c065cf4a657a4538e91b0d4a1a1e0a0642215e31894c31650ff6e3831471061e1ee9e + languageName: node + linkType: hard + +"@protobufjs/codegen@npm:^2.0.4": + version: 2.0.4 + resolution: "@protobufjs/codegen@npm:2.0.4" + checksum: 59240c850b1d3d0b56d8f8098dd04787dcaec5c5bd8de186fa548de86b86076e1c50e80144b90335e705a044edf5bc8b0998548474c2a10a98c7e004a1547e4b + 
languageName: node + linkType: hard + +"@protobufjs/eventemitter@npm:^1.1.0": + version: 1.1.0 + resolution: "@protobufjs/eventemitter@npm:1.1.0" + checksum: 0369163a3d226851682f855f81413cbf166cd98f131edb94a0f67f79e75342d86e89df9d7a1df08ac28be2bc77e0a7f0200526bb6c2a407abbfee1f0262d5fd7 + languageName: node + linkType: hard + +"@protobufjs/fetch@npm:^1.1.0": + version: 1.1.0 + resolution: "@protobufjs/fetch@npm:1.1.0" + dependencies: + "@protobufjs/aspromise": ^1.1.1 + "@protobufjs/inquire": ^1.1.0 + checksum: 3fce7e09eb3f1171dd55a192066450f65324fd5f7cc01a431df01bb00d0a895e6bfb5b0c5561ce157ee1d886349c90703d10a4e11a1a256418ff591b969b3477 + languageName: node + linkType: hard + +"@protobufjs/float@npm:^1.0.2": + version: 1.0.2 + resolution: "@protobufjs/float@npm:1.0.2" + checksum: 5781e1241270b8bd1591d324ca9e3a3128d2f768077a446187a049e36505e91bc4156ed5ac3159c3ce3d2ba3743dbc757b051b2d723eea9cd367bfd54ab29b2f + languageName: node + linkType: hard + +"@protobufjs/inquire@npm:^1.1.0": + version: 1.1.0 + resolution: "@protobufjs/inquire@npm:1.1.0" + checksum: ca06f02eaf65ca36fb7498fc3492b7fc087bfcc85c702bac5b86fad34b692bdce4990e0ef444c1e2aea8c034227bd1f0484be02810d5d7e931c55445555646f4 + languageName: node + linkType: hard + +"@protobufjs/path@npm:^1.1.2": + version: 1.1.2 + resolution: "@protobufjs/path@npm:1.1.2" + checksum: 856eeb532b16a7aac071cacde5c5620df800db4c80cee6dbc56380524736205aae21e5ae47739114bf669ab5e8ba0e767a282ad894f3b5e124197cb9224445ee + languageName: node + linkType: hard + +"@protobufjs/pool@npm:^1.1.0": + version: 1.1.0 + resolution: "@protobufjs/pool@npm:1.1.0" + checksum: d6a34fbbd24f729e2a10ee915b74e1d77d52214de626b921b2d77288bd8f2386808da2315080f2905761527cceffe7ec34c7647bd21a5ae41a25e8212ff79451 + languageName: node + linkType: hard + +"@protobufjs/utf8@npm:^1.1.0": + version: 1.1.0 + resolution: "@protobufjs/utf8@npm:1.1.0" + checksum: 
f9bf3163d13aaa3b6f5e6fbf37a116e094ea021c0e1f2a7ccd0e12a29e2ce08dafba4e8b36e13f8ed7397e1591610ce880ed1289af4d66cf4ace8a36a9557278 + languageName: node + linkType: hard + "@sideway/address@npm:^4.1.3": version: 4.1.4 resolution: "@sideway/address@npm:4.1.4" @@ -1065,6 +1173,13 @@ __metadata: languageName: node linkType: hard +"@tootallnate/once@npm:2": + version: 2.0.0 + resolution: "@tootallnate/once@npm:2.0.0" + checksum: ad87447820dd3f24825d2d947ebc03072b20a42bfc96cbafec16bff8bbda6c1a81fcb0be56d5b21968560c5359a0af4038a68ba150c3e1694fe4c109a063bed8 + languageName: node + linkType: hard + "@tsconfig/node10@npm:^1.0.7": version: 1.0.9 resolution: "@tsconfig/node10@npm:1.0.9" @@ -1134,6 +1249,13 @@ __metadata: languageName: node linkType: hard +"@types/caseless@npm:*": + version: 0.12.5 + resolution: "@types/caseless@npm:0.12.5" + checksum: f6a3628add76d27005495914c9c3873a93536957edaa5b69c63b46fe10b4649a6fecf16b676c1695f46aab851da47ec6047dcf3570fa8d9b6883492ff6d074e0 + languageName: node + linkType: hard + "@types/debug@npm:^4.1.8": version: 4.1.12 resolution: "@types/debug@npm:4.1.12" @@ -1217,6 +1339,13 @@ __metadata: languageName: node linkType: hard +"@types/long@npm:^4.0.0": + version: 4.0.2 + resolution: "@types/long@npm:4.0.2" + checksum: d16cde7240d834cf44ba1eaec49e78ae3180e724cd667052b194a372f350d024cba8dd3f37b0864931683dab09ca935d52f0c4c1687178af5ada9fc85b0635f4 + languageName: node + linkType: hard + "@types/ms@npm:*": version: 0.7.34 resolution: "@types/ms@npm:0.7.34" @@ -1243,6 +1372,15 @@ __metadata: languageName: node linkType: hard +"@types/node@npm:>=12.12.47, @types/node@npm:>=13.7.0": + version: 20.11.8 + resolution: "@types/node@npm:20.11.8" + dependencies: + undici-types: ~5.26.4 + checksum: 3189b51f7436bcd6ef376504b071775730ed68e5ff81b0e121283e5859f8072532cfdf8431fa904d1441577d833f04a2f32fdd119f7390f36f123f619ca4826a + languageName: node + linkType: hard + "@types/node@npm:^14.14.2": version: 14.18.63 resolution: "@types/node@npm:14.18.63" @@ 
-1259,6 +1397,18 @@ __metadata: languageName: node linkType: hard +"@types/request@npm:^2.48.8": + version: 2.48.12 + resolution: "@types/request@npm:2.48.12" + dependencies: + "@types/caseless": "*" + "@types/node": "*" + "@types/tough-cookie": "*" + form-data: ^2.5.0 + checksum: 20dfad0a46b4249bf42f09c51fbd4d02ec6738c5152194b5c7c69bab80b00eae9cc71df4489ffa929d0968d453ef7d0823d1f98871efed563a4fdb57bf0a4c58 + languageName: node + linkType: hard + "@types/semver@npm:^7.3.12": version: 7.5.5 resolution: "@types/semver@npm:7.5.5" @@ -1287,6 +1437,13 @@ __metadata: languageName: node linkType: hard +"@types/tough-cookie@npm:*": + version: 4.0.5 + resolution: "@types/tough-cookie@npm:4.0.5" + checksum: f19409d0190b179331586365912920d192733112a195e870c7f18d20ac8adb7ad0b0ff69dad430dba8bc2be09593453a719cfea92dc3bda19748fd158fe1498d + languageName: node + linkType: hard + "@types/triple-beam@npm:^1.3.2": version: 1.3.5 resolution: "@types/triple-beam@npm:1.3.5" @@ -1561,6 +1718,15 @@ __metadata: languageName: node linkType: hard +"agent-base@npm:6": + version: 6.0.2 + resolution: "agent-base@npm:6.0.2" + dependencies: + debug: 4 + checksum: f52b6872cc96fd5f622071b71ef200e01c7c4c454ee68bc9accca90c98cfb39f2810e3e9aa330435835eedc8c23f4f8a15267f67c6e245d2b33757575bdac49d + languageName: node + linkType: hard + "agent-base@npm:^7.0.2, agent-base@npm:^7.1.0": version: 7.1.0 resolution: "agent-base@npm:7.1.0" @@ -1743,13 +1909,14 @@ __metadata: languageName: node linkType: hard -"axios@npm:^0.27.2": - version: 0.27.2 - resolution: "axios@npm:0.27.2" +"axios@npm:^1.6.5": + version: 1.6.7 + resolution: "axios@npm:1.6.7" dependencies: - follow-redirects: ^1.14.9 + follow-redirects: ^1.15.4 form-data: ^4.0.0 - checksum: 38cb7540465fe8c4102850c4368053c21683af85c5fdf0ea619f9628abbcb59415d1e22ebc8a6390d2bbc9b58a9806c874f139767389c862ec9b772235f06854 + proxy-from-env: ^1.1.0 + checksum: 
87d4d429927d09942771f3b3a6c13580c183e31d7be0ee12f09be6d5655304996bb033d85e54be81606f4e89684df43be7bf52d14becb73a12727bf33298a082 languageName: node linkType: hard @@ -1843,6 +2010,20 @@ __metadata: languageName: node linkType: hard +"base64-js@npm:^1.3.0": + version: 1.5.1 + resolution: "base64-js@npm:1.5.1" + checksum: 669632eb3745404c2f822a18fc3a0122d2f9a7a13f7fb8b5823ee19d1d2ff9ee5b52c53367176ea4ad093c332fd5ab4bd0ebae5a8e27917a4105a4cfc86b1005 + languageName: node + linkType: hard + +"bignumber.js@npm:^9.0.0": + version: 9.1.2 + resolution: "bignumber.js@npm:9.1.2" + checksum: 582c03af77ec9cb0ebd682a373ee6c66475db94a4325f92299621d544aa4bd45cb45fd60001610e94aef8ae98a0905fa538241d9638d4422d57abbeeac6fadaf + languageName: node + linkType: hard + "brace-expansion@npm:^1.1.7": version: 1.1.11 resolution: "brace-expansion@npm:1.1.11" @@ -1903,6 +2084,13 @@ __metadata: languageName: node linkType: hard +"buffer-equal-constant-time@npm:1.0.1": + version: 1.0.1 + resolution: "buffer-equal-constant-time@npm:1.0.1" + checksum: 80bb945f5d782a56f374b292770901065bad21420e34936ecbe949e57724b4a13874f735850dd1cc61f078773c4fb5493a41391e7bda40d1fa388d6bd80daaab + languageName: node + linkType: hard + "buffer-from@npm:^1.0.0": version: 1.1.2 resolution: "buffer-from@npm:1.1.2" @@ -2145,7 +2333,7 @@ __metadata: languageName: node linkType: hard -"combined-stream@npm:^1.0.8": +"combined-stream@npm:^1.0.6, combined-stream@npm:^1.0.8": version: 1.0.8 resolution: "combined-stream@npm:1.0.8" dependencies: @@ -2338,6 +2526,18 @@ __metadata: languageName: node linkType: hard +"duplexify@npm:^4.0.0": + version: 4.1.2 + resolution: "duplexify@npm:4.1.2" + dependencies: + end-of-stream: ^1.4.1 + inherits: ^2.0.3 + readable-stream: ^3.1.1 + stream-shift: ^1.0.0 + checksum: 964376c61c0e92f6ed0694b3ba97c84f199413dc40ab8dfdaef80b7a7f4982fcabf796214e28ed614a5bc1ec45488a29b81e7d46fa3f5ddf65bcb118c20145ad + languageName: node + linkType: hard + "eastasianwidth@npm:^0.2.0": version: 0.2.0 resolution: 
"eastasianwidth@npm:0.2.0" @@ -2345,6 +2545,15 @@ __metadata: languageName: node linkType: hard +"ecdsa-sig-formatter@npm:1.0.11, ecdsa-sig-formatter@npm:^1.0.11": + version: 1.0.11 + resolution: "ecdsa-sig-formatter@npm:1.0.11" + dependencies: + safe-buffer: ^5.0.1 + checksum: 207f9ab1c2669b8e65540bce29506134613dd5f122cccf1e6a560f4d63f2732d427d938f8481df175505aad94583bcb32c688737bb39a6df0625f903d6d93c03 + languageName: node + linkType: hard + "electron-to-chromium@npm:^1.4.535": version: 1.4.581 resolution: "electron-to-chromium@npm:1.4.581" @@ -2389,6 +2598,15 @@ __metadata: languageName: node linkType: hard +"end-of-stream@npm:^1.4.1": + version: 1.4.4 + resolution: "end-of-stream@npm:1.4.4" + dependencies: + once: ^1.4.0 + checksum: 530a5a5a1e517e962854a31693dbb5c0b2fc40b46dad2a56a2deec656ca040631124f4795823acc68238147805f8b021abbe221f4afed5ef3c8e8efc2024908b + languageName: node + linkType: hard + "enquirer@npm:^2.3.5": version: 2.4.1 resolution: "enquirer@npm:2.4.1" @@ -2700,6 +2918,13 @@ __metadata: languageName: node linkType: hard +"extend@npm:^3.0.2": + version: 3.0.2 + resolution: "extend@npm:3.0.2" + checksum: a50a8309ca65ea5d426382ff09f33586527882cf532931cb08ca786ea3146c0553310bda688710ff61d7668eba9f96b923fe1420cdf56a2c3eaf30fcab87b515 + languageName: node + linkType: hard + "fast-deep-equal@npm:^3.1.1, fast-deep-equal@npm:^3.1.3": version: 3.1.3 resolution: "fast-deep-equal@npm:3.1.3" @@ -2819,13 +3044,13 @@ __metadata: languageName: node linkType: hard -"follow-redirects@npm:^1.14.9": - version: 1.15.3 - resolution: "follow-redirects@npm:1.15.3" +"follow-redirects@npm:^1.15.4": + version: 1.15.5 + resolution: "follow-redirects@npm:1.15.5" peerDependenciesMeta: debug: optional: true - checksum: 584da22ec5420c837bd096559ebfb8fe69d82512d5585004e36a3b4a6ef6d5905780e0c74508c7b72f907d1fa2b7bd339e613859e9c304d0dc96af2027fd0231 + checksum: 
5ca49b5ce6f44338cbfc3546823357e7a70813cecc9b7b768158a1d32c1e62e7407c944402a918ea8c38ae2e78266312d617dc68783fac502cbb55e1047b34ec languageName: node linkType: hard @@ -2846,6 +3071,17 @@ __metadata: languageName: node linkType: hard +"form-data@npm:^2.5.0": + version: 2.5.1 + resolution: "form-data@npm:2.5.1" + dependencies: + asynckit: ^0.4.0 + combined-stream: ^1.0.6 + mime-types: ^2.1.12 + checksum: 5134ada56cc246b293a1ac7678dba6830000603a3979cf83ff7b2f21f2e3725202237cfb89e32bcb38a1d35727efbd3c3a22e65b42321e8ade8eec01ce755d08 + languageName: node + linkType: hard + "form-data@npm:^4.0.0": version: 4.0.0 resolution: "form-data@npm:4.0.0" @@ -2925,6 +3161,28 @@ __metadata: languageName: node linkType: hard +"gaxios@npm:^6.0.0, gaxios@npm:^6.1.1": + version: 6.1.1 + resolution: "gaxios@npm:6.1.1" + dependencies: + extend: ^3.0.2 + https-proxy-agent: ^7.0.1 + is-stream: ^2.0.0 + node-fetch: ^2.6.9 + checksum: bb4a4e6c81847b690ee29e01294d2093eb9bb4f9e60bbf81fcc6cd3b274f3c551c50a9bc134e7e7019a9b116eac9d9df6af9f2519c695da7ddd785f36564da72 + languageName: node + linkType: hard + +"gcp-metadata@npm:^6.1.0": + version: 6.1.0 + resolution: "gcp-metadata@npm:6.1.0" + dependencies: + gaxios: ^6.0.0 + json-bigint: ^1.0.0 + checksum: 55de8ae4a6b7664379a093abf7e758ae06e82f244d41bd58d881a470bf34db94c4067ce9e1b425d9455b7705636d5f8baad844e49bb73879c338753ba7785b2b + languageName: node + linkType: hard + "gensync@npm:^1.0.0-beta.2": version: 1.0.0-beta.2 resolution: "gensync@npm:1.0.0-beta.2" @@ -3040,6 +3298,40 @@ __metadata: languageName: node linkType: hard +"google-auth-library@npm:^9.0.0": + version: 9.5.0 + resolution: "google-auth-library@npm:9.5.0" + dependencies: + base64-js: ^1.3.0 + ecdsa-sig-formatter: ^1.0.11 + gaxios: ^6.1.1 + gcp-metadata: ^6.1.0 + gtoken: ^7.0.0 + jws: ^4.0.0 + checksum: a90a414e291b055d4d5c3b9094d7bd69d67b959147e1d2266d7446268ac79561fdf0174f1d7b08e6b425039fb6b71276f9593cc2896e9ebdf63295c979634128 + languageName: node + linkType: hard + 
+"google-gax@npm:^4.0.3": + version: 4.2.1 + resolution: "google-gax@npm:4.2.1" + dependencies: + "@grpc/grpc-js": ~1.9.6 + "@grpc/proto-loader": ^0.7.0 + "@types/long": ^4.0.0 + abort-controller: ^3.0.0 + duplexify: ^4.0.0 + google-auth-library: ^9.0.0 + node-fetch: ^2.6.1 + object-hash: ^3.0.0 + proto3-json-serializer: ^2.0.0 + protobufjs: 7.2.6 + retry-request: ^7.0.0 + uuid: ^9.0.1 + checksum: 63dd96b47d24c67d87dd3d802bf0e2377ec3de7bc5d2124b87490d40be190aa21b9ff3475cf6a48adbcd06a55d6b52b4d4ba707d1eaa3b06ae7cb5473562c86c + languageName: node + linkType: hard + "gopd@npm:^1.0.1": version: 1.0.1 resolution: "gopd@npm:1.0.1" @@ -3063,6 +3355,16 @@ __metadata: languageName: node linkType: hard +"gtoken@npm:^7.0.0": + version: 7.0.1 + resolution: "gtoken@npm:7.0.1" + dependencies: + gaxios: ^6.0.0 + jws: ^4.0.0 + checksum: de1f65ebe77deb90931c29c76408e6bd097ac6f8d0b520164ac13449b39012ea8d710596d5a63ae508b2c9e49ef9f92cd7817d6fc97140668ba2e1ff30e2d418 + languageName: node + linkType: hard + "handlebars@npm:^4.7.7": version: 4.7.8 resolution: "handlebars@npm:4.7.8" @@ -3141,6 +3443,17 @@ __metadata: languageName: node linkType: hard +"http-proxy-agent@npm:^5.0.0": + version: 5.0.0 + resolution: "http-proxy-agent@npm:5.0.0" + dependencies: + "@tootallnate/once": 2 + agent-base: 6 + debug: 4 + checksum: e2ee1ff1656a131953839b2a19cd1f3a52d97c25ba87bd2559af6ae87114abf60971e498021f9b73f9fd78aea8876d1fb0d4656aac8a03c6caa9fc175f22b786 + languageName: node + linkType: hard + "http-proxy-agent@npm:^7.0.0": version: 7.0.0 resolution: "http-proxy-agent@npm:7.0.0" @@ -3151,6 +3464,16 @@ __metadata: languageName: node linkType: hard +"https-proxy-agent@npm:^5.0.0": + version: 5.0.1 + resolution: "https-proxy-agent@npm:5.0.1" + dependencies: + agent-base: 6 + debug: 4 + checksum: 571fccdf38184f05943e12d37d6ce38197becdd69e58d03f43637f7fa1269cf303a7d228aa27e5b27bbd3af8f09fd938e1c91dcfefff2df7ba77c20ed8dfc765 + languageName: node + linkType: hard + "https-proxy-agent@npm:^7.0.1": 
version: 7.0.2 resolution: "https-proxy-agent@npm:7.0.2" @@ -3947,6 +4270,15 @@ __metadata: languageName: node linkType: hard +"json-bigint@npm:^1.0.0": + version: 1.0.0 + resolution: "json-bigint@npm:1.0.0" + dependencies: + bignumber.js: ^9.0.0 + checksum: c67bb93ccb3c291e60eb4b62931403e378906aab113ec1c2a8dd0f9a7f065ad6fd9713d627b732abefae2e244ac9ce1721c7a3142b2979532f12b258634ce6f6 + languageName: node + linkType: hard + "json-buffer@npm:3.0.1": version: 3.0.1 resolution: "json-buffer@npm:3.0.1" @@ -3998,6 +4330,27 @@ __metadata: languageName: node linkType: hard +"jwa@npm:^2.0.0": + version: 2.0.0 + resolution: "jwa@npm:2.0.0" + dependencies: + buffer-equal-constant-time: 1.0.1 + ecdsa-sig-formatter: 1.0.11 + safe-buffer: ^5.0.1 + checksum: 8f00b71ad5fe94cb55006d0d19202f8f56889109caada2f7eeb63ca81755769ce87f4f48101967f398462e3b8ae4faebfbd5a0269cb755dead5d63c77ba4d2f1 + languageName: node + linkType: hard + +"jws@npm:^4.0.0": + version: 4.0.0 + resolution: "jws@npm:4.0.0" + dependencies: + jwa: ^2.0.0 + safe-buffer: ^5.0.1 + checksum: d68d07aa6d1b8cb35c363a9bd2b48f15064d342a5d9dc18a250dbbce8dc06bd7e4792516c50baa16b8d14f61167c19e851fd7f66b59ecc68b7f6a013759765f7 + languageName: node + linkType: hard + "keyv@npm:^4.5.3": version: 4.5.4 resolution: "keyv@npm:4.5.4" @@ -4075,6 +4428,13 @@ __metadata: languageName: node linkType: hard +"lodash.camelcase@npm:^4.3.0": + version: 4.3.0 + resolution: "lodash.camelcase@npm:4.3.0" + checksum: cb9227612f71b83e42de93eccf1232feeb25e705bdb19ba26c04f91e885bfd3dd5c517c4a97137658190581d3493ea3973072ca010aab7e301046d90740393d1 + languageName: node + linkType: hard + "lodash.memoize@npm:4.x": version: 4.1.2 resolution: "lodash.memoize@npm:4.1.2" @@ -4122,6 +4482,20 @@ __metadata: languageName: node linkType: hard +"long@npm:^5.0.0": + version: 5.2.3 + resolution: "long@npm:5.2.3" + checksum: 885ede7c3de4facccbd2cacc6168bae3a02c3e836159ea4252c87b6e34d40af819824b2d4edce330bfb5c4d6e8ce3ec5864bdcf9473fa1f53a4f8225860e5897 + 
languageName: node + linkType: hard + +"long@npm:~1.1.2": + version: 1.1.5 + resolution: "long@npm:1.1.5" + checksum: 3bca2d8a42ff93001c203b8421445b307d68d283532afcbdacdde7d5a6c9ecf45f3338c92d3774d0d627d847a6c7bd6aa2944abd1f43f40f6a3d4a8acd807528 + languageName: node + linkType: hard + "lru-cache@npm:^10.0.1, lru-cache@npm:^9.1.1 || ^10.0.0": version: 10.0.2 resolution: "lru-cache@npm:10.0.2" @@ -4438,7 +4812,7 @@ __metadata: languageName: node linkType: hard -"node-fetch@npm:^2.6.7": +"node-fetch@npm:^2.6.1, node-fetch@npm:^2.6.7, node-fetch@npm:^2.6.9": version: 2.7.0 resolution: "node-fetch@npm:2.7.0" dependencies: @@ -4513,6 +4887,13 @@ __metadata: languageName: node linkType: hard +"object-hash@npm:^3.0.0": + version: 3.0.0 + resolution: "object-hash@npm:3.0.0" + checksum: 80b4904bb3857c52cc1bfd0b52c0352532ca12ed3b8a6ff06a90cd209dfda1b95cee059a7625eb9da29537027f68ac4619363491eedb2f5d3dddbba97494fd6c + languageName: node + linkType: hard + "object-inspect@npm:^1.9.0": version: 1.13.1 resolution: "object-inspect@npm:1.13.1" @@ -4520,7 +4901,7 @@ __metadata: languageName: node linkType: hard -"once@npm:^1.3.0": +"once@npm:^1.3.0, once@npm:^1.4.0": version: 1.4.0 resolution: "once@npm:1.4.0" dependencies: @@ -4788,6 +5169,51 @@ __metadata: languageName: node linkType: hard +"proto3-json-serializer@npm:^2.0.0": + version: 2.0.1 + resolution: "proto3-json-serializer@npm:2.0.1" + dependencies: + protobufjs: ^7.2.5 + checksum: dfdb30f1453af356224c60c7106f9211167f142c1310696a24beb7d69c498ad15e6e0cc64e5a9585d1a24787a0be59a0662b6e673727a715f36622dc3a31abf5 + languageName: node + linkType: hard + +"protobuf.js@npm:^1.1.2": + version: 1.1.2 + resolution: "protobuf.js@npm:1.1.2" + dependencies: + long: ~1.1.2 + checksum: 1bebfb3ff540de0467ea45c77ba37f7884bd41f6eefb335f8f8195ea2833cb78c9186f0b1c0110c384c00a34737e05fdd43cd5664b61168b2818404b1510a11a + languageName: node + linkType: hard + +"protobufjs@npm:7.2.6, protobufjs@npm:^7.2.4, protobufjs@npm:^7.2.5": + version: 7.2.6 
+ resolution: "protobufjs@npm:7.2.6" + dependencies: + "@protobufjs/aspromise": ^1.1.2 + "@protobufjs/base64": ^1.1.2 + "@protobufjs/codegen": ^2.0.4 + "@protobufjs/eventemitter": ^1.1.0 + "@protobufjs/fetch": ^1.1.0 + "@protobufjs/float": ^1.0.2 + "@protobufjs/inquire": ^1.1.0 + "@protobufjs/path": ^1.1.2 + "@protobufjs/pool": ^1.1.0 + "@protobufjs/utf8": ^1.1.0 + "@types/node": ">=13.7.0" + long: ^5.0.0 + checksum: 3c62e48f7d50017ac3b0dcd2a58e617cf858f9fba56a488bd48b9aa3482893a75540052dbcb3c12dfbaab42b1d04964611175faf06bdadcd33a4ebac982a511e + languageName: node + linkType: hard + +"proxy-from-env@npm:^1.1.0": + version: 1.1.0 + resolution: "proxy-from-env@npm:1.1.0" + checksum: ed7fcc2ba0a33404958e34d95d18638249a68c430e30fcb6c478497d72739ba64ce9810a24f53a7d921d0c065e5b78e3822759800698167256b04659366ca4d4 + languageName: node + linkType: hard + "punycode@npm:^2.1.0": version: 2.3.1 resolution: "punycode@npm:2.3.1" @@ -4802,15 +5228,15 @@ __metadata: languageName: node linkType: hard -"purecloud-platform-client-v2@npm:^153.0.0": - version: 153.0.0 - resolution: "purecloud-platform-client-v2@npm:153.0.0" +"purecloud-platform-client-v2@npm:^187.0.0": + version: 187.0.0 + resolution: "purecloud-platform-client-v2@npm:187.0.0" dependencies: - axios: ^0.27.2 + axios: ^1.6.5 configparser: ^0.3.9 qs: ^6.10.3 winston: ^3.6.0 - checksum: f1afc3c3275b966845ba2df1a22512e2d04ba7de28c67dd2e9cc9ddf60f591db8e0b11c7a843690def129f5a8ee4695faaf3be55892e5bb8b87777af23412020 + checksum: 2ea1931adfa02759c93578a92c89e4c1e9d0921f52ddbd7c886ab97a05f55b9fe80db5d76e042f386fa649f933a7c7ab7793213decb79ba92146dddfb83b041b languageName: node linkType: hard @@ -4837,7 +5263,7 @@ __metadata: languageName: node linkType: hard -"readable-stream@npm:^3.4.0, readable-stream@npm:^3.6.0": +"readable-stream@npm:^3.1.1, readable-stream@npm:^3.4.0, readable-stream@npm:^3.6.0": version: 3.6.2 resolution: "readable-stream@npm:3.6.2" dependencies: @@ -4935,6 +5361,17 @@ __metadata: languageName: node 
linkType: hard +"retry-request@npm:^7.0.0": + version: 7.0.2 + resolution: "retry-request@npm:7.0.2" + dependencies: + "@types/request": ^2.48.8 + extend: ^3.0.2 + teeny-request: ^9.0.0 + checksum: 2d7307422333f548e5f40524978a344b62193714f6209c4f6a41057ae279804eb9bc8e0a277791e7b6f2d5d76068bdaca8590662a909cf1e6cfc3ab789e4c6b6 + languageName: node + linkType: hard + "retry@npm:^0.12.0": version: 0.12.0 resolution: "retry@npm:0.12.0" @@ -4985,7 +5422,7 @@ __metadata: languageName: node linkType: hard -"safe-buffer@npm:~5.2.0": +"safe-buffer@npm:^5.0.1, safe-buffer@npm:~5.2.0": version: 5.2.1 resolution: "safe-buffer@npm:5.2.1" checksum: b99c4b41fdd67a6aaf280fcd05e9ffb0813654894223afb78a31f14a19ad220bba8aba1cb14eddce1fcfb037155fe6de4e861784eb434f7d11ed58d1e70dd491 @@ -5213,6 +5650,22 @@ __metadata: languageName: node linkType: hard +"stream-events@npm:^1.0.5": + version: 1.0.5 + resolution: "stream-events@npm:1.0.5" + dependencies: + stubs: ^3.0.0 + checksum: 969ce82e34bfbef5734629cc06f9d7f3705a9ceb8fcd6a526332f9159f1f8bbfdb1a453f3ced0b728083454f7706adbbe8428bceb788a0287ca48ba2642dc3fc + languageName: node + linkType: hard + +"stream-shift@npm:^1.0.0": + version: 1.0.3 + resolution: "stream-shift@npm:1.0.3" + checksum: a24c0a3f66a8f9024bd1d579a533a53be283b4475d4e6b4b3211b964031447bdf6532dd1f3c2b0ad66752554391b7c62bd7ca4559193381f766534e723d50242 + languageName: node + linkType: hard + "string-length@npm:^4.0.1": version: 4.0.2 resolution: "string-length@npm:4.0.2" @@ -5293,6 +5746,13 @@ __metadata: languageName: node linkType: hard +"stubs@npm:^3.0.0": + version: 3.0.0 + resolution: "stubs@npm:3.0.0" + checksum: dec7b82186e3743317616235c59bfb53284acc312cb9f4c3e97e2205c67a5c158b0ca89db5927e52351582e90a2672822eeaec9db396e23e56893d2a8676e024 + languageName: node + linkType: hard + "supports-color@npm:^5.3.0": version: 5.5.0 resolution: "supports-color@npm:5.5.0" @@ -5364,6 +5824,19 @@ __metadata: languageName: node linkType: hard +"teeny-request@npm:^9.0.0": + version: 
9.0.0 + resolution: "teeny-request@npm:9.0.0" + dependencies: + http-proxy-agent: ^5.0.0 + https-proxy-agent: ^5.0.0 + node-fetch: ^2.6.9 + stream-events: ^1.0.5 + uuid: ^9.0.0 + checksum: 9cb0ad83f9ca6ce6515b3109cbb30ceb2533cdeab8e41c3a0de89f509bd92c5a9aabd27b3adf7f3e49516e106a358859b19fa4928a1937a4ab95809ccb7d52eb + languageName: node + linkType: hard + "test-exclude@npm:^6.0.0": version: 6.0.0 resolution: "test-exclude@npm:6.0.0" @@ -5679,7 +6152,7 @@ __metadata: languageName: node linkType: hard -"uuid@npm:^9.0.0": +"uuid@npm:^9.0.0, uuid@npm:^9.0.1": version: 9.0.1 resolution: "uuid@npm:9.0.1" bin: @@ -5912,7 +6385,7 @@ __metadata: languageName: node linkType: hard -"yargs@npm:^17.3.1": +"yargs@npm:^17.3.1, yargs@npm:^17.7.2": version: 17.7.2 resolution: "yargs@npm:17.7.2" dependencies: