SDK regeneration
fern-api[bot] committed Jan 23, 2025
1 parent 270694e commit a708146
Showing 17 changed files with 312 additions and 88 deletions.
77 changes: 69 additions & 8 deletions .mock/definition/empathic-voice/__package__.yml
@@ -19,6 +19,7 @@ types:
docs: >-
Type of Tool. Either `BUILTIN` for natively implemented tools, like web
search, or `FUNCTION` for user-defined tools.
inline: true
source:
openapi: stenographer-openapi.json
ReturnUserDefinedToolVersionType:
@@ -28,6 +29,7 @@
docs: >-
Versioning method for a Tool. Either `FIXED` for using a fixed version
number or `LATEST` for auto-updating to the latest version.
inline: true
source:
openapi: stenographer-openapi.json
ReturnUserDefinedTool:
@@ -106,6 +108,7 @@
docs: >-
Versioning method for a Prompt. Either `FIXED` for using a fixed version
number or `LATEST` for auto-updating to the latest version.
inline: true
source:
openapi: stenographer-openapi.json
ReturnPrompt:
@@ -178,6 +181,7 @@
- STELLA
- SUNNY
docs: Specifies the base voice used to create the Custom Voice.
inline: true
source:
openapi: stenographer-openapi.json
PostedCustomVoiceParameters:
@@ -337,6 +341,7 @@ types:
- STELLA
- SUNNY
docs: The base voice used to create the Custom Voice.
inline: true
source:
openapi: stenographer-openapi.json
ReturnCustomVoiceParameters:
@@ -513,6 +518,7 @@ types:
For more information, see our guide on [using built-in
tools](/docs/empathic-voice-interface-evi/tool-use#using-built-in-tools).
inline: true
source:
openapi: stenographer-openapi.json
PostedBuiltinTool:
@@ -642,6 +648,7 @@ types:
- GROQ
- GOOGLE
docs: The provider of the supplemental language model.
inline: true
source:
openapi: stenographer-openapi.json
PostedLanguageModelModelResource:
Expand Down Expand Up @@ -698,6 +705,7 @@ types:
name: AccountsFireworksModelsLlamaV3P18BInstruct
- ellm
docs: String that specifies the language model to use with `model_provider`.
inline: true
source:
openapi: stenographer-openapi.json
PostedLanguageModel:
Expand Down Expand Up @@ -847,6 +855,7 @@ types:
docs: >-
The provider of the voice to use. Supported values are `HUME_AI` and
`CUSTOM_VOICE`.
inline: true
source:
openapi: stenographer-openapi.json
PostedVoice:
Expand Down Expand Up @@ -880,16 +889,26 @@ types:
- chat_started
- chat_ended
docs: Events this URL is subscribed to
inline: true
source:
openapi: stenographer-openapi.json
PostedWebhookSpec:
docs: URL and settings for a specific webhook to be posted to the server
properties:
url:
type: string
docs: URL to send the webhook to
docs: >-
The URL where event payloads will be sent. This must be a valid https
URL to ensure secure communication. The server at this URL must accept
POST requests with a JSON payload.
events:
docs: Events this URL is subscribed to
docs: >-
The list of events the specified URL is subscribed to.
See our [webhooks
guide](/docs/empathic-voice-interface-evi/configuration#supported-events)
for more information on supported events.
type: list<PostedWebhookEventType>
source:
openapi: stenographer-openapi.json
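
The expanded `url` and `events` docs pin down the delivery contract for `PostedWebhookSpec`: the endpoint must be a valid HTTPS URL and must accept POST requests carrying a JSON payload. As a rough sketch of a compliant receiver (not part of this commit; the route path, port, and response body are arbitrary choices):

```python
# Minimal webhook receiver matching the PostedWebhookSpec contract:
# accepts POST requests with a JSON body. In production this must be
# exposed behind a valid HTTPS URL.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/hume-webhook", methods=["POST"])  # path is an arbitrary example
def hume_webhook():
    event = request.get_json(force=True)  # JSON payload sent by EVI
    print(event.get("event_name"), event.get("chat_id"))
    return jsonify({"status": "received"}), 200  # 2xx signals successful delivery

if __name__ == "__main__":
    app.run(port=8080)
```
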
@@ -900,6 +919,7 @@
docs: >-
Type of Tool. Either `BUILTIN` for natively implemented tools, like web
search, or `FUNCTION` for user-defined tools.
inline: true
source:
openapi: stenographer-openapi.json
ReturnBuiltinTool:
@@ -1083,6 +1103,7 @@ types:
- GROQ
- GOOGLE
docs: The provider of the supplemental language model.
inline: true
source:
openapi: stenographer-openapi.json
ReturnLanguageModelModelResource:
@@ -1143,6 +1164,7 @@ types:
name: AccountsFireworksModelsLlamaV3P18BInstruct
- ellm
docs: String that specifies the language model to use with `model_provider`.
inline: true
source:
openapi: stenographer-openapi.json
ReturnLanguageModel:
@@ -1224,6 +1246,7 @@ types:
docs: >-
The provider of the voice to use. Supported values are `HUME_AI` and
`CUSTOM_VOICE`.
inline: true
source:
openapi: stenographer-openapi.json
ReturnVoice:
@@ -1251,16 +1274,26 @@ types:
- chat_started
- chat_ended
docs: Events this URL is subscribed to
inline: true
source:
openapi: stenographer-openapi.json
ReturnWebhookSpec:
docs: Collection of webhook URL endpoints to be returned from the server
properties:
url:
type: string
docs: Webhook URL to send the event updates to
docs: >-
The URL where event payloads will be sent. This must be a valid https
URL to ensure secure communication. The server at this URL must accept
POST requests with a JSON payload.
events:
docs: Events this URL is subscribed to
docs: >-
The list of events the specified URL is subscribed to.
See our [webhooks
guide](/docs/empathic-voice-interface-evi/configuration#supported-events)
for more information on supported events.
type: list<ReturnWebhookEventType>
source:
openapi: stenographer-openapi.json
@@ -1407,6 +1440,7 @@ types:
- `ERROR`: The chat ended unexpectedly due to an error.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChat:
@@ -1499,6 +1533,7 @@ types:
first) or `DESC` for descending order (reverse-chronological, with the
newest records first). This value corresponds to the `ascending_order`
query parameter used in the request.
inline: true
source:
openapi: stenographer-openapi.json
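
The `ASC`/`DESC` value simply echoes the `ascending_order` boolean supplied on the request. A rough illustration (the endpoint path, auth header, and response field name are assumptions, not taken from this diff):

```python
# List chats and read back the pagination direction. Endpoint path, auth
# header, and the pagination_direction field name are assumed for illustration.
import requests  # pip install requests

resp = requests.get(
    "https://api.hume.ai/v0/evi/chats",          # assumed endpoint path
    headers={"X-Hume-Api-Key": "your-api-key"},  # assumed auth header
    params={"page_number": 0, "page_size": 10, "ascending_order": "true"},
    timeout=30,
)
resp.raise_for_status()
page = resp.json()
print(page["pagination_direction"])  # expected "ASC" when ascending_order=true
```
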
ReturnPagedChats:
@@ -1562,6 +1597,7 @@ types:
- `TOOL`: The function calling mechanism.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatEventType:
@@ -1594,6 +1630,7 @@ types:
- `FUNCTION_CALL_RESPONSE`: Contains the tool response.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatEvent:
@@ -1702,6 +1739,7 @@ types:
- `ERROR`: The chat ended unexpectedly due to an error.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatPagedEventsPaginationDirection:
@@ -1717,6 +1755,7 @@
first) or `DESC` for descending order (reverse-chronological, with the
newest records first). This value corresponds to the `ascending_order`
query parameter used in the request.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatPagedEvents:
@@ -1830,6 +1869,7 @@ types:
- `CANCELED`: The reconstruction job has been canceled.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatAudioReconstruction:
@@ -1929,6 +1969,7 @@ types:
first) or `DESC` for descending order (reverse-chronological, with the
newest records first). This value corresponds to the `ascending_order`
query parameter used in the request.
inline: true
source:
openapi: stenographer-openapi.json
ReturnPagedChatGroups:
@@ -1984,6 +2025,7 @@ types:
first) or `DESC` for descending order (reverse-chronological, with the
newest records first). This value corresponds to the `ascending_order`
query parameter used in the request.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatGroupPagedChats:
@@ -2062,6 +2104,7 @@ types:
first) or `DESC` for descending order (reverse-chronological, with the
newest records first). This value corresponds to the `ascending_order`
query parameter used in the request.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatGroupPagedEvents:
@@ -2122,6 +2165,7 @@ types:
first) or `DESC` for descending order (reverse-chronological, with the
newest records first). This value corresponds to the `ascending_order`
query parameter used in the request.
inline: true
source:
openapi: stenographer-openapi.json
ReturnChatGroupPagedAudioReconstructions:
@@ -3195,12 +3239,29 @@ types:
event_name:
type: optional<literal<"chat_ended">>
docs: Always `chat_ended`.
end_time: integer
duration_seconds: integer
end_time:
type: integer
docs: Unix timestamp (in milliseconds) indicating when the session ended.
duration_seconds:
type: integer
docs: Total duration of the session in seconds.
end_reason:
type: WebhookEventChatStatus
caller_number: optional<string>
custom_session_id: optional<string>
docs: Reason for the session's termination.
caller_number:
type: optional<string>
docs: >-
Phone number of the caller in E.164 format (e.g., `+12223333333`).
This field is included only if the Chat was created via the [Twilio
phone calling](/docs/empathic-voice-interface-evi/phone-calling)
integration.
custom_session_id:
type: optional<string>
docs: >-
User-defined session ID. Relevant only when employing a [custom
language
model](/docs/empathic-voice-interface-evi/custom-language-model) in
the EVI Config.
extends:
- WebhookEventBase
source:
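
Taken together, the newly documented `chat_ended` fields are enough to reconstruct the session window on the receiving side. A small sketch using the example values from `chatWebhooks.yml` below (illustrative only):

```python
# Interpret WebhookEventChatEnded fields: end_time is a Unix timestamp in
# milliseconds, duration_seconds is the total session length in seconds.
from datetime import datetime, timedelta, timezone

payload = {
    "event_name": "chat_ended",
    "end_time": 1716244958546,      # ms since the Unix epoch
    "duration_seconds": 180,
    "end_reason": "USER_ENDED",
    "caller_number": None,          # present only for Twilio phone calls
    "custom_session_id": None,      # set only when a custom language model is used
}

ended_at = datetime.fromtimestamp(payload["end_time"] / 1000, tz=timezone.utc)
started_at = ended_at - timedelta(seconds=payload["duration_seconds"])
print(f"chat ran {started_at.isoformat()} -> {ended_at.isoformat()} ({payload['end_reason']})")
```
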
3 changes: 3 additions & 0 deletions .mock/definition/empathic-voice/chat.yml
@@ -76,6 +76,7 @@ channel:
list of all available chat groups.
verbose_transcription:
type: optional<boolean>
default: false
docs: >-
A flag to enable verbose transcription. Set this query parameter to
`true` to have unfinalized user transcripts be sent to the client as
@@ -86,6 +87,7 @@
denotes whether the message is "interim" or "final."
access_token:
type: optional<string>
default: ''
docs: >-
Access token used for authenticating the client. If not provided, an
`api_key` must be provided to authenticate.
@@ -100,6 +102,7 @@
Guide](/docs/introduction/api-key#authentication-strategies).
api_key:
type: optional<string>
default: ''
docs: >-
API key used for authenticating the client. If not provided, an
`access_token` must be provided to authenticate.
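
With the new defaults, `verbose_transcription` is off and both credentials default to empty strings, so a client must pass either `api_key` or `access_token` explicitly. A rough connection sketch (the WebSocket URL and the `interim` field are assumptions for illustration; they are not defined in this diff):

```python
# Connect to the EVI chat channel with the query parameters defined above.
# The endpoint URL below is an assumption; consult the EVI docs for the
# canonical value. Authenticate with either api_key or access_token.
import asyncio
import json

import websockets  # pip install websockets

API_KEY = "your-api-key"  # placeholder credential

async def main() -> None:
    uri = (
        "wss://api.hume.ai/v0/evi/chat"  # assumed endpoint path
        f"?api_key={API_KEY}"
        "&verbose_transcription=true"    # opt in to interim transcripts
    )
    async with websockets.connect(uri) as ws:
        async for raw in ws:
            message = json.loads(raw)
            # With verbose transcription enabled, user messages carry a flag
            # denoting whether the transcript is interim or final (field name
            # assumed here).
            print(message.get("type"), message.get("interim"))

asyncio.run(main())
```
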
37 changes: 24 additions & 13 deletions .mock/definition/empathic-voice/chatWebhooks.yml
@@ -1,21 +1,32 @@
imports:
root: __package__.yml
webhooks:
chatWebhook:
chatEnded:
method: POST
display-name: Chat Webhook
display-name: Chat Ended
headers: {}
payload: root.WebhookEvent
payload: root.WebhookEventChatEnded
examples:
- payload:
chat_group_id: chat_group_id
chat_id: chat_id
start_time: 1
chat_group_id: 9fc18597-3567-42d5-94d6-935bde84bf2f
chat_id: 470a49f6-1dec-4afe-8b61-035d3b2d63b0
config_id: 1b60e1a0-cc59-424a-8d2c-189d354db3f3
event_name: chat_ended
end_time: 1716244958546
duration_seconds: 180
end_reason: USER_ENDED
docs: Sent when an EVI chat ends.
chatStarted:
method: POST
display-name: Chat Started
headers: {}
payload: root.WebhookEventChatStarted
examples:
- payload:
chat_group_id: 9fc18597-3567-42d5-94d6-935bde84bf2f
chat_id: 470a49f6-1dec-4afe-8b61-035d3b2d63b0
config_id: 1b60e1a0-cc59-424a-8d2c-189d354db3f3
event_name: chat_started
start_time: 1716244940648
chat_start_type: new_chat_group
docs: >-
Webhook events are JSON payloads to your server during an EVI chat. You
can subscribe to specific events, and set which URLs should be notified in
the
[Config](/reference/empathic-voice-interface-evi/configs/create-config#request.body.webhooks)
resource. Read the [Webhook
Guide](/docs/empathic-voice-interface-evi/webhooks) for more information.
docs: Sent when an EVI chat is started.
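
Since the single `chatWebhook` definition is now split into `chatStarted` and `chatEnded`, a receiving server only needs to branch on `event_name`. A sketch of such a dispatcher, assuming the POST body has already been parsed (as in the receiver shown earlier):

```python
# Branch on event_name using the two payload shapes defined above; field
# names come directly from the chat_started / chat_ended examples.
def handle_evi_event(event: dict) -> None:
    name = event.get("event_name")
    if name == "chat_started":
        print(
            f"chat {event['chat_id']} started in group {event['chat_group_id']} "
            f"({event['chat_start_type']}) at {event['start_time']}"
        )
    elif name == "chat_ended":
        print(
            f"chat {event['chat_id']} ended after {event['duration_seconds']}s "
            f"(reason: {event['end_reason']})"
        )
    else:
        print(f"unhandled event: {name}")
```
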
2 changes: 2 additions & 0 deletions .mock/definition/expression-measurement/batch/__package__.yml
@@ -15,6 +15,7 @@ service:
query-parameters:
limit:
type: optional<integer>
default: 50
docs: The maximum number of jobs to include in the response.
status:
type: optional<Status>
@@ -43,6 +44,7 @@ service:
`timestamp_ms`.
timestamp_ms:
type: optional<long>
default: 1704319392247
docs: |-
Provide a timestamp in milliseconds to filter jobs.
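
The new defaults mean an unfiltered call returns up to 50 jobs, and `timestamp_ms` now documents a concrete example value. A rough request sketch (the endpoint path and auth header are assumptions based on Hume's public REST conventions, not part of this diff):

```python
# List expression-measurement batch jobs using the query parameters above.
import requests  # pip install requests

resp = requests.get(
    "https://api.hume.ai/v0/batch/jobs",          # assumed endpoint path
    headers={"X-Hume-Api-Key": "your-api-key"},   # assumed auth header
    params={
        "limit": 50,                    # matches the new default
        "status": "COMPLETED",          # optional Status filter (example value)
        "timestamp_ms": 1704319392247,  # filter jobs by millisecond timestamp
    },
    timeout=30,
)
resp.raise_for_status()
for job in resp.json():  # response is expected to be a JSON array of jobs
    print(job)
```
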
2 changes: 1 addition & 1 deletion .mock/fern.config.json
@@ -1,4 +1,4 @@
{
"organization" : "hume",
"version" : "0.45.1"
"version" : "0.50.6"
}