- Feature Index (for more in-depth documentation about specific WebRTC features)
- GenesysCloudWebrtcSdk
- GenesysCloudMediaSession
- SdkError Class
- WebRTC SoftPhone
- WebRTC Screen Share
- WebRTC Video Conferencing
- WebRTC Media (media, devices, and permissions support)
To use the SDK with OAuth scopes, you will need the following scopes enabled:
- authorization
- conversations
- organizations
- notifications
These can be set in Genesys Cloud > Admin > Integrations > OAuth > Scope. Note that the scope options are not available when the "Grant Type" option is set to "Client Credentials".
This is a very basic usage of the WebRTC SDK. Be sure to read through the documentation for more advanced usage.
For an authenticated user, a valid accessToken is required on construction. The following example is for an authenticated user (for an unauthenticated user example, see WebRTC Screen Share).
import { GenesysCloudWebrtcSdk } from 'genesys-cloud-webrtc-sdk';
const sdk = new GenesysCloudWebrtcSdk({
accessToken: 'your-access-token'
});
// Optionally set up some SDK event listeners (not an exhaustive list)
sdk.on('sdkError', (event) => { /* do stuff with the error */ });
sdk.on('pendingSession', (event) => { /* pending session incoming */ });
sdk.on('sessionStarted', (event) => { /* a session just started */ });
sdk.initialize().then(() => {
// the web socket has connected and the SDK is ready to use
});
You can also access the latest version (or a specific version) via the CDN. Example:
<!-- latest version -->
<script src="https://sdk-cdn.mypurecloud.com/webrtc-sdk/genesys-cloud-webrtc-sdk.bundle.min.js"></script>
<!-- or specified version -->
<script src="https://sdk-cdn.mypurecloud.com/webrtc-sdk/4.0.1/genesys-cloud-webrtc-sdk.bundle.min.js"></script>
<!-- then access the sdk via the window object -->
<script>
const sdk = new window.GenesysCloudWebrtcSdk(...); // same usage as above
</script>
constructor(config: ISdkConfig);
The ISdkConfig interface definition:
interface ISdkConfig {
environment?: string;
accessToken?: string;
organizationId?: string;
wsHost?: string;
autoConnectSessions?: boolean;
jidResource?: string;
disableAutoAnswer?: boolean;
logLevel?: LogLevels;
logger?: ILogger;
optOutOfTelemetry?: boolean;
allowedSessionTypes?: SessionTypes[];
defaults?: {
audioStream?: MediaStream;
audioElement?: HTMLAudioElement;
videoElement?: HTMLVideoElement;
videoResolution?: {
width: ConstrainULong;
height: ConstrainULong;
};
videoDeviceId?: string | null;
audioDeviceId?: string | null;
audioVolume?: number;
outputDeviceId?: string | null;
micAutoGainControl?: ConstrainBoolean;
micEchoCancellation?: ConstrainBoolean;
micNoiseSuppression?: ConstrainBoolean;
monitorMicVolume?: boolean;
};
}
environment?: string;
Domain to use.
Optional: default is mypurecloud.com.
Available Options:
'mypurecloud.com',
'mypurecloud.com.au',
'mypurecloud.jp',
'mypurecloud.de',
'mypurecloud.ie',
'usw2.pure.cloud',
'cac1.pure.cloud',
'euw2.pure.cloud',
'apne2.pure.cloud'
accessToken?: string;
Access token received from authentication. Required for authenticated users (aka agents).
organizationId?: string;
Organization ID (aka the GUID). Required for unauthenticated users (aka guest).
wsHost?: string;
WebSocket host.
Optional: defaults to wss://streaming.${config.environment}
autoConnectSessions?: boolean;
Optional: default true
Auto connect incoming softphone sessions (i.e. sessions coming from sdk.on('sessionStarted', (evt) => { ... })). If set to false, the session will need to be manually accepted using sdk.acceptSession({ sessionId }).
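For example, a minimal sketch of handling sessions manually when autoConnectSessions is false:
const sdk = new GenesysCloudWebrtcSdk({
  accessToken: 'your-access-token',
  autoConnectSessions: false
});

sdk.on('sessionStarted', (session) => {
  // auto-connect is disabled, so accept the session explicitly
  sdk.acceptSession({ sessionId: session.id });
});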
jidResource?: string;
Optional: default undefined
Specify the resource portion of the streaming-client jid. This is likely only useful for internal purposes. This resource jid should be somewhat unique. Example: setting the jidResource to mediahelper_1d43c477-ab34-456b-91f5-c6a993c29f25 would result in a full jid that looks like <user_bare_jid>/mediahelper_1d43c477-ab34-456b-91f5-c6a993c29f25.
The purpose of this property is simply a way to identify certain types of clients.
disableAutoAnswer?: boolean;
Optional: default false
Disable auto answering softphone calls. By default, softphone calls will respect the autoAnswer flag passed in on the pendingSession session object. autoAnswer is always true for outbound calls and can also be set in the user's phone settings.
logLevel?: LogLevels;
Optional: defaults to 'info'.
Desired log level. Available options:
type LogLevels = 'log' | 'debug' | 'info' | 'warn' | 'error'
logger?: ILogger;
Logger to use. Must implement the ILogger interface (see WebRTC properties for the ILogger definition).
Defaults to GenesysCloudClientLogger, which sends logs to the server (unless optOutOfTelemetry is true) and outputs them in the console.
optOutOfTelemetry?: boolean;
Optional: default false.
Opt out of sending logs to the server. Logs are only sent to the server if the default GenesysCloudClientLogger is used. The default logger will send logs to the server unless this option is true.
allowedSessionTypes?: SessionTypes[];
Optional: defaults to all session types.
Allowed session types the sdk instance should handle. Only session types listed here will be handled. Available options passed in as an array:
enum SessionTypes {
softphone = 'softphone',
collaborateVideo = 'collaborateVideo',
acdScreenShare = 'screenShare'
}
example:
import { SessionTypes } from 'genesys-cloud-webrtc-sdk';
const sdk = new GenesysCloudWebrtcSdk({
allowedSessionTypes: [SessionTypes.collaborateVideo, SessionTypes.softphone],
// other config options
});
Optional. Defaults for various SDK functionality. See individual options for defaults and usage.
defaults?: {
audioStream?: MediaStream;
audioElement?: HTMLAudioElement;
videoElement?: HTMLVideoElement;
videoResolution?: {
width: ConstrainULong,
height: ConstrainULong
};
videoDeviceId?: string | null;
audioDeviceId?: string | null;
audioVolume?: number;
micAutoGainControl?: ConstrainBoolean;
micEchoCancellation?: ConstrainBoolean;
micNoiseSuppression?: ConstrainBoolean;
outputDeviceId?: string | null;
monitorMicVolume?: boolean;
};
audioStream?: MediaStream;
Optional: no default.
A default audio stream to accept softphone sessions with if no audio stream was used when accepting the session (i.e. sdk.acceptSession({ sessionId: 'session-id', mediaStream })).
Warning: Firefox does not allow multiple microphone media tracks. Using a default could cause the SDK to be unable to request any other audio device besides the active microphone – which would be the audio track on this default stream.
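As an illustrative sketch (not the only approach), a default audio stream could be created with the standard navigator.mediaDevices.getUserMedia API and passed in on construction:
navigator.mediaDevices.getUserMedia({ audio: true }).then((audioStream) => {
  const sdk = new GenesysCloudWebrtcSdk({
    accessToken: 'your-access-token',
    defaults: {
      // used when accepting softphone sessions without an explicit mediaStream
      audioStream
    }
  });
});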
audioElement?: HTMLAudioElement;
Optional: no default. (See note about default behavior.)
HTML Audio Element to attach incoming audio streams to.
Default behavior, if this is not provided here or at sdk.acceptSession(), is that the SDK will create an HTMLAudioElement and append it to the DOM.
videoElement?: HTMLVideoElement;
Optional: no default
HTML Video Element to attach incoming video streams to. A video element is required for accepting incoming video calls. If no video element is passed into sdk.acceptSession(), this default element will be used.
videoResolution?: {
width: ConstrainULong,
height: ConstrainULong
};
Optional: no default.
Video resolution to default to when requesting video media.
Note: if the resolution causes getUserMedia() to fail (which can happen sometimes in some browsers), the SDK will retry without the resolution request. This means this setting may or may not be used, depending on the browser.
ConstrainULong type definition:
type ConstrainULong = number | {
exact?: number;
ideal?: number;
max?: number;
min?: number;
}
videoDeviceId?: string | null;
Optional: defaults to null
Default video device ID to use when starting camera media.
- string to request media for the specified deviceId
- null|falsy to request media for the system default device
audioDeviceId?: string | null;
Optional: defaults to null
Default audio device ID to use when starting microphone media.
- string to request media for the specified deviceId
- null|falsy to request media for the system default device
audioVolume?: number;
Optional: defaults to 100
Volume level to set on the audio/video elements when attaching media. This value must be between 0-100 inclusive.
outputDeviceId?: string | null;
Optional: defaults to null
Default output device ID to use for audio output.
- string ID for the output media device to use
- null|falsy to request the system default device
Not all browsers support output devices. For supported browsers, the system default for output devices is always an empty string (ex: '').
micAutoGainControl?: ConstrainBoolean;
Optional: defaults to true
Automatic gain control is a feature in which a sound source automatically manages changes in the volume of its source media to maintain a steady overall volume level.
// ConstrainBoolean type
type ConstrainBoolean = boolean | {
exact?: boolean;
ideal?: boolean;
}
micEchoCancellation?: ConstrainBoolean;
Optional: defaults to true
Echo cancellation is a feature which attempts to prevent echo effects on a two-way audio connection by attempting to reduce or eliminate crosstalk between the user's output device and their input device. For example, it might apply a filter that negates the sound being produced on the speakers from being included in the input track generated from the microphone.
// ConstrainBoolean type
type ConstrainBoolean = boolean | {
exact?: boolean;
ideal?: boolean;
}
micNoiseSuppression?: ConstrainBoolean;
Optional: defaults to true
Noise suppression automatically filters the audio to remove or at least reduce background noise, hum caused by equipment, and the like from the sound before delivering it to your code.
// ConstrainBoolean type
type ConstrainBoolean = boolean | {
exact?: boolean;
ideal?: boolean;
}
monitorMicVolume?: boolean;
Optional: defaults to false
When true, all audio tracks created via the SDK will have their volumes monitored and emitted on sdk.media.on('audioTrackVolume', evt). See the SDK Media audioTrackVolume event for more details.
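A brief sketch of enabling volume monitoring and listening for the event (see WebRTC Media for the exact event payload):
const sdk = new GenesysCloudWebrtcSdk({
  accessToken: 'your-access-token',
  defaults: { monitorMicVolume: true }
});

sdk.media.on('audioTrackVolume', (evt) => {
  // evt describes the track and its current volume level; see the WebRTC Media docs
  console.log('audio track volume', evt);
});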
Readonly string of the SDK version in use.
Logger used by the SDK. It will implement the ILogger interface. See constructor for details on how to set the SDK logger and log level.
interface ILogger {
/**
* Log a message to the location specified by the logger.
* The logger can decide if it wishes to implement `details`
* or `skipServer`.
*
* @param message message or error to log
* @param details any additional details to log
* @param skipServer should log skip server
*/
log(message: string | Error, details?: any, skipServer?: boolean): void;
/** see `log()` comment */
debug(message: string | Error, details?: any, skipServer?: boolean): void;
/** see `log()` comment */
info(message: string | Error, details?: any, skipServer?: boolean): void;
/** see `log()` comment */
warn(message: string | Error, details?: any, skipServer?: boolean): void;
/** see `log()` comment */
error(message: string | Error, details?: any, skipServer?: boolean): void;
}
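For example, a minimal console-only logger satisfying ILogger could be supplied in the constructor (a sketch; by default the GenesysCloudClientLogger is used):
const consoleOnlyLogger = {
  log: (message, details) => console.log(message, details),
  debug: (message, details) => console.debug(message, details),
  info: (message, details) => console.info(message, details),
  warn: (message, details) => console.warn(message, details),
  error: (message, details) => console.error(message, details)
};

const sdk = new GenesysCloudWebrtcSdk({
  accessToken: 'your-access-token',
  logger: consoleOnlyLogger
});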
SDK Media helper instance. See WebRTC Media for API and usage.
Set up the SDK for use and authenticate the user
- agents must have an accessToken passed into the constructor options
- guests need a securityCode (or the data received from an already redeemed securityCode). If the customerData is not passed in, this will redeem the code for the data; otherwise it will use the data passed in.
Declaration:
initialize(opts?: {
securityCode: string;
} | ICustomerData): Promise<void>;
Params:
- opts = { securityCode: 'shortCode received from agent to share screen' }
- or, if the customer data has already been redeemed using the securityCode (this is an advanced usage):
interface ICustomerData { conversation: { id: string; }; sourceCommunicationId: string; jwt: string; }
Returns: a promise that fulfills once the web socket is connected and other necessary async tasks are complete.
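For example, a sketch of an unauthenticated (guest) initialization (see WebRTC Screen Share for the full guest flow; the values are placeholders):
const sdk = new GenesysCloudWebrtcSdk({
  organizationId: 'your-org-guid' // required for unauthenticated users
});

// securityCode is the short code the guest received from the agent
await sdk.initialize({ securityCode: '123456' });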
Starts a softphone call session with the given peer or peers.
initialize()
must be called first.
Declaration:
startSoftphoneSession(softphoneSessionParams: IStartSoftphoneSessionParams): Promise<{id: string, selfUri: string}>;
Params:
softphoneSessionParams: IStartSoftphoneSessionParams
Required: Contains participant information for placing the call. See softphone#IStartSoftphoneSessionParams for full details on the request parameters.
Returns: a promise with an object containing the id and selfUri for the conversation.
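A hypothetical sketch of placing an outbound call (the phoneNumber field is an assumption here; see softphone#IStartSoftphoneSessionParams for the actual supported fields):
await sdk.initialize();

const { id, selfUri } = await sdk.startSoftphoneSession({
  phoneNumber: '+13175550123' // assumed param shape; consult IStartSoftphoneSessionParams
});
console.log('created conversation', id, selfUri);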
Start a screen share. Currently, screen share is only supported for guest users.
initialize()
must be called first.
Declaration:
startScreenShare(): Promise<MediaStream>;
Returns: a MediaStream promise for the selected screen stream
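For example, a guest screen share sketch:
await sdk.initialize({ securityCode: '123456' });

const screenStream = await sdk.startScreenShare();
// the resolved MediaStream contains the screen track selected by the user
console.log(screenStream.getVideoTracks());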
Start a video conference. Not supported for guests. Conferences can
only be joined by authenticated users from the same organization.
If inviteeJid
is provided, the specified user will receive a propose/pending session
they can accept and join the conference.
initialize()
must be called first.
Declaration:
startVideoConference(roomJid: string, inviteeJid?: string): Promise<{
conversationId: string;
}>;
Params:
roomJid: string
Required: jid of the conference to join. Can be made up if starting a new conference, but must adhere to the format: <lowercase string>@conference.<lowercase string>
inviteeJid?: string
Optional: jid of a user to invite to this conference.
Returns: a promise with an object containing the newly created conversationId
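For example, a sketch of starting a new conference (the roomJid is made up but follows the documented format):
await sdk.initialize();

const { conversationId } = await sdk.startVideoConference(
  'my-room@conference.example.com' // <lowercase string>@conference.<lowercase string>
);
console.log('video conversation', conversationId);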
Update the output device for all incoming audio
- This will log a warning and not attempt to update the output device if the browser does not support output devices
- This will attempt to update all active sessions
- This does not update the sdk defaultOutputDeviceId
Declaration:
updateOutputDevice(deviceId: string | true | null): Promise<void>;
Params:
deviceId: string | true | null
Required:
- string deviceId for audio output device
- true for sdk default output
- null for system default
Returns: a promise that fulfills once the output deviceId has been updated
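For example (device IDs would come from device enumeration, e.g. the sdk media helpers or navigator.mediaDevices.enumerateDevices()):
// route all incoming audio to a specific output device
await sdk.updateOutputDevice('some-output-device-id');

// or fall back to the system default output device
await sdk.updateOutputDevice(null);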
Update outgoing media for a specified session
- sessionId or session is required to find the session to update
- stream: if a stream is passed in, the session media will be updated to use the media on the stream. This supersedes any deviceId(s) passed in.
- videoDeviceId & audioDeviceId (superseded by stream):
  - undefined|false: the sdk will not touch the video|audio media
  - null: the sdk will update the video|audio media to system default
  - string: the sdk will attempt to update the video|audio media to the passed in deviceId
Note: this does not update the SDK default device(s)
Declaration:
updateOutgoingMedia (updateOptions: IUpdateOutgoingMedia): Promise<void>;
Params:
updateOptions: IUpdateOutgoingMedia
Required: device(s) to update
- Basic interface:
interface IUpdateOutgoingMedia { session?: IExtendedMediaSession; sessionId?: string; stream?: MediaStream; videoDeviceId?: string | boolean | null; audioDeviceId?: string | boolean | null; }
- See media#IUpdateOutgoingMedia for full details on the request parameters
Returns: a promise that fulfills once the outgoing media devices have been updated
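A brief sketch of switching the camera on an active session without touching the audio media or the SDK defaults:
await sdk.updateOutgoingMedia({
  sessionId: 'active-session-id',
  videoDeviceId: 'new-camera-device-id', // string: switch video to this device
  audioDeviceId: undefined               // undefined: leave audio media untouched
});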
Update the default device(s) for the sdk. Pass in the following:
- string: sdk will update that default to the deviceId
- null: sdk will update to system default device
- undefined: sdk will not update that media deviceId
If updateActiveSessions
is true
, any active sessions will
have their outgoing media devices updated and/or the output
deviceId updated.
If updateActiveSessions
is false
, only the sdk defaults will be updated and
active sessions' media devices will not be touched.
Declaration:
updateDefaultDevices(options?: IMediaDeviceIds & {
updateActiveSessions?: boolean;
}): Promise<any>;
Params:
options?: IMediaDeviceIds & {updateActiveSessions?: boolean;}
Optional: defaults to {}
- Basic interface:
interface IMediaDeviceIds { videoDeviceId?: string | null; audioDeviceId?: string | null; outputDeviceId?: string | null; updateActiveSessions?: boolean; }
- videoDeviceId?: string | null – Optional: string for a desired deviceId. null|falsy for system default device.
- audioDeviceId?: string | null – Optional: string for a desired deviceId. null|falsy for system default device.
- outputDeviceId?: string | null – Optional: string for a desired deviceId. null|falsy for system default device.
- updateActiveSessions?: boolean – Optional: flag to update active sessions' devices
Returns: a promise that fulfills once the default device values have been updated
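For example, a sketch updating the default microphone and output device and applying the change to in-progress sessions:
await sdk.updateDefaultDevices({
  audioDeviceId: 'new-mic-device-id',
  outputDeviceId: null,        // null: use the system default output device
  updateActiveSessions: true   // also update any active sessions' devices
});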
Update the default media settings that exist in the sdk config.
If updateActiveSessions
is true
, any active sessions will
have their outgoing media devices updated and/or the output
deviceId updated.
If updateActiveSessions
is false
, only the sdk defaults will be updated and
active sessions' media devices will not be touched.
Declaration:
updateDefaultMediaSettings(options?: IMediaSettings & {
updateActiveSessions?: boolean;
}): Promise<any>;
Params:
options?: IMediaSettings & { updateActiveSessions?: boolean; }
Optional: defaults to {}
- Basic interface:
interface IMediaSettings { micAutoGainControl?: ConstrainBoolean; micEchoCancellation?: ConstrainBoolean; micNoiseSuppression?: ConstrainBoolean; monitorMicVolume?: boolean; updateActiveSessions?: boolean; } type ConstrainBoolean = boolean | { exact?: boolean; ideal?: boolean; }
- micAutoGainControl?: ConstrainBoolean – Optional: indicates the default audio constraint for autoGainControl for future media. See https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints/autoGainControl
- micEchoCancellation?: ConstrainBoolean – Optional: indicates the default audio constraint for echoCancellation for future media. See https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints/echoCancellation
- micNoiseSuppression?: ConstrainBoolean – Optional: indicates the default audio constraint for noiseSuppression for future media. See https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints/noiseSuppression
- monitorMicVolume?: boolean – Optional: default setting for emitting audioTrackVolume events for future media.
- updateActiveSessions?: boolean – Optional: flag to update active sessions' media.
Returns: a promise that fulfills once the default settings and active sessions have been updated (if specified)
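For example, a sketch turning off noise suppression for future media and any active sessions:
await sdk.updateDefaultMediaSettings({
  micNoiseSuppression: false,
  monitorMicVolume: true,
  updateActiveSessions: true
});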
Updates the audio volume for all applicable active sessions, as well as the default volume for future sessions.
Declaration:
updateAudioVolume(volume: number): void;
Params:
volume: number
Required: Value must be between 0 and 100 inclusive
Returns: void
Mutes/Unmutes video/camera for a session and updates the conversation accordingly. Will fail if the session is not found. Incoming video is unaffected.
When muting, the camera track is destroyed. When unmuting, the camera media must be requested again.
NOTE: if no unmuteDeviceId is provided when unmuting, it will unmute and attempt to use the sdk defaults.videoDeviceId as the camera device.
Declaration:
setVideoMute(muteOptions: ISessionMuteRequest): Promise<void>;
Params:
muteOptions: ISessionMuteRequest
Required:
- Basic interface:
interface ISessionMuteRequest { sessionId: string; mute: boolean; unmuteDeviceId?: string | boolean | null; }
- sessionId: string – Required: session id for which to perform the action
- mute: boolean – Required: true to mute, false to unmute
- unmuteDeviceId?: string | boolean | null – Optional: the desired deviceId to use when unmuting, true for sdk default, null for system default; undefined will attempt to use the sdk default device
Returns: a promise that fulfills once the mute request has completed
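For example, a sketch of toggling camera mute on a video session:
// mute: the camera track for the session is destroyed
await sdk.setVideoMute({ sessionId: 'video-session-id', mute: true });

// unmute: camera media is requested again, here using the sdk defaults.videoDeviceId
await sdk.setVideoMute({ sessionId: 'video-session-id', mute: false });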
Mutes/Unmutes audio/mic for a session and updates the conversation accordingly. Will fail if the session is not found. Incoming audio is unaffected.
NOTE: if no unmuteDeviceId is provided when unmuting AND there is no active audio stream, it will unmute and attempt to use the sdk defaults.audioDeviceId as the device.
Declaration:
setAudioMute(muteOptions: ISessionMuteRequest): Promise<void>;
Params:
muteOptions: ISessionMuteRequest
Required:
- sessionId: string – Required: session id for which to perform the action
- mute: boolean – Required: true to mute, false to unmute
- unmuteDeviceId?: string | boolean | null – Optional: the desired deviceId to use when unmuting, true for sdk default, null for system default; undefined will attempt to use the sdk default device
- Basic interface:
interface ISessionMuteRequest { sessionId: string; mute: boolean; unmuteDeviceId?: string | boolean | null; }
Returns: a promise that fulfills once the mute request has completed
Set the accessToken the sdk uses to authenticate to the API.
Declaration:
setAccessToken(token: string): void;
Params:
token: string
Required: new access token
Returns: void
Accept a pending session based on the passed in ID.
Declaration:
acceptPendingSession(sessionId: string): Promise<void>;
Params:
sessionId: string
Required: id of the pending session to accept
Returns: a promise that fulfills once the session accept goes out
Reject a pending session based on the passed in ID.
Declaration:
rejectPendingSession(sessionId: string): Promise<void>;
Params:
sessionId: string
Required: id of the session to reject
Returns: a promise that fulfills once the session reject goes out
Accept a pending session based on the passed in options.
Declaration:
acceptSession(acceptOptions: IAcceptSessionRequest): Promise<void>;
Params:
acceptOptions: IAcceptSessionRequest
Required: options with which to accept the session
- Basic interface:
interface IAcceptSessionRequest { sessionId: string; mediaStream?: MediaStream; audioElement?: HTMLAudioElement; videoElement?: HTMLVideoElement; videoDeviceId?: string | boolean | null; audioDeviceId?: string | boolean | null; }
- sessionId: string – Required: id of the session to accept
- mediaStream?: MediaStream – Optional: media stream to use on the session. If this is provided, no media will be requested.
- audioElement?: HTMLAudioElement – Optional: audio element to attach incoming audio to. Default is the sdk defaults.audioElement
- videoElement?: HTMLVideoElement – Optional: video element to attach incoming video to. Default is the sdk defaults.videoElement (only used for video sessions).
- videoDeviceId?: string | boolean | null – Optional: see ISdkMediaDeviceIds for full details
- audioDeviceId?: string | boolean | null – Optional: see ISdkMediaDeviceIds for full details
Returns: a promise that fulfills once the session accept goes out
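For example, a sketch of accepting a started softphone session with a specific microphone (relevant when autoConnectSessions is false):
sdk.on('sessionStarted', async (session) => {
  await sdk.acceptSession({
    sessionId: session.id,
    audioDeviceId: 'preferred-mic-device-id' // see ISdkMediaDeviceIds for accepted values
  });
});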
End an active session based on the session ID or conversation ID (one is required).
Declaration:
endSession(endOptions: IEndSessionRequest): Promise<void>;
Params:
endOptions: IEndSessionRequest
object with session ID or conversation ID
- Basic interface:
interface IEndSessionRequest { sessionId?: string; conversationId?: string; reason?: JingleReason; }
- sessionId?: string – Optional: id of the session to end. At least sessionId or conversationId must be provided.
- conversationId?: string – Optional: conversation id of the session to end. At least sessionId or conversationId must be provided.
- reason?: JingleReason – Optional: defaults to success. This is for internal usage and should not be provided in custom applications.
Returns: a promise that fulfills once the session has ended
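For example:
// end by session id
await sdk.endSession({ sessionId: 'active-session-id' });

// or end by conversation id
await sdk.endSession({ conversationId: 'active-conversation-id' });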
Disconnect the streaming connection
Declaration:
disconnect(): Promise<any>;
Params: none
Returns: a promise that fulfills once the web socket has disconnected
Reconnect the streaming connection
Declaration:
reconnect(): Promise<any>;
Params: none
Returns: a promise that fulfills once the web socket has reconnected
Ends all active sessions, disconnects the streaming-client, removes all event listeners, and cleans up media.
WARNING: calling this effectively renders the SDK instance useless. A new instance will need to be created after this is called.
Declaration:
destroy(): Promise<any>;
Params: none
Returns: a promise that fulfills once all the cleanup tasks have completed
The WebRTC SDK extends the browser version of EventEmitter
.
Reference the NodeJS documentation for more information. The basic interface that is
inherited by the SDK is:
interface EventEmitter {
addListener(event: string | symbol, listener: (...args: any[]) => void): this;
on(event: string | symbol, listener: (...args: any[]) => void): this;
once(event: string | symbol, listener: (...args: any[]) => void): this;
removeListener(event: string | symbol, listener: (...args: any[]) => void): this;
off(event: string | symbol, listener: (...args: any[]) => void): this;
removeAllListeners(event?: string | symbol): this;
setMaxListeners(n: number): this;
getMaxListeners(): number;
listeners(event: string | symbol): Function[];
rawListeners(event: string | symbol): Function[];
emit(event: string | symbol, ...args: any[]): boolean;
listenerCount(event: string | symbol): number;
prependListener(event: string | symbol, listener: (...args: any[]) => void): this;
prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this;
eventNames(): Array<string | symbol>;
}
The SDK leverages strict-event-emitter-types to strongly type available events and their emitted values.
Emitted when a call session is being initiated for an outbound or inbound call.
pendingSession
is emitted for all softphone sessions and inbound 1-to-1 video
sessions.
Declaration:
sdk.on('pendingSession', (pendingSession: IPendingSession) => { });
Value of event:
interface IPendingSession {
id: string;
address: string;
conversationId: string;
autoAnswer: boolean;
sessionType: SessionTypes;
originalRoomJid: string;
fromUserId?: string;
}
- id: string – the unique Id for the session proposal; used to accept or reject the proposal
- address: string – the address of the caller
- conversationId: string – id for the associated conversation object (used in platform API requests)
- autoAnswer: boolean – whether or not the client should auto answer the session
  - true for all outbound calls
  - false for inbound calls, unless Auto Answer is configured for the user by an admin public api request and/or push notifications
- sessionType: SessionTypes – type of pending session. See AllowedSessionTypes for a list of available values.
- originalRoomJid: string – video specific alternate roomJid (for 1-to-1 video calls)
- fromUserId?: string – Optional: the userId the call is coming from (for 1-to-1 video calls)
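A sketch tying this event to the accept/reject methods described above:
sdk.on('pendingSession', (pendingSession) => {
  if (pendingSession.autoAnswer) {
    // the SDK will answer this session automatically unless disableAutoAnswer is set
    return;
  }
  // otherwise prompt the user, then accept or reject the proposal by its id
  sdk.acceptPendingSession(pendingSession.id);
  // or: sdk.rejectPendingSession(pendingSession.id);
});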
Emitted when the call has been canceled due to remote disconnect or answer timeout
Declaration:
sdk.on('cancelPendingSession', (sessionId: string) => { });
Value of event:
sessionId: string
– the id of the session proposed and canceled
Emitted when another client belonging to this user has handled the pending session; used to stop ringing
Declaration:
sdk.on('handledPendingSession', (sessionId: string) => { });
Value of event:
sessionId: string
– the id of the session proposed and handled
Emitted when negotiation has started; before the session has connected
Declaration:
sdk.on('sessionStarted', (session: IExtendedMediaSession) => { });
Value of event:
session: IExtendedMediaSession
– the session that has started. See GenesysCloudMediaSession for details on the session object.
Emitted when a session has ended
Declaration:
sdk.on('sessionEnded', (session: IExtendedMediaSession, reason: JingleReason) => { });
Value of event:
session: IExtendedMediaSession – the session that ended. See GenesysCloudMediaSession for details on the session object.
reason: JingleReason – the reason code for why the session ended. Available reasons:
reason: { condition: "alternative-session" | "busy" | "cancel" | "connectivity-error" | "decline" | "expired" | "failed-application" | "failed-transport" | "general-error" | "gone" | "incompatible-parameters" | "media-error" | "security-error" | "success" | "timeout" | "unsupported-applications" | "unsupported-transports" }
Emitted when the SDK encounters an error
Declaration:
sdk.on('sdkError', (sdkError: SdkError) => { });
Value of event:
sdkError: SdkError
– error emitted by the sdk. See SdkError Class for more details.
Emitted when the SDK has successfully initialized – fired once after
await sdk.initialize({...})
finishes.
Declaration:
sdk.on('ready', () => { });
Value of event: void
Emitted when the underlying websocket has (re)connected
Declaration:
sdk.on('connected', (info: { reconnect: boolean }) => { });
Value of event:
info: { reconnect: boolean }
– indicator if it is a reconnect event
Emitted when the underlying websocket connection has
disconnected and is no longer attempting to reconnect automatically.
Should usually be followed by sdk.reconnect()
or reloading the application,
as this indicates a critical error.
Declaration:
sdk.on('disconnected', (info?: any) => { });
Value of event:
info?: any – usually a string of 'Streaming API connection disconnected'. This value should not be relied upon for anything other than logging.
Emitted for trace, debug, log, warn, and error messages from the SDK
Declaration:
sdk.on('trace', (level: string, message: string, details?: any) => { });
Value of event:
- level: string – the log level of the message: trace|debug|log|warn|error
- message: string – the log message
- details?: any – details about the log message
This is the session object that manages WebRTC connections. The actual interface has been extended and should be imported like this (if using typescript):
import {
IExtendedMediaSession,
GenesysCloudWebrtcSdk
} from 'genesys-cloud-webrtc-sdk';
const sdk = new GenesysCloudWebrtcSdk({/* your config options */});
let activeSession: IExtendedMediaSession;
sdk.on('sessionStarted', (session) => {
activeSession = session; // `session` is already strongly typed
});
There are many properties, methods, and accessors on the IExtendedMediaSession.
Since most of these are extended from 3rd party libraries, we will not go into
detail on each or list all of them. Instead, here is a brief list of the useful
properties and methods on the IExtendedMediaSession
session object:
interface IExtendedMediaSession extends GenesysCloudMediaSession {
id: string;
sid: string; // same as `id`
peerID: string;
conversationId: string;
active: boolean;
sessionType: SessionTypes;
pc: RTCPeerConnection;
get state(): string;
get connectionState(): string;
/**
* video session related props/functions
* Note: these are not guaranteed to exist on all sessions.
* See `WebRTC Video Conferencing` for more details
*/
originalRoomJid: string;
videoMuted?: boolean;
audioMuted?: boolean;
fromUserId?: string;
startScreenShare?: () => Promise<void>;
stopScreenShare?: () => Promise<void>;
pinParticipantVideo?: (participantId: string) => Promise<void>;
}
Session level events are events emitted from the session
objects themselves,
not the SDK instance library. These can be used if you want lower level access
and control.
Sessions implement the same EventEmitter
interface and strict-typings that the base WebRTC SDK does.
See SDK Events for the full list of inherited functions.
Emitted when the state of the session changes.
Declaration:
session.on('sessionState', (sessionState: 'starting' | 'pending' | 'active') => { });
Value of event:
sessionState: 'starting' | 'pending' | 'active'
– new state of the session
Emitted when the state of the underlying RTCPeerConnection changes.
Declaration:
session.on('connectionState',
(connectionState: 'starting' | 'connecting' | 'connected' | 'interrupted' | 'disconnected' | 'failed') => { });
Value of event:
connectionState: 'starting' | 'connecting' | 'connected' | 'interrupted' | 'disconnected' | 'failed'
– new state of the RTCPeerConnection
Emits the ICE connection type
Declaration:
session.on('iceConnectionType', (iceConnectionType: {
localCandidateType: string,
relayed: boolean,
remoteCandidateType: string
}) => { });
Value of event:
iceConnectionType: { localCandidateType: string, relayed: boolean, remoteCandidateType: string }
– information about the ICE connection
Emitted when a new peer media track is added to the session
Declaration:
session.on('peerTrackAdded', (track: MediaStreamTrack, stream?: MediaStream) => { });
Value of event:
track: MediaStreamTrack – the media track that was added
stream?: MediaStream – the media stream that was added
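For example, a sketch attaching an incoming audio track to an element (the SDK can do this automatically via defaults.audioElement; this shows the lower-level approach):
session.on('peerTrackAdded', (track, stream) => {
  if (track.kind === 'audio') {
    const audioEl = document.createElement('audio');
    audioEl.autoplay = true;
    // prefer the provided stream; otherwise wrap the track in a new MediaStream
    audioEl.srcObject = stream || new MediaStream([track]);
    document.body.appendChild(audioEl);
  }
});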
Emitted when a peer media track is removed from the session
Declaration:
session.on('peerTrackRemoved', (track: MediaStreamTrack, stream?: MediaStream) => { });
Value of event:
track: MediaStreamTrack – the media track that was removed
stream?: MediaStream – the media stream that was removed
Emits stats for the underlying RTCPeerConnection.
See webrtc-stats-gatherer for more details and typings on stats collected.
Declaration:
session.on('stats', (stats: any) => { });
Value of event:
stats: any
– stats for the RTCPeerConnection. The value emitted varies based on the stat event type.
Emitted at the end of candidate gathering; used to check for potential connection issues
Declaration:
session.on('endOfCandidates', () => { });
Value of event: void
Emits when the session ends
Declaration:
session.on('terminated', (reason: JingleReason) => { });
Value of event:
reason: JingleReason
– reason for session ending. See the SDK sessionEnded event for details on JingleReason
Emits when the session mutes
Declaration:
session.on('mute', (info: JingleInfo) => { });
Value of event:
info: JingleInfo – info regarding the mute
- Basic interface:
interface JingleInfo { infoType: string; creator?: JingleSessionRole; name?: string; }
Emits when the session unmutes
Declaration:
session.on('unmute', (info: JingleInfo) => { });
Value of event:
info: JingleInfo – info regarding the unmute
- Basic interface: See mute
There are session events that are specific for video sessions. See WebRTC Video Conferencing for more info.
This is an Error wrapper class to give a little more detail regarding errors thrown. The errors are usually thrown by the SDK. However, there are a few instances where the browser throws an error and the SDK will emit the "wrapped" error to sdk.on('sdkError', (err) => { }). If it wraps an existing error, it will keep the error.name and error.message to avoid masking the original problem.
class SdkError extends Error {
type: SdkErrorTypes;
details: any;
/* inherited */
name: string;
message: string;
}
// Available Error types
enum SdkErrorTypes {
generic = 'generic',
initialization = 'initialization',
http = 'http',
invalid_options = 'invalid_options',
not_supported = 'not_supported',
session = 'session',
media = 'media'
}
The SDK will add the type to give more clarity as to why the error was thrown.
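For example, a sketch of branching on the error type when handling sdkError events (assuming SdkErrorTypes is exported by the package as shown above):
import { SdkErrorTypes } from 'genesys-cloud-webrtc-sdk';

sdk.on('sdkError', (sdkError) => {
  switch (sdkError.type) {
    case SdkErrorTypes.media:
      // e.g. the user denied microphone/camera permissions
      console.warn('media error', sdkError.message, sdkError.details);
      break;
    case SdkErrorTypes.session:
      console.warn('session error', sdkError.message, sdkError.details);
      break;
    default:
      console.error('sdk error', sdkError);
  }
});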