A React-Native Bridge for the Google Dialogflow AI SDK.
Support for iOS 10+ and Android!
Dialogflow is a powerful tool for building delightful and natural conversational experiences. You can build chat and speech bots and integrate them with many platforms such as Twitter, Facebook, Slack, or Alexa.
This package depends on react-native-voice; follow its README to set it up.
Add react-native-dialogflow and link it:
npm install --save react-native-dialogflow react-native-voice
react-native link react-native-dialogflow
react-native link react-native-voice
Also, you need to open the React Native Xcode project and add two new keys to Info.plist.
Just right-click on Info.plist -> Open As -> Source Code and paste these strings somewhere inside the root <dict> tag:
<key>NSSpeechRecognitionUsageDescription</key>
<string>Your usage description here</string>
<key>NSMicrophoneUsageDescription</key>
<string>Your usage description here</string>
The application will crash if you don't do this.
Import Dialogflow:
import Dialogflow from "react-native-dialogflow";
or for V2
import { Dialogflow_V2 } from "react-native-dialogflow"
Set the accessToken
and the language in your constructor:
constructor(props) {
    super(props);

    Dialogflow.setConfiguration(
        "4xxxxxxxe90xxxxxxxxc372", Dialogflow.LANG_GERMAN
    );
}
For V2 you can set the client_email
and private_key
from the credentials JSON of your Google Cloud service account. In addition, you have to set your projectId:
constructor(props) {
    super(props);

    Dialogflow_V2.setConfiguration(
        "[email protected]",
        '-----BEGIN PRIVATE KEY-----\nMIIEvgIBADAN...1oqO\n-----END PRIVATE KEY-----\n',
        Dialogflow_V2.LANG_GERMAN,
        'testv2-3b5ca'
    );
}
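As a sketch (not part of the original instructions), the same values can also be read from the service-account JSON key downloaded from the Google Cloud console, so the private key does not have to be pasted inline. The file name credentials.json is illustrative, and the file should not be committed to version control:

import { Dialogflow_V2 } from "react-native-dialogflow";

// Hypothetical path: the service account key file downloaded from the
// Google Cloud console (keep it out of version control).
const credentials = require("./credentials.json");

Dialogflow_V2.setConfiguration(
    credentials.client_email,
    credentials.private_key,
    Dialogflow_V2.LANG_GERMAN,
    credentials.project_id
);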
Start listening with integrated speech recognition:
<Button onPress={() => {
    Dialogflow.startListening(result=>{
        console.log(result);
    }, error=>{
        console.log(error);
    });
}}
/>
On iOS only, you have to call finishListening()
. Android detects the end of your speech automatically, which is why the finish method is not implemented on Android.
// only for iOS
Dialogflow.finishListening();
// after this call your callbacks from the startListening will be executed.
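A minimal sketch of a press-to-talk flow that only calls finishListening on iOS; the component structure and button labels are illustrative and not part of the library:

import React from "react";
import { Button, Platform, View } from "react-native";
import Dialogflow from "react-native-dialogflow";

export default class VoiceButtons extends React.Component {
    start = () => {
        Dialogflow.startListening(result => {
            console.log(result);
        }, error => {
            console.log(error);
        });
    };

    finish = () => {
        // Android returns the result automatically when speech ends,
        // so finishListening is only needed on iOS.
        if (Platform.OS === "ios") {
            Dialogflow.finishListening();
        }
    };

    render() {
        return (
            <View>
                <Button title="Start listening" onPress={this.start} />
                <Button title="Finish (iOS only)" onPress={this.finish} />
            </View>
        );
    }
}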
For using your own speech recognition:
<Button onPress={() => {
    Dialogflow.requestQuery("Some text for your Dialogflow agent", result=>console.log(result), error=>console.log(error));
}}
/>
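For example, the agent's text response can be read from the result callback. The field names below follow the documented Dialogflow response formats (result.fulfillment.speech for V1, queryResult.fulfillmentText for V2); verify them against your own agent's responses:

Dialogflow.requestQuery(
    "Some text for your Dialogflow agent",
    result => {
        // V1 response format: the agent's spoken/text reply
        if (result.result && result.result.fulfillment) {
            console.log(result.result.fulfillment.speech);
        }
    },
    error => console.log(error)
);

// With Dialogflow_V2 the response shape differs, e.g.:
// result.queryResult.fulfillmentText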
For sending an event to Dialogflow (Contexts and Entities have no effect!):
Dialogflow.requestEvent(
    "WELCOME",
    {param1: "yo mr. white!"},
    result=>{console.log(result);},
    error=>{console.log(error);}
);
Set contexts (they take effect on the next startListening or requestQuery):
const contexts = [{
    name: "deals",
    lifespan: 1,
    parameters: {
        Shop: "Rewe"
    }
}];

Dialogflow.setContexts(contexts);
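As a small usage sketch (the query text is illustrative), the next request then carries the context:

// The "deals" context (with Shop: "Rewe") is attached to this request.
Dialogflow.requestQuery(
    "Any deals today?",
    result => console.log(result),
    error => console.log(error)
);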
Reset all (non-permanent) contexts for the current session:
Dialogflow.resetContexts(result=>{
    console.log(result);
}, error=>{
    console.log(error);
});
Set permanent contexts, which will be set automatically before every request. This is useful, e.g., for access tokens in webhooks:
const permanentContexts = [{
    name: "Auth",
    // lifespan 1 is set automatically, but it's overrideable
    parameters: {
        AccessToken: "1234yo1234"
    }
}];

Dialogflow.setPermanentContexts(permanentContexts);
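A short sketch of the effect (the query texts are illustrative): every later request then carries the Auth context without setting it again, so a fulfillment webhook can read the AccessToken parameter from the incoming contexts:

// Both requests automatically include the "Auth" permanent context.
Dialogflow.requestQuery("first question", result => console.log(result), error => console.log(error));
Dialogflow.requestQuery("second question", result => console.log(result), error => console.log(error));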
Set UserEntities (they take effect on the next startListening or requestQuery):
const entities = [{
    "name": "shop",
    "extend": true,
    "entries": [
        {
            "value": "Media Markt",
            "synonyms": ["Media Markt"]
        }
    ]
}];

Dialogflow.setEntities(entities);
On Android only, there are four additional methods: onListeningStarted
, onListeningCanceled
, onListeningFinished
and onAudioLevel
. On iOS they will never be called:
<Button onPress={() => {
    Dialogflow.onListeningStarted(()=>{
        console.log("listening started");
    });

    Dialogflow.onListeningCanceled(()=>{
        console.log("listening canceled");
    });

    Dialogflow.onListeningFinished(()=>{
        console.log("listening finished");
    });

    Dialogflow.onAudioLevel(level=>{
        console.log(level);
    });

    Dialogflow.startListening(result=>{
        console.log(result);
    }, error=>{
        console.log(error);
    });
}}
/>
Note: Make sure you set the callbacks before every startListening call. Don't set the callbacks only once, e.g. in the constructor or componentDidMount, if you execute startListening more than once.
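A sketch of what this note means in practice (the handler name is illustrative): register the callbacks inside the same handler that starts listening, so they are re-attached on every press:

// Hypothetical handler, wired to onPress, so the callbacks are
// registered freshly before each startListening call.
const startListeningWithCallbacks = () => {
    Dialogflow.onListeningStarted(() => console.log("listening started"));
    Dialogflow.onListeningFinished(() => console.log("listening finished"));

    Dialogflow.startListening(result => {
        console.log(result);
    }, error => {
        console.log(error);
    });
};

// <Button title="Talk" onPress={startListeningWithCallbacks} />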
Set the language in your configuration:
Dialogflow.setConfiguration("4xxxxxxxe90xxxxxxxxc372", Dialogflow.LANG_GERMAN);
- LANG_CHINESE_CHINA
- LANG_CHINESE_HONGKONG
- LANG_CHINESE_TAIWAN
- LANG_DUTCH
- LANG_ENGLISH
- LANG_ENGLISH_GB
- LANG_ENGLISH_US
- LANG_FRENCH
- LANG_GERMAN
- LANG_ITALIAN
- LANG_JAPANESE
- LANG_KOREAN
- LANG_PORTUGUESE
- LANG_PORTUGUESE_BRAZIL
- LANG_RUSSIAN
- LANG_SPANISH
- LANG_UKRAINIAN
name | platform | param1 | param2 | param3 | param4 |
---|---|---|---|---|---|
setConfiguration (V1) | both | accessToken: String | languageTag: String | | |
setConfiguration (V2) | both | client_email: String | private_key: String | languageTag: String | projectId: String |
startListening | both | resultCallback: (result: object)=>{} | errorCallback: (error: object)=>{} | | |
finishListening | ios | | | | |
requestQuery | both | query: String | resultCallback: (result: object)=>{} | errorCallback: (error: object)=>{} | |
requestEvent | both | eventName: String | eventData: Object | resultCallback: (result: object)=>{} | errorCallback: (error: object)=>{} |
onListeningStarted | both | callback: ()=>{} | | | |
onListeningCanceled | none | callback: ()=>{} | | | |
onListeningFinished | both | callback: ()=>{} | | | |
onAudioLevel | android | callback: (level: number)=>{} | | | |
setContexts | both | array | | | |
resetContexts | both | resultCallback: (result: object)=>{} | errorCallback: (error: object)=>{} | | |
setPermanentContexts | both | array | | | |
setEntities (V1 only) | both | array | | | |
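Putting the pieces together, a minimal component sketch (the access token, context values, and button label are placeholders):

import React from "react";
import { Button, View } from "react-native";
import Dialogflow from "react-native-dialogflow";

export default class Chatbot extends React.Component {
    constructor(props) {
        super(props);

        // Configure the agent once; the permanent context is then
        // attached to every request automatically.
        Dialogflow.setConfiguration("4xxxxxxxe90xxxxxxxxc372", Dialogflow.LANG_ENGLISH);
        Dialogflow.setPermanentContexts([{
            name: "Auth",
            parameters: { AccessToken: "1234yo1234" }
        }]);
    }

    render() {
        return (
            <View>
                <Button
                    title="Talk to the agent"
                    onPress={() => {
                        Dialogflow.startListening(result => {
                            console.log(result);
                        }, error => {
                            console.log(error);
                        });
                    }}
                />
            </View>
        );
    }
}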
Sprachsteuerung mit Api.ai in einer React-Native App (German blog post: "Voice control with Api.ai in a React Native app")
Powered by innFactory