
Custom Sentences


Custom sentences are used to control the View Assist device. For more information, see https://www.home-assistant.io/voice_control/custom_sentences/. These custom sentences can be added as 'normal' automations using the Home Assistant web interface.

How's the weather

Here is an example custom sentence that changes the current view to the Weather View and speaks the current conditions:

alias: ASSIST - Weather example
description: ""
trigger:
  - platform: conversation
    command:
      - How's the weather
    id: weather
condition: []
action:
  - choose:
      - conditions:
          - condition: trigger
            id:
              - weather
        sequence:
          - set_conversation_response: >-
              It's {{ state_attr('weather.home', 'temperature') }} degrees and
              {{ states('weather.home') }}
          - service: browser_mod.navigate
            metadata: {}
            data:
              path: /dashboard-tablet/weather
            target:
              entity_id: sensor.tabletfullkiosk_browser_path

Note the `service: browser_mod.navigate` call. This is the mechanism used to change the view on the target entity by sending the path of the particular view we want to show.
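
If you want to try the view change on its own (for example from Developer Tools > Services) before wiring it into an automation, the call looks like the sketch below. The browser entity and dashboard path are the ones used in the example above and will differ for your setup:

# Illustrative standalone call; adjust the path and entity_id for your own dashboard and device
service: browser_mod.navigate
data:
  path: /dashboard-tablet/weather
target:
  entity_id: sensor.tabletfullkiosk_browser_path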


Broadcast Message

alias: ASSIST - Broadcast
description: ""
trigger:
  - platform: conversation
    command:
      - broadcast {name}
    id: broadcast
action:
  - choose:
      - conditions:
          - condition: trigger
            id:
              - broadcast
        sequence:
          - set_conversation_response: Broadcasting
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              title: Announcement
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              message: "{{ trigger.slots.name }}"
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              font-size: 4vw
          - service: media_player.play_media
            data:
              media_content_id: /local/viewassist/broadcast.mp3
              media_content_type: MUSIC
            target:
              entity_id: media_player.tabletfullkiosk
            enabled: true
          - service: browser_mod.navigate
            metadata: {}
            data:
              path: /dashboard-tablet/info
            target:
              entity_id: sensor.tabletfullkiosk_browser_path
          - service: tts.google_translate_say
            data:
              cache: false
              entity_id: media_player.tabletfullkiosk
              message: "{{ states.sensor.assistsat_viewlr.attributes.message }}"
            enabled: true
          - service: notify.alexa_media_living_room_echo_show
            data:
              message: "{{ states.sensor.assistsat_viewlr.attributes.message }}"

Here's an example using the Information Card. The sentence triggers on the word 'broadcast' and stores anything said after it in the {name} variable. Note that the variables for the card title, message, and font size are set using `python_script.set_state` (see the install information). Having these values as variables allows us to reuse this card for multiple purposes; I use the same card for displaying shopping list and chore information.
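
As a rough sketch of that reuse, the same card can show a chores reminder simply by writing different values to the same attributes. The entity, dashboard path, and message below are assumptions carried over from the broadcast example, not part of the shipped configuration:

# Illustrative sketch: reusing the same Information Card for a chores reminder.
# entity_id, dashboard path, and message text are assumptions from the example above.
- service: python_script.set_state
  data:
    entity_id: sensor.assistsat_viewlr
    title: Chores
- service: python_script.set_state
  data:
    entity_id: sensor.assistsat_viewlr
    message: Remember to take the trash out tonight
- service: browser_mod.navigate
  data:
    path: /dashboard-tablet/info
  target:
    entity_id: sensor.tabletfullkiosk_browser_path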


Wikipedia Search

alias: ASSIST - Wikipedia Search
description: ""
trigger:
  - platform: conversation
    command:
      - who is {name}
      - what is [the] [a] [an] {name}
    id: who
condition: []
action:
  - choose:
      - conditions:
          - condition: trigger
            id:
              - who
        sequence:
          - service: rest_command.wiki_how
            response_variable: wiki_response
            enabled: true
            data:
              name: "{{ trigger.slots.name  |regex_replace(find=' ', replace='_') }}"
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              title: Wikipedia Search
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              message: "{{ wiki_response['content']['extract'] }}"
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              image: "{{ wiki_response['content']['thumbnail']['source'] }}"
            enabled: true
          - service: python_script.set_state
            data:
              entity_id: sensor.assistsat_viewlr
              message_font_size: 2vw
          - set_conversation_response: Here's what I found on wikipedia
            enabled: true
          - service: browser_mod.navigate
            data:
              path: /dashboard-tablet/infopic
            target:
              entity_id: sensor.tabletfullkiosk_browser_path

Here's an example using the Information Picture Card. The sentence triggers on the phrases 'who is' and 'what is' and stores anything said after them in the {name} variable. Note that the variables for the card title, message, image, and font size are set using `python_script.set_state` (see the install information). The `rest_command.wiki_how` REST command will also need to be configured for this to work; a rough sketch follows.
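
As a rough idea of what that REST command could look like, here is a minimal `rest_command` sketch for configuration.yaml that queries the Wikipedia page summary endpoint. The actual definition used by View Assist lives in the repo linked below and may differ:

# configuration.yaml - illustrative sketch only; see the repo for the real definition.
# The summary endpoint returns JSON containing 'extract' and 'thumbnail.source',
# which is what the automation reads from wiki_response['content'].
rest_command:
  wiki_how:
    url: "https://en.wikipedia.org/api/rest_v1/page/summary/{{ name }}"
    method: GET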


Find these sentences and more in the repo at https://github.com/dinki/View-Assist/tree/main/Custom_Sentences

More custom sentences are coming. Please be patient as the wiki gets populated.