I’m confused by the different elements of HA’s voice assistant sentences.

  1. What’s the difference between a conversation and an intent_script? Per HA’s custom sentence example, a conversation has an intents sub-element, and an intent_script doesn’t. Does a conversation’s intents entry merely declare the sentences that trigger an intent, while the intent_script is purely the response (i.e., does an intent defined under intents point to an intent_script of the same name)? I’ve sketched my understanding after the first example below.

  2. HA then explains that while the example above defined the conversation and intent_script in configuration.yaml, you can also define intents in config/custom_sentences/. Should you use both of these methods simultaneously, or would that cause conflicts or degrade performance? I wouldn’t think you should define the same sentence in both places, but the data structures of their two examples are different - is one better than the other?

In configuration.yaml:

conversation:
  intents:
    YearOfVoice:
      - "how is the year of voice going"

In config/custom_sentences/en:

intents:
  SetVolume:
    data:
      - sentences:
          - "(set|change) {media_player} volume to {volume} [percent]"
          - "(set|change) [the] volume for {media_player} to {volume} [percent]"
  3. Then they say responses for existing intents can also be customized in config/custom_sentences/. What’s the difference between a response and an intent_script? It seems like intent_script can only be defined in configuration.yaml and responses can only be defined in config/custom_sentences/ - is that right?
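From what I can tell, a response override lives in the same YAML files under config/custom_sentences/en/ and is keyed by intent name - something like this (my own sketch; the wording is just a placeholder):

responses:
  intents:
    SetVolume:
      default: "Okay, changed the volume"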

Thanks for any clarification you can share.

  • RandomLegend [He/Him]@lemmy.dbzer0.com · 1 year ago

    Oh my, that 4060Ti will reduce the response time to the bare minimum that Whisper is capable of, I am sure!

    You don’t have to customize the docker-compose at all. That’s the plug’n’play part. You do have to make sure to build the docker image so that it uses the makefile and the run.sh file.

    Also make sure your docker environment is able to use the 4060Ti.

    The easiest way to check is to run docker run -it --rm --gpus all ubuntu nvidia-smi and see if you get proper nvidia-smi output.
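
    For reference, the compose-file equivalent of --gpus all looks roughly like this (just a generic sketch, the service name is a placeholder - per the above, the project’s compose file should already handle this):

    services:
      whisper:
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]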