#64 implements streaming text-to-speech, where the ROS node receives commands via a topic and executes them. Unfortunately, this means the node does not return feedback to the web app. Several UX improvements could be made if the node returned feedback:
Changing "Play" to "Queue" while to robot is speaking, to make it clear that the text will not be played immediately.
Only showing the "Stop" button while the robot is speaking.
Allowing the operator to see the queue of utterances and where the robot currently is.
For all three, the most straightforward approach is to change the ROS interface from a topic to an action, so the web app knows when the robot has finished speaking an utterance. The action interface makes queueing utterances slightly less straightforward, but that should be doable. Knowing when each utterance finishes covers the first two points; using that information, the web app can also maintain its own queue (which should mirror the ROS node's queue) and display it to the operator, covering the third point. A rough sketch of the action interface is below.
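As a minimal sketch (not the actual interface from #64), the action-based node could look like the following. The `TTS.action` definition, its fields, and the `stretch_web_teleop_interfaces` package name are hypothetical placeholders, and the real node would still need its existing queueing behavior:

```python
# Hypothetical TTS.action (placeholder, not an existing interface):
#   string text        # goal: the utterance to speak
#   ---
#   bool success       # result: whether the utterance finished playing
#   ---
#   float32 progress   # feedback: fraction of the utterance spoken so far

import rclpy
from rclpy.action import ActionServer
from rclpy.node import Node

# Hypothetical package name; adjust to wherever the action is defined.
from stretch_web_teleop_interfaces.action import TTS


class TTSActionNode(Node):
    """Sketch of the TTS node exposing an action instead of a topic."""

    def __init__(self):
        super().__init__("tts_action_node")
        self._server = ActionServer(self, TTS, "text_to_speech", self.execute_callback)

    def execute_callback(self, goal_handle):
        text = goal_handle.request.text
        # ... synthesize and play `text` here, publishing feedback as it streams ...
        feedback = TTS.Feedback()
        feedback.progress = 1.0
        goal_handle.publish_feedback(feedback)

        # Marking the goal succeeded resolves the web app's goal handle,
        # which is how it learns the utterance is done speaking.
        goal_handle.succeed()
        result = TTS.Result()
        result.success = True
        return result


def main():
    rclpy.init()
    rclpy.spin(TTSActionNode())


if __name__ == "__main__":
    main()
```

With this shape, the web app would send one goal per utterance, keep a local queue mirroring the node's, and update the Play/Queue/Stop buttons as each goal's feedback and result arrive.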
hello-amal changed the title from "Display the TTS queue" to "TTS: Adaptivity based on whether the robot is speaking" on Jul 12, 2024.