
NVBG (Nonverbal behavior generator)

sagatake edited this page Aug 1, 2024 · 4 revisions

What is NVBG?

NVBG (Non Verbal Behavior Generator) has been merged into GRETA to allow the latter to perform gestures computed by NVBG. NVBG works with a piece of software named SmartBody to perform the gestures it finds.

Many of these gestures have also been recreated in GRETA, and a mapping file maps each NVBG gesture to its GRETA counterpart.

NVBG processing is located in two different places:

  • Planner
  • BML File Reader

In this way, GRETA is able to add the gestures found by NVBG to those found by reading the FML or BML. For each FML or BML input, NVBG generates a new BML file containing only the gestures it found itself, which makes it easy to inspect what it has done.

NVBG only needs the speech to perform its processing. The speech is available as a SpeechSignal, which can be found in the list of Signals present in both the Planner and the BML File Reader.

The speech is extracted from the SpeechSignal and then reformatted into a vrExpress message so that NVBG can understand it.

How to use

Currently, we only support English and French.

  1. Prepare an NVBG-compatible FML file.
  • Each FML file accepts only a single sentence at a time.
  • Time marker tags (e.g. `<tm id="tm1"/>`) must be inserted before, after, and between each word, starting from tm1 (not tm0).
  • A boundary tag covering the entire range of time markers is required (e.g. if you have 10 words and 11 time markers, the boundary should span from tm1 to tm11).
  • These time marker tags, the boundary tag, and the speech text should be inside the speech section (between `<speech>` and `</speech>`), which in turn should be inside the BML section (surrounded by `<bml>` and `</bml>` tags).
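The constraints above can be sketched as a minimal FML-APML file. This is an illustrative sketch only: the sentence, ids, and attribute values (e.g. `voice`, `language`, the `boundary` attributes) are assumptions modeled on typical Greta example files and may differ in your distribution.

```xml
<?xml version="1.0" encoding="utf-8"?>
<fml-apml>
  <bml>
    <!-- A single sentence; three words, so four time markers tm1..tm4
         (before, between, and after each word, starting from tm1). -->
    <speech id="s1" start="0.0" language="english" voice="marytts"
            text="Hello nice world">
      <tm id="tm1"/>Hello<tm id="tm2"/>nice<tm id="tm3"/>world<tm id="tm4"/>
      <!-- Boundary tag covering the entire span of time markers;
           attribute names here are assumptions. -->
      <boundary id="b1" type="LL" start="s1:tm1" end="s1:tm4"/>
    </speech>
  </bml>
  <fml>
    <!-- Intentions may be left empty; NVBG computes gestures from the speech. -->
  </fml>
</fml-apml>
```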
  2. Launch the Modular application and load "bin\Configurations\Greta\Greta - NVBG_MM_test_configuration.xml".

  3. In the CharacterManager module, make sure you selected either "EN" (English) or "FR" (French), as desired.

  4. In the CharacterManager module, make sure that your currently selected character has the corresponding language parameter ("CEROPROC_LANG" if you use CereProc TTS, or "MARYTTS_LANG" for MaryTTS). For the English version it should be either "en-US" or "en-UK"; for the French version, "fr-FR".

  5. In the NVBG_MM_Controller module, turn on NVBG by checking its box. If you want to launch both NVBG and MeaningMiner, you can turn on both of them together.

  6. In the FML File Reader Meaning Miner module, open the XML file you prepared for NVBG and send it.

  7. After clicking the send button, the NVBG application and the parseIt application (Charniak parser with ActiveMQ) will be launched. The first submitted request might not work properly because the server applications might not be ready yet, depending on your machine; it should work from your second attempt (click the send button again).

Note

  • If you want to extend NVBG to a language without space-separated words, such as Japanese or Chinese, you will need to consider how to replace the Charniak-style parser for your language. You will also need to check the NVBG source code in C# and Java to replace the word separation with an appropriate tokenizer. This is likely to be a heavy task.

  • Gestures are selected and generated randomly based on gesture generation rules defined in the following setup files.

NVBG init file

  • Specifies the NVBG setup files.
  • Path: bin\NVBG\data\nvbg-toolkit\Brad.ini (English version) or bin\NVBG\data\nvbg-toolkit\Brad_FR.ini (French version)

NVBG rule file

  • Specifies keywords and the corresponding gesture (or posture) names.
  • Path: bin\NVBG\data\nvbg-toolkit\rule_input_ChrBrad.xml

NVBG-Greta gesture name mapping file

  • Maps gesture names between NVBG and Greta.
  • Path: bin\mapping_file.txt

If you want to modify NVBG further, you might also need to modify the following setup files, whose functionality is not well documented.

  • Path: bin\NVBG\data\nvbg-toolkit\NVBG_transform.xml
  • Path: bin\NVBG\data\nvbg-toolkit\saliency_map_init_brad.xml

vrExpress Message

A vrExpress message is the format NVBG uses to communicate between its components. You do not need to specify the intentions (the intention tags can be omitted); they are computed automatically. It is, however, mandatory to define the character that will take charge of processing the message (harmony in the screenshot).

Two characters are generated by default by NVBG: Brad and Rachel.

Launch NVBG

From the Planner or the BML File Reader, GRETA launches NVBG and the Charniak Parser.

The run-toolkit-allC#.bat script launches NVBG with several arguments:

  • The characters that will process the speech
  • A rule-input file for each character (it defines how the character will treat the speech)
  • NVBG-Rules.xls: defines the structure of the response
  • NVBG-Transform.xls: used by NVBG-Rules; defines the structure of the response
  • NVBG-BehaviorDescription.xls: used by NVBG-Rules; defines the structure of the response

The parser is launched just after NVBG; it parses the speech and, once parsed, sends it back to NVBG.

Find NVBG Gesture and Conversion to GRETA Gesture

From the Planner or the BML File Reader, GRETA launches NVBG and the Charniak Parser and sends the speech to NVBG as a vrExpress message. NVBG then sends back the encoded result of its processing, containing tags to be decoded; those elements contain the gestures that NVBG found for SmartBody.

We now need to convert the "animation" tags into the tag understood by GRETA, to add the type of the gesture (e.g. beat), and to change the gesture name using the mapping file (e.g. NVBGBeatLow -> GRETABeatLow).

The mapping file contains the NVBG gestures, the GRETA gestures, and the type of the latter. Each line contains an NVBG gesture and its GRETA counterpart. The type is stored in another mapping file, which maps the type of each NVBG gesture to its GRETA gesture.

The elements of the mapping file are separated by "::" to make the file easy to parse (e.g. NVBGBeatLow::GRETABeatLow). The same is done for the type.
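The "::"-separated format described above takes only a few lines to parse. The following is a minimal sketch, not GRETA's actual code: the class and method names are hypothetical, and it assumes one `NVBGName::GRETAName` pair per line.

```java
import java.util.HashMap;
import java.util.Map;

public class GestureMapping {

    // Parses lines of the form "NVBGName::GRETAName" (hypothetical helper,
    // not part of GRETA) into an NVBG-to-GRETA lookup table.
    static Map<String, String> parse(Iterable<String> lines) {
        Map<String, String> map = new HashMap<>();
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) {
                continue; // skip blank lines
            }
            // Split on the "::" separator; limit 2 keeps any extra "::" in the value.
            String[] parts = trimmed.split("::", 2);
            if (parts.length == 2) {
                map.put(parts[0], parts[1]);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> map = parse(java.util.Arrays.asList(
                "NVBGBeatLow::GRETABeatLow"));
        // Look up the GRETA name for an NVBG gesture.
        System.out.println(map.get("NVBGBeatLow")); // prints GRETABeatLow
    }
}
```

The same routine can be reused for the type mapping file, since it follows the same "::" convention.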

Finally, we delete from the gesture line some information that is not useful (importance) and add the start and end of the gesture.

At the end of this processing we have the line containing the gesture. We then add the "context" around the gesture: we locally recreate a BML from the speech, containing the gestures found by NVBG.

The last step is to convert this BML into Signals (essentially GestureSignals) and add them to the existing signals; GRETA's BehaviorRealizer then processes them.
