ST multi-zone Time-of-Flight sensors hand posture recognition STM32 model zoo

This tutorial shows how to evaluate your pre-trained hand posture model on an STM32 board using STM32Cube.AI.


Before you start

Please check out the STM32 model zoo for hand posture recognition.

1. Hardware setup

The Getting Started application runs on a NUCLEO STM32 microcontroller board connected to an ST multi-zone Time-of-Flight sensor daughter board. This version supports the following hardware only:

  • NUCLEO-F401RE Nucleo board
  • P-NUCLEO-53L5A1 multi-zone Time-of-Flight daughter board

2. Software requirements

You need to download and install the following software:

  • STM32CubeIDE
  • If using STM32Cube.AI locally, open the link, download the package, and extract both the '.zip' and '.pack' files here.

3. Specifications

  • serie : STM32F4
  • IDE : GCC

Deploy pretrained model on STM32 board

1. Configure the YAML file

You can run a demo using a pretrained model from the STM32 model zoo. Please refer to the YAML file provided alongside the pretrained model to fill in the following sections of user_config.yaml (namely Dataset Configuration and Load model).

As an example, we will show how to deploy the model CNN2D_ST_HandPosture_8classes.h5, pretrained on the ST_VL53L8CX_handposture_dataset, using the parameters provided in CNN2D_ST_HandPosture_8classes_config.yaml.

1.1. General settings:

Configure the general section in user_config.yaml as follows:

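A minimal sketch of this section (the project name below is illustrative; your copy of user_config.yaml may contain additional keys):

```yaml
general:
  project_name: hand_posture_recognition  # String, name of the project
```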

where:

  • project_name - String, name of the project.

1.2. Dataset configuration:

You need to specify some parameters related to the dataset and to the preprocessing of the data in user_config.yaml; they will be parsed into a header file used by the C application.

1.2.1. Dataset info:

Configure the dataset section in user_config.yaml as follows:

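A sketch of this section, assuming the eight-class dataset of this example (the class list and test path are illustrative; copy the exact values from the YAML file shipped with the pretrained model):

```yaml
dataset:
  name: ST_VL53L8CX_handposture_dataset
  # Illustrative class list; it must match the classes the model was trained on
  class_names: [None, FlatHand, Like, Dislike, Fist, Love, BreakTime, CrossHands]
  # Hypothetical path; leave empty if you do not want to evaluate the accuracy
  test_path: ./datasets/ST_VL53L8CX_handposture_dataset/test
```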

where:

  • name - Dataset name
  • class_names - A list containing the class names.
  • test_path - Path to the test set; it must be provided to evaluate the model accuracy, otherwise keep it empty. To create your own test set, please follow these steps.

1.2.2. Preprocessing info:

To run inference in the C application, we need to apply to the input data the same preprocessing that was used when training the model.

To do so, you need to specify the preprocessing configuration in user_config.yaml as follows:

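A sketch of this section (the distance values below are illustrative; use the values from the YAML file provided with the pretrained model, since they must match the training-time preprocessing):

```yaml
preprocessing:
  Max_distance: 400         # mm; frames with the hand further away are dropped
  Min_distance: 100         # mm; frames with the hand closer are dropped
  Background_distance: 120  # mm; zones further behind the hand than this gap are removed
```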

where:

  • Max_distance - Integer, in mm, the maximum distance of the hand from the sensor allowed for this application. If the distance is higher, the frame is filtered/removed from the dataset.
  • Min_distance - Integer, in mm, the minimum distance of the hand from the sensor allowed for this application. If the distance is lower, the frame is filtered/removed from the dataset.
  • Background_distance - Integer, in mm, the gap behind the hand; all zones beyond this gap are removed.

1.3. Load model:

You can run a demo using a pretrained model provided in STM32 model zoo for hand posture recognition. These models were trained on specific datasets (e.g. ST_VL53L5CX_handposture_dataset or ST_VL53L8CX_handposture_dataset).

Alternatively, you can directly deploy your own pretrained model.

To do so, you need to configure the model section in user_config.yaml as follows:

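A sketch of this section for the example model (the model path is hypothetical; point it at wherever your .h5 or SavedModel file lives):

```yaml
model:
  model_type: {name: CNN2D_ST_HandPosture, version: v1}  # use {name: custom} for a custom model
  input_shape: [8, 8, 2]                                 # [H, W, C] input resolution
  model_path: ./CNN2D_ST_HandPosture_8classes.h5         # hypothetical path to the pretrained model
```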

where:

  • model_type - A dictionary with keys describing the model topology (see more). For example, for CNN2D_ST_HandPosture use {name : CNN2D_ST_HandPosture, version : v1}; for a custom model, use {name : custom}.
  • input_shape - A list of int [H, W, C] for the input resolution, e.g. [8, 8, 2].
  • model_path - Path to your model; the model can be in .h5 or SavedModel format.

1.4. C project configuration:

To deploy the model on the NUCLEO-F401RE board, we will use STM32Cube.AI to convert the model into optimized C code and STM32CubeIDE to build the C application and flash the board.

These steps are done automatically by configuring the stm32ai section in user_config.yaml as follows:

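A sketch of this section (all paths below are hypothetical placeholders; adapt them to the locations of the Getting Started project and your local STM32Cube.AI and STM32CubeIDE installs):

```yaml
stm32ai:
  c_project_path: ../getting_started  # hypothetical path to the Getting Started project
  serie: STM32F4
  IDE: GCC
  verbosity: 1
  version: 7.3.0
  optimization: balanced
  footprints_on_target: False
  path_to_stm32ai: C:/stm32ai-7.3.0/windows/stm32ai.exe  # hypothetical local install
  path_to_cubeIDE: C:/ST/STM32CubeIDE/stm32cubeide.exe   # hypothetical local install
```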

where:

  • c_project_path - Path to the Getting Started project.
  • serie - STM32F4, the only supported option for the Getting Started.
  • IDE - GCC, the only supported option for the Getting Started.
  • verbosity - 0 or 1. Mode 0 is silent; mode 1 displays messages when building and flashing the C application on the STM32 target.
  • version - Specify the STM32Cube.AI version used to benchmark the model, e.g. 7.3.0.
  • optimization - String, defines the optimization used to generate the C model; options: "balanced", "time", "ram".
  • footprints_on_target - (Not used for this example.) Set to a board name, e.g. 'STM32H747I-DISCO', to use the Developer Cloud Services to benchmark the model and generate the C code; otherwise keep False (i.e. only the local copy of STM32Cube.AI is used to get the model footprints and C code, without inference time).
  • path_to_stm32ai - Path to the stm32ai executable file when using a local copy, else False.
  • path_to_cubeIDE - Path to the stm32cubeide executable file.

2. Run deployment:

First, connect the ST multi-zone Time-of-Flight daughter board P-NUCLEO-53L5A1 to the NUCLEO-F401RE board, then connect the Nucleo board to your computer using a USB cable.

The picture below shows the complete setup:

[Figure: complete hardware setup]

Then, run the following command to build and flash the application on your board:

python deploy.py

3. Run the application in the Gesture EVK GUI

When the application is running on the NUCLEO-F401RE Nucleo board, it can be tested with the ST user interface STSW-IMG035_EVK (Gesture EVK). This tool can be downloaded from ST.com.

The implementation and the dataset assume the following sensor orientation:

[Figure: sensor orientation]

There are two ways to visualize the Hand Posture model outputs: the dedicated widget in the Gesture EVK GUI (described below) or a serial terminal (see the next section).

Below are the steps to open the dedicated Hand Posture widget and visualize the output of your application.

[Figure: steps to open the Hand Posture widget in the Gesture EVK GUI]

A dedicated user manual is available within the STSW-IMG035_EVK (Gesture EVK) software package.

4. Run the application in a serial terminal

Once programmed, the board can be connected through a serial terminal, where the output of the inference can be seen. To connect to the serial port, please follow the steps shown in the figure below:

[Figure: serial port connection steps]

To run the application, type the command "enable" in the terminal and press Enter.

[Figure: sending the "enable" command in the serial terminal]

The Tera Term terminal shows the output of one inference from the live data:

[Figure: inference output in the Tera Term terminal]