I am using this hand tracking samples project (actually a version of it forked for the Intel RealSense D400) for my Bachelor's thesis, but I have encountered a problem.
My idea was to build a hand-gesture recognition system that could tell which gesture it is receiving as input. I was hoping the output could provide the label of the gesture's name (or at least the name of the dataset it belongs to), just as in the "dsamples" project, but using the hand-tracking system instead.
However, from what I have seen of your hand-tracking project, the output of the classification layer is a series of values describing mainly finger angles and hand orientation.
Is there a way the system could be trained so that the output of the classification stage provides the label of the gesture dataset (as if we wanted to find the gesture category each input belongs to)? Maybe there is something I am missing and there is actually a way of doing it.
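For concreteness, what I have in mind is something like replacing the pose-regression head with a softmax layer over the gesture classes, trained with one-hot labels. A minimal self-contained sketch of that idea (this is not the API of the project's cnn.h; all names here are hypothetical):

```cpp
// Sketch: softmax classifier mapping a feature vector (e.g. a flattened depth
// crop, or the pose parameters the existing CNN already outputs) to a
// probability distribution over N gesture classes.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct SoftmaxClassifier {
    int in_dim, num_classes;
    std::vector<float> W, b;  // weights (num_classes x in_dim) and biases

    SoftmaxClassifier(int in, int classes)
        : in_dim(in), num_classes(classes), W(classes * in, 0.01f), b(classes, 0.0f) {}

    // Forward pass: logits -> softmax probabilities.
    std::vector<float> forward(const std::vector<float>& x) const {
        std::vector<float> p(num_classes);
        for (int c = 0; c < num_classes; ++c) {
            float z = b[c];
            for (int i = 0; i < in_dim; ++i) z += W[c * in_dim + i] * x[i];
            p[c] = z;
        }
        float zmax = *std::max_element(p.begin(), p.end());
        float sum = 0;
        for (float& v : p) { v = std::exp(v - zmax); sum += v; }
        for (float& v : p) v /= sum;
        return p;
    }

    // One SGD step on cross-entropy loss; `label` is the gesture-class index.
    void train_step(const std::vector<float>& x, int label, float lr) {
        std::vector<float> p = forward(x);
        for (int c = 0; c < num_classes; ++c) {
            float grad = p[c] - (c == label ? 1.0f : 0.0f);  // dL/dlogit_c
            for (int i = 0; i < in_dim; ++i) W[c * in_dim + i] -= lr * grad * x[i];
            b[c] -= lr * grad;
        }
    }

    // Predicted class = argmax of the softmax output.
    int predict(const std::vector<float>& x) const {
        std::vector<float> p = forward(x);
        return int(std::max_element(p.begin(), p.end()) - p.begin());
    }
};

int main() {
    SoftmaxClassifier clf(4, 3);  // 4 input features, 3 gesture classes
    std::vector<float> sample = {0.2f, 0.8f, 0.1f, 0.5f};
    for (int step = 0; step < 100; ++step) clf.train_step(sample, 1, 0.1f);
    printf("predicted class: %d\n", clf.predict(sample));  // prints 1
}
```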
I have already generated several datasets of different hand poses using realtime-annotator.cpp, and I have also tried training the CNN on those datasets simultaneously (with train-cnn.cpp); however, I have not yet found a way to extract those dataset labels from the depth-image input of a hand gesture.
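The labelling I am after would be something like the following hypothetical loading loop, where the class label is simply the index of the dataset a frame came from, rather than anything extracted from the depth image itself (load_depth_frames is a stand-in for the project's actual loading code in train-cnn.cpp):

```cpp
// Hypothetical sketch: build (sample, label) pairs when loading several
// recorded datasets, so the dataset name doubles as the gesture class.
#include <string>
#include <utility>
#include <vector>

// Stub standing in for however the project loads frames from a recorded
// dataset; in practice this would reuse the loading code in train-cnn.cpp.
std::vector<std::vector<float>> load_depth_frames(const std::string& /*path*/) {
    return {};  // placeholder
}

int main() {
    // Example dataset names; one recorded dataset per gesture.
    std::vector<std::string> datasets = {"fist.rbag", "open_palm.rbag", "thumbs_up.rbag"};
    std::vector<std::pair<std::vector<float>, int>> training;  // (features, class label)

    for (int label = 0; label < (int)datasets.size(); ++label)
        for (auto& frame : load_depth_frames(datasets[label]))
            training.emplace_back(std::move(frame), label);
    // training can now drive SoftmaxClassifier::train_step from the sketch above.
}
```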
I would really appreciate any help on this topic; I am kind of stuck at this step and have been working on it for several months now.
Thanks in advance.