yolo's Pose/keypoint detection #8
That would be great to implement for mobile devices. Sorry, but right now my priorities lie elsewhere; I will certainly put time into maintaining this repository as soon as I get some.
> On Thu, 26 Sep 2024, 8:10 AM, 1369355119 wrote: Assigned #8 to @surendramaran (https://github.com/surendramaran).
Hello, I'm glad to get your reply. So far I have developed a keypoint/pose version on my own, but I'm not sure it is correct, especially my handling of the model output. My pose model's output is [1, 11, 8400]; it was trained on a single class with two keypoints. When analyzing it, I made my own assumption about what the 11 channels represent, but I don't know whether that is really true. How should I parse the model output? Is there a standard method for decoding it accurately?
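For reference, here is a minimal sketch of one plausible decoding of a [1, 11, 8400] pose output, assuming the standard Ultralytics YOLOv8-pose channel layout (4 box values + 1 class score + 2 keypoints × 3 values = 11 channels, channels-first across 8400 anchors). The layout should be verified against the code that exported the model; the threshold value and the synthetic sample below are illustrative, and a real pipeline would also apply NMS afterward.

```python
# Hypothetical decoding of a YOLOv8-pose output of shape [1, 11, 8400],
# assuming channels = [cx, cy, w, h, class_score,
#                      kpt1_x, kpt1_y, kpt1_conf,
#                      kpt2_x, kpt2_y, kpt2_conf].

CONF_THRESHOLD = 0.5  # illustrative value, tune for your model

def decode_pose_output(output, conf_threshold=CONF_THRESHOLD):
    """output: 11 lists (channels-first), each of length num_anchors."""
    num_anchors = len(output[0])
    detections = []
    for i in range(num_anchors):
        score = output[4][i]            # single-class confidence
        if score < conf_threshold:
            continue
        cx, cy, w, h = (output[c][i] for c in range(4))
        keypoints = []
        for k in range(2):              # two keypoints, 3 channels each
            base = 5 + k * 3
            keypoints.append((output[base][i],       # x
                              output[base + 1][i],   # y
                              output[base + 2][i]))  # visibility/confidence
        detections.append({"box": (cx, cy, w, h),
                           "score": score,
                           "keypoints": keypoints})
    return detections

# Tiny synthetic example with 2 anchors; only the first passes the threshold.
sample = [[50, 10], [60, 20], [30, 30], [40, 40],   # cx, cy, w, h
          [0.9, 0.2],                                # class score
          [55, 0], [65, 0], [0.8, 0],                # keypoint 1 (x, y, conf)
          [45, 0], [70, 0], [0.7, 0]]                # keypoint 2 (x, y, conf)
dets = decode_pose_output(sample)
```

After decoding, surviving boxes would normally be filtered with non-max suppression and scaled back from the network input size to the original image size.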
What should I do if I want to use YOLO's pose/keypoint detection? Thank you.