Training Own Neural Network #3
Hi @SmellingSalt,

Once you have the TensorRT engine file, you need to make sure it gets included in the container image. That means changing this line specifically: Line 115 in 4992052.

If the filename of your engine file is different from ours (most likely), change this line in the config file to point to it: Line 105 in 4992052.

And finally, since your object labels differ from ours, you'll need to adapt these names in the code (from mask/no_mask to glasses/no_glasses, I guess 😄): maskcam/maskcam/maskcam_inference.py, Line 60 in 4992052.

This documentation can also guide you through the process; in particular, some of the steps I just mentioned above are covered there as well. Good luck hacking the code, and let us know if you manage to adapt the system!
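Putting those three changes together: once the new engine file and label file are in place, the relevant entries of the inference config would look roughly like this (a sketch assuming DeepStream-style `nvinfer` keys, which the thread's `labelfile-path` and `num-detected-classes` references suggest; the file names `yolov4_glasses.trt` and `obj.names` are hypothetical placeholders):

```ini
# Sketch of the relevant maskcam_config.txt entries (names are placeholders)
model-engine-file=yolo/yolov4_glasses.trt
labelfile-path=yolo/data/obj.names
num-detected-classes=2
```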
Thank you for your response! I had been editing Line 105 in 4992052 to point to the newly trained neural network and run things. I will take your suggested approach from now on to make it more streamlined. Thank you! I still have issues regarding training and interfacing with the Docker image you provide.

### What I did

As my intention was first to get the process working, without caring about performance, I wanted to understand the procedure to train and deploy a model. By modifying Line 110 in 4992052 to point to the newly created resnet18 file, and commenting out line 105, I assumed I could get things to work. Unfortunately I faced errors and could not get my custom resnet18 model to run. I assume these errors are due to the fact that only 2 classes (mask/no mask) are trained here, while you use 3 classes (mask/no mask/not visible); therefore the `.trt` model has an incompatible shape.
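Such shape errors are consistent with how a YOLO-style detection head is sized: each detection scale outputs `anchors_per_scale × (5 + num_classes)` channels, so a 2-class and a 3-class model produce tensors of different shapes and cannot share an engine. A quick arithmetic sketch (plain Python; the 3-anchors-per-scale figure assumes the standard YOLOv4 configuration used by MaskCam):

```python
def yolo_head_channels(num_classes: int, anchors_per_scale: int = 3) -> int:
    """Output channels per YOLO detection scale:
    each anchor predicts 4 box coords + 1 objectness score + per-class scores."""
    return anchors_per_scale * (5 + num_classes)

# 2-class model (e.g. glasses / no_glasses) vs. 3-class model (mask / no_mask / not_visible)
print(yolo_head_channels(2))  # 21
print(yolo_head_channels(3))  # 24  -> different tensor shape, incompatible engine
```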
### Help I need

As suggested by you, I will look into training a YOLOv4 model and use https://github.com/Tianxiaomo/pytorch-YOLOv4#51-convert-from-onnx-of-static-batch-size to convert it to a `.trt` engine.

Thank you for your time!
Hi @SmellingSalt! I'm glad you're working on this.
Hello,

I followed the procedure outlined here and managed to generate a […] file. On searching the internet for answers, I have come to the understanding that the […]. So, in order to get the engine file referenced at Line 105 in 4992052, what was the procedure that you followed? Or am I making a mistake somewhere else?

Also, as a side note, I came across this answer on Stack Exchange, which claims the following: […]

And from the official documentation of NVIDIA's TensorRT, I came across this line: […]

So I would like to know if you did the conversion from […].
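For context, TensorRT engine ("plan") files are generally not portable across GPU architectures or TensorRT versions, so a common approach is to export the ONNX model on the training machine and build the engine on the Jetson itself, e.g. with `trtexec`. A command sketch (all file names are hypothetical; the `--fp16` flag is an optional assumption):

```shell
# Run ON the Jetson: engines must be built on the target device,
# with the same TensorRT version that will run them.
/usr/src/tensorrt/bin/trtexec \
  --onnx=yolov4_glasses.onnx \
  --saveEngine=yolov4_glasses.trt \
  --fp16
```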
Hey there,
Hello,

The only issue left now is with the class labels. I get the following errors: […]

### Changes I have made

#### File `maskcam_inference.py`

I have changed

```python
LABEL_MASK = "mask"
LABEL_NO_MASK = "no_mask"  # YOLOv4: no_mask
LABEL_MISPLACED = "misplaced"
LABEL_NOT_VISIBLE = "not_visible"
```

to

```python
LABEL_MASK = "glasses"
LABEL_NO_MASK = "no_glasses"  # YOLOv4: no_mask
LABEL_MISPLACED = "misplaced"
LABEL_NOT_VISIBLE = "not_visible"
```

#### File `maskcam_config.txt`

**Change 1:** Line 117 in 4992052, changed to `num-detected-classes=2`.

**Change 2:** […] changed to […]

**Change 3:** `labelfile-path=yolo/data/obj.names`, to point to the new […] file.

#### File 3

[…]
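A small consistency check between `num-detected-classes` in the config and the number of entries in the label file can catch this class of error before running the pipeline. A plain-Python sketch (the file contents are inlined here as hypothetical examples of `maskcam_config.txt` and `obj.names`):

```python
def check_class_count(config_text: str, labels_text: str) -> bool:
    """Return True if num-detected-classes matches the number of label lines."""
    num = None
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("num-detected-classes="):
            num = int(line.split("=", 1)[1])
    labels = [ln for ln in labels_text.splitlines() if ln.strip()]
    return num == len(labels)

# Hypothetical stand-ins for the real files:
config = "num-detected-classes=2\n"
names = "glasses\nno_glasses\n"
print(check_class_count(config, names))  # True
```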
Hi @SmellingSalt and @DonBraulio,
Hey @SmellingSalt, I'm really sorry I missed your message in the noise. I hope you found the solution to your problem. If not, I think the thing you were probably missing was changing […].

@Raphenri09, you should start your container in Development Mode and then run […].

Of course, you'll need to change the file names and check the path to that executable, but I hope it helps!
Hello,

I am having trouble understanding the procedure for training my own detection model. I have the Jetson Nano 2GB and 4GB variants with me.

My objective is to detect whether a person is wearing sunglasses or not. To accomplish this, my main queries are as follows: […]

My workflow is exactly the same as MaskCam's, including remote deployment, web-server access, and the rest. I only need to change the object-detection mechanism; even the statistics it provides will be unchanged.

Thank you.