Running peoplenet with detectNet on jetPack6 #1882
I converted it to a .engine file and got the following error:

3: Cannot find binding of given name: input_0

I fixed that error by updating the input_0 and output bindings, but detection does not work: I get the warning below on every frame, and none of the labels are detected.
Hi @AkshatJain-TerraFirma, you may need to change this part of jetson-inference/c/detectNet.cpp (line 557 at commit e8361ae). It expects detection ONNX models to have been made with the pytorch-ssd training scripts in this repo, whereas the TAO PeopleNet models normally fall under the path in jetson-inference/c/detectNet.cpp (line 583 at e8361ae). So you may need to change that if you are using a different ONNX. For the TAO models, this script uses tao-converter to build the TRT engine, which jetson-inference can then load (but, as mentioned in the other issue, I have not tried that on JP6).
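To make the point above concrete: the loader effectively picks a pre/post-processing path based on the model's binding names. Below is an illustrative pure-Python sketch of that kind of dispatch, not the actual detectNet.cpp logic; the TAO binding names used here are assumptions for illustration.

```python
# Illustrative sketch (not jetson-inference code): picking a post-processing
# path from a model's input/output binding names. The exact names checked in
# detectNet.cpp may differ; treat the TAO names as placeholder assumptions.

def guess_model_type(input_names, output_names):
    """Return a label for the post-processing path a loader might choose."""
    outputs = set(output_names)
    # pytorch-ssd ONNX exports from the jetson-inference training scripts
    # use "input_0" with "scores"/"boxes" outputs.
    if "input_0" in input_names and {"scores", "boxes"} <= outputs:
        return "ssd-onnx"
    # TAO DetectNet_v2 models (e.g. PeopleNet) expose coverage/bbox heads
    # (names here are assumptions, for illustration only).
    if any("cov" in n for n in outputs) and any("bbox" in n for n in outputs):
        return "tao-detectnet_v2"
    return "unknown"

print(guess_model_type(["input_0"], ["scores", "boxes"]))          # ssd-onnx
print(guess_model_type(["input_1:0"],
                       ["output_cov/Sigmoid:0",
                        "output_bbox/BiasAdd:0"]))                 # tao-detectnet_v2
```

If the bindings of your ONNX match neither pattern, neither code path will decode its outputs correctly, which would explain detections silently failing even after the binding names load.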
Hello @dusty-nv
I downloaded PeopleNet directly from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet. These are the contents of the downloaded folder:
labels.txt nvinfer_config.txt resnet34_peoplenet_int8.txt resnet34_peoplenet.onnx status.json
When I run the following script:

import jetson_inference

net = jetson_inference.detectNet(model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx",
                                 labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
                                 input_blob="input_0", output_cvg="scores", output_bbox="boxes",
                                 threshold=0.8)

I get the following error:
[TRT] 4: [network.cpp::validate::3162] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load /home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx
[TRT] detectNet -- failed to initialize.
Traceback (most recent call last):
File "/home/akshat/terrafirma/v2/operator_station/vehicle_control/detect.py", line 12, in
net = jetson_inference.detectNet(model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx", labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
Exception: jetson.inference -- detectNet failed to load network
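The TensorRT error above means the ONNX declares at least one dynamic dimension (typically batch size, exported as -1), and building an engine from it requires an optimization profile giving min/opt/max shapes for each dynamic input. As a sketch of the idea only: the helper below is hypothetical (not part of TensorRT or jetson-inference), and the 3x544x960 PeopleNet input dimensions are an assumption; in the real TensorRT Python API, the resulting tuples would be passed to an optimization profile via its set_shape() call.

```python
# Hypothetical helper: given an input shape where dynamic axes are marked
# as -1 or None, produce the (min, opt, max) shapes an optimization profile
# needs. Shown as plain data only -- no TensorRT calls are made here.

def profile_shapes(shape, min_batch=1, opt_batch=1, max_batch=8):
    """Resolve dynamic axes into fixed min/opt/max shapes."""
    def fix(batch):
        # Substitute the chosen batch value for any dynamic dimension.
        return tuple(batch if d in (-1, None) else d for d in shape)
    return fix(min_batch), fix(opt_batch), fix(max_batch)

# PeopleNet-like input: dynamic batch, 3 channels, 544x960 (dims assumed).
mn, opt, mx = profile_shapes((-1, 3, 544, 960))
print(mn, opt, mx)  # (1, 3, 544, 960) (1, 3, 544, 960) (8, 3, 544, 960)
```

Tools like trtexec or tao-converter set such a profile for you when building the engine, which is why the tao-converter route dusty-nv mentioned sidesteps this error.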
Is this an issue with the parameters passed into the detectNet method, or does the model need to be optimized into a .engine format first? Do I have to run tao-converter on the .onnx file manually?
(I am a complete beginner, so sorry if these questions are silly.)