Cannot detect custom object using inference.py #353

Open
patsyuk03 opened this issue Apr 15, 2024 · 3 comments

@patsyuk03
Hello.
I have generated data using this command:
python single_video_pybullet.py --nb_frames 10000 --scale 0.001 --path_single_obj ~/Deep_Object_Pose/scripts/nvisii_data_gen/models/Gear/google_16k/gear.obj --nb_distractors 0 --nb_object 10 --outf gear1/

Then I trained the model for 60 epochs on 9,800 of the generated images:
python -m torch.distributed.launch --nproc_per_node=1 train.py --network dope --epochs 60 --batchsize 2 --outf tmp_gear1/ --data ../nvisii_data_gen/output/gear1/

[two screenshots attached]

When I run inference on the remaining 200 generated images, the belief maps look good, but no objects are detected.

Here is the inference config:


topic_camera: "/dope/webcam/image_raw"
topic_camera_info: "/dope/webcam/camera_info"
topic_publishing: "dope"
input_is_rectified: True   # Whether the input image is rectified (strongly suggested!)
downscale_height: 400      # if the input image is larger than this, scale it down to this pixel height

# Comment any of these lines to prevent detection / pose estimation of that object
weights: {
    #'obj':"tmp/net_epoch_99.pth"
    'obj':"tmp_gear1/net_epoch_60.pth"
}

# Type of neural network architecture
architectures: {
    'obj':"dope",
}

# Cuboid dimension in cm x,y,z
dimensions: {
    'obj':[11.9541015625, 3.00, 11.869906616210938]
}

class_ids: {
    "obj": 1
}

draw_colors: {
    "obj": [13, 255, 128],  # green
}

# optional: provide a transform that is applied to the pose returned by DOPE
model_transforms: {
#    "cracker": [[ 0,  0,  1,  0],
#                [ 0, -1,  0,  0],
#                [ 1,  0,  0,  0],
#                [ 0,  0,  0,  1]]
}

# optional: if you provide a mesh of the object here, a mesh marker will be
# published for visualization in RViz
# You can use the nvdu_ycb tool to download the meshes: https://github.com/NVIDIA/Dataset_Utilities#nvdu_ycb
meshes: {
#    "cracker": "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/003_cracker_box/google_16k/textured.obj",
#    "gelatin": "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/009_gelatin_box/google_16k/textured.obj",
#    "meat":    "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/010_potted_meat_can/google_16k/textured.obj",
#    "mustard": "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/006_mustard_bottle/google_16k/textured.obj",
#    "soup":    "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/005_tomato_soup_can/google_16k/textured.obj",
#    "sugar":   "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/004_sugar_box/google_16k/textured.obj",
#    "bleach":  "file://path/to/Dataset_Utilities/nvdu/data/ycb/aligned_cm/021_bleach_cleanser/google_16k/textured.obj",
}

# optional: If the specified meshes are not in meters, provide a scale here (e.g. if the mesh is in centimeters, scale should be 0.01). default scale: 1.0.
mesh_scales: {
    "obj": 0.01
}

# Config params for DOPE
thresh_angle: 0.5
thresh_map: 0.0001
sigma: 3
thresh_points: 0.1

Is there anything I can do to fix this?

@TontonTremblay
Collaborator

Wow, this is such a beautiful example of training on a symmetrical object. You can fix this by retraining, using the script to annotate the symmetries:

https://github.com/NVlabs/Deep_Object_Pose/tree/master/data_generation#handling-objects-with-symmetries
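
The linked readme explains the annotation format in detail. As a hedged sketch of the idea (the model_info.json file name and the symmetries_discrete key here follow the BOP convention; verify against the readme the exact format DOPE's data generation expects), a gear whose teeth repeat every 60° has five non-identity rotations that leave it unchanged, and these can be written out as flattened 4x4 transforms:

```python
# Hedged sketch: emit a BOP-style model_info.json declaring six-fold
# discrete rotational symmetry about z (identity omitted, per BOP).
# Field names are assumptions from the BOP convention; check the readme
# linked above for the exact format DOPE's data generation expects.
import json
import numpy as np

transforms = []
for k in range(1, 6):                                 # 60, 120, ..., 300 degrees
    theta = k * np.pi / 3.0
    T = np.eye(4)
    T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]]
    transforms.append(T.flatten().tolist())           # row-major 4x4

with open("model_info.json", "w") as f:
    json.dump({"symmetries_discrete": transforms}, f, indent=2)
```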

@patsyuk03
Author

Thank you for the quick answer.

As I understand it, this is similar to your example of the hex screw with rotational symmetry. However, because of the hexagon in the center, my object is not fully rotationally symmetric, though I can see a centerline across which it can be mirrored.

What would be the right way to define the symmetry in this case? Would the model be able to distinguish such small offsets of the hexagon corners, or is the only option to ignore them and define the object as rotationally symmetric?

@TontonTremblay
Collaborator

There is an axis for each hexagon corner.
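
One hedged way to make that concrete (an interpretation, not confirmed in the thread): if the gear looks identical from both faces, then besides the rotations about its main axis it also maps onto itself under a 180° flip about the in-plane axis through each hexagon corner. A minimal numpy sketch of those flips, assuming the main axis is z and one corner lies along +x:

```python
# Hedged sketch (assumptions: main symmetry axis = z, one hexagon corner
# along +x, and the gear looks identical from both faces). Generates the
# 180-degree flip about the in-plane axis through each pair of opposite
# corners; opposite corners share an axis, so there are three distinct flips.
import numpy as np

def rot_about(axis, theta):
    """Rodrigues' formula: 3x3 rotation by theta about a unit axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

flips = []
for k in range(3):                               # corner axes at 0, 60, 120 degrees
    phi = k * np.pi / 3.0
    axis = [np.cos(phi), np.sin(phi), 0.0]       # in-plane axis through two opposite corners
    T = np.eye(4)
    T[:3, :3] = rot_about(axis, np.pi)           # 180-degree flip
    flips.append(T.flatten().tolist())
```

If the two-sided assumption actually holds for the mesh, these flattened transforms would be appended to the same symmetries_discrete list as the rotations sketched above.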
