OpenCV calibration results for RIWMPBOT: Using -cpt flag doesn't change the results #965
I think I found the issue. Basically, the OpenCV calibration doesn't save the calibrated pattern's TF to the calibrated dataset, which means it doesn't matter whether the `-cpt` flag is used. |
Turns out this wasn't the issue; the OpenCV calibration function used only returns the base-to-camera TF (in the eye-to-hand case). I initially tried another OpenCV function which also returned the other TF, but it didn't work at the time, so we settled on `calibrateHandEye`. Either way, I think these results might warrant further discussion in our next meeting. |
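For context, OpenCV's two hand-eye routines differ in exactly this way; below is a minimal sketch of their outputs, per the OpenCV documentation (whether these are precisely the two functions tried here is an assumption on my part):

```python
import cv2

def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """cv2.calibrateHandEye solves AX = XB and returns a SINGLE transform
    (cam-to-gripper; in the eye-to-hand setup the gripper/base inputs are
    swapped, so the result is read as a base-to-camera TF). Inputs are
    per-collection lists of 3x3 rotations and 3x1 translations."""
    return cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)

def solve_robot_world_hand_eye(R_world2cam, t_world2cam,
                               R_base2gripper, t_base2gripper):
    """cv2.calibrateRobotWorldHandEye solves AX = ZB and returns BOTH
    unknowns: base-to-world AND gripper-to-camera, i.e. it also estimates
    the pattern ('world') transform that calibrateHandEye leaves out."""
    return cv2.calibrateRobotWorldHandEye(R_world2cam, t_world2cam,
                                          R_base2gripper, t_base2gripper,
                                          method=cv2.CALIB_ROBOT_WORLD_HAND_EYE_SHAH)
```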
The idea now is to create a script that copies the pattern TFs from an ATOM-calibrated dataset to a target dataset, so that when we compare the results from the evaluation procedures, it's a fair comparison. |
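A minimal sketch of that idea, assuming ATOM's dataset layout (per-collection `transforms` dicts holding `trans`/`quat` entries keyed as `parent-child`); the function name and exact fields here are assumptions, not the real script:

```python
import json

def copy_pattern_tfs(source_path, target_path, output_path, tf_key):
    # Load the ATOM-calibrated dataset (source) and the dataset to patch (target).
    with open(source_path) as f:
        source = json.load(f)
    with open(target_path) as f:
        target = json.load(f)

    # Overwrite the pattern TF in every target collection with the calibrated
    # one from the matching source collection,
    # e.g. tf_key = 'flange-charuco_200x200_8x8'.
    for cid, collection in target['collections'].items():
        collection['transforms'][tf_key] = \
            source['collections'][cid]['transforms'][tf_key]

    # Write the patched dataset to a new file.
    with open(output_path, 'w') as f:
        json.dump(target, f, indent=2)
```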
Hi @miguelriemoliveira! A base working script is done! An example to run:

rosrun atom_evaluation copy_tfs_from_dataset.py -sd $ATOM_DATASETS/riwmpbot_real/merged/atom_calibration.json -td $ATOM_DATASETS/riwmpbot_real/merged/dataset.json -pll flange -cll charuco_200x200_8x8

As we talked about today, before copying the tf, it checks whether that tf is static:
https://github.com/lardemua/atom/blob/a5f8128c3b21794bb64d4eefb0bad9a9891b9135/atom_evaluation/scripts/other_calibrations/copy_tfs_from_dataset.py#L63-L91
It then copies the tfs and outputs a new dataset! I'll clean up some things about the script and test getting the results we wanted using this later today. |
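The linked check isn't reproduced in this thread; here is a rough sketch of the idea, under the assumption that a TF counts as static when its value is identical in every collection:

```python
def tf_is_static(dataset, tf_key, tol=1e-9):
    """Return True if tf_key has the same translation and quaternion in all
    collections of an ATOM dataset (field names follow ATOM's JSON layout;
    this is a stand-in for the linked copy_tfs_from_dataset.py check)."""
    collections = list(dataset['collections'].values())
    first = collections[0]['transforms'][tf_key]
    for collection in collections[1:]:
        tf = collection['transforms'][tf_key]
        if any(abs(a - b) > tol for a, b in zip(tf['trans'], first['trans'])) \
                or any(abs(a - b) > tol for a, b in zip(tf['quat'], first['quat'])):
            return False
    return True
```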
Great |
So the results aren't great. I'll lay out what I did. First off, I ran the following:
This copied the calibrated pattern TFs to the dataset used for testing. I copied them to the testing dataset because the mixed dataset generated during evaluation uses the test dataset's pattern TFs. Afterwards, I ran the evaluation:
and got the results:
I'm not sure what might be going wrong here. Will keep investigating... EDIT: I accidentally closed the issue, and then reopened it. |
I'll try to re-do the process from beginning to end to make sure nothing went wrong along the way. So this is the process:
|
I think that's it, but steps 1 and 2 are not in order. |
What you are doing (the procedure) seems to make sense. Now we get these errors, but what were the errors reported for Tsai before doing the copy_tfs? Worse or better? Did you test your script already? Did you confirm these transforms are copied? |
We could try an alternative to your procedure, where we first inject the ATOM-estimated pattern transformations into a new train_dataset.json, which is then used by the Tsai method... perhaps it makes no difference. |
Edited it so it's in the right order now
They were worse. Running with the same datasets (there are several variants of the same dataset in the folder):
Yes, I double checked the datasets and the TFs were indeed copied...
This option seems smarter, I think, though the ATOM-estimated pattern transforms need to go in the test dataset (I can also copy them to both the test and train datasets, just to make sure). |
Yes, that was my idea. But create a second train dataset ... |
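With the script from earlier in the thread, that second train dataset could be produced along these lines (the flags are the ones from the example above; pointing `-td` at a train split and the output naming are my assumptions):

```bash
# Inject the ATOM-estimated pattern TFs into the train dataset used by Tsai,
# yielding a second train dataset (the script "outputs a new dataset"; its
# exact output path handling is assumed here).
rosrun atom_evaluation copy_tfs_from_dataset.py \
    -sd $ATOM_DATASETS/riwmpbot_real/merged/atom_calibration.json \
    -td $ATOM_DATASETS/riwmpbot_real/merged/train_dataset.json \
    -pll flange -cll charuco_200x200_8x8
```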
My new results, with the procedure made more explicit. To note: I'm using a copy of the dataset.
Splitting the dataset:
Running ATOM calibration:
Results:
Copying the ATOM-calibrated pattern transformations to the test and train datasets:
Calibrating with Tsai:
Results:
Evaluation:
Results:
So clearly there's something wrong somewhere... Will keep investigating. |
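For reference, the procedure above reduces to something like the following (steps 1, 2, and 4 are placeholders, since their commands are not shown in this thread; the copy and evaluation invocations are the ones quoted elsewhere in it):

```bash
# 1. Split the merged dataset into train/test splits (command elided above).
# 2. Run the ATOM calibration on the train split (command elided above).

# 3. Copy the ATOM-calibrated pattern TFs into the train and test datasets:
rosrun atom_evaluation copy_tfs_from_dataset.py \
    -sd $ATOM_DATASETS/riwmpbot_real/merged/atom_calibration.json \
    -td $ATOM_DATASETS/riwmpbot_real/merged/dataset.json \
    -pll flange -cll charuco_200x200_8x8

# 4. Calibrate with Tsai's method (via ATOM's hand-eye script, elided above),
#    then 5. evaluate train vs. test:
rosrun atom_evaluation single_rgb_evaluation \
    -train_json $ATOM_DATASETS/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json \
    -test_json $ATOM_DATASETS/riwmpbot_real/merged/dataset.json -uic
```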
Hi @Kazadhum, thanks for the clear and detailed post. I do not see anything conceptually wrong with what you are doing. |
Thanks! I think it might be best if we meet to analyze the problem together. In the meantime, I'll try to look at the different scripts to check whether everything makes sense or whether some script is using the wrong TFs or something of that nature... |
Just fixed it. Simulated results for the Tsai method seem to validate this change:
I will now check if this works with the real system. |
Nice. This is very good news. Congratulations. |
New results, step-by-step, now using partial detections.
Splitting the dataset:
Running ATOM calibration:
Results:
Copying the ATOM-calibrated pattern transformations to the test and train datasets:
Calibrating with Tsai:
Evaluation:
Results:
There's probably some detail wrong somewhere... I'll keep investigating after lunch. |
Yeah. Something's still going wrong. I think after finding the problem with the partial collections we should go back and test without copying the pattern transforms to the train and test datasets. In those cases, what do we get? In any case, do not give up. You are close to the solution, I think. By the way, if you run this experiment on the simulated dataset (where you know Tsai actually calibrated very well), what do you get in the evaluation? |
Just did this, and the results weren't much better. I re-did the OpenCV calibration on a dataset without the copied TFs and ran the evaluation using a test dataset without copied TFs. The results:
So there's something wrong here still...
Testing first without copying the ATOM-estimated pattern TFs... OpenCV calibration of the train dataset:
Results:
So, as we saw in the post above, all good so far. Now for the evaluation against itself (sanity check):
Results:
Also looks good here. Now we perform the actual evaluation (train dataset vs. test dataset):
Results:
So these results look good, but they'll probably look better if I copy the pattern TFs from an ATOM calibration. When I copy the pattern TFs, I get:
|
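For reference, the sanity check and the actual evaluation above differ only in the `-test_json` argument (paths follow the commands quoted in this thread; the self-evaluation pairing is my reading of the sanity check):

```bash
# Sanity check: evaluate the Tsai-calibrated train dataset against itself;
# the errors should be close to the calibration residuals.
rosrun atom_evaluation single_rgb_evaluation \
    -train_json $ATOM_DATASETS/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json \
    -test_json $ATOM_DATASETS/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json -uic

# Actual evaluation: same train dataset, held-out test dataset.
rosrun atom_evaluation single_rgb_evaluation \
    -train_json $ATOM_DATASETS/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json \
    -test_json $ATOM_DATASETS/riwmpbot_real/merged/dataset.json -uic
```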
Hi @miguelriemoliveira! In light of today's progress, do you think we can close this issue? |
Yes, it's very big now. Might as well start a new one if needed. |
Hello @miguelriemoliveira and @manuelgitgomes!
This issue is related to #912. When running the `single_rgb_evaluation` script for the real `riwmpbot`:

rosrun atom_evaluation single_rgb_evaluation -train_json $ATOM_DATASETS/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json -test_json $ATOM_DATASETS/riwmpbot_real/merged/dataset.json -uic

we get:
Following your suggestion, I checked whether the pattern transforms were being copied over to the `mixed_dataset`. They weren't, but the flag `-cpt`/`--copy_pattern_transforms` does it. The results are the same. The flag's implementation is as follows (atom/atom_evaluation/scripts/single_rgb_evaluation, lines 127 to 130 in 6b58bf5):
Which led me to believe that, since the results were the same with and without `-cpt`, maybe the calibration configuration didn't have the patterns set as 'fixed'. However, it does.
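The referenced lines 127 to 130 are not reproduced above; as a rough sketch, a `--copy_pattern_transforms` step plausibly does something like this (layout and names assume ATOM's dataset format; this is not the actual implementation):

```python
def copy_pattern_transforms(train_dataset, mixed_dataset, pattern_key):
    # Hypothetical sketch: for each collection, replace the pattern TF in the
    # mixed dataset with the one estimated in the train dataset, so the
    # evaluation uses the calibrated pattern pose.
    for cid, collection in mixed_dataset['collections'].items():
        collection['transforms'][pattern_key] = \
            train_dataset['collections'][cid]['transforms'][pattern_key]
```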