
Add a script to use OpenCV's calibration for the eye-to-hand case #943

Closed
Kazadhum opened this issue May 5, 2024 · 25 comments
Labels
enhancement New feature or request

Comments

@Kazadhum
Collaborator

Kazadhum commented May 5, 2024

The idea is to create a sibling script to cv_eye_in_hand.py which performs the same calibration (with the same methods) but applied to the eye-to-hand case.

I'm going to start working on this now.

Tagging @miguelriemoliveira, @manuelgitgomes and @brunofavs for visibility.

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

Looking at the eye-in-hand script and thinking about what needs to change to accommodate the change of variant...

The first thing I notice is this verification:

# Check that the camera has rgb modality
if not dataset['sensors'][args['camera']]['modality'] == 'rgb':
    atomError('Sensor ' + args['camera'] + ' is not of rgb modality.')

available_methods = ['tsai', 'park', 'horaud', 'andreff', 'daniilidis']
if args['method_name'] not in available_methods:
    atomError('Unknown method. Select one from ' + str(available_methods))

if args['method_name'] == 'tsai':
    method = cv2.CALIB_HAND_EYE_TSAI
elif args['method_name'] == 'park':
    method = cv2.CALIB_HAND_EYE_PARK
elif args['method_name'] == 'horaud':
    method = cv2.CALIB_HAND_EYE_HORAUD
elif args['method_name'] == 'andreff':
    method = cv2.CALIB_HAND_EYE_ANDREFF
elif args['method_name'] == 'daniilidis':
    method = cv2.CALIB_HAND_EYE_DANIILIDIS

# Check the given hand link is in the chain from base to camera
chain = getChain(from_frame=args['base_link'],
                 to_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                 transform_pool=dataset['collections'][selected_collection_key]['transforms'])

# chain is a list of dictionaries like this:
# [{'parent': 'forearm_link', 'child': 'wrist_1_link', 'key': 'forearm_link-wrist_1_link'},
#  {'parent': 'wrist_1_link', 'child': 'wrist_2_link', 'key': 'wrist_1_link-wrist_2_link'}, ... ]

hand_frame_in_chain = False
for transform in chain:
    if args['hand_link'] == transform['parent'] or args['hand_link'] == transform['child']:
        hand_frame_in_chain = True

if not hand_frame_in_chain:
    atomError('Selected hand link ' + Fore.BLUE + args['hand_link'] + Style.RESET_ALL +
              ' does not belong to the chain from base ' + Fore.BLUE + args['base_link'] +
              Style.RESET_ALL + ' to the camera ' +
              dataset['calibration_config']['sensors'][args['camera']]['link'])

This doesn't make much sense in the eye-to-hand case, since the hand is not supposed to be in the chain between the base and camera. It is, however, supposed to be in the chain between the base and pattern. I'll change this verification to check for that instead.

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

Simply swapping the camera for the pattern in this verification doesn't work, as I suspected, since getChain() gets the tf chain from the base to the pattern (base to world to pattern, which doesn't go through the hand link). What I can do instead is check that we aren't in an eye-in-hand configuration, i.e. check that the hand link is not in the chain from the camera link to the base link.

The chain should be:
camera -> ... -> world -> ... -> base

If the hand link belongs to the chain, we know the configuration is wrong.
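
Something like this (a sketch using the same getChain() and atomError() helpers as the eye-in-hand script, not necessarily the final code):

# Make sure we are NOT in an eye-in-hand configuration: the hand link must not appear
# in the chain from the camera link to the base link.
chain = getChain(from_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                 to_frame=args['base_link'],
                 transform_pool=dataset['collections'][selected_collection_key]['transforms'])

for transform in chain:
    if args['hand_link'] in (transform['parent'], transform['child']):
        atomError('Selected hand link ' + args['hand_link'] + ' is in the chain from the camera ' +
                  'to the base, which looks like an eye-in-hand configuration. Cannot calibrate.')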

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

The second verification, I'm not so sure about:

# Check the hand to camera chain is composed only of fixed transforms
chain = getChain(from_frame=args['hand_link'],
                 to_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                 transform_pool=dataset['collections'][selected_collection_key]['transforms'])

for transform in chain:
    if not dataset['transforms'][transform['key']]['type'] == 'fixed':
        atomError('Chain from hand link ' + Fore.BLUE + args['hand_link'] + Style.RESET_ALL +
                  ' to camera link ' + Fore.BLUE +
                  dataset['calibration_config']['sensors'][args['camera']]['link'] +
                  Style.RESET_ALL + ' contains non fixed transform ' + Fore.RED +
                  transform['key'] + Style.RESET_ALL + '. Cannot calibrate.')

Obviously, it doesn't really work in an eye-to-hand configuration. But should there be another verification here in its place? Maybe I should check whether the hand-pattern tf is fixed?

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

Added an additional check for whether the hand link is in the chain between the pattern link and the base link... I wasn't thinking straight before when I said it wouldn't work, but I think this additional check is good, if redundant (maybe its "mirror" should be added to the eye-in-hand script?).

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

This check is not working properly... I forgot the pattern link is not in the transformation pool acquired from the collections. But I think the check can still be made easily. I'll fix it soon.

Kazadhum added a commit that referenced this issue May 5, 2024
@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

Fixed this by checking for the pattern's parent link. That way the getChain() function works properly.
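
Concretely, the check now looks something like this (an illustrative sketch, not necessarily the committed code):

# Check the hand link is in the chain from the base to the pattern. The pattern link
# itself is not in the collection's transform pool, so use the pattern's parent link.
chain = getChain(from_frame=args['base_link'],
                 to_frame=dataset['calibration_config']['calibration_patterns'][args['pattern']]['parent_link'],
                 transform_pool=dataset['collections'][selected_collection_key]['transforms'])

hand_frame_in_chain = False
for transform in chain:
    if args['hand_link'] == transform['parent'] or args['hand_link'] == transform['child']:
        hand_frame_in_chain = True

if not hand_frame_in_chain:
    atomError('Selected hand link ' + args['hand_link'] + ' does not belong to the chain from base ' +
              args['base_link'] + ' to the pattern ' + args['pattern'] + '. Cannot calibrate.')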

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

The second verification, I'm not so sure about:

# Check the hand to camera chain is composed only of fixed transforms
chain = getChain(from_frame=args['hand_link'],
                 to_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                 transform_pool=dataset['collections'][selected_collection_key]['transforms'])

for transform in chain:
    if not dataset['transforms'][transform['key']]['type'] == 'fixed':
        atomError('Chain from hand link ' + Fore.BLUE + args['hand_link'] + Style.RESET_ALL +
                  ' to camera link ' + Fore.BLUE +
                  dataset['calibration_config']['sensors'][args['camera']]['link'] +
                  Style.RESET_ALL + ' contains non fixed transform ' + Fore.RED +
                  transform['key'] + Style.RESET_ALL + '. Cannot calibrate.')

Obviously, it doesn't really work in an eye-to-hand configuration. But should there be another verification here in its place? Maybe I should check whether the hand-pattern tf is fixed?

Again, I wasn't thinking right. We can just check if the TF from the hand link to the pattern is fixed. As above, though, we need to use the pattern link's parent for this:

# Check the hand to pattern chain is composed only of fixed transforms.
# Since the transformation pool from a collection doesn't include the tf from the
# pattern link's parent to the pattern link, we must work with the parent. However,
# it might be the case that the hand link is the same as the pattern's parent link.
# In that case, it is known that the transform is fixed.
if args['hand_link'] != dataset['calibration_config']['calibration_patterns'][args['pattern']]['parent_link']:

    chain = getChain(from_frame=args['hand_link'],
                     to_frame=dataset['calibration_config']['calibration_patterns'][args['pattern']]['parent_link'],
                     transform_pool=dataset['collections'][selected_collection_key]['transforms'])

    for transform in chain:
        if not dataset['transforms'][transform['key']]['type'] == 'fixed':
            atomError('Chain from hand link ' + Fore.BLUE + args['hand_link'] + Style.RESET_ALL +
                      ' to pattern link ' + Fore.BLUE +
                      dataset['calibration_config']['calibration_patterns'][args['pattern']]['link'] +
                      Style.RESET_ALL + ' contains non fixed transform ' + Fore.RED +
                      transform['key'] + Style.RESET_ALL + '. Cannot calibrate.')

@Kazadhum
Collaborator Author

Kazadhum commented May 5, 2024

The next step is to look at the transformations being calculated and the ones we actually want, which change with respect to the eye-in-hand variant.

@miguelriemoliveira
Member

This doesn't make much sense in the eye-to-hand case, since the hand is not supposed to be in the chain between the base and camera. It is, however, supposed to be in the chain between the base and pattern. I'll change this verification to check for that instead.

Right. Makes sense.

Obviously, it doesn't really work in an eye-to-hand configuration. But should there be another verification here in it's place? Maybe I check if the hand-pattern tf is fixed?

Perhaps check that the camera to base chain only contains fixed transforms?

Also, perhaps check that the base to pattern contains the hand chain.
And that the hand to pattern chain transforms are all fixed.

@Kazadhum
Collaborator Author

Kazadhum commented May 6, 2024

This doesn't make much sense in the eye-to-hand case, since the hand is not supposed to be in the chain between the base and camera. It is, however, supposed to be in the chain between the base and pattern. I'll change this verification to check for that instead.

Right. Makes sense.

Obviously, it doesn't really work in an eye-to-hand configuration. But should there be another verification here in it's place? Maybe I check if the hand-pattern tf is fixed?

Perhaps check that the camera to base chain only contains fixed transforms?

Also, perhaps check that the base to pattern contains the hand chain. And that the hand to pattern chain transforms are all fixed.

Implemented!
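
For reference, the new camera-to-base fixed-transform check looks something like this (a sketch under the same assumptions and helpers as the snippets above, not necessarily the committed code):

# Check the camera to base chain is composed only of fixed transforms
chain = getChain(from_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                 to_frame=args['base_link'],
                 transform_pool=dataset['collections'][selected_collection_key]['transforms'])

for transform in chain:
    if not dataset['transforms'][transform['key']]['type'] == 'fixed':
        atomError('Chain from camera link to base link contains non fixed transform ' +
                  transform['key'] + '. Cannot calibrate.')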

Now, the OpenCV calibration returns $^{c}T_b$ (camera to base).

@miguelriemoliveira
Member

Now, the OpenCV calibration returns the TFs:

this is using cv2.calibrateHandEye? It only returns one transformation: in the eye-in-hand case, the hand to camera; in the eye-to-hand case, the camera to base.
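
For reference, the call has this shape (the input lists, one entry per collection, are illustrative names here, not the script's actual variables):

import cv2

# cv2.calibrateHandEye() estimates a single transform. In the eye-in-hand case it is
# the hand-to-camera transform (h_T_c), computed from the per-collection robot poses
# (b_T_h) and pattern detections in the camera frame (c_T_p).
h_T_c_rot, h_T_c_trans = cv2.calibrateHandEye(
    b_T_h_rotations, b_T_h_translations,   # hand pose in the base frame, one per collection
    c_T_p_rotations, c_T_p_translations,   # pattern pose in the camera frame, one per collection
    method=cv2.CALIB_HAND_EYE_TSAI)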

@Kazadhum
Collaborator Author

Kazadhum commented May 6, 2024

You're right! The other method returned the other two; I was mistaken.

@Kazadhum
Collaborator Author

Kazadhum commented May 6, 2024

Running:

rosrun atom_evaluation cv_eye_to_hand.py -c rgb_world -p pattern_1 -json $ATOM_DATASETS/riwbot/train/dataset.json -ctgt -bl base_link -hl flange

returns:

Partial detection removed: label from collection 001 and pattern pattern_1, sensor rgb_world
Deleted collections: ['001']: at least one detection by a camera should be present.
After filtering, will use 9 collections: ['000', '002', '003', '004', '005', '006', '007', '008', '009']
Selected collection key is 000
Ground Truth b_T_c=
[[-0.876 -0.196  0.44   0.286]
 [ 0.152  0.754  0.639 -0.153]
 [-0.457  0.627 -0.631 -0.355]
 [ 0.     0.     0.     1.   ]]
estimated b_T_c=
[[-0.876 -0.196  0.44   0.286]
 [ 0.152  0.754  0.639 -0.153]
 [-0.457  0.627 -0.631 -0.355]
 [ 0.     0.     0.     1.   ]]
Etrans = 0.0 (m)
Erot = 0.0 (deg)
+--------------------------------------+-------------+---------+----------+-------------+------------+
|              Transform               | Description | Et0 [m] |  Et [m]  | Rrot0 [rad] | Erot [rad] |
+--------------------------------------+-------------+---------+----------+-------------+------------+
| tripod_center_support-rgb_world_link |  rgb_world  |   0.0   | 0.175453 |     0.0     |  1.455959  |
+--------------------------------------+-------------+---------+----------+-------------+------------+
Saved json output file to /home/diogo/atom_datasets/riwbot/train/hand_eye_tsai_rgb_world.json.

The good news is that the calibration is working correctly and returning the correct tfs, since the estimated and GT $^{b}T_c$ are the same. There is probably an issue in the calculation of the tf from the tripod_center_support to the rgb_world_link...

@miguelriemoliveira
Member

There is probably an issue in the calculation of the tf from the tripod_center_support to the rgb_world_link

right, if estimated b_T_c is correct, then the problem is surely in the computation of cp_T_cc.

@Kazadhum
Collaborator Author

Kazadhum commented May 7, 2024

Hi @miguelriemoliveira! So I've been trying to figure this out since this morning... Still didn't get anywhere, but I may have detected an error in the eye-in-hand case.

So one of my doubts was: "If the b_T_c transform is correct, then why is the cp_T_cc not correct?"

Turns out that maybe b_T_c is also not correct. From cv_eye_in_hand.py:

if args['compare_to_ground_truth']:
    # --------------------------------------------------
    # Compare h_T_c hand to camera transform to ground truth
    # --------------------------------------------------
    h_T_c_ground_truth = getTransform(from_frame=args['hand_link'],
                                      to_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                                      transforms=dataset['collections'][selected_collection_key]['transforms'])
    print(Fore.GREEN + 'Ground Truth h_T_c=\n' + str(h_T_c_ground_truth))

Shouldn't we get the ground truth from dataset_ground_truth instead of dataset?

When I change this the results aren't good, so I wanted to check first. I believe this is an error because the tfs in dataset are changed to the estimated values here:

# Save to dataset
# Since the transformation cp_T_cc is static we will save the same transform to all collections
frame_key = generateKey(calibration_parent, calibration_child)
quat = tf.transformations.quaternion_from_matrix(cp_T_cc)
trans = cp_T_cc[0:3, 3].tolist()
for collection_key, collection in dataset['collections'].items():
    dataset['collections'][collection_key]['transforms'][frame_key]['quat'] = quat
    dataset['collections'][collection_key]['transforms'][frame_key]['trans'] = trans

So when we compare later we're just comparing a transformation against itself, right? If this is an error then maybe there isn't a problem in the calculation of cp_T_cc and instead it's a wider problem in the script.
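
The fix I have in mind is simply to read the ground-truth pose from dataset_ground_truth (a sketch; everything else stays as in the snippet above):

# Compare h_T_c to ground truth read from the untouched ground-truth dataset, instead of
# from the dataset whose transforms were overwritten with the estimated values above.
h_T_c_ground_truth = getTransform(from_frame=args['hand_link'],
                                  to_frame=dataset['calibration_config']['sensors'][args['camera']]['link'],
                                  transforms=dataset_ground_truth['collections'][selected_collection_key]['transforms'])
print(Fore.GREEN + 'Ground Truth h_T_c=\n' + str(h_T_c_ground_truth))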

@Kazadhum
Collaborator Author

Kazadhum commented May 7, 2024

Maybe we should return to cv_eye_in_hand.py first and then apply whatever fix we find to cv_eye_to_hand.py

@miguelriemoliveira
Member

Shouldn't we get the ground truth from dataset_ground_truth instead of dataset?

yes, you are right. But since we did not change the dataset it should be the same, no?

So when we compare later we're just comparing a transformation against itself,

ah, if we do this before then we must use the ground truth dataset.

Maybe we should return to cv_eye_in_hand.py first and then apply whatever fix we find to cv_eye_to_hand.py

I agree.

@Kazadhum
Collaborator Author

Kazadhum commented May 8, 2024

Investigating now...

@Kazadhum
Collaborator Author

Kazadhum commented May 8, 2024

Reopening the issue regarding cv_eye_in_hand.py (#912)...

@Kazadhum
Collaborator Author

Kazadhum commented May 8, 2024

After changing the dataset used for comparison to dataset_ground_truth, we get the following results:

Ground Truth b_T_c=
[[ 0.922  0.12  -0.369  0.365]
 [ 0.388 -0.285  0.877 -1.444]
 [ 0.    -0.951 -0.309  0.883]
 [ 0.     0.     0.     1.   ]]
estimated b_T_c=
[[-0.876 -0.196  0.44   0.286]
 [ 0.152  0.754  0.639 -0.153]
 [-0.457  0.627 -0.631 -0.355]
 [ 0.     0.     0.     1.   ]]
Etrans = 871.935 (mm)
Erot = 83.585 (deg)
+--------------------------------------+-------------+---------+----------+-------------+------------+
|              Transform               | Description | Et0 [m] |  Et [m]  | Rrot0 [rad] | Erot [rad] |
+--------------------------------------+-------------+---------+----------+-------------+------------+
| tripod_center_support-rgb_world_link |  rgb_world  |   0.0   | 0.175453 |     0.0     |  1.455959  |
+--------------------------------------+-------------+---------+----------+-------------+------------+
Saved json output file to /home/diogo/atom_datasets/riwbot/train/hand_eye_tsai_rgb_world.json.

So it seems that the b_T_c is wrong and calibrateHandEye is not returning the TF we thought it was.

@Kazadhum
Collaborator Author

Kazadhum commented May 8, 2024

Hello @miguelriemoliveira! I (apparently) found out what's wrong!

So, as I commented on #912, nothing was wrong with that, besides a typo in the translation error units. Eye-in-hand is working okay.

About eye-to-hand, now. I had to invert one of the tfs used as input for cv.calibrateHandEye(). By inverting b_T_h and using h_T_b as an input instead, I get:

Ground Truth b_T_c=
[[ 0.922  0.12  -0.369  0.365]
 [ 0.388 -0.285  0.877 -1.444]
 [ 0.    -0.951 -0.309  0.883]
 [ 0.     0.     0.     1.   ]]
estimated b_T_c=
[[ 0.922  0.121 -0.369  0.363]
 [ 0.388 -0.288  0.876 -1.439]
 [ 0.    -0.95  -0.312  0.883]
 [ 0.     0.     0.     1.   ]]
Etrans = 1.295 (mm)
Erot = 0.071 (deg)
+--------------------------------------+-------------+---------+----------+-------------+------------+
|              Transform               | Description | Et0 [m] |  Et [m]  | Rrot0 [rad] | Erot [rad] |
+--------------------------------------+-------------+---------+----------+-------------+------------+
| tripod_center_support-rgb_world_link |  rgb_world  |   0.0   | 0.002038 |     0.0     |  0.001232  |
+--------------------------------------+-------------+---------+----------+-------------+------------+
Saved json output file to /home/diogo/atom_datasets/riwbot/train/hand_eye_tsai_rgb_world.json.

It is now in compliance with the documentation:

[image: screenshot of the OpenCV calibrateHandEye documentation]

So I just misread the documentation. I will now rename the variables and check if the comments make sense so the code is cleaner and I'll push it.
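
In code, the change boils down to something like this (a sketch with illustrative variable names; the per-collection input lists are assumed to be already built):

import cv2

# Eye-to-hand: feed the inverted robot poses (h_T_b, the base pose in the hand frame)
# instead of b_T_h. The single transform returned by cv2.calibrateHandEye() then
# expresses the camera pose in the base frame, i.e. the estimated b_T_c shown above.
b_T_c_rot, b_T_c_trans = cv2.calibrateHandEye(
    h_T_b_rotations, h_T_b_translations,   # inverted b_T_h, one per collection
    c_T_p_rotations, c_T_p_translations,   # pattern pose in the camera frame, one per collection
    method=method)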

@Kazadhum
Collaborator Author

Kazadhum commented May 8, 2024

Just finished refactoring the code so it's clear now that it uses h_T_b and not b_T_h. I believe this calibration method is operational now.

Tagging @miguelriemoliveira and @manuelgitgomes for visibility. Once you have confirmed this is working, this issue can be closed, I think.

@miguelriemoliveira
Member

So I just misread the documentation. I will now rename the variables and check if the comments make sense so the code is cleaner and I'll push it.

Great news!

@miguelriemoliveira
Member

Tagging @miguelriemoliveira and @manuelgitgomes for visibility. Once you have confirmed this is working, this issue can be closed, I think.

Really happy to hear this.

Congrats!

@Kazadhum
Collaborator Author

Kazadhum commented May 8, 2024

Thank you @miguelriemoliveira! I think I'll close this one and try to implement Ali's method next. Afterwards I'll move on to Lidar-camera/Lidar-Lidar calibrations, I think.
