Localize in 3D, multi camera with overlaps (stereo) #1

Open
NicksonYap opened this issue Feb 4, 2019 · 10 comments

@NicksonYap

Hi @noether

Great repo!

By configuring camerasInfo.xml I see the possibility of using multiple cameras, with or without overlaps, in 2D.

However, what about localizing in 3D, likely with stereo (two cameras with overlapping views)?

Regards

noether commented Feb 4, 2019

Indeed.

Although I believe the performance in practice will not be very good by just adding the parameters of the cameras and their relative positions.

@NicksonYap

If by performance you mean accuracy, then yes, indeed.

I gave it some thought, and it may require both mono and stereo camera calibration
(intrinsics for mono, and stereo rectification for stereo).

I'm actually unsure how to perform run-time undistortion and stereo rectification together with the localization,

but given that the input image/video feed is already pre-undistorted and pre-rectified,

I suppose all we need is a simple formula to convert 2D to 3D?

If so, any idea what to change?
Assuming we have only two cameras with a decent amount of overlap, and increasing from there.
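
For reference, on already undistorted and rectified image pairs the 2D-to-3D conversion is the classic disparity-to-depth triangulation. The sketch below is only a minimal illustration of that formula under those assumptions; the focal length, baseline and principal point used here are placeholder values, not anything taken from visionloc:

```cpp
#include <iostream>

// Minimal pinhole-stereo triangulation sketch (not part of visionloc).
// Assumes both images are already undistorted and rectified, so a marker
// seen at (uL, v) in the left image and (uR, v) in the right image
// differs only by the horizontal disparity d = uL - uR.
struct Point3D { double X, Y, Z; };

Point3D triangulate(double uL, double uR, double v,
                    double f,             // focal length [pixels] (placeholder)
                    double B,             // baseline between cameras [m] (placeholder)
                    double cx, double cy) // principal point [pixels] (placeholder)
{
    const double d = uL - uR;            // disparity
    const double Z = f * B / d;          // depth
    const double X = (uL - cx) * Z / f;  // lateral offset
    const double Y = (v  - cy) * Z / f;  // vertical offset
    return {X, Y, Z};
}

int main() {
    // Example numbers only: 700 px focal length, 10 cm baseline.
    Point3D p = triangulate(420.0, 380.0, 260.0, 700.0, 0.10, 320.0, 240.0);
    std::cout << p.X << " " << p.Y << " " << p.Z << "\n"; // 0.25 0.05 1.75
}
```

In practice the detected marker coordinates would first have to be mapped through the rectification, which is the part that is not covered by the current camerasInfo.xml parameters.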

@NicksonYap

I came across this multi cam calibration repo:
https://github.com/ethz-asl/kalibr/wiki/multiple-camera-calibration

But I'm having trouble figuring out the maths to actually localize 😅

@hughhugh

Hi. What kind of camera do you use to implement the algorithm? Thanks.

noether commented May 18, 2019

Hej @hughhugh, I tested the algorithm with both setups: two Logitech C510 cameras in parallel, and the same camera alone.

hughhugh commented May 18, 2019

@noether Thank you. How many cameras are generally needed in visionloc for several rovers, say five?

noether commented May 18, 2019

@hughhugh You may have as many cameras as you wish. The more cameras, the more area you will cover.

With a single Logitech C510 camera I was able to cover an area of around 2 x 2.5 meters, with the camera at about 1.80 meters above the table.

You can see an example of such a setup here: https://www.youtube.com/watch?v=kS_yJiaA_1Y ; the robots were "E-pucks", to give you an idea of their size.

hughhugh commented May 21, 2019

@noether Hi, I have now ordered a Logitech camera from a web store. But how can I get the heading angle using this project? Environment: Ubuntu 18.04 64-bit. Thank you.

ID (ASCII CODE of the Marker)
PosX of the Corner [pixels]
PosY of the Corner [pixels]
PosX of the Center [pixels]
PosY of the Center [pixels]
Heading [degrees], w.r.t the X axis counter clockwise [-pi, pi)

$ ./example_libvisionloc 4

I can get the position of the tag, but no heading angle.
There is also a little delay in computing the position of the tag.
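
Since the per-marker output already includes a corner and the center, a heading can in principle be recovered from those two points. The sketch below only illustrates that relation; it assumes the reported corner lies in the marker's forward direction (an assumption about the marker layout, not taken from visionloc) and that image Y grows downwards:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: recover a heading from the per-marker corner/center
// fields. Assumes the reported corner lies in the marker's forward
// direction and that image Y grows downwards, so the Y difference is
// flipped to measure counter-clockwise from the X axis.
double heading_from_marker(double cornerX, double cornerY,
                           double centerX, double centerY)
{
    const double dx = cornerX - centerX;
    const double dy = -(cornerY - centerY); // flip: image Y points down
    return std::atan2(dy, dx);              // radians in (-pi, pi]
}

int main() {
    // Corner up and to the right of the center -> roughly +45 degrees.
    const double pi = std::acos(-1.0);
    double h = heading_from_marker(120.0, 80.0, 100.0, 100.0);
    std::printf("heading = %.1f deg\n", h * 180.0 / pi);
}
```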

@hughhugh

@noether
Hi. I have tested ./example_libvisionloc 4 with 4 tags. It can print out the position information, but the heading angle is not given. A screenshot is below:

(screenshot of the program output)

Thank you.

noether commented May 22, 2019

@hughhugh

Now it should print the heading in the example (please sync with the repo).

The example is not meant to be run in "real time" but as guidance for your application. Your camerasInfo.xml describes two cameras (and you are only using one). In the example, there is a for loop that always iterates 1000 times for each camera. Note that, even if the program has already detected all the markers, the for loop will continue, hence the lag.

In your application, it is up to you to decide when to stop looking for markers (the condition for the for loop to break). If time is more critical for you than detecting every expected marker, then the fastest option is to not have any for loop at all (just one iteration).

If you have several cameras, and you have found all the expected markers in the first camera, you may want to add a condition so that you do not look for markers in the second camera.

It is up to you how and when to stop looking for markers, depending on your application.
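
As a rough illustration of that control flow (the Marker type and detect_markers_in_camera function below are placeholders standing in for the detection call in example_libvisionloc, not the real visionloc API), the early-exit structure could look like this:

```cpp
#include <cstddef>
#include <cstdio>
#include <map>
#include <vector>

// Placeholder type and detection function; NOT the real visionloc API.
struct Marker { int id; double x, y, heading; };

std::vector<Marker> detect_markers_in_camera(std::size_t cam) {
    // Stub so the sketch compiles; a real application would grab a frame
    // from camera `cam` and run the marker detection here.
    return { Marker{4, 100.0, 200.0, 0.5} };
}

int main() {
    const std::size_t numCameras      = 2;
    const std::size_t expectedMarkers = 4;    // e.g. four tags on the table
    const std::size_t maxAttempts     = 1000; // upper bound, as in the example

    std::map<int, Marker> found;              // keyed by marker ID, no duplicates
    for (std::size_t cam = 0; cam < numCameras; ++cam) {
        for (std::size_t attempt = 0; attempt < maxAttempts; ++attempt) {
            for (const Marker& m : detect_markers_in_camera(cam))
                found[m.id] = m;
            if (found.size() >= expectedMarkers)
                break;                        // stop retrying this camera
        }
        if (found.size() >= expectedMarkers)
            break;                            // skip the remaining cameras
    }

    for (const auto& kv : found)
        std::printf("marker %d at (%.1f, %.1f), heading %.2f rad\n",
                    kv.second.id, kv.second.x, kv.second.y, kv.second.heading);
}
```

Dropping the inner loop entirely (maxAttempts of 1) gives the one-iteration, time-critical variant described above.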
