
LATCH descriptor does not seem to work well on my dataset #5

Open
LingyuMa opened this issue Jul 22, 2016 · 33 comments

@LingyuMa

@mdaiter I have tried both LATCH binary and LATCH unsigned, and neither seems to work well on my image set (fewer features are detected and far fewer matches are found). Are there any suggestions for tuning the parameters?

@LingyuMa
Author

Also, the matching is still not fast even when I use GPU_LATCH. Is there any way to speed it up?

@mdaiter
Owner

mdaiter commented Jul 22, 2016

@LingyuMa what's your dataset and what numbers are you getting?

@mdaiter
Owner

mdaiter commented Jul 22, 2016

@LingyuMa I also set the ratio parameter to 0.99 when matching: binary descriptors are sensitive to those sorts of changes, and these fluctuations can seriously kill matching ability.
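For reference, a minimal sketch of the ratio test being discussed, over toy binary descriptors packed as Python ints (the function name and data are illustrative, not openMVG code; Hamming distance is the popcount of the XOR):

```python
def ratio_test_matches(query, train, ratio=0.99):
    """Lowe-style ratio test over binary descriptors packed as ints.

    A query keypoint is matched to its nearest train descriptor only
    if the best Hamming distance is below `ratio` times the second-best.
    """
    matches = []
    for qi, q in enumerate(query):
        # (distance, train index) pairs, nearest first
        dists = sorted((bin(q ^ t).count("1"), ti) for ti, t in enumerate(train))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:
            matches.append((qi, t1))
    return matches
```

With `ratio=0.99` nearly every nearest neighbour survives; tightening the ratio discards ambiguous matches, which is why a strict ratio can starve binary descriptors of matches.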

@LingyuMa
Author

I have also changed that to 0.99.

@LingyuMa
Author

@mdaiter What parameters are you using for matching?

@mdaiter
Owner

mdaiter commented Jul 22, 2016

@LingyuMa I'm just using -r 0.99. That's it... hm. How many putatives do you get, and how many geometrics? Are you using -g e or -g f?

@LingyuMa
Author

@mdaiter Can you run my dataset on your computer to see what happens? I am using the fundamental matrix for filtering, so it is -g f. I have attached my matches.f.bin; I'm not sure how to inspect it.

@mdaiter
Owner

mdaiter commented Jul 22, 2016

./bin/openMVG_main_exportMatches -i outputLatch/sfm_data.json -d outputLatch -m outputLatch/matches.putative.bin -o matches will give you all of your matcher data back and export it to SVGs. Curious to see the numbers.

@LingyuMa
Author

It gave me a bunch of SVGs showing the matches; is there a way to show the total number?

@LingyuMa
Author

@mdaiter For matches.f.bin, all I can say is that it gave me 136 image pairs, and the pairs seem reasonable. The file is 49.5 MB, much smaller than the SIFT one (209.4 MB), which I can also see from the matching becoming sparser with LATCH.

@mdaiter
Owner

mdaiter commented Jul 22, 2016

The total number of matches between each pair appears right before the end of the SVG filename: the format is x_y_n_.svg, where x is the ID of the first image, y is the ID of the second image, and n is the number of matches between them.
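Given that naming scheme, the per-pair counts can be totalled straight from the exported filenames. A quick sketch (the helper name is made up, and it assumes the files end in exactly `_.svg` as described):

```python
import re

def total_matches(svg_names):
    """Sum the per-pair match counts n from filenames of the form x_y_n_.svg."""
    pattern = re.compile(r"^(\d+)_(\d+)_(\d+)_\.svg$")
    return sum(int(m.group(3))
               for name in svg_names
               if (m := pattern.match(name)))
```

Point it at a directory listing of the export folder to get the grand total across all image pairs.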

@LingyuMa
Author

Here are screenshots of the two output matches SVG files (AKAZE and LATCH).

@LingyuMa
Author

@mdaiter the second image is LATCH

@LingyuMa
Author

I know it is hard to see, but the number of image pairs is about five times lower.

@mdaiter
Owner

mdaiter commented Jul 22, 2016

@LingyuMa Can you send me the SIFT matches that align with the LATCH matches? It seems as though the SIFT matches compare sets whose LATCH equivalents aren't visible in your screenshots.

@LingyuMa
Author

The problem is that the matched images are not the same for the two descriptors. I'll see what I can do when I come back from lunch.


@LingyuMa
Author

@mdaiter Can you have a look at these two screenshots? (selection_005, selection_004)

@LingyuMa
Author

The first one is LATCH.

@mdaiter
Owner

mdaiter commented Jul 25, 2016

@LingyuMa these seem correct. Maybe @csp256 (original author of the library) could provide some insight, but I believe these are the results you should be receiving back from each image.

@LingyuMa
Author

But the number of matches seems much lower than with SIFT, which makes the global reconstruction fail. Is there a way to increase the number of matches?


@mdaiter
Owner

mdaiter commented Jul 25, 2016

@LingyuMa if you modify these two parameters: https://github.com/mdaiter/cudaLATCH/blob/cf05a8fdf19b83519e68cc0c184e334f83be18e5/params.hpp and here: https://github.com/mdaiter/openMVG/blob/custom/src/openMVG/matching_image_collection/gpu/params.hpp you'll be able to tune the matching threshold and the total number of points allowed to be detected. Each increment of NUM_SM gives back 512 more keypoints.
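To make the NUM_SM arithmetic concrete, a toy sketch of the relationship described above (the helper and constant names are made up, not code from either repo):

```python
# Each increment of NUM_SM raises the keypoint budget by 512,
# per the relationship described above.
KEYPOINTS_PER_SM = 512

def max_keypoints(num_sm):
    """Keypoint budget implied by a given NUM_SM setting."""
    return num_sm * KEYPOINTS_PER_SM
```

So bumping NUM_SM from, say, 7 to 8 buys exactly 512 more detectable keypoints.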

@LingyuMa
Author

Also, I have found that the matching time is still pretty slow compared with the default openMVG matching method + SIFT, which is really strange. Is there any way to accelerate it?


@mdaiter
Owner

mdaiter commented Jul 25, 2016

@LingyuMa if you're using the LATCH_UNSIGNED method, I'd use the GPU_LATCH matching method; otherwise, you're technically comparing two fundamentally different ways of matching. With SIFT, you'd have to run BRUTE_FORCE_MATCHER_L2 to perform a fair comparison. I have the numbers on my computer, and the L2 matcher is far slower than the BRUTE_FORCE_HAMMING matcher.
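To illustrate why the comparison is apples-to-oranges, here is a sketch of the two per-pair distance computations involved (toy Python, not the actual matcher code):

```python
import math

def l2_distance(a, b):
    """SIFT descriptors are float vectors, so an L2 brute-force matcher
    computes a Euclidean distance for every descriptor pair."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hamming_distance(a, b):
    """Binary descriptors like LATCH (packed as ints here) are compared
    with one XOR plus a popcount per pair, which is far cheaper."""
    return bin(a ^ b).count("1")
```

On hardware with a native popcount instruction, the Hamming comparison is a handful of cycles per pair, versus hundreds of float operations for L2 on 128-dimensional SIFT vectors.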

@LingyuMa
Author

@mdaiter The problem is that I am using LATCH_UNSIGNED + GPU_LATCH, and compared with SIFT + ANNL2 the speed does not improve.

@csp256

csp256 commented Jul 25, 2016

Something is definitely up. The number of matches is what I would expect some of the time (around 10k), but much lower the rest of the time (<1k). I am interpreting this as my code working and something upstream being broken.

I really do not think the ratio test makes sense in Hamming space: as a first-order improvement, you should impose a hard threshold between the best and second-best matches. This is done in the GPU matcher.
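A minimal sketch of that hard-threshold idea (the function name and the gap value are illustrative; the actual GPU matcher's threshold will differ):

```python
def gap_threshold_matches(query, train, min_gap=8):
    """Accept a nearest-neighbour match only when the second-best Hamming
    distance exceeds the best by at least `min_gap` bits: an absolute-gap
    test in place of the multiplicative ratio test."""
    matches = []
    for qi, q in enumerate(query):
        # (distance, train index) pairs, nearest first
        dists = sorted((bin(q ^ t).count("1"), ti) for ti, t in enumerate(train))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d2 - d1 >= min_gap:
            matches.append((qi, t1))
    return matches
```

Because Hamming distances are small integers, a multiplicative ratio behaves erratically near zero (0 vs 1 passes any ratio), whereas an absolute gap in bits stays meaningful.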

If the CPU matcher is slow, you are probably being bitten by the Intel popcount bug. Can you try the GPU matcher?

@mdaiter
Owner

mdaiter commented Jul 25, 2016

Agreed with @csp256. I'm curious: what is your total number of putative matches? You can check by running the exportMatches command with matches.putative.bin instead of matches.f.bin.

mdaiter self-assigned this Jul 25, 2016
@mdaiter
Owner

mdaiter commented Jul 31, 2016

@LingyuMa if you're looking for a GPU brute-force L2 matcher, I just finished one and should be pushing code today or tomorrow. It's based on the default OpenCV version of the GPU matcher, but I'm implementing a CUDA dynamic-parallelism solution at the moment and will let you know when it's ready.

@mdaiter
Owner

mdaiter commented Aug 3, 2016

@LingyuMa my GPU L2 brute-force matcher is now finished. Feel free to use it with SIFT, PNNet, LATCH, DeepSiam2Stream, or DeepSiam.

@pmoulon
Contributor

pmoulon commented Aug 3, 2016

Did you try extracting LATCH descriptors on SIFT keypoints?
Since there is a "clean" SIFT integration pending, it could be easy to test:
openMVG/openMVG#556
We can also test the LATCH descriptor on an affine detector (we can extract rectified patch regions and compute the descriptor on them). See here for affine patch normalization: https://github.com/openMVG/openMVG/blob/master/src/openMVG_Samples/features_affine_demo/features_affine_demo.cpp (only rotation invariance is missing: compute the rectified patch rotation, then rotate the patch).

@mdaiter
Owner

mdaiter commented Aug 3, 2016

@LingyuMa and @pmoulon if you look at the Oxford testing branch, you'll find all of that code already integrated.
SIFT Keypoints: https://github.com/mdaiter/cudaLATCH/blob/0a6a6285790f13559696bc54df3b23fa5a0b12b3/LatchClassifierOpenMVG.cpp

Affine Invariant points: (previous commit - looking to see where I left it)

@pmoulon
Contributor

pmoulon commented Aug 3, 2016

Perhaps using the patch at the correct scale could improve the results. We can continue this discussion by mail if you want.

@mdaiter
Owner

mdaiter commented Aug 3, 2016

@pmoulon what's your email address? Mine's [email protected]
