Improve 2x2 eigen #694
base: master
Conversation
Looks great to me, thanks! Could you add a test or tests which cover the cases which the previous code got wrong? Is this characterized by the case where the off-diagonal elements are small? Would this fix also allow us to remove the special case for diagonal matrices?
I've improved the handling in the case of underflow as well. I wasn't able to remove the special case for diagonal matrices, as we still need to avoid dividing by zero. I've also added some more tests.
Ok, no worries, I'm happy to merge this more or less as-is. Though I'd like it if you could clarify which cases you are targeting with the underflow tests (so we know what they are meant to test in the future).
test/eigen.jl (outdated):

m2 = SMatrix{2,2}(m2_a)
@test (@inferred_maybe_allow SVector{2,ComplexF64} eigvals(m1, m2)) ≈ eigvals(m1_a, m2_a)
@test (@inferred_maybe_allow SVector{2,ComplexF64} eigvals(Symmetric(m1), Symmetric(m2))) ≈ eigvals(Symmetric(m1_a), Symmetric(m2_a))
# issue #523
Suggested change:
- # issue #523
+ # issues #523, #694
test/eigen.jl (outdated):

@test eigvecs(E) * SDiagonal(eigvals(E)) * eigvecs(E)' ≈ A
A = SMatrix{2,2,Float64}((i, pfmin, pfmin, j))
E = eigen(Symmetric(A, uplo))
@test eigvecs(E) * SDiagonal(eigvals(E)) * eigvecs(E)' ≈ A
Cool, so can you add some comment about exactly which cases we are straddling here by examining the boundary between pfmin vs nfmin? We seem to have x^2 == 0 for both of those, and i ± x == 1.0, i*j ± x == i*j, etc.
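For reference, a quick REPL check of why identities like these hold near the smallest normal float (illustration only: pfmin and nfmin come from the test code and their definitions aren't shown here, so floatmin(Float64) stands in for them):

julia> x = floatmin(Float64)    # smallest normal Float64
2.2250738585072014e-308

julia> x^2 == 0.0               # the square underflows past the subnormals to zero
true

julia> 1.0 + x == 1.0           # x is far below eps(1.0)/2, so the addition is absorbed
true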
I'd love to merge this, but I'm unsure whether these tests are testing the right thing. Do you have some thoughts on the above query?
I looked into it again. You were right, they were testing the wrong thing. When I tested the right thing, it wasn't working properly. I put some time in to make it work and simplify things, mostly by using the hypot function, which already has a lot of work put into making it accurate.
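For context, the textbook hypot-based formula looks something like this (a minimal sketch, not the PR's actual code): for a real symmetric 2×2 matrix [a b; b d], the eigenvalues sit at the midpoint of the trace plus or minus a radius, and computing the radius with hypot keeps it accurate when b or (a - d)/2 would overflow or underflow if squared directly.

# Minimal sketch of the hypot-based eigenvalue formula for [a b; b d];
# eigvals2x2 is a hypothetical helper, not a function from this PR.
function eigvals2x2(a, b, d)
    mid = (a + d) / 2            # midpoint of the two eigenvalues
    r   = hypot((a - d) / 2, b)  # radius, computed without explicit squaring
    return (mid - r, mid + r)
end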
This seems so close, I'd love to get it in the next release, but there's one problem still to resolve: the use of zip in the tests seems confusing and might be wrong.
Also, how does this change as a whole impact performance?
Anyway, it's great that you've managed to delete half the code from the original implementation. That's what I like to see!
test/eigen.jl
Outdated
smallest_normal = floatmin(zero) | ||
largest_subnormal = prevfloat(smallest_normal) | ||
epsilon = eps(1.0) | ||
one_p_epsilon = 1.0 + epsilon |
Quibble: better written as nextfloat(1.0). It's the same in this case, but if you'd written 1.0 - epsilon, that's not equal to prevfloat(1.0), which is a bit of a gotcha given that the boundary in floating-point discretization density lies on the powers of two.
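Concretely (a quick REPL check, not from the thread):

julia> 1.0 + eps(1.0) == nextfloat(1.0)   # spacing just above 1.0 is eps(1.0)
true

julia> 1.0 - eps(1.0) == prevfloat(1.0)   # spacing just below 1.0 is eps(1.0)/2
false

julia> 1.0 - eps(1.0)/2 == prevfloat(1.0)
true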
👍
test/eigen.jl (outdated):

epsilon = eps(1.0)
one_p_epsilon = 1.0 + epsilon
degenerate = (zero, -1, 1, smallest_non_zero, smallest_normal, largest_subnormal, epsilon, one_p_epsilon, -one_p_epsilon)
@testset "2×2 degenerate cases" for (i, j, k) in zip(degenerate,degenerate,degenerate), uplo in (:U, :L)
I don't think this zip does what you want, because i, j, k are always the same? Though I'm not sure what you want! Perhaps you meant to use for (i, j, k) in Iterators.product(degenerate, degenerate, degenerate)?
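To illustrate the difference on a small tuple (example values, not the test's):

julia> collect(zip((1, 2, 3), (1, 2, 3), (1, 2, 3)))   # only the "diagonal" triples
3-element Vector{Tuple{Int64, Int64, Int64}}:
 (1, 1, 1)
 (2, 2, 2)
 (3, 3, 3)

julia> length(Iterators.product((1, 2, 3), (1, 2, 3), (1, 2, 3)))   # all combinations
27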
You're totally right, that's what I meant
Where is this one at?
Do we have an answer to this one?
I tested the performance impact of this just now: as far as I can see, there is a speed-up for Hermitian{ComplexF64, ...} matrices and a slow-down for Hermitian{Float64, ...} matrices.

Hermitian{ComplexF64, ...} matrices

Master:

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(ComplexF64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 940 evaluations.
 Range (min … max):  101.915 ns … 324.681 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     102.766 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   105.722 ns ±   9.921 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▅█▁ ▁▃▁              ▁▁                                       ▁
  ███▆▅▄▆▆▆▅▅▅▆███▇█████▇▇▇▆▅▅▅▅▅▅▄▅▅▅▄▅▄▄▅▄▅▄▄▄▅▅▄▄▄▅▅▅▄▄▄▄▃▄▃ █
  102 ns        Histogram: log(frequency) by time         149 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

PR (speed-up):

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(ComplexF64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 979 evaluations.
 Range (min … max):  64.147 ns … 324.821 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     66.394 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   67.999 ns ±   7.887 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▄▆▆█▂ ▁▂▁▁    ▁                                              ▁
  █████▆▄▅▃▆▅▆▆▆▄▆██████████▇▇▇▇▆▆▅▅▅▅▅▅▄▅▅▅▄▆▄▅▅▄▅▁▄▅▄▅▃▅▅▄▄▄ █
  64.1 ns       Histogram: log(frequency) by time         104 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Hermitian{Float64, ...} matrices

Master:

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(Float64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  8.909 ns … 69.670 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     9.009 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   9.230 ns ±  1.770 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▄                                                         ▁
  ██▆▄▄▄▄▁▃▄▅▄▃▃▃▄▁▁▁▄▅▇▇▆▅▄▅▄▄▃▄▄▅▄▄▄▁▃▃▁▃▃▁▁▁▃▁▁▁▁▁▃▁▄▃▃▄▅ █
  8.91 ns      Histogram: log(frequency) by time      18.9 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

PR (slow-down):

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(Float64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 994 evaluations.
 Range (min … max):  29.879 ns … 204.930 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     31.489 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   32.438 ns ±   5.469 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▁▆▃ █▃                                                       ▁
  ███████▆▆▅▄▄▄▅▁▃▄▄▅▆▅▆▅▅▄▁▄▅▇█▇█▇▇▆▇▇▆▆▇▅▇▅▇▇▅▅▅▆▆▆█▇▇▆▅▆▆▅▆ █
  29.9 ns       Histogram: log(frequency) by time         51.6 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.
Using the fact that the eigenvectors must be orthonormal, v12 and v22 can be computed from v11 and v21. In addition to being more performant, this is actually more robust in some cases; for instance, with [1.0 -9.465257061381386e-20; -9.465257061381386e-20 1.0], the previous code would return duplicated eigenvectors.
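A minimal sketch of the idea for the real symmetric case (hypothetical helper, not the PR's exact code): once the first eigenvector is normalized, rotating it by 90° gives a second eigenvector that is orthonormal by construction, so it can never collapse onto the first.

using LinearAlgebra

# Sketch for [a b; b d]: compute one eigenvector, derive the other by rotation.
# The diagonal case b == 0 still needs the special-casing discussed above,
# since [b, λ - a] can degenerate to the zero vector there.
function eigvecs2x2(a, b, d)
    λ = (a + d) / 2 - hypot((a - d) / 2, b)  # smaller eigenvalue
    v = normalize([b, λ - a])                # satisfies (A - λI) * v == 0
    return [v [-v[2], v[1]]]                 # second column orthonormal by construction
end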