-
The thing is, we never run anywhere close to |
-
I was excited to see the custom PyTorch implementation of Lanczos iterations in gpytorch.utils - awesome work! Now I am trying to understand how Lanczos is used to accelerate various applications, and I'm looking to get some clarity.
I see that many of the core LazyTensor methods (e.g. `diagonalization`, `root_decomposition`) offer a "lanczos" option. My understanding is that, by performing Lanczos to obtain a tridiagonalization as pre-processing, we can then quickly solve for eigenvalues/eigenvectors by exploiting the tridiagonal structure of the resulting T matrix. However, stepping through the code I see that the ensuing steps use the method `lanczos_tridiag_to_diag`, which ultimately calls `torch.symeig`. Since `torch.symeig` is oblivious to the special structure of T, the Lanczos pre-processing seems like pure overhead: we might as well call `symeig` on the original matrix.

I'm wondering if there is something I'm missing. Does gpytorch use eigendecomposition routines optimized for tridiagonal matrices? SciPy has specialized functions of this kind (e.g. `linalg.eigh_tridiagonal` and `linalg.eig_banded`) which are 3-5x faster than generic eigendecomposition on tridiagonal matrices. These functions call LAPACK routines such as `?stemr` and `?sbevd` for the decomposition. SciPy also has specialized tridiagonal linear solvers based on `?ptsv` and `?gtsv` (those algorithms are insanely fast!). I would love to see some of this functionality written in PyTorch.