
Implement morphisms of elliptic surfaces given by translations #3630

Merged

Conversation

HechtiDerLachs (Collaborator, Author)

CC: @simonbrandhorst

Sections seem to be nasty: in the example added for the tests, I couldn't get all charts to produce the local ideals for 2*P.

experimental/Schemes/elliptic_surface.jl (resolved review thread)
Comment on lines +2110 to +2113
```julia
function morphism_from_section(
    X::EllipticSurface, P::EllipticCurvePoint;
    divisor::AbsWeilDivisor=_section(X, P)
  )
```
HechtiDerLachs (Collaborator, Author)

This works now, but it returns two maps: the identification of IP^1 with the section, and the inclusion of the latter.
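For orientation, a minimal usage sketch (hypothetical variable names; it assumes, per the comment above, that the two maps come back as a pair):

```julia
# Hypothetical setup: X::EllipticSurface, P a point on the generic fiber.
# Per the comment above, two maps are returned:
ident, incl = morphism_from_section(X, P)
# `ident` identifies IP^1 with the section determined by P;
# `incl` is the inclusion of that section into X.
```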

HechtiDerLachs (Collaborator, Author)

@afkafkafk13 : There's a lazy type for strict transforms of ideal sheaves here now. Maybe you would like to have a look.
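For illustration only (not the actual OSCAR type; all names below are hypothetical): the idea of such a lazy type is to delay the per-chart strict transform computation until it is first requested, and to cache the result.

```julia
# Hypothetical sketch of lazy evaluation for strict transforms.
# `produce` stands in for the actual (expensive) per-chart computation.
mutable struct LazyStrictTransform{F}
  produce::F           # chart -> ideal, evaluated on demand
  cache::Dict{Any,Any} # per-chart cache of results
end

LazyStrictTransform(f) = LazyStrictTransform(f, Dict{Any,Any}())

# Compute the strict transform on the chart U only once, then reuse it.
on_chart(I::LazyStrictTransform, U) = get!(() -> I.produce(U), I.cache, U)
```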

Comment on lines 180 to 185
```julia
lat = algebraic_lattice(X)
# matrix of pairwise intersection numbers for the generators of the lattice
A = [intersect(a, b) for a in lat, b in lat]

ll = Oscar._pushforward_lattice_along_isomorphism(trans)
B = [intersect(a, b) for a in ll, b in ll]
# pushing the lattice forward along an isomorphism must preserve all
# pairwise intersection numbers
@test A == B
```
HechtiDerLachs (Collaborator, Author)

I added some more expensive tests here. Maybe we should disable them again, but leave them in as comments for later reference.

simonbrandhorst (Collaborator)

I just tried to compute the algebraic lattice with this branch, and after 12 hours the computation had not completed. It was stuck in the computation of some gluings.

simonbrandhorst (Collaborator)

For the record, on master I got

```
392.821593 seconds (774.37 M allocations: 667.636 GiB, 41.50% gc time, 0.08% compilation time)
```

(The amount of allocations is scary.)
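This is the shape of output printed by Julia's @time macro; a measurement like the one above is obtained along these lines (a sketch, assuming the surface X has already been constructed):

```julia
using Oscar
# @time reports elapsed time, the number and volume of allocations,
# and the share of time spent in GC and compilation.
@time algebraic_lattice(X)
```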

HechtiDerLachs (Collaborator, Author) commented Apr 22, 2024

> I just tried to compute the algebraic lattice with this branch and after 12 hours the computation did not complete. It was stuck in the computation of some gluings.

I can confirm these findings. Interestingly, it did not get stuck on any Groebner basis computation, but on divexact for two multivariate polynomials over a finite field. @fieker informed me the other day that these divisions can be unexpectedly expensive. But they are hardcoded into the fraction fields for addition and multiplication of elements, which are in turn used by the localizations, so we cannot switch them off. It seems that some gluings are simply too hard to even be spelled out.

This problem does not occur on the second-to-last commit 54288df, which suggests to me that we are only stepping on a mine that has been there all along. This unfortunate step is probably taken due to the new heuristics implemented with cheap_sub_ideal for StrictTransformIdealSheaf.

All this is pretty worrying to me, since the desingularizations we are considering here should be regarded as rather simple, and yet they cannot be fully computed. @afkafkafk13: does this seem familiar from your earlier experiences with Singular's desingularization?
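To illustrate the mechanism (my sketch of the general phenomenon, not the failing computation itself): every arithmetic operation in a fraction field over a multivariate polynomial ring normalizes its result, which internally runs gcd and divexact on numerators and denominators.

```julia
using Oscar

# Fraction field arithmetic over GF(29)[x, y]: results are kept in
# lowest terms, so each operation triggers gcd/divexact internally.
R, (x, y) = polynomial_ring(GF(29), [:x, :y])
a = (x^2 - y^2)//(x - y)  # already here the common factor x - y is cancelled
b = x//y
c = a + b                 # addition normalizes the sum again
```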

HechtiDerLachs (Collaborator, Author)

> the amount of allocations is scary

It is. Looking at some intermediate computations with ProfileView, it seems that a lot of time is spent on conversions to and from the Singular side. That might also be a reason for the massive number of allocations, but I'm not sure.

afkafkafk13 (Collaborator)

> > the amount of allocations is scary
>
> It is. Looking at some intermediate computations with ProfileView, it seems that a lot of time is spent on conversions to and from the Singular side. That might also be a reason for the massive number of allocations. But I'm not sure.

Yes, this observation makes a lot of sense to me: the overhead of translating back and forth does not make a large difference for any single non-trivial computation, but when working on covered schemes we have a huge number of these (often smaller) non-trivial computations, and it just adds up.

afkafkafk13 (Collaborator)

> I can confirm these findings. Interestingly, it did not get stuck on any Groebner basis computation, but on divexact for two multivariate polynomials over a finite field. [...] Does this seem familiar from your earlier experiences with Singular's desingularization?

Here I have two bits of input:

A) Avoid computing over fraction fields of multivariate polynomial rings as much as possible. Emulate whatever you can in a polynomial ring setting (I am not saying quotient ring!); see the sketch below. If you are dealing with the results of sequences of blow-ups and you are not facing subsets of (rather young) exceptional divisors, you might consider passing via a common ancestor instead of gluing.

B) Often you do not need all charts to have the full information, but you need to either choose carefully what you need, or start a parallel race. Try a heuristic (taking into account the number of monomials of the data and the distance in the tree of blow-up charts) to find a smaller set of gluings to actually compute. Note that passing through 2 or 3 rather easy gluings of nearby charts in the tree of charts can be a lot less expensive than computing the direct gluing map.
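To make A) concrete, here is a minimal sketch (my illustration, not code from this PR) of emulating a fraction field computation in the polynomial ring itself: represent fractions as unreduced numerator/denominator pairs and test equality by cross-multiplication, so no gcd/divexact normalization is ever triggered.

```julia
using Oscar

R, (x, y) = polynomial_ring(GF(29), [:x, :y])

# Fractions as (numerator, denominator) pairs in R, never normalized:
p, q = x^2 - y^2, x - y
r, s = x + y, one(R)

# p/q == r/s is tested by cross-multiplication inside R:
p * s == r * q   # true, without constructing fraction field elements
```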

HechtiDerLachs (Collaborator, Author)

> Here I have two bits of input:

But if I understand this correctly, this does not save us from such gluing mines entirely, correct? It's just a guessing game to avoid stepping on them. That's rather unsatisfactory in a linear environment where we cannot perform race-to-win tasks in parallel on different workers.

afkafkafk13 (Collaborator)

> But if I understand this correctly, this does not save us from such gluing mines entirely, correct? It's just a guessing game to avoid stepping on them. [...]

Indeed, A) is just a guessing game.

For B): the passage through the tree of charts is fine for nice centers (which you should be seeing in your easy example after simplification), but becomes nasty whenever your center is described by complicated polynomials (explosion of the number of terms after just a few steps!). This is more than just guesswork; see the sketch below. Without this, and a game of ignoring charts that carry no new information (X does not meet the chart outside the locus covered by other charts), you are quickly 'dead'.

This is the background for why I was so interested in having all this in parallel in the first place. In a non-sequential setting, you can simply let those mines explode on some of the nodes, clean up the debris in due time (after the eureka moment) and then reuse the nodes for new tasks.

Another lesson learned in this context: forget (free!!!) all refinements after use, as soon as your data has been expressed in terms of a coarser covering. By using an avoidably fine covering, you nearly always lose more than you gain.
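A schematic illustration of the remark on B) (hypothetical code, not OSCAR's API): if the transitions between adjacent charts in the tree are cheap, the map between two distant charts can be assembled by composing them along the tree path instead of computing the direct gluing.

```julia
# Hypothetical sketch: charts are nodes of a tree; transitions between
# adjacent charts are stored as functions. The gluing between distant
# charts is the composition of the cheap transitions along the path.
function gluing_along_path(transition::Dict, path::Vector)
  maps = [transition[(path[i], path[i+1])] for i in 1:length(path)-1]
  reduce((f, g) -> g ∘ f, maps)  # compose in path order
end

# Toy usage: three charts 1 -> 2 -> 3 with simple transition functions.
transition = Dict((1, 2) => x -> x + 1, (2, 3) => x -> 2x)
g = gluing_along_path(transition, [1, 2, 3])
@assert g(1) == 4  # (1 + 1) * 2
```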

HechtiDerLachs (Collaborator, Author)

The bibtool tests fail due to external reasons. Let's see whether they get back to normal.

Resolved review threads:
experimental/Schemes/BlowupMorphism.jl (3 threads)
experimental/Schemes/IdealSheaves.jl (3 threads, 1 outdated)
experimental/Schemes/elliptic_surface.jl
src/Rings/mpoly-localizations.jl (outdated)
test/AlgebraicGeometry/Schemes/elliptic_surface.jl (2 threads, outdated)
simonbrandhorst (Collaborator)

In conclusion, I think this can be merged once the tests are adjusted and some parts of the comments are dealt with.

It is a big step forward in performance. Thank you for your work, @HechtiDerLachs!

HechtiDerLachs (Collaborator, Author)

About the radical questions: yes, the code is up to date. The method for is_subset has been overridden so that only radical_membership is used on the affine charts.
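Schematically (a sketch with hypothetical names, not the PR's exact code), the chart-wise test described here amounts to:

```julia
# Hypothetical sketch of the subset test on a single affine chart:
# I ⊆ J holds up to radical iff every generator of I lies in radical(J),
# which radical_membership decides without computing the radical itself.
is_subset_on_chart(I, J) = all(g -> radical_membership(g, J), gens(I))
```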

simonbrandhorst (Collaborator)

Did you check that the test runs reasonably fast?

HechtiDerLachs (Collaborator, Author)

> Did you check that the test runs reasonably fast?

Roughly 12 minutes for the newly added test. Still a bit much, so we might want to leave a comment that this test is a candidate for being disabled. Any suggestions/opinions, @fingolfin?

simonbrandhorst (Collaborator)

11 minutes is far too much. I guess we could live with something that takes 2 minutes.

simonbrandhorst (Collaborator)

The good news is that the old tests which were previously too slow now all run in less than a minute. I think we can enable some of them now.

simonbrandhorst (Collaborator)

My suggestion would be to disable the test for now, get it merged and then do the testing in a different PR.

simonbrandhorst enabled auto-merge (squash) on April 25, 2024 at 08:45.
codecov bot commented Apr 25, 2024

Codecov Report

Attention: Patch coverage is 50.48544%, with 153 lines in your changes missing coverage. Please review.

Project coverage is 82.67%. Comparing base (e482be6) to head (d6e22ec).
Report is 9 commits behind head on master.

Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #3630      +/-   ##
==========================================
- Coverage   82.79%   82.67%   -0.13%     
==========================================
  Files         577      577              
  Lines       77731    77995     +264     
==========================================
+ Hits        64357    64479     +122     
- Misses      13374    13516     +142     
```
| Files | Coverage | Δ |
|---|---|---|
| experimental/Schemes/BlowupMorphism.jl | 89.42% <100.00%> | (-0.74%) ⬇️ |
| experimental/Schemes/Types.jl | 88.70% <100.00%> | (+0.57%) ⬆️ |
| experimental/Schemes/critical_locus.jl | 0.00% <ø> | (ø) |
| ...eometry/Schemes/AffineSchemes/Morphisms/Methods.jl | 93.75% <100.00%> | (ø) |
| ...etry/Schemes/AffineSchemes/Objects/Constructors.jl | 75.34% <ø> | (ø) |
| ...aicGeometry/Schemes/Covering/Objects/Attributes.jl | 98.00% <100.00%> | (+0.17%) ⬆️ |
| src/Rings/mpoly-localization_types.jl | 100.00% <100.00%> | (ø) |
| src/Rings/mpolyquo-localizations.jl | 71.33% <100.00%> | (-0.25%) ⬇️ |
| experimental/Schemes/SimplifiedAffineScheme.jl | 82.03% <83.33%> | (ø) |
| .../Schemes/PrincipalOpenSubset/Objects/Attributes.jl | 77.27% <75.00%> | (-2.73%) ⬇️ |

... and 5 more

... and 8 files with indirect coverage changes

simonbrandhorst merged commit e1dd191 into oscar-system:master on Apr 25, 2024 (29 checks passed).
HechtiDerLachs deleted the elliptic_surface_translation branch on April 25, 2024 at 10:09.