[Enhancement] Limit memory used by ParallelIterable in Iceberg #54219
Conversation
@stephen-shelby @Youngwb please help review
[FE Incremental Coverage Report] ✅ pass : 0 / 0 (0%)
How many manifest files and data files are there in your case? If you use version 3.3 or above, this case may go through the distributed plan instead of the local plan.
could you try to
@Mergifyio rebase |
Signed-off-by: zhaohehuhu <[email protected]>
✅ Branch has been successfully rebased
OK. I will give it a try.
[Java-Extensions Incremental Coverage Report] ✅ pass : 0 / 0 (0%)
[BE Incremental Coverage Report] ✅ pass : 0 / 0 (0%)
Can we merge this PR first? Iceberg has fixed some internal issues, so it's fine to update the Iceberg version, just like Trino did. @stephen-shelby @gengjun-git
I failed to set plan_mode to distributed due to this issue: "Failed to open the off-heap table scanner."
You could check the detailed message in fe.log.
SQL Error [1064] [42000]: Failed to execute metadata collection job. Failed to open the off-heap table scanner. java exception details: java.lang.NoClassDefFoundError: Could not initialize class de.javakaffee.kryoserializers.UnmodifiableCollectionsSerializer

When the execution plan mode is switched to distributed, the issue above occurs. This is a compatibility issue between JDK 17 and Kryo (it may be fixed by #55016).
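For context, a minimal sketch of how this serializer is typically wired up and why it fails under JDK 17. The `registerSerializers` call is the kryo-serializers library's standard entry point; the `--add-opens` flag is the commonly cited workaround for JDK 17's strong encapsulation, mentioned here as an assumption rather than the fix this thread confirms:

```java
import com.esotericsoftware.kryo.Kryo;
import de.javakaffee.kryoserializers.UnmodifiableCollectionsSerializer;

public class KryoSetup {
    public static Kryo newKryo() {
        Kryo kryo = new Kryo();
        // The serializer's static initializer reflects into private fields of the
        // java.util collection wrappers. JDK 17's strong encapsulation blocks that
        // reflection, so class initialization fails once, and every later use then
        // surfaces as: NoClassDefFoundError: Could not initialize class ...
        // The usual workaround is launching the JVM with:
        //   --add-opens java.base/java.util=ALL-UNNAMED
        UnmodifiableCollectionsSerializer.registerSerializers(kryo);
        return kryo;
    }
}
```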
Closed. Someone did it.
Why I'm doing:
The ConcurrentLinkedQueue used by ParallelIterable has no size limit in Iceberg 1.6.0. When the Iceberg table is large, the queue grows very large as well, which can cause an OOM in the FE.
As the heap-dump screenshot shows, the ConcurrentLinkedQueue was consuming more than 91% of heap memory and bringing the Frontend (FE) down, which indicates a significant memory management issue caused by the queue's unbounded growth.
This issue was resolved in Iceberg 1.6.1 (#10691).
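To illustrate the failure mode and the shape of the 1.6.1 fix, here is a minimal Java sketch of the producer/consumer pattern involved (hypothetical class and method names, not the actual Iceberg source): in 1.6.0 the producer loop has no size check, so worker tasks can flood the queue faster than the consumer drains it; the 1.6.1-style fix makes tasks yield once the queue reaches an approximate limit.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of a bounded producer over a ConcurrentLinkedQueue. Without the
// size check in produce(), a large table floods the queue and OOMs the FE.
class BoundedProducerSketch<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
    private final int approximateMaxQueueSize;

    BoundedProducerSketch(int approximateMaxQueueSize) {
        this.approximateMaxQueueSize = approximateMaxQueueSize;
    }

    // Runs on a worker thread; enqueues entries from one input source.
    // Yields (returns false) when the queue looks full so the caller can
    // re-schedule the task later instead of growing the queue forever.
    boolean produce(Iterator<T> source) {
        while (source.hasNext()) {
            if (queue.size() >= approximateMaxQueueSize) {
                return false; // yield: back-pressure instead of unbounded growth
            }
            queue.add(source.next());
        }
        return true; // source exhausted
    }

    // Consumer side: returns null when the queue is currently empty.
    T poll() {
        return queue.poll();
    }
}
```

Note that ConcurrentLinkedQueue.size() is O(n) and only approximate under concurrent modification, which is acceptable here because the limit acts as a soft back-pressure threshold, not an exact capacity.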
What I'm doing:
as title