Hook up partition compaction end to end implementation #6510
Conversation
LGTM
LGTM
```go
@@ -698,15 +754,26 @@ func (c *Compactor) stopping(_ error) error {
}

func (c *Compactor) running(ctx context.Context) error {
	// Ensure an initial cleanup occurred as first thing when running compactor.
	if err := services.StartAndAwaitRunning(ctx, c.blocksCleaner); err != nil {
```
Is there a specific reason why we have to move cleaning here?
Because the cleaner cycle might run for a while, depending on how many tenants there are and how big each tenant is. We don't want the compactor to get into an unhealthy state in the ring because of a long-running cleaner process.
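To make the lifecycle trade-off concrete, here is a minimal, self-contained sketch using grafana/dskit's `services` package. The `compactor` and cleaner bodies, timings, and log lines are stand-ins, not Cortex's actual code: keeping slow work out of `starting()` lets the service reach the Running state (and be seen as healthy in the ring) right away, while the cleaner performs its long initial sweep inside `running()`.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/grafana/dskit/services"
)

// compactor is a stand-in for Cortex's Compactor; only the
// service lifecycle is modeled here.
type compactor struct {
	services.Service
	blocksCleaner services.Service
}

func newCompactor() *compactor {
	c := &compactor{}
	// Stand-in cleaner whose starting phase models a long initial
	// cleanup cycle (many tenants, large tenants).
	c.blocksCleaner = services.NewBasicService(
		func(ctx context.Context) error {
			time.Sleep(3 * time.Second) // stands in for the initial cleanup
			return nil
		},
		func(ctx context.Context) error { <-ctx.Done(); return nil },
		func(_ error) error { return nil },
	)
	c.Service = services.NewBasicService(c.starting, c.running, c.stopping)
	return c
}

// starting stays cheap, so the compactor transitions to Running
// (and is reported healthy in the ring) almost immediately.
func (c *compactor) starting(ctx context.Context) error { return nil }

func (c *compactor) running(ctx context.Context) error {
	// The slow initial cleanup happens here, after the compactor is
	// already Running, instead of blocking startup.
	if err := services.StartAndAwaitRunning(ctx, c.blocksCleaner); err != nil {
		return err
	}
	<-ctx.Done()
	return nil
}

func (c *compactor) stopping(_ error) error {
	return services.StopAndAwaitTerminated(context.Background(), c.blocksCleaner)
}

func main() {
	c := newCompactor()
	if err := services.StartAndAwaitRunning(context.Background(), c); err != nil {
		log.Fatal(err)
	}
	log.Println("compactor Running; cleaner still performing its initial sweep")
	time.Sleep(4 * time.Second)
	_ = services.StopAndAwaitTerminated(context.Background(), c)
}
```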
```go
func (f *DisabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
	return nil
}
```
Can we make these types private? Can you also add a comment to `DisabledDeduplicateFilter`? We want to disable the duplicate filter because it makes no sense for the partitioning compactor, as we always have duplicates?
The `DefaultDeduplicateFilter` from Thanos would mark blocks as duplicates if they come from the same group and have the same source blocks. In the partitioning compactor, partitions from the same time range will always, or eventually, have the same sources; that is the nature of the partitioning compactor. We don't want those blocks to be filtered out when grouping for the next level of compaction.
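For illustration, a no-op deduplicate filter along these lines could look like the sketch below. This is a sketch, not the PR's code: the `Filter` signature is simplified from Thanos' `MetadataFilter` interface (the real one also takes sync metrics), and only `DuplicateIDs` mirrors the diff above.

```go
package compactor

import (
	"context"

	"github.com/oklog/ulid"
	"github.com/thanos-io/thanos/pkg/block/metadata"
)

// DisabledDeduplicateFilter never marks blocks as duplicates. The
// partitioning compactor deliberately emits partitions that share
// source blocks, so Thanos-style deduplication would wrongly drop
// them when grouping for the next compaction level.
type DisabledDeduplicateFilter struct{}

// Filter is a no-op: every block survives. (Signature simplified
// from Thanos' MetadataFilter interface for this sketch.)
func (f *DisabledDeduplicateFilter) Filter(_ context.Context, _ map[ulid.ULID]*metadata.Meta) error {
	return nil
}

// DuplicateIDs reports no duplicates, matching the diff above.
func (f *DisabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
	return nil
}
```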
```go
globalMaxt := blocks[0].Meta().MaxTime
g, _ := errgroup.WithContext(ctx)
g.SetLimit(8)
```
Is this a sane default to set in Cortex?
In my tests, 8 is good enough to keep the CPU busy during compaction. I am wondering whether this number is too high for end users; would it just cause CPU usage to peak at 100% for a longer time?
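To make the trade-off concrete, here is a minimal sketch of the errgroup pattern from the diff (the shard loop and sleep are stand-ins for the CPU-bound compaction work): `SetLimit` caps how many goroutines run at once, so a higher limit shortens wall time but holds more cores at full utilization for the duration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, _ := errgroup.WithContext(context.Background())
	// At most 8 tasks run at once; further g.Go calls block until a
	// slot frees up. Raising the limit finishes sooner but keeps more
	// cores pegged at 100% while compaction runs.
	g.SetLimit(8)

	for shard := 0; shard < 32; shard++ {
		shard := shard
		g.Go(func() error {
			time.Sleep(100 * time.Millisecond) // stand-in for CPU-bound work
			fmt.Println("processed shard", shard)
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("compaction failed:", err)
	}
}
```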
Hook up partition compaction end to end implementation (#6510)

* Implemented partition compaction end to end with custom compaction lifecycle
* removed unused variable
* tweak test
* tweak test
* refactor according to comments
* tweak test
* check context error inside sharded posting
* fix lint
* fix integration test for memberlist
* make compactor initial wait cancellable

Signed-off-by: Alex Le <[email protected]>
What this PR does:
Implements the partitioning-compaction lifecycle functions so that partitioning compaction works end to end.
`PartitionCompactionBlockDeletableChecker` makes sure no parent blocks are deleted after each compaction; the cleaner handles parent-block cleanup for partitioning compaction. `ShardedBlockPopulator` uses `ShardedPosting` to include only a particular partition's series in the result block. `ShardedCompactionLifecycleCallback` emits partitioning-compaction metrics at the beginning and end of each compaction, and it also initializes a `ShardedBlockPopulator` for each compaction. The sharding rule is sketched below.
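A minimal sketch of the hash-mod-partition idea behind the sharding. The `shardSeries` helper is hypothetical and operates on label sets for clarity; the PR applies the equivalent rule at the postings level via `ShardedPosting`.

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

// shardSeries keeps only the series whose label hash falls into the
// given partition. Every series lands in exactly one partition, so
// sibling partitions share source blocks but never share series.
func shardSeries(series []labels.Labels, partitionID, partitionCount uint64) []labels.Labels {
	var out []labels.Labels
	for _, s := range series {
		if s.Hash()%partitionCount == partitionID {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	all := []labels.Labels{
		labels.FromStrings("__name__", "up", "job", "a"),
		labels.FromStrings("__name__", "up", "job", "b"),
		labels.FromStrings("__name__", "up", "job", "c"),
	}
	for id := uint64(0); id < 2; id++ {
		fmt.Println("partition", id, "->", shardSeries(all, id, 2))
	}
}
```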
Which issue(s) this PR fixes:
Fixes #
Checklist
CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]