Hook up partition compaction end to end implementation #6510
base: master
Conversation
…fecycle Signed-off-by: Alex Le <[email protected]>
LGTM
LGTM
@@ -698,15 +754,26 @@ func (c *Compactor) stopping(_ error) error {
}

func (c *Compactor) running(ctx context.Context) error {
	// Ensure an initial cleanup occurred as first thing when running compactor.
	if err := services.StartAndAwaitRunning(ctx, c.blocksCleaner); err != nil {
Is there a specific reason why we have to move cleaning here?
func (f *DisabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
	return nil
}
Can we make these types private? Can you also add a comment to DisabledDeduplicateFilter? We want to disable the duplicate filter because it makes no sense for the partitioning compactor, as we always have duplicates?
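A minimal sketch of what the review is asking for: a private, documented no-op filter. The lower-cased name, the blockID stand-in for ulid.ULID, and the method shape are illustrative assumptions, not the actual Cortex types.

```go
package main

import "fmt"

// blockID stands in for ulid.ULID so this sketch has no external
// dependency; the real filter returns []ulid.ULID.
type blockID string

// disabledDeduplicateFilter (private, per the review suggestion) is a
// no-op deduplicate filter. The partitioning compactor intentionally
// writes the same series into multiple partitioned blocks, so those
// blocks always look like "duplicates"; deduplication is disabled so
// the filter never discards valid partitioned blocks.
type disabledDeduplicateFilter struct{}

// DuplicateIDs always reports no duplicates.
func (f *disabledDeduplicateFilter) DuplicateIDs() []blockID {
	return nil
}

func main() {
	f := &disabledDeduplicateFilter{}
	fmt.Println(len(f.DuplicateIDs()))
}
```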
globalMaxt := blocks[0].Meta().MaxTime
g, _ := errgroup.WithContext(ctx)
g.SetLimit(8)
Is this a sane default to set in Cortex?
}
if b.Meta().MaxTime > globalMaxt {
	globalMaxt = b.Meta().MaxTime
}
Although it is part of the original tsdb implementation, and we also pass tsdb metrics into the function, I feel it is unnecessary to check block overlapping for the partitioning compactor. Especially since the info log above seems to always be emitted.
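For reference, the loop in the snippet reduces to a running maximum over the blocks' MaxTime values. A minimal sketch, assuming a meta struct that stands in for the tsdb block meta type:

```go
package main

import "fmt"

// meta mimics the MinTime/MaxTime fields returned by a block's Meta()
// in the snippet above (names illustrative).
type meta struct{ MinTime, MaxTime int64 }

// globalMaxTime reproduces the diff's loop: start from the first
// block's MaxTime and widen it as the remaining blocks are scanned.
func globalMaxTime(blocks []meta) int64 {
	globalMaxt := blocks[0].MaxTime
	for _, b := range blocks[1:] {
		if b.MaxTime > globalMaxt {
			globalMaxt = b.MaxTime
		}
	}
	return globalMaxt
}

func main() {
	fmt.Println(globalMaxTime([]meta{{0, 100}, {50, 300}, {200, 250}}))
}
```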
symbols := make(map[string]struct{})
var builder labels.ScratchBuilder
for postings.Next() {
	err := labelsFn(postings.At(), &builder, &bufChks)
We can pass bufChks as nil. If it is nil, then Decoder.Series will skip decoding chunk metadata.
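A dependency-free sketch of the nil-pointer convention the reviewer relies on: when the chunk-meta slice pointer is nil, chunk decoding is skipped entirely, which is cheaper when only labels and symbols are needed. The chunkMeta type and decodeSeries function are illustrative stand-ins, not the actual tsdb index API.

```go
package main

import "fmt"

// chunkMeta stands in for the tsdb chunks.Meta type.
type chunkMeta struct{ Ref uint64 }

// decodeSeries mimics the contract of the tsdb index decoder: if chks
// is nil, chunk metadata decoding is skipped entirely (the labels-only
// path); otherwise the chunk refs are decoded into the provided buffer.
func decodeSeries(encodedChunkRefs []uint64, chks *[]chunkMeta) int {
	if chks == nil {
		return 0 // caller only wants labels; skip chunk decoding
	}
	*chks = (*chks)[:0] // reuse the caller's buffer
	for _, ref := range encodedChunkRefs {
		*chks = append(*chks, chunkMeta{Ref: ref})
	}
	return len(*chks)
}

func main() {
	refs := []uint64{1, 2, 3}
	var buf []chunkMeta
	fmt.Println(decodeSeries(refs, &buf)) // decodes chunk metas into buf
	fmt.Println(decodeSeries(refs, nil))  // nil pointer: decoding skipped
}
```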
What this PR does:
Implements the partitioning-compaction lifecycle functions needed to make partitioning compaction work end to end.
PartitionCompactionBlockDeletableChecker makes sure no parent blocks get deleted after each compaction; the cleaner handles parent-block cleanup for partitioning compaction. ShardedBlockPopulator uses ShardedPosting to include only the particular series belonging to each result block. ShardedCompactionLifecycleCallback is used to emit partitioning-compaction metrics at the beginning and end of compaction; it also initializes the ShardedBlockPopulator for each compaction.

Which issue(s) this PR fixes:
Fixes #
Checklist
CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]