
Hook up partition compaction end to end implementation #6510

Merged
merged 10 commits into cortexproject:master from partition-compaction-e2e on Jan 21, 2025

Conversation

alexqyle
Contributor

@alexqyle alexqyle commented Jan 15, 2025

What this PR does:

Implement the partitioning-compaction lifecycle functions needed to make partitioning compaction work end to end.

  • PartitionCompactionBlockDeletableChecker ensures that no parent blocks are deleted after each compaction. The cleaner handles parent block cleanup for partitioning compaction.
  • ShardedBlockPopulator uses ShardedPosting to include only the series belonging to a particular partition in the result block (see the sketch after this list).
  • ShardedCompactionLifecycleCallback is used to emit partitioning compaction metrics at the beginning and end of compaction. It also initializes a ShardedBlockPopulator for each compaction.
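To illustrate the two core ideas (this is a minimal sketch, not the actual Cortex code), the snippet below shows partition selection by hashing a series' labels modulo the partition count, and a checker that never lets the compactor delete parent blocks. The names belongsToPartition, noDeleteChecker, partitionID and partitionCount are hypothetical.

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

// belongsToPartition mimics the idea behind ShardedPosting: a series goes into
// the result block for a partition only if its label hash falls into that
// partition (hash % partitionCount == partitionID).
func belongsToPartition(lbls labels.Labels, partitionID, partitionCount uint64) bool {
	return lbls.Hash()%partitionCount == partitionID
}

// noDeleteChecker mimics the role of PartitionCompactionBlockDeletableChecker:
// the compactor never deletes parent blocks itself; the cleaner removes them
// later, once every partition produced from them exists.
type noDeleteChecker struct{}

func (noDeleteChecker) CanDelete() bool { return false }

func main() {
	series := labels.FromStrings("__name__", "http_requests_total", "job", "api")
	fmt.Println("belongs to partition 0 of 4:", belongsToPartition(series, 0, 4))
	fmt.Println("parent block deletable:", noDeleteChecker{}.CanDelete())
}
```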

Which issue(s) this PR fixes:
Fixes #

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

Contributor

@danielblando danielblando left a comment

LGTM

Contributor

@yeya24 yeya24 left a comment

LGTM

@@ -698,15 +754,26 @@ func (c *Compactor) stopping(_ error) error {
}

func (c *Compactor) running(ctx context.Context) error {
// Ensure an initial cleanup occurred as first thing when running compactor.
if err := services.StartAndAwaitRunning(ctx, c.blocksCleaner); err != nil {
Contributor

Is there a specific reason why we have to move cleaning here?

Contributor Author

Because a cleaner cycle might run for a while, depending on how many tenants there are and how big each tenant is. We don't want the compactor to end up in an unhealthy state in the ring because of a long-running cleaner process.
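For illustration, here is a minimal sketch of the pattern being discussed: the blocks cleaner is started as the first thing inside running() rather than in starting(), so ring registration is not blocked by a long first cleanup. It assumes the grafana/dskit services package; the compactor struct here is a simplified stand-in, not the actual Cortex type.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/grafana/dskit/services"
)

type compactor struct {
	blocksCleaner services.Service
}

// starting returns quickly so the compactor can register in the ring and be
// reported healthy; the potentially long first cleanup is deferred to running().
func (c *compactor) starting(ctx context.Context) error {
	return nil
}

// running starts the cleaner as its first action, because a full cleanup cycle
// can take a long time on large tenants and must not block ring registration.
func (c *compactor) running(ctx context.Context) error {
	if err := services.StartAndAwaitRunning(ctx, c.blocksCleaner); err != nil {
		return err
	}
	<-ctx.Done()
	return nil
}

func main() {
	cleaner := services.NewIdleService(func(ctx context.Context) error {
		log.Println("cleaner started")
		return nil
	}, nil)

	c := &compactor{blocksCleaner: cleaner}
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	if err := c.running(ctx); err != nil {
		log.Fatal(err)
	}
}
```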


func (f *DisabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
return nil
}
Contributor

Can we make these types private?
Can you add a comment to DisabledDeduplicateFilter? We want to disable the duplicate filter because it makes no sense for the partitioning compactor, as we always have duplicates, right?

Contributor Author

The DefaultDeduplicateFilter from Thanos marks blocks as duplicates if they belong to the same group and have the same source blocks. With the partitioning compactor, partitions from the same time range always (or eventually) share the same sources; that is the nature of partitioning compaction. We don't want those blocks to be filtered out when grouping for the next compaction level.
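A rough sketch of such a no-op deduplicate filter, assuming a simplified, hypothetical filter interface rather than the real Thanos metadata-filter signature:

```go
package main

import (
	"context"
	"fmt"

	"github.com/oklog/ulid"
)

// deduplicateFilter is a hypothetical, trimmed-down stand-in for the interface
// a compactor uses to drop duplicate blocks before grouping.
type deduplicateFilter interface {
	Filter(ctx context.Context, metas map[ulid.ULID]struct{}) error
	DuplicateIDs() []ulid.ULID
}

// disabledDeduplicateFilter never marks anything as duplicate. With partition
// compaction, partitions covering the same time range naturally end up with
// the same source blocks, so a "same group + same sources" rule would wrongly
// filter them out before the next compaction level.
type disabledDeduplicateFilter struct{}

func (disabledDeduplicateFilter) Filter(ctx context.Context, metas map[ulid.ULID]struct{}) error {
	return nil // keep every block
}

func (disabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
	return nil // nothing is ever reported as a duplicate
}

func main() {
	var f deduplicateFilter = disabledDeduplicateFilter{}
	fmt.Println("duplicates:", f.DuplicateIDs())
}
```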


globalMaxt := blocks[0].Meta().MaxTime
g, _ := errgroup.WithContext(ctx)
g.SetLimit(8)
Contributor

Is this a sane default to set in Cortex?

Contributor Author

In my tests, 8 is enough to keep the CPU busy during compaction. I am wondering whether this number is too high for end users; would it just cause CPU usage to stay pegged at 100% for a longer time?
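As an illustration of the trade-off, here is a sketch that bounds per-compaction concurrency with errgroup's SetLimit, using a hypothetical processShards helper and capping the limit by available CPUs instead of a fixed 8:

```go
package main

import (
	"context"
	"fmt"
	"runtime"

	"golang.org/x/sync/errgroup"
)

// processShards fans shard work out over a bounded number of goroutines so a
// single compaction cannot keep every core saturated at once.
func processShards(ctx context.Context, shards []int, concurrency int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(concurrency) // at most `concurrency` shards run concurrently
	for _, shard := range shards {
		shard := shard // capture loop variable for Go < 1.22
		g.Go(func() error {
			if err := ctx.Err(); err != nil {
				return err // stop early if another shard already failed
			}
			fmt.Println("processing shard", shard)
			return nil
		})
	}
	return g.Wait()
}

func main() {
	// Instead of a fixed 8, one option is to cap the limit by available CPUs.
	limit := runtime.GOMAXPROCS(0)
	if limit > 8 {
		limit = 8
	}
	_ = processShards(context.Background(), []int{0, 1, 2, 3}, limit)
}
```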

pkg/compactor/sharded_block_populator.go (outdated review comments resolved)
pkg/compactor/sharded_posting.go (outdated review comments resolved)
@alexqyle alexqyle force-pushed the partition-compaction-e2e branch from 5e89a40 to be06845 on January 21, 2025 18:39
@danielblando danielblando merged commit c1a2134 into cortexproject:master Jan 21, 2025
17 checks passed
@alexqyle alexqyle deleted the partition-compaction-e2e branch January 23, 2025 21:35
alexqyle added a commit to alexqyle/cortex that referenced this pull request Jan 31, 2025
…#6510)

* Implemented partition compaction end to end with custom compaction lifecycle

Signed-off-by: Alex Le <[email protected]>

* removed unused variable

Signed-off-by: Alex Le <[email protected]>

* tweak test

Signed-off-by: Alex Le <[email protected]>

* tweak test

Signed-off-by: Alex Le <[email protected]>

* refactor according to comments

Signed-off-by: Alex Le <[email protected]>

* tweak test

Signed-off-by: Alex Le <[email protected]>

* check context error inside sharded posting

Signed-off-by: Alex Le <[email protected]>

* fix lint

Signed-off-by: Alex Le <[email protected]>

* fix integration test for memberlist

Signed-off-by: Alex Le <[email protected]>

* make compactor initial wait cancellable

Signed-off-by: Alex Le <[email protected]>

---------

Signed-off-by: Alex Le <[email protected]>
3 participants