db: shared compaction concurrency limit across multiple Pebble instances #3813
NB: this is orthogonal to #1329, which would adjust the node-level concurrency limit based on the available CPU. We would still need a limiter component that decides which among the queue of potential compactions gets to run next.
Cockroach-side issue, if we explore a solution higher up: cockroachdb/cockroach#74697
anish-shanbhag added a commit to anish-shanbhag/pebble that referenced this issue on Aug 28, 2024:

This change adds a new compaction pool which enforces a global max compaction concurrency in a multi-store configuration. Each Pebble store (i.e. an instance of *DB) still maintains its own per-store compaction concurrency, controlled by `opts.MaxConcurrentCompactions`. However, in a multi-store configuration, disk I/O is a per-store resource while CPU is shared across stores. A significant portion of compaction work is CPU-intensive, so this ensures that excessive compactions don't interrupt foreground CPU tasks even if the disks are capable of handling the additional throughput from those compactions. The shared compaction concurrency applies only to automatic compactions. This means that delete-only compactions are excluded because they are expected to be cheap, as are flushes because they should never be blocked. Fixes: cockroachdb#3813 Informs: cockroachdb/cockroach#74697
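The two-level gating described in the commit message (a per-store limit plus a node-wide pool) could be sketched roughly as below. This is a hypothetical illustration, not Pebble's actual implementation; names like `CompactionPool`, `Store`, and `tryStartCompaction` are invented for the example.

```go
// Hypothetical sketch: a node-wide semaphore gates automatic
// compactions across stores, while each store still enforces its own
// per-store limit (mirroring opts.MaxConcurrentCompactions).
package main

import "fmt"

type CompactionPool struct {
	slots chan struct{} // node-wide concurrency slots
}

func NewCompactionPool(maxConcurrent int) *CompactionPool {
	return &CompactionPool{slots: make(chan struct{}, maxConcurrent)}
}

// TryAcquire returns true if a node-wide slot is available.
func (p *CompactionPool) TryAcquire() bool {
	select {
	case p.slots <- struct{}{}:
		return true
	default:
		return false
	}
}

func (p *CompactionPool) Release() { <-p.slots }

type Store struct {
	name        string
	running     int
	perStoreMax int
}

// tryStartCompaction checks the per-store limit first, then the shared
// pool. In the scheme the commit message describes, delete-only
// compactions and flushes would bypass the shared pool entirely.
func (s *Store) tryStartCompaction(pool *CompactionPool) bool {
	if s.running >= s.perStoreMax {
		return false
	}
	if !pool.TryAcquire() {
		return false
	}
	s.running++
	return true
}

func main() {
	pool := NewCompactionPool(2) // node-wide limit of 2
	a := &Store{name: "s1", perStoreMax: 2}
	b := &Store{name: "s2", perStoreMax: 2}
	fmt.Println(a.tryStartCompaction(pool)) // true
	fmt.Println(a.tryStartCompaction(pool)) // true
	fmt.Println(b.tryStartCompaction(pool)) // false: node-wide limit hit
}
```

Note how store `s2` is refused even though it is under its own per-store limit: the node-wide pool, not the per-store count, is what blocks it.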
anish-shanbhag added further commits with the same message to anish-shanbhag/pebble referencing this issue on Aug 29 and Aug 30, 2024.
In CockroachDB multi-store deployments, especially with large numbers of stores (8 or more), the CPU consumed by compactions can be significant. We do need a per-store compaction concurrency limit, since disk bandwidth is a per-store resource, but we should additionally have a shared compaction concurrency limiter.
This shared limiter should fairly adjudicate which compaction gets to run next based on a score of how important it is. For example,
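As a rough illustration of score-based adjudication across stores, the sketch below picks the highest-scoring pending compaction. The scoring inputs (`levelScore`, `bytesPending`) and their weights are invented assumptions, not Pebble's actual heuristic.

```go
// Hypothetical sketch: pending compactions from all stores are scored,
// and the shared limiter grants the next available slot to the
// highest-scoring candidate.
package main

import (
	"fmt"
	"sort"
)

type pendingCompaction struct {
	store        string
	levelScore   float64 // e.g. how over-full the input level is (assumed input)
	bytesPending int64   // backlog of compaction debt on the store (assumed input)
}

// score combines per-compaction urgency with per-store backlog;
// the weighting (1 point per GiB of backlog) is an arbitrary placeholder.
func (c pendingCompaction) score() float64 {
	return c.levelScore + float64(c.bytesPending)/(1<<30)
}

// pickNext returns the pending compaction the shared limiter would
// run next: the one with the highest score.
func pickNext(pending []pendingCompaction) pendingCompaction {
	sort.Slice(pending, func(i, j int) bool {
		return pending[i].score() > pending[j].score()
	})
	return pending[0]
}

func main() {
	pending := []pendingCompaction{
		{store: "s1", levelScore: 1.2, bytesPending: 4 << 30}, // score 5.2
		{store: "s2", levelScore: 3.5, bytesPending: 1 << 30}, // score 4.5
		{store: "s3", levelScore: 0.8, bytesPending: 2 << 30}, // score 2.8
	}
	fmt.Println(pickNext(pending).store) // s1
}
```

A real limiter would also need to re-score as stores' backlogs change and to guard against starving low-scoring stores indefinitely.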
Jira issue: PEBBLE-230
Epic: CRDB-41111