mark RangeRequest.all() as @RestrictedApi #7375
base: develop
Conversation
this means callers have to justify their use of full range requests
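For readers who haven't used the mechanism: a minimal sketch of what this could look like, assuming error-prone's @RestrictedApi annotation is the enforcement mechanism (the allowlistAnnotations field is named whitelistAnnotations in older error-prone versions, and the RangeRequest internals shown are illustrative, not the actual AtlasDB code):

    import com.google.errorprone.annotations.RestrictedApi;

    public final class RangeRequest {
        // ... existing fields, builder, etc. elided ...

        /**
         * A request over the full table. Restricted: callers must annotate their
         * method with @AllowedRangeRequest to justify the usage.
         */
        @RestrictedApi(
                explanation = "Full range requests scan the entire table and can be very expensive; "
                        + "annotate the caller with @AllowedRangeRequest to justify the usage.",
                link = "",
                // named whitelistAnnotations in older error-prone versions
                allowlistAnnotations = {AllowedRangeRequest.class})
        public static RangeRequest all() {
            return builder().build();
        }
    }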
don't have context on some existing uses of RangeRequest.all()
@@ -1691,6 +1692,7 @@ private static String schemaChangeDescriptionForPutMetadataForTables(Collection<
     }

     @Override
+    @AllowedRangeRequest(justification = "RangeRequest.all only invoked for equality check")
but is truncateTables bad/expensive?
The method checks whether the request is equal to RangeRequest.all() and, in that case, as an optimisation, truncates the table instead of writing deletes over the full range. Truncate does not create tombstones, so from that perspective it's better, but I'm not sure about the performance. @jeremyk-91 do you know if this is expensive?
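For context, the shape of that optimisation is roughly as follows (a sketch only; truncateTable and deleteRangeInternal are illustrative names, not the actual AtlasDB methods):

    @Override
    @AllowedRangeRequest(justification = "RangeRequest.all only invoked for equality check")
    public void deleteRange(TableReference tableRef, RangeRequest range) {
        if (range.equals(RangeRequest.all())) {
            // Truncating the table writes no tombstones, unlike issuing a
            // delete for every cell in the full range.
            truncateTable(tableRef);
        } else {
            deleteRangeInternal(tableRef, range);
        }
    }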
let's make it consistent with the other place where the justification is the same: "a full range request is constructed only for an equality check"
With regard to truncate: I'd check with @inespot, but my understanding is that yes, truncate is much cheaper than just deleting. It does require all Cassandra nodes to be up and the coordinator needs to get an ack from each node that the data was deleted, but this should be comparatively fast. Also, in practice, deletes generally require consistency all and will similarly be affected by single node degradation (strictly speaking, there is an edge case where deletion is probably faster, outlined below in (*), but I don't think we care in practice).
(*) The "normal" method involves reading all cells (which only requires consistency quorum) and then deleting those cells (which does require consistency all). Technically, if no two nodes owning the same data are down (or slow) and, for rows that actually exist in the table, no single node owning that data is down (or slow), then the legacy path works (or is fast) while truncate does not (or is slow). This is more of an academic edge case, though.
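To illustrate the two paths being compared (a sketch under the assumptions above; the helper names are hypothetical, with the consistency levels from the discussion noted in comments):

    // Legacy path: read the cells at consistency QUORUM, then delete them at
    // consistency ALL. Tolerates a down node as long as quorum reads succeed
    // and all replicas of the rows that actually exist are up.
    void deleteViaRangeScan(TableReference tableRef) {
        Set<Cell> cells = readAllCells(tableRef); // QUORUM read
        deleteCells(tableRef, cells); // ALL delete; writes tombstones
    }

    // Truncate path: writes no tombstones and is usually much faster, but the
    // coordinator must get an ack from every Cassandra node, so it requires
    // all nodes to be up.
    void deleteViaTruncate(TableReference tableRef) {
        truncateTable(tableRef);
    }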
@@ -47,6 +48,7 @@ public V1TransactionsTableRangeDeleter(
     }

     @Override
+    @AllowedRangeRequest(justification = "???")
any context on what's calling this/why doing this deletion is ok?
@jeremyk-91 is this on the restore path?
Yes, only used on the restore path and for the transaction table, which in normal operation does not have any tombstones written. (Yes, there is a risk if you restore a stack twice in succession with no intervening compaction; should we make an internal note of this on SOPs or similar?)
@@ -89,6 +90,7 @@ private Set<TableReference> getConservativeTables(Set<TableReference> tables) {
     }

     @Override
+    @AllowedRangeRequest(justification = "deleting a range is cheaper than reading one")
actually true?
truncating the table is a cheaper option than deleting the full range
@@ -163,6 +164,7 @@ private static PartialCopyStats copyInternal(
         return stats;
     }

+    @AllowedRangeRequest(justification = "???")
size estimator is ok because..? we only ever read one batch?
ohh this is spicy: in theory this can be very expensive if, in order to read that one batch, we need to contact all Cassandra nodes.
Today at least: only used by the large internal product, which is not generally deployed with Cassandra.
(This probably should itself be restricted, though probably something for another day...)
We should do an announcement before pushing out this change to lower the possibility of teams falling behind on upgrading AtlasDB and to make sure they understand why we're doing this.
/**
 * Mark that this range request has been thought about and explicitly decided to be ok.
 */
public @interface AllowedRangeRequest {
I was wondering if having an enum field for high-level categorisation would be helpful for future analysis of why services are relying on range requests. Like background task, search use case, backfill, etc.?
interesting! could be useful, yes.
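A sketch of what the annotation might look like with such a field (the UseCase enum and its values are hypothetical, following @sverma30's suggestion; the existing justification field is kept):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    /**
     * Mark that this range request has been thought about and explicitly decided to be ok.
     */
    @Target({ElementType.METHOD, ElementType.CONSTRUCTOR})
    @Retention(RetentionPolicy.CLASS)
    public @interface AllowedRangeRequest {
        /** Hypothetical high-level categorisation of why a full range request is needed. */
        enum UseCase {
            BACKGROUND_TASK,
            SEARCH,
            BACKFILL,
            EQUALITY_CHECK_ONLY,
            OTHER
        }

        String justification();

        UseCase useCase() default UseCase.OTHER;
    }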
Thanks for writing this! It was interesting to refresh myself on the various contexts where this is used as well. @sverma30's enum suggestion would probably be nice, and nothing blocking on my end - I've mentioned some of the "answers" to the ??? statements, which we should include.
     * know that the overall number of rows read will be small, etc.
     */
    String justification();
}
nit: newline
this means callers have to justify their use of full range requests
After this PR:
==COMMIT_MSG==
Mark RangeRequest.all as @RestrictedApi
==COMMIT_MSG==
Priority:
Concerns / possible downsides (what feedback would you like?):
Is documentation needed?:
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
Does this PR need a schema migration?
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
What was existing testing like? What have you done to improve it?:
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
Has the safety of all log arguments been decided correctly?:
Will this change significantly affect our spending on metrics or logs?:
How would I tell that this PR does not work in production? (monitors, etc.):
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
Development Process
Where should we start reviewing?:
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
Please tag any other people who should be aware of this PR:
@jeremyk-91
@raiju