mark RangeRequest.all() as @RestrictedApi #7375

Draft · wants to merge 3 commits into develop
Conversation

bavardage
Contributor

This means callers have to justify their use of full range requests.

After this PR:

==COMMIT_MSG==
Mark RangeRequest.all as @RestrictedApi
==COMMIT_MSG==
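
For context, a minimal sketch of what the restriction could look like, assuming error-prone's @RestrictedApi annotation (with its required explanation and link fields) together with the @AllowedRangeRequest allowlist annotation added in this PR; the class body, field values, and link are illustrative placeholders, not the PR's exact code:

import com.google.errorprone.annotations.RestrictedApi;

// Toy stand-in for AtlasDB's RangeRequest, showing only the annotated factory method.
public final class RangeRequestSketch {

    @RestrictedApi(
            explanation = "Full range requests can be extremely expensive at scale;"
                    + " annotate the call site with @AllowedRangeRequest and justify it.",
            link = "https://example.com/why-range-request-all-is-restricted", // placeholder link
            allowlistAnnotations = {AllowedRangeRequest.class})
    public static RangeRequestSketch all() {
        // Un-annotated callers of all() now fail the error-prone check at compile time.
        return new RangeRequestSketch();
    }

    private RangeRequestSketch() {}
}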

Priority:

Concerns / possible downsides (what feedback would you like?):

Is documentation needed?:

Compatibility

Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:

Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:

The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):

Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:

Does this PR need a schema migration?

Testing and Correctness

What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:

What was existing testing like? What have you done to improve it?:

If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:

If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:

Execution

How would I tell this PR works in production? (Metrics, logs, etc.):

Has the safety of all log arguments been decided correctly?:

Will this change significantly affect our spending on metrics or logs?:

How would I tell that this PR does not work in production? (monitors, etc.):

If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:

If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):

Scale

Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:

Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:

Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:

Development Process

Where should we start reviewing?:

If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:

Please tag any other people who should be aware of this PR:
@jeremyk-91
@raiju

changelog-app bot commented Oct 18, 2024

Generate changelog in changelog/@unreleased

What do the change types mean?
  • feature: A new feature of the service.
  • improvement: An incremental improvement in the functionality or operation of the service.
  • fix: Remedies the incorrect behaviour of a component of the service in a backwards-compatible way.
  • break: Has the potential to break consumers of this service's API, inclusive of both Palantir services
    and external consumers of the service's API (e.g. customer-written software or integrations).
  • deprecation: Advertises the intention to remove service functionality without any change to the
    operation of the service itself.
  • manualTask: Requires the possibility of manual intervention (running a script, eyeballing configuration,
    performing database surgery, ...) at the time of upgrade for it to succeed.
  • migration: A fully automatic upgrade migration task with no engineer input required.

Note: only one type should be chosen.

How are new versions calculated?
  • ❗The break and manual task changelog types will result in a major release!
  • 🐛 The fix changelog type will result in a minor release in most cases, and a patch release version for patch branches. This behaviour is configurable in autorelease.
  • ✨ All others will result in a minor version release.

Type

  • Feature
  • Improvement
  • Fix
  • Break
  • Deprecation
  • Manual task
  • Migration

Description

Mark RangeRequest.all as @RestrictedApi

Check the box to generate changelog(s)

  • Generate changelog entry

Contributor Author

@bavardage bavardage left a comment

don't have context on some existing uses of RangeRequest.all()

@@ -1691,6 +1692,7 @@ private static String schemaChangeDescriptionForPutMetadataForTables(Collection<
}

@Override
@AllowedRangeRequest(justification = "RangeRequest.all only invoked for equality check")
Contributor Author

but is truncateTables bad/expensive?

Contributor

The method checks whether the request is equal to RangeRequest.all() and, in that case, as an optimisation, truncates the table instead of writing deletes over the full range. Truncate does not create tombstones, so from that perspective it's better, but I'm not sure about the performance. @jeremyk-91 do you know if this is expensive?
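
A minimal sketch of the pattern being described, with hypothetical helper names (truncateTable and deleteRangeInternal are placeholders, not the actual AtlasDB implementation):

@Override
@AllowedRangeRequest(justification = "a full range request is constructed only for an equality check")
public void deleteRange(TableReference tableRef, RangeRequest range) {
    if (range.equals(RangeRequest.all())) {
        // Truncation removes the data without writing a tombstone per cell,
        // so it is preferred over issuing deletes across the entire range.
        truncateTable(tableRef);
    } else {
        deleteRangeInternal(tableRef, range);
    }
}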

Contributor

let's make it consistent with the other place where the justification is the same: "a full range request is constructed only for an equality check"

Contributor

@jeremyk-91 jeremyk-91 Oct 28, 2024

With regard to truncate: I'd check with @inespot, but my understanding is that yes, truncate is much cheaper than just deleting. It does require all Cassandra nodes to be up and the coordinator needs to get an ack from each node that the data was deleted, but this should be comparatively fast. Also, in practice, deletes generally require consistency all and will similarly be affected by single node degradation (strictly speaking, there is an edge case where deletion is probably faster, outlined below in (*), but I don't think we care in practice).

(*) The "normal" method involves reading all cells (which only requires consistency quorum) and then deleting these cells. That does require consistency all; but technically if there's a situation where no 2 nodes owning the same data are down (or are slow), and for rows that actually exist in the table, no single node owning the same data is down (or is slow), legacy works (or is fast) while truncate does not (or is slow). This is more of an academic edge case, though.

@@ -47,6 +48,7 @@ public V1TransactionsTableRangeDeleter(
}

@Override
@AllowedRangeRequest(justification = "???")
Contributor Author

any context on what's calling this/why doing this deletion is ok?

Contributor

@jeremyk-91 is this on the restore path?

Contributor

Yes, only used on the restore path and for the transaction table, which in normal operation does not have any tombstones written. (Yes, there is a risk if you restore a stack twice in succession with no intervening compaction; should we make an internal note of this on SOPs or similar?)

@@ -89,6 +90,7 @@ private Set<TableReference> getConservativeTables(Set<TableReference> tables) {
}

@Override
@AllowedRangeRequest(justification = "deleting a range is cheaper than reading one")
Contributor Author

actually true?

Contributor

truncating the table is a cheaper option than deleting the full range

@@ -163,6 +164,7 @@ private static PartialCopyStats copyInternal(
return stats;
}

@AllowedRangeRequest(justification = "???")
Contributor Author

size estimator is ok because..? we only ever read one batch?

Contributor

ohh this is spicy, this can in theory be very expensive if, to read that one batch, we need to contact all Cassandra nodes.

Contributor

Today at least: only used by the large internal product, which is not generally deployed with Cassandra.

(This probably should itself be restricted, though probably something for another day...)
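
To make the "we only ever read one batch" argument concrete, here is a hedged sketch; the estimator method, the RowBatchReader helper, and ESTIMATION_BATCH_SIZE are hypothetical, and only the general shape of RangeRequest's builder is assumed:

@AllowedRangeRequest(justification = "only a single bounded batch is ever read")
private long estimateRowWeight(RowBatchReader reader, TableReference table) {
    // The range is unbounded, but the batch hint caps how many rows one page returns,
    // and the caller never pages past the first batch.
    RangeRequest firstBatchOnly = RangeRequest.builder()
            .batchHint(ESTIMATION_BATCH_SIZE)
            .build();
    return reader.readFirstBatch(table, firstBatchOnly).size();
}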

Contributor

@sverma30 sverma30 left a comment

We should do an announcement before pushing out this change to lower the possibility of teams falling behind on upgrading AtlasDB and to make sure they understand why we're doing this.

/**
* Mark that this range request has been thought about and explicitly decided to be ok.
*/
public @interface AllowedRangeRequest {
Contributor

@sverma30 sverma30 Oct 21, 2024

I was wondering if having an enum field for high-level categorisation would be helpful for future analysis of why services are relying on range requests. Like background task, search use case, backfill, etc.?

Contributor

interesting! could be useful, yes.
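
A hedged sketch of that suggestion; the category values come from the comment above, and none of this is in the PR as written:

/**
 * Mark that this range request has been thought about and explicitly decided to be ok,
 * extended with a coarse category to ease later analysis of why services rely on
 * full range requests.
 */
public @interface AllowedRangeRequest {
    enum Category {
        BACKGROUND_TASK,
        SEARCH,
        BACKFILL,
        OTHER
    }

    String justification();

    /** High-level categorisation of the use case; defaults to OTHER. */
    Category category() default Category.OTHER;
}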

Contributor

@jeremyk-91 jeremyk-91 left a comment

Thanks for writing this! It was interesting to refresh myself on the various contexts where this is used as well. @sverma30's enum suggestion would probably be nice, and nothing blocking on my end - I've mentioned some of the "answers" to the ??? statements, which we should include.

* know that the overall number of rows read will be small, etc.
*/
String justification();
}
Contributor

nit: newline
