Add num_workers to minimum schemas for cluster tables as a long #1302

Open

neilbest-db opened this issue Oct 3, 2024 · 1 comment

Labels: bug (Something isn't working), schema change (Requires a schema change)
Milestone: 0.9.0.0
@neilbest-db (Contributor)

Overwatch Version

The issue started appearing during testing for the 0.8.2.0 release when upgrading existing deployments; it does not affect new deployments.

Describe the bug

The working theory is that the type of num_workers changed upstream in the REST API responses, from int to long. This is under active evaluation as of 2024-10-03 Thu. If so, the target tables were created with the int type previously received in the API response payloads, and the new responses cannot be merged into those tables because Spark does not down-cast types on write; it only up-casts (e.g. int -> long), which is the reverse of this scenario.
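For illustration, here is a minimal sketch of that cast asymmetry, assuming a Delta-enabled Spark session. The table and column names are hypothetical examples, not Overwatch internals:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.LongType

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// Target table created while the API still returned num_workers as int.
// ("clusters_demo" is a hypothetical table name.)
Seq(("cluster-a", 4)).toDF("cluster_id", "num_workers")
  .write.format("delta").saveAsTable("clusters_demo")

// Newer API payloads now arrive with num_workers inferred as long (bigint).
val incoming = Seq(("cluster-b", 8L)).toDF("cluster_id", "num_workers")

// This write fails: Spark will widen (up-cast) int -> long on read,
// but it never narrows long -> int, so the schemas cannot be merged.
// incoming.write.format("delta").mode("append").saveAsTable("clusters_demo")

// Reading the existing table with num_workers cast to long is lossless,
// which is why declaring the column as long up front avoids the conflict.
val widened = spark.table("clusters_demo")
  .withColumn("num_workers", col("num_workers").cast(LongType))
```

Consistent with the issue title, declaring num_workers as a long in the minimum schemas for the cluster tables should let existing int data up-cast cleanly while accepting the new payloads.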

@neilbest-db added the bug and schema change labels on Oct 3, 2024
@neilbest-db added this to the 0.9.0.0 milestone on Oct 3, 2024
@neilbest-db self-assigned this on Oct 7, 2024
@arodriguezf

I had the same issue with the upgrade from version 0.7.2.2.1 to version 0.8.0.0.
