Add `num_workers` to minimum schemas for cluster tables as a long #1302
Overwatch Version
The issue started appearing during testing for the 0.8.2.0 release when upgrading existing deployments, but not in new deployments.
Describe the bug
The working theory is that the type of `num_workers` changed upstream in the REST API responses. This is under active evaluation as of 2024-10-03 (Thu). If so, the target tables were already created with the former type, `int`, received in earlier API response payloads, and new responses cannot be merged into those tables because Spark does not down-cast such types on merge (it only up-casts, e.g. `int` -> `long`, the reverse of this scenario).
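A minimal sketch of the failure mode and the proposed fix, assuming a local Spark session with the delta-spark package on the classpath; the table names (`cluster_int`, `cluster_long`) and sample data below are illustrative, not Overwatch's actual targets:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.LongType

object NumWorkersTypeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("num-workers-type-sketch")
      .master("local[*]")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()
    import spark.implicits._

    // Failure mode: an existing deployment's table was created while the
    // API still returned an int, so num_workers is IntegerType in the target.
    Seq(("c1", 2)).toDF("cluster_id", "num_workers")
      .write.format("delta").mode("overwrite").saveAsTable("cluster_int")

    // New API responses now carry num_workers as a long.
    val incomingLong = Seq(("c2", 8L)).toDF("cluster_id", "num_workers")

    // This write fails with a type-mismatch error: Spark up-casts int -> long,
    // but it will not down-cast the incoming long into the existing int column.
    // incomingLong.write.format("delta").mode("append").saveAsTable("cluster_int")

    // Proposed fix: declare num_workers as a long in the minimum schema, so
    // the target table is created wide enough for both payload types.
    Seq(("c1", 2L)).toDF("cluster_id", "num_workers")
      .write.format("delta").mode("overwrite").saveAsTable("cluster_long")

    // Old int payloads widen safely to long (the up-cast direction)...
    Seq(("c3", 4)).toDF("cluster_id", "num_workers")
      .withColumn("num_workers", col("num_workers").cast(LongType))
      .write.format("delta").mode("append").saveAsTable("cluster_long")

    // ...and new long payloads append without any cast.
    incomingLong.write.format("delta").mode("append").saveAsTable("cluster_long")

    spark.stop()
  }
}
```

Widening the column in the minimum schema is backward compatible because `int` payloads can always be up-cast to `long`, whereas narrowing `long` to `int` is lossy and is refused by Spark.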