[#192624] Fix archive for `incompatible_cluster_routing_allocation` #193741
Conversation
Pinging @elastic/kibana-core (Team:Core)
I believe your strategy for updating the ZIP file is additive, i.e. it doesn't remove existing data.
I tried using this instead:

```shell
ditto -c -k --sequesterRsrc --keepParent .es/8.16.0/data src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_concurrent_5k_foo.zip
```

which got me from:

```shell
gsoldevila@MacBook archives % ls -lh
-rw-r--r-- 1 gsoldevila staff 260K Sep 23 18:11 7.13.0_concurrent_5k_foo.zip
```

to an archive less than half the size:

```shell
gsoldevila@MacBook archives % ls -lh
-rw-r--r-- 1 gsoldevila staff 121K Sep 23 18:12 7.13.0_concurrent_5k_foo.zip
```
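As a quick sanity check after repacking (a minimal sketch, assuming the previous archive was kept around as a `.bak` copy — that filename is illustrative, not from this PR), comparing the totals of both archives confirms the stale entries were actually dropped rather than just compressed better:

```shell
# Hypothetical check: assumes the old archive was saved as 7.13.0_concurrent_5k_foo.zip.bak before repacking.
unzip -l 7.13.0_concurrent_5k_foo.zip.bak | tail -n 2   # totals for the old, additively updated archive
unzip -l 7.13.0_concurrent_5k_foo.zip | tail -n 2       # totals for the freshly repacked archive
# A lower entry count and uncompressed size in the repacked archive means the
# stale data from earlier additive updates is really gone.
```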
NIT: could we re-package the file?
💛 Build succeeded, but was flaky
Failed CI Steps
Metrics [docs]
To update your PR or re-run it, just comment with:
cc @afharo
[#192624] Fix archive for `incompatible_cluster_routing_allocation` (elastic#193741) (cherry picked from commit df2ccb6)
💚 All backports created successfully
Note: Successful backport PRs will be merged automatically after passing CI.
Questions? Please refer to the Backport tool documentation
[#192624] Fix archive for `incompatible_cluster_routing_allocation` (#193741) (#193763)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[#192624] Fix archive for `incompatible_cluster_routing_allocation` (#193741)](#193741)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Alejandro Fernández Haro <[email protected]>
## Summary

Addresses #167676
Addresses #158318
Addresses #163254
Addresses #163255

#### Fix for `multiple_es_nodes.test.ts`

Inspired by #193899

1. Start both nodes of ES 8.17.0 with the affected data-archives in separate terminals:
   1. Node 01: `yarn es snapshot --version=8.17.0 --data-archive src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_5k_so_node_01.zip --base-path .es/node01`
   2. Node 02: `yarn es snapshot --version=8.17.0 --data-archive src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_5k_so_node_02.zip --base-path .es/node02`
2. After ES is ready (without starting Kibana), reindex the index `.kibana_7.13.0_002`:
   1. Retrieve the settings from the original index via `curl -L 'http://localhost:9200/.kibana_7.13.0_002' -H 'Content-Type: application/json' -H 'kbn-xsrf: test' -H 'Authorization: Basic c3lzdGVtX2luZGljZXNfc3VwZXJ1c2VyOmNoYW5nZW1l' -d ''`
   2. Create the target index with those settings:
      ```shell
      curl -L -X PUT 'http://localhost:9200/.kibana_7.13.0_003' -H 'Content-Type: application/json' -H 'kbn-xsrf: test' -H 'Authorization: Basic c3lzdGVtX2luZGljZXNfc3VwZXJ1c2VyOmNoYW5nZW1l' -d '{ "mappings": { "properties": { "bar": { "properties": { "status": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } }, "coreMigrationVersion": { "type": "keyword" }, "foo": { "properties": { "status": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } }, "migrationVersion": { "dynamic": "true", "properties": { "bar": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "foo": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } }, "namespace": { "type": "keyword" }, "namespaces": { "type": "keyword" }, "originId": { "type": "keyword" }, "references": { "type": "nested", "properties": { "id": { "type": "keyword" }, "name": { "type": "keyword" }, "type": { "type": "keyword" } } }, "type": { "type": "keyword" }, "updated_at": { "type": "date" } } }, "settings": { "index": { "hidden": "true", "number_of_shards": "1", "number_of_replicas": "0" } } }'
      ```
   3. Reindex the content: `curl -L 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -H 'kbn-xsrf: test' -H 'Authorization: Basic c3lzdGVtX2luZGljZXNfc3VwZXJ1c2VyOmNoYW5nZW1l' -d '{ "source": { "index": ".kibana_7.13.0_002" }, "dest": { "index": ".kibana_7.13.0_003" } }'`
   4. Remove the old index and recreate the aliases:
      ```shell
      curl -L 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -H 'kbn-xsrf: test' -H 'Authorization: Basic c3lzdGVtX2luZGljZXNfc3VwZXJ1c2VyOmNoYW5nZW1l' -d '{ "actions": [ { "add": { "index": ".kibana_7.13.0_003", "alias": ".kibana_7.13.0_001" } }, { "remove_index": {"index": ".kibana_7.13.0_002" } }, { "add": { "index": ".kibana_7.13.0_003", "alias": ".kibana_7.13.0" } }, { "add": { "index": ".kibana_7.13.0_003", "alias": ".kibana" } } ] }'
      ```
3. Stop both ES nodes.
4. Compress both archives:
   ```shell
   cd .es/node01/8.17.0
   rm -rf data/nodes # we need to remove this dir or it fails to start again
   zip -r ../../../src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_5k_so_node_01.zip data -x "*/\.*"
   cd ../../../
   cd .es/node02/8.17.0
   rm -rf data/nodes # we need to remove this dir or it fails to start again
   zip -r ../../../src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_5k_so_node_02.zip data -x "*/\.*"
   cd ../../../
   ```
5. Run the tests to confirm that the issue is fixed: `yarn test:jest_integration src/core/server/integration_tests/saved_objects/migrations/group3/multiple_es_nodes.test.ts`

#### Fix for `incompatible_cluster_routing_allocation.test.ts`

Inspired by #193741

```shell
# 1. Start ES 8.17.0 with the affected data-archive
yarn es snapshot --version=8.17.0 --data-archive src/core/server/integration_tests/saved_objects/migrations/archives/8.0.0_v1_migrations_sample_data_saved_objects.zip
# ... after ES has completely started up, stop it.

# 2. Compress the archive
cd .es/8.17.0
zip -r ../../src/core/server/integration_tests/saved_objects/migrations/archives/8.0.0_v1_migrations_sample_data_saved_objects.zip data -x "*/\.*"
cd ../../

# 3. Run the tests to confirm that the issue is fixed.
yarn test:jest_integration src/core/server/integration_tests/saved_objects/migrations/group3/incompatible_cluster_routing_allocation.test.ts
```

#### Fix for `read_batch_size.test.ts`

Inspired by #193899

```shell
# 1. Start ES 8.17.0 with the affected data-archive
yarn es snapshot --version=8.17.0 --data-archive src/core/server/integration_tests/saved_objects/migrations/archives/8.4.0_with_sample_data_logs.zip
# ... after ES has completely started up, stop it.

# 2. Compress the archive
cd .es/8.17.0
zip -r ../../src/core/server/integration_tests/saved_objects/migrations/archives/8.4.0_with_sample_data_logs.zip data -x "*/\.*"
cd ../../

# 3. Run the tests to confirm that the issue is fixed.
yarn test:jest_integration src/core/server/integration_tests/saved_objects/migrations/group3/read_batch_size.test.ts
```
…6641) (cherry picked from commit 3d254c2)

# Conflicts:
#	src/core/server/integration_tests/saved_objects/migrations/group3/multiple_es_nodes.test.ts
## Summary

Related #192624.

ES complained because the archive was created with 8.0.0, but ES 9.0.0 requires the datastore to be upgraded to 8.16.0. The following steps have been followed (sketched below):

Checklist
For maintainers
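The step list itself was cut off above. A minimal sketch of what those steps presumably looked like, assuming the same approach as the #196641 recipe quoted earlier and the `.es/8.16.0/data` path from the review comment (the archive name and exact paths are illustrative, not confirmed by this PR):

```shell
# Rough sketch only — version, archive name, and paths are assumptions taken from
# the review comment and the #196641 description on this page.

# 1. Start an ES 8.16.0 snapshot with the affected data-archive so the on-disk data gets upgraded
yarn es snapshot --version=8.16.0 --data-archive src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_concurrent_5k_foo.zip
# ... once ES has fully started up, stop it.

# 2. Re-package the upgraded data directory as a fresh archive (not an additive zip update)
ditto -c -k --sequesterRsrc --keepParent .es/8.16.0/data src/core/server/integration_tests/saved_objects/migrations/archives/7.13.0_concurrent_5k_foo.zip

# 3. Re-run the affected integration test
yarn test:jest_integration src/core/server/integration_tests/saved_objects/migrations/group3/incompatible_cluster_routing_allocation.test.ts
```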