Merge pull request #4408 from Manimaran-MM/black_version_upgrade
[BLOCKER]: fix reformatting based on new black version
openshift-merge-bot[bot] authored Jan 29, 2025
2 parents 53116d6 + cb8e923 commit b2df186
Showing 28 changed files with 296 additions and 296 deletions.
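Every hunk below is a whitespace-only docstring change: the single-line cases show the upgraded black stripping the space that followed a docstring's opening triple quotes, and the paired removed/added lines in the multi-line docstrings appear identical here because the difference is whitespace this view does not preserve. A minimal before/after sketch of that pattern, assuming the formatter is simply run as `black .` over the tree; the functions below are hypothetical and not taken from this repository:

# Hedged illustration of the reformatting seen in the single-line hunks
# below (e.g. tests/ceph_ansible/purge_cluster.py): the newer black release
# drops the space after the opening triple quotes.
def docstring_before_upgrade():
    """ Purges the Ceph cluster"""
    return True


def docstring_after_upgrade():
    """Purges the Ceph cluster"""
    return True


if __name__ == "__main__":
    # The two docstrings differ only in the leading space black now removes.
    print(repr(docstring_before_upgrade.__doc__))  # ' Purges the Ceph cluster'
    print(repr(docstring_after_upgrade.__doc__))   # 'Purges the Ceph cluster'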
26 changes: 13 additions & 13 deletions ceph/rados/rados_scrub.py
@@ -1,23 +1,23 @@
"""
-This module contains the methods required for scrubbing.
+This module contains the methods required for scrubbing.
-1.To set the parameters for scrubbing initially required the
-cluster time and day details.get_cluster_date method provides
-the details
+1.To set the parameters for scrubbing initially required the
+cluster time and day details.get_cluster_date method provides
+the details
-2.set_osd_configuration method used to set the configuration
-parameters on the cluster.
+2.set_osd_configuration method used to set the configuration
+parameters on the cluster.
-3.get_osd_configuration method is used to get the configured parameters
-on the cluster.
+3.get_osd_configuration method is used to get the configured parameters
+on the cluster.
-NOTE: With set_osd_configuration & get_osd_configuration methods can
-use to set the get the any OSD configuration parameters.
+NOTE: With set_osd_configuration & get_osd_configuration methods can
+use to set the get the any OSD configuration parameters.
-4. get_pg_dump method is used to get the pg dump details from the cluster
+4. get_pg_dump method is used to get the pg dump details from the cluster
-5. verify_scrub method used for the verification of scheduled scrub
-happened or not.
+5. verify_scrub method used for the verification of scheduled scrub
+happened or not.
"""

import datetime
10 changes: 5 additions & 5 deletions ceph/rados/utils.py
@@ -1,9 +1,9 @@
"""
-This module contains the wrapper functions to perform general ceph cluster modification operations.
-1. Remove OSD
-2. Add OSD
-3. Set osd out
-3. Zap device path
+This module contains the wrapper functions to perform general ceph cluster modification operations.
+1. Remove OSD
+2. Add OSD
+3. Set osd out
+3. Zap device path
"""

from json import loads
2 changes: 1 addition & 1 deletion tests/ceph_ansible/purge_cluster.py
@@ -1,4 +1,4 @@
""" Purges the Ceph the cluster"""
"""Purges the Ceph the cluster"""

import datetime
import re
2 changes: 1 addition & 1 deletion tests/ceph_ansible/purge_dashboard.py
@@ -1,4 +1,4 @@
""" Module to purge ceph dashboard."""
"""Module to purge ceph dashboard."""

import json

2 changes: 1 addition & 1 deletion tests/misc_env/sosreport.py
@@ -1,4 +1,4 @@
""" Collecting logs using sosreport from all nodes in cluster Except Client Node."""
"""Collecting logs using sosreport from all nodes in cluster Except Client Node."""

import re

4 changes: 2 additions & 2 deletions tests/rados/scheduled_scrub_scenarios.py
@@ -1,6 +1,6 @@
"""
-This module contains the methods required to check the scheduled scrubbing scenarios.
-Based on the test cases setting the scrub parameters and verifying the functionality.
+This module contains the methods required to check the scheduled scrubbing scenarios.
+Based on the test cases setting the scrub parameters and verifying the functionality.
"""

import os
2 changes: 1 addition & 1 deletion tests/rados/test_bluestore_configs.py
@@ -1,4 +1,4 @@
""" Module to verify scenarios related to BlueStore config changes"""
"""Module to verify scenarios related to BlueStore config changes"""

import time

2 changes: 1 addition & 1 deletion tests/rados/test_replica1.py
@@ -1,4 +1,4 @@
""" Test module to verify replica-1 non-resilient pool functionalities"""
"""Test module to verify replica-1 non-resilient pool functionalities"""

import time
from copy import deepcopy
44 changes: 22 additions & 22 deletions tests/rbd/test_performance_immutable_cache.py
@@ -1,26 +1,26 @@
"""Test case covered -
-CEPH-83581376 - verify the performance with immutable cache
-and without immutable cache for IO operations
-Pre-requisites :
-1. Cluster must be up and running with capacity to create pool
-2. We need atleast one client node with ceph-common package,
-conf and keyring files
-Test Case Flow:
-1. Create RBD based pool and an Image
-2. Enable the immutable cache client settings
-3. Install the ceph-immutable-object-cache package
-4. Create a unique Ceph user ID, the keyring
-5. Enable the ceph-immutable-object-cache daemon with created client
-6. Write some data to the image using FIO
-7. Perform snapshot,protect and clone of rbd images
-8. Read the data from cloned images first time with map, mount, unmount
-9. Read the data from cloned images second time with map, mount, unmount
-10. note down the time differnce of first read and second read make sure
-second read should be less time compare to first read in cache
-11. Repeat the above operations without immutable cache
-12.check the performance make sure cache gives good performance
+CEPH-83581376 - verify the performance with immutable cache
+and without immutable cache for IO operations
+Pre-requisites :
+1. Cluster must be up and running with capacity to create pool
+2. We need atleast one client node with ceph-common package,
+conf and keyring files
+Test Case Flow:
+1. Create RBD based pool and an Image
+2. Enable the immutable cache client settings
+3. Install the ceph-immutable-object-cache package
+4. Create a unique Ceph user ID, the keyring
+5. Enable the ceph-immutable-object-cache daemon with created client
+6. Write some data to the image using FIO
+7. Perform snapshot,protect and clone of rbd images
+8. Read the data from cloned images first time with map, mount, unmount
+9. Read the data from cloned images second time with map, mount, unmount
+10. note down the time differnce of first read and second read make sure
+second read should be less time compare to first read in cache
+11. Repeat the above operations without immutable cache
+12.check the performance make sure cache gives good performance
"""

from test_rbd_immutable_cache import configure_immutable_cache
30 changes: 15 additions & 15 deletions tests/rbd/test_rbd_compression.py
@@ -1,19 +1,19 @@
"""Test case covered -
-CEPH-83574644 - Validate "rbd_compression_hint" config
-settings on globally, Pool level, and image level.
-Pre-requisites :
-1. Cluster must be up and running with capacity to create pool
-2. We need atleast one client node with ceph-common package,
-conf and keyring files
-Test Case Flow:
-1. Create a pool and an Image, write some data on it
-2. set bluestore_compression_mode to passive to enable rbd_compression_hint feature
-3. Set compression_algorithm, compression_mode and compression_ratio for the pool
-4. verify "rbd_compression_hint" to "compressible" on global, pool and image level
-5. verify "rbd_compression_hint" to "incompressible" on global, pool and image level
-6. Repeat the above steps for ecpool
+CEPH-83574644 - Validate "rbd_compression_hint" config
+settings on globally, Pool level, and image level.
+Pre-requisites :
+1. Cluster must be up and running with capacity to create pool
+2. We need atleast one client node with ceph-common package,
+conf and keyring files
+Test Case Flow:
+1. Create a pool and an Image, write some data on it
+2. set bluestore_compression_mode to passive to enable rbd_compression_hint feature
+3. Set compression_algorithm, compression_mode and compression_ratio for the pool
+4. verify "rbd_compression_hint" to "compressible" on global, pool and image level
+5. verify "rbd_compression_hint" to "incompressible" on global, pool and image level
+6. Repeat the above steps for ecpool
"""

import json
38 changes: 19 additions & 19 deletions tests/rbd/test_rbd_dm_cache.py
@@ -1,23 +1,23 @@
"""Test case covered -
-CEPH-83575581 - Verify the usage of ceph RBD images as
-dm-cache and dm-write cache from LVM side to enhance cache mechanism.
-Pre-requisites :
-1. Cluster must be up and running with capacity to create pool
-2. We need atleast one client node with ceph-common package,
-conf and keyring files
-Test Case Flow:
-1. Create RBD based pool and an Image
-2. get rbd based image disk using rbd map
-3. Create physical volume for RBD based disk
-4. Create volume group for RBD based disk
-5. Create cache disk, meta disk and data disk for the volume group
-6. make disk as dm-cache and dm-write cache based type of cache specified
-7. Create Xfs file system on metadata parted disk for file system purpose
-8. mount some files on cache disk and write some I/O on it.
-9. create snapshot of cache disk image
-10. check ceph health status
+CEPH-83575581 - Verify the usage of ceph RBD images as
+dm-cache and dm-write cache from LVM side to enhance cache mechanism.
+Pre-requisites :
+1. Cluster must be up and running with capacity to create pool
+2. We need atleast one client node with ceph-common package,
+conf and keyring files
+Test Case Flow:
+1. Create RBD based pool and an Image
+2. get rbd based image disk using rbd map
+3. Create physical volume for RBD based disk
+4. Create volume group for RBD based disk
+5. Create cache disk, meta disk and data disk for the volume group
+6. make disk as dm-cache and dm-write cache based type of cache specified
+7. Create Xfs file system on metadata parted disk for file system purpose
+8. mount some files on cache disk and write some I/O on it.
+9. create snapshot of cache disk image
+10. check ceph health status
"""

from tests.rbd.rbd_utils import initial_rbd_config
Expand Down
34 changes: 17 additions & 17 deletions tests/rbd/test_rbd_immutable_cache.py
@@ -1,21 +1,21 @@
"""Test case covered -
-CEPH-83574134 - Configure immutable object cache daemon
-and validate client RBD objects
-Pre-requisites :
-1. Cluster must be up and running with capacity to create pool
-2. We need atleast one client node with ceph-common package,
-conf and keyring files
-Test Case Flow:
-1. Create RBD based pool and an Image
-2. Enable the immutable cache client settings
-3. Install the ceph-immutable-object-cache package
-4. Create a unique Ceph user ID, the keyring
-5. Enable the ceph-immutable-object-cache daemon with created client
-6. Write some data to the image using FIO
-7. Perform snapshot,protect and clone of rbd images
-8. Read the cloned images from the cache path
+CEPH-83574134 - Configure immutable object cache daemon
+and validate client RBD objects
+Pre-requisites :
+1. Cluster must be up and running with capacity to create pool
+2. We need atleast one client node with ceph-common package,
+conf and keyring files
+Test Case Flow:
+1. Create RBD based pool and an Image
+2. Enable the immutable cache client settings
+3. Install the ceph-immutable-object-cache package
+4. Create a unique Ceph user ID, the keyring
+5. Enable the ceph-immutable-object-cache daemon with created client
+6. Write some data to the image using FIO
+7. Perform snapshot,protect and clone of rbd images
+8. Read the cloned images from the cache path
"""

import time
30 changes: 15 additions & 15 deletions tests/rbd/test_rbd_immutable_cache_cluster_operations.py
@@ -1,23 +1,23 @@
"""Perform cluster-related operations along with
-immutable object cache test parallel.
+immutable object cache test parallel.
-Pre-requisites :
-We need atleast one client node with ceph-common and fio packages,
-conf and keyring files
+Pre-requisites :
+We need atleast one client node with ceph-common and fio packages,
+conf and keyring files
-Test cases covered -
-CEPH-83574132 - Immutable object cache with cluster operations.
+Test cases covered -
+CEPH-83574132 - Immutable object cache with cluster operations.
-Test Case Flow -
+Test Case Flow -
-1)Create multiple rbd pool and multiple images using rbd commands
-2)Write some data to the images using FIO
-3)Create multiple clones images in different pools from the multiple parent image created
-4)Read the cloned images of different pools in parallel and check the cache status
-5) Restart the Mon, OSD, and cluster target and parallelly run the IO and cache status
-6) Remove mon and add mon back to cluster with parallelly run the IO and cache status
-8) check ceph health status
-9) Perform test on both Replicated and EC pool
+1)Create multiple rbd pool and multiple images using rbd commands
+2)Write some data to the images using FIO
+3)Create multiple clones images in different pools from the multiple parent image created
+4)Read the cloned images of different pools in parallel and check the cache status
+5) Restart the Mon, OSD, and cluster target and parallelly run the IO and cache status
+6) Remove mon and add mon back to cluster with parallelly run the IO and cache status
+8) check ceph health status
+9) Perform test on both Replicated and EC pool
"""

import time
36 changes: 18 additions & 18 deletions tests/rbd_mirror/rbd_mirror_reconfigure_primary_cluster.py
@@ -1,22 +1,22 @@
"""Test case covered -
-CEPH-9476- Primary cluster permanent failure,
-Recreate the primary cluster and Re-establish mirror with newly created cluster as Primary.
-Pre-requisites :
-1. Cluster must be up and running with capacity to create pool
-(At least with 64 pgs)
-2. We need atleast one client node with ceph-common package,
-conf and keyring files
-Test case flows:
-1) After site-a failure, promote site-b cluster as primary
-2) Create a pool with same name as secondary pool has
-3) Perform the mirroring bootstrap for cluster peers
-4) copy and import bootstrap token to peer cluster
-5) verify peer cluster got added successfully after failback
-6) verify all the images from secondary mirrored to primary
-7) Demote initially promoted secondary as primary
-8) Promote newly created mirrored cluster as primary
+CEPH-9476- Primary cluster permanent failure,
+Recreate the primary cluster and Re-establish mirror with newly created cluster as Primary.
+Pre-requisites :
+1. Cluster must be up and running with capacity to create pool
+(At least with 64 pgs)
+2. We need atleast one client node with ceph-common package,
+conf and keyring files
+Test case flows:
+1) After site-a failure, promote site-b cluster as primary
+2) Create a pool with same name as secondary pool has
+3) Perform the mirroring bootstrap for cluster peers
+4) copy and import bootstrap token to peer cluster
+5) verify peer cluster got added successfully after failback
+6) verify all the images from secondary mirrored to primary
+7) Demote initially promoted secondary as primary
+8) Promote newly created mirrored cluster as primary
"""

# import datetime
Expand Down
32 changes: 16 additions & 16 deletions tests/rbd_mirror/rbd_mirror_reconfigure_secondary_cluster.py
@@ -1,20 +1,20 @@
"""Test case covered -
-CEPH-9477- Secondary cluster permanent failure,
-Recreate the secondary cluster and Re-establish mirror with newly created cluster as secondary.
-Pre-requisites :
-1. Cluster must be up and running with capacity to create pool
-(At least with 64 pgs)
-2. We need atleast one client node with ceph-common package,
-conf and keyring files
-Test case flows:
-1) After site-b failure, bring up new cluster for secondary
-2) Create a pool with same name as secondary pool
-3) Perform the mirroring bootstrap for cluster peers
-4) copy and import bootstrap token to peer cluster
-5) verify peer cluster got added successfully after failover
-6) verify all the images from primary mirrored to secondary
+CEPH-9477- Secondary cluster permanent failure,
+Recreate the secondary cluster and Re-establish mirror with newly created cluster as secondary.
+Pre-requisites :
+1. Cluster must be up and running with capacity to create pool
+(At least with 64 pgs)
+2. We need atleast one client node with ceph-common package,
+conf and keyring files
+Test case flows:
+1) After site-b failure, bring up new cluster for secondary
+2) Create a pool with same name as secondary pool
+3) Perform the mirroring bootstrap for cluster peers
+4) copy and import bootstrap token to peer cluster
+5) verify peer cluster got added successfully after failover
+6) verify all the images from primary mirrored to secondary
"""

import ast
Expand Down
18 changes: 9 additions & 9 deletions tests/rbd_mirror/test_rbd_mirror_cloned_image.py
@@ -1,14 +1,14 @@
"""Test case covered - CEPH-83576099
-Test Case Flow:
-1. Configure snapshot based mirroring between two clusters
-2. create snapshots of an image
-3. protect the snapshot of an image
-4. clone the snapshot to new image
-5. Tried to enable cloned images for snapshot-based mirroring it should not allow
-6. Flatten the cloned image
-7. Enable snapshot based mirroring for the flattened child image
-8. Perform test steps for both Replicated and EC pool
+Test Case Flow:
+1. Configure snapshot based mirroring between two clusters
+2. create snapshots of an image
+3. protect the snapshot of an image
+4. clone the snapshot to new image
+5. Tried to enable cloned images for snapshot-based mirroring it should not allow
+6. Flatten the cloned image
+7. Enable snapshot based mirroring for the flattened child image
+8. Perform test steps for both Replicated and EC pool
"""

from tests.rbd.rbd_utils import Rbd