Pool migration ignores rdonly status of target #7115
-
I believe you are right: the migration module ignores the target pool's rdonly status. One could argue this is a bug (the pool is read-only, so it shouldn't accept any more data), but one could also argue it isn't a bug (the admin told me to do this, so I'm overriding the pool's settings). For now, I'm marking this request as an enhancement, but somebody should decide whether this behaviour is actually a bug. If it isn't a bug, then one possible solution would be to add an option to the migration command that makes it honour the target pools' read-only status.
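In the meantime, a possible workaround (assuming the migration module in your dCache release supports the -exclude option for target pools; "help migration move" in the pool cell will tell you) is to keep the two draining pools out of each other's target lists, e.g.:

migration move -concurrency=6 -target=pgroup -exclude=shark11_atlasdisk,shark12_atlasdisk atlas_writediskpools

The pool names are just the ones from your example; if -exclude accepts glob patterns in your version, something like -exclude=shark1?_atlasdisk would cover both.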
-
Hi Onno, I'm also facing this when I have to take out servers for replacement. Cheers
-
The main misunderstanding comes from the fact that "psu set pool ... rdonly" in the PoolManager is not the same as making the pool itself read-only. The PoolManager setting only excludes the pool from PoolManager's pool selection for client writes; the pool itself is still perfectly willing to accept data. The migration module copies files directly pool-to-pool, so it is not affected by that PoolManager flag. If you want the pool itself to refuse new data, you have to mark it read-only on the pool side.
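To make the difference concrete (pool name taken from Onno's example; command names from memory, so please double-check them with "help" in the respective cells):

In the PoolManager cell, this only removes the pool from PoolManager's selection for client writes:

psu set pool shark11_atlasdisk rdonly

In the pool cell itself (reach it with "\c shark11_atlasdisk", or "cd shark11_atlasdisk" in the old admin shell), this makes the pool refuse new data, which to my knowledge also covers incoming pool-to-pool copies:

pool disable -rdonly

and later, when the drain is finished or the pool should take data again:

pool enable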
-
I would say: the migration module should honour the PoolManager's rdonly setting when it selects target pools.
-
Good catch. I missed that it was a pool-manager setting rather than the pool's own rdonly state. So, I agree with @kofemann. Just to add my 2c-worth, another option might be to update pools so they have an extra state (or extra states) where the pool would be disabled for client-triggered requests (upload/download/stage), but would still allow sys-admin triggered requests. This would have broadly the same effect as the current behaviour, just made explicit. That would be a bigger change, but (I think) it would bring clarity to a somewhat confusing situation.
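If I'm not mistaken, something close to that can already be approximated with the existing per-operation pool disable flags (flag names as I remember them, so check your release's pool help before relying on this):

pool disable -fetch -store -stage

i.e. refuse client downloads, uploads and stages while leaving the pool-to-pool options enabled, so that admin-triggered migration traffic still works. A dedicated, named state would of course be much clearer than remembering the right combination of flags.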
-
Thanks guys, I wasn't aware that the poolmanager rdonly was something different from the pool rdonly. Thanks for clearing that up. I like @kofemann's idea to have the poolmanager's rdonly setting honoured by migration jobs as well.
-
Disabling downloads from a pool while draining it with a migration job …
-
This issue looks like a good candidate to test GitHub Discussions :). So, here I am. 😎
-
Dear dCache devs,
I have a strong impression that the "migration move" function of a pool ignores the rdonly (read-only) status of a target pool.
This is rather impractical when you have two pools that you want to drain simultaneously, with the same pool group as the target. They keep writing files to each other, so it's difficult to get them to a point where they are empty and can be decommissioned.
So what I do is set the source pools to rdonly in the PoolManager:
psu set pool shark11_atlasdisk rdonly
psu set pool shark12_atlasdisk rdonly
Then I drain them both like this:
migration move -concurrency=6 -target=pgroup atlas_writediskpools
And then they are writing files to many pools, including each other. I wouldn't expect that, since they are rdonly. It's actually worse: because they are getting emptier, they are becoming increasingly attractive target pools for each other's migrations.
I work around this by setting the max diskspace to 10 (bytes) repeatedly, so that they don't have any available space to write into. But that's a bit clumsy.
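Concretely, in each source pool's admin cell I do roughly this (syntax from memory):

\c shark11_atlasdisk
set max diskspace 10

and the same for shark12_atlasdisk, repeating it as needed.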
Cheers,
Onno