Hello,
I noticed that physical pages always land in Channel 0 and Channel 1, no matter how many channels I actually configure, because CH_BITS is set to 1. I think this hampers parallelism and read/write performance, so I changed CH_BITS to 3 and set rsv = 6 to utilize all 8 channels. But I ran into two very confusing problems.
My ZNS configuration is:
LOGICAL_PAGE_SIZE = ZNS_PAGE_SIZE = 4KB
8 channels, 4 chips/channel, 2 planes/chip, 32 blocks/plane
1GB per zone, 32 zones in total
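For reference, a quick sketch of the page/zone arithmetic implied by this configuration (the chip/plane/block split is taken verbatim from the list above):

```python
# Derived quantities from the ZNS configuration above.
LOGICAL_PAGE_SIZE = 4 * 1024      # 4 KB logical page = ZNS page
ZONE_SIZE = 1024 ** 3             # 1 GB per zone
NUM_ZONES = 32
NUM_CHANNELS = 8

pages_per_zone = ZONE_SIZE // LOGICAL_PAGE_SIZE
total_capacity_gb = ZONE_SIZE * NUM_ZONES // 1024 ** 3

print(pages_per_zone)      # 262144 logical pages per zone
print(total_capacity_gb)   # 32 GB addressable across the 32 zones
```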
My first question is: why does improving parallelism seem to hurt read performance while helping write performance?
I used fio to test the read/write performance. My fio commands are:
fio --ioengine=psync --direct=1 --filename=/dev/nvme0n1 --rw=write --iodepth=16 --bs=32k --group_reporting --zonemode=zbd --name=seqwrite --offset_increment=0z --size=16z
fio --ioengine=psync --direct=1 --filename=/dev/nvme0n1 --rw=read --offset_increment=0z --size=2z --group_reporting --zonemode=zbd --bs=32k --name=seqread --numjobs=8
With CH_BITS=1, write bandwidth is only 19.6MB/s and read bandwidth is 241MB/s.
Then I changed CH_BITS to 3 and ran the same fio experiments. Write bandwidth rises to 72.3MB/s as expected, but read bandwidth falls to 78.2MB/s. Why would this happen?
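To make the channel-utilization point concrete, here is a minimal sketch, assuming a hypothetical bit-field PPA decoding in which the channel index comes from the low-order CH_BITS bits of the physical page number (the emulator's actual field layout may differ):

```python
# With CH_BITS low-order bits selecting the channel, sequential physical
# pages stripe across 2**CH_BITS channels -- the rest never see traffic.
def channels_used(num_pages, ch_bits):
    mask = (1 << ch_bits) - 1
    return sorted({ppn & mask for ppn in range(num_pages)})

print(channels_used(64, 1))  # [0, 1] -> only Channel 0 and 1 are used
print(channels_used(64, 3))  # [0, 1, 2, 3, 4, 5, 6, 7] -> all 8 channels
```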
My second question is: why is the performance improvement limited when I try to improve read/write performance?
For example, if we manage to compress data from multiple lpns into 1 ppn, we should definitely improve write performance. But in my experiments, the improvement seems capped at around 5x even when the compression ratio is very high.
Two observations support this:
1. When I compress 118 lpns into 1 ppn, the write bandwidth improvement is 5.6x. When I set the compressed data size to zero, meaning the whole dataset is written into 1 ppn no matter how large it actually is, the improvement is still 5.6x.
2. The maximum improvement is the same whether CH_BITS is set to 1 or 3, even though CH_BITS=3 can utilize all 8 channels, so more improvement would be expected with CH_BITS=3.
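To illustrate what could produce such a ceiling, here is a hedged back-of-envelope model (the timings are hypothetical, not measured from the emulator): if every write still pays a fixed per-logical-page cost that compression cannot remove, and only the NAND program time is amortized by the compression ratio, the speedup saturates no matter how high the ratio goes.

```python
def speedup(ratio, t_fixed, t_nand):
    """Speedup when 'ratio' logical pages share one physical program.

    t_fixed: per-logical-page cost compression cannot remove
             (host interface, FTL path) -- hypothetical.
    t_nand:  NAND program cost, amortized over 'ratio' pages.
    """
    return (t_fixed + t_nand) / (t_fixed + t_nand / ratio)

t_fixed, t_nand = 10.0, 46.0   # illustrative microseconds only
print(round(speedup(118, t_fixed, t_nand), 1))      # ~5.4
print(round(speedup(10 ** 9, t_fixed, t_nand), 1))  # ~5.6, the ceiling 1 + t_nand / t_fixed
```

Under this kind of model the ceiling depends only on the fixed-cost fraction, which would also explain why raising CH_BITS does not raise the maximum improvement.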
These two problems have been confusing me for a long time. I'd be very grateful if you would kindly answer them. Thanks a lot!