
Looking for a way to modify the characteristics of flash device #6

Closed
HongweiQin opened this issue Jul 16, 2018 · 3 comments

@HongweiQin (Contributor)

Hi,
I'm using FEMU to emulate a whitebox SSD.
Is there a way to modify the characteristics of the emulated flash device (e.g., page read/write latency, block erase latency)?
Thanks in advance.

@huaicheng (Contributor) commented Jul 18, 2018

Yes, you can use the following methods:

(1). In hw/block/femu/femu-oc.c, lines 78-83 define the default latency values for page read/program and block erase, profiled from our CNEX OCSSD. You can modify them directly and recompile FEMU.

(2). A more convenient way requires no FEMU code change. FEMU already defines an NVMe admin command (opcode: 0xee) that lets you change the latency characteristics in flight. In the guest OS, use the admin-passthru subcommand of the nvme-cli tool, for example:

$ sudo nvme admin-passthru -d /dev/nvme0n1 --opcode=0xee --cdw10=100000 --cdw11=60000 --cdw12=3000000 --cdw13=1000000 --cdw14=7000000 --cdw15=60000

Where (all values are in nanoseconds):

  • cdw10=100000: set NAND upper-page read latency to 100µs
  • cdw11=60000: set NAND lower-page read latency to 60µs
  • cdw12=3000000: set NAND upper-page program latency to 3ms
  • cdw13=1000000: set NAND lower-page program latency to 1ms
  • cdw14=7000000: set NAND block erase time to 7ms
  • cdw15=60000: set per-page channel transfer time to 60µs; with a 16KB page, the corresponding channel bandwidth is 16KB / 60µs ≈ 260 MB/s

You can set all of them to 0 to disable latency emulation when you want to warm up the emulated SSD faster, then restore the latencies before running your real workload.

Hope this helps.

@HongweiQin (Contributor, author)

@huaicheng Thanks for your help.
BTW, if I set those values very low, should I also account for the latency of DRAM?
For example, if I set the read latency to L, is the actual latency L or L + Ldram?

@huaicheng (Contributor)

It's L. FEMU's delay emulation logic already accounts for all the software overhead, including I/O emulation and DRAM access. However, if you set the values very low (e.g., below the software overhead itself), you won't get accurate latency emulation.

huaicheng pinned this issue Nov 12, 2022