Blocking qubits in multiple GPUs #2172
QuantumFran asked this question in Q&A
Hi,
I am currently working on a distributed quantum computing simulator using qiskit-aer and GPUs. We are running various tests and are unsure how to calculate the blocking qubits for our use case. According to the official documentation, the formula to calculate blocking qubits with GPUs is the following:
sizeof(complex) * 2^(blocking_qubits+4) < size of the smallest memory space in bytes.
What we do not understand from this formula is whether the 4 added to the blocking qubits value is a constant, or whether it varies with the number of GPUs used for the simulation. If it is constant, why do we add 4?
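For reference, this is how we are currently reading the formula, written out as a small Python sketch (the helper name is ours, not part of Aer; 16 bytes corresponds to a double-precision complex amplitude, 8 bytes to single precision):

```python
def max_blocking_qubits(smallest_memory_bytes: int, complex_bytes: int = 16) -> int:
    """Largest n satisfying complex_bytes * 2**(n + 4) < smallest_memory_bytes."""
    ratio = smallest_memory_bytes // complex_bytes   # upper bound on 2**(n + 4)
    n = ratio.bit_length() - 1 - 4                   # floor(log2(ratio)) - 4
    if complex_bytes * 2 ** (n + 4) >= smallest_memory_bytes:
        n -= 1                                       # keep the inequality strict
    return n

GiB = 2 ** 30
print(max_blocking_qubits(24 * GiB))                   # double precision -> 26
print(max_blocking_qubits(24 * GiB, complex_bytes=8))  # single precision -> 27
```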
As mentioned, we have run several tests on AWS, distributing the simulation across multiple g5.xlarge instances (24 GiB of dedicated GPU memory and 16 GiB of host RAM each). According to the formula, the maximum value for blocking qubits should be 27. However, we have run 35- and 36-qubit circuits distributed over 16 and 32 g5.xlarge instances with a blocking qubits value of 29 and it worked, so we have assumed that the "size of the smallest memory space in bytes" has to be multiplied by the number of instances.
In the case of a 36-qubit circuit in single precision distributed over 32 instances, the "size of the smallest memory space in bytes" would then be 32 × 24 GiB = 768 GiB. If we try a value of 30 for blocking qubits, it should therefore work.
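Under that assumption (aggregating the 24 GiB of GPU memory of each of the 32 instances), our quick check in single precision looks like this:

```python
GiB = 2 ** 30
gpu_mem = 24 * GiB          # per-GPU memory on a g5.xlarge
instances = 32
complex_bytes = 8           # single-precision complex amplitude

aggregated = instances * gpu_mem       # 768 GiB, our assumed "smallest memory space"
chunk = complex_bytes * 2 ** (30 + 4)  # blocking_qubits = 30 -> 128 GiB
state = complex_bytes * 2 ** 36        # full 36-qubit statevector -> 512 GiB

print(chunk < aggregated)   # True, so by this reading blocking_qubits = 30 should fit
print(state <= aggregated)  # True, the full state also fits in the aggregated memory
```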
However, it returns the following error:
Could you help us understand the formula and find the optimal value for blocking qubits?
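For completeness, this is roughly how we pass the cache-blocking options to the simulator (a simplified sketch: the real circuit and the MPI launch across the instances are omitted):

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

sim = AerSimulator(
    method="statevector",
    device="GPU",
    precision="single",
    blocking_enable=True,   # enable cache blocking / chunk distribution
    blocking_qubits=29,     # the value we are trying to tune
)

qc = QuantumCircuit(36)     # placeholder for our actual 36-qubit circuit
qc.h(range(36))
qc.measure_all()

result = sim.run(transpile(qc, sim)).result()
```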
Thank you in advance.
Best regards.