[BUG] Differentiating Trotter with parameter-shift returns all 0s #6333

astralcai opened this issue Oct 3, 2024 · 0 comments · May be fixed by #6432
Labels: bug 🐛 Something isn't working
astralcai commented Oct 3, 2024

Expected behavior

`TrotterProduct` can be differentiated properly with the default differentiation method:

import pennylane as qml

dev = qml.device("default.qubit")

@qml.qnode(dev)
def circ(time, coeffs):
    h = qml.dot(coeffs, [qml.PauliX(0), qml.PauliZ(0)])
    qml.TrotterProduct(h, time, n=1, order=1)
    return qml.expval(qml.Hadamard(0))

time = qml.numpy.array(1.5)
coeffs = qml.numpy.array([1.23, -0.45])
>>> qml.jacobian(circ)(time, coeffs)
(array(0.9068424), array([ 1.10590537e+00, -2.27646668e-16]))
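For context, with `n=1, order=1` the operation applies a single first-order Suzuki-Trotter step. The product formula it builds can be sketched with NumPy/SciPy as below; this is a standalone illustration, and the `e^{iHt}` sign convention is an assumption about `TrotterProduct`, not something verified here:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for h = c0 * X + c1 * Z
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def trotter_product(coeffs, t, n):
    # First-order Suzuki-Trotter: (e^{i (t/n) c0 X} e^{i (t/n) c1 Z})^n
    step = expm(1j * (t / n) * coeffs[0] * X) @ expm(1j * (t / n) * coeffs[1] * Z)
    return np.linalg.matrix_power(step, n)

t, coeffs = 1.5, [1.23, -0.45]
exact = expm(1j * t * (coeffs[0] * X + coeffs[1] * Z))

# The approximation error shrinks as the number of Trotter steps n grows
err_1 = np.linalg.norm(trotter_product(coeffs, t, 1) - exact)
err_100 = np.linalg.norm(trotter_product(coeffs, t, 100) - exact)
print(err_1, err_100)
```

The key point for differentiation is that each factor in the product is an exponential of a Pauli word, so the whole step should be expressible in terms of shift-rule-compatible rotations.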

Actual behavior

With `diff_method="parameter-shift"`, the same circuit silently returns all-zero gradients for the Hamiltonian coefficients:

import pennylane as qml

dev = qml.device("default.qubit")

@qml.qnode(dev, diff_method="parameter-shift")
def circ(time, coeffs):
    h = qml.dot(coeffs, [qml.PauliX(0), qml.PauliZ(0)])
    qml.TrotterProduct(h, time, n=1, order=1)
    return qml.expval(qml.Hadamard(0))

time = qml.numpy.array(1.5)
coeffs = qml.numpy.array([1.23, -0.45])
>>> qml.jacobian(circ)(time, coeffs)
(array(0.90684259), array([0., 0.]))
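As a sanity check on what parameter-shift is supposed to compute, the two-term shift rule can be reproduced with plain NumPy. This is a hypothetical minimal illustration (a single `RX` rotation measured in `Z`, not the failing `TrotterProduct` circuit) showing that the rule yields the correct nonzero gradient when the gate's generator has eigenvalues ±1/2:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expval_z(theta):
    # <0| RX(theta)^dag Z RX(theta) |0> with RX(theta) = exp(-i theta X / 2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    rx = np.array([[c, -1j * s], [-1j * s, c]])
    psi = rx @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

def param_shift_grad(theta, shift=np.pi / 2):
    # Two-term parameter-shift rule, exact for generators with eigenvalues +-1/2
    return (expval_z(theta + shift) - expval_z(theta - shift)) / 2

theta = 1.5
# expval_z(theta) = cos(theta), so the gradient should be -sin(theta)
print(param_shift_grad(theta), -np.sin(theta))
```

The bug report above suggests the shift rule is never applied to the coefficient parameters inside `TrotterProduct`, so their gradients come back as zeros rather than the analytic values.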

Additional information

This issue was discovered as part of #6282, and the relevant test in test_trotter.py is currently xfailed. Once this is fixed, remove the xfail from that test.

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.39.0.dev10
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /Users/astral.cai/Workspace/pennylane/venv/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, toml, typing-extensions
Required-by: PennyLane-Catalyst, PennyLane_Lightning, PennyLane_Lightning_Kokkos

Platform info:           macOS-15.0-arm64-arm-64bit
Python version:          3.10.14
Numpy version:           1.26.4
Scipy version:           1.12.0
Installed devices:
- default.clifford (PennyLane-0.39.0.dev19)
- default.gaussian (PennyLane-0.39.0.dev19)
- default.mixed (PennyLane-0.39.0.dev19)
- default.qubit (PennyLane-0.39.0.dev19)
- default.qutrit (PennyLane-0.39.0.dev19)
- default.qutrit.mixed (PennyLane-0.39.0.dev19)
- default.tensor (PennyLane-0.39.0.dev19)
- null.qubit (PennyLane-0.39.0.dev19)
- reference.qubit (PennyLane-0.39.0.dev19)
- lightning.qubit (PennyLane_Lightning-0.39.0.dev27)
- lightning.kokkos (PennyLane_Lightning_Kokkos-0.38.0)
- nvidia.custatevec (PennyLane-Catalyst-0.9.0.dev15)
- nvidia.cutensornet (PennyLane-Catalyst-0.9.0.dev15)
- oqc.cloud (PennyLane-Catalyst-0.9.0.dev15)
- softwareq.qpp (PennyLane-Catalyst-0.9.0.dev15)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
@astralcai astralcai added the bug 🐛 Something isn't working label Oct 3, 2024
astralcai added a commit that referenced this issue Oct 8, 2024

The purpose of this story is to check if every operator can be
differentiated properly with the parameter shift method. The gradient
calculated using parameter shift is compared with backprop to verify
that they are equal.

There are some cases where this does not apply:
1. Backprop does not produce the correct gradient in some cases, such as
for `StatePrep` and `QubitUnitary`, because it does not take into account
the constraint that the matrix must remain unitary, which makes the
comparison invalid.
2. Some operators take integers as parameters, such as `BasisState`. In
these cases, it does not make sense to take the gradient with respect to
integer parameters.

For these cases, a `skip_differentiation` toggle is added to
`assert_valid` such that the differentiation check is skipped for these
operators.

Three bugs were found as a result of adding this check. The relevant
tests are xfailed:
#6331
#6333
#6340

Some other minor bug fixes are also included in this PR.

[sc-65197]
Fixes #6311
@andrijapau andrijapau self-assigned this Oct 22, 2024
austingmhuang pushed a commit that referenced this issue Oct 23, 2024