
stats missing when using pretrained pi0 policy #694

Open
IrvingF7 opened this issue Feb 7, 2025 · 1 comment

IrvingF7 commented Feb 7, 2025

System Info

- `lerobot` version: 0.1.0
- Platform: Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Dataset version: 3.2.0
- Numpy version: 2.1.3
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Cuda version: 12040
- Using GPU in script?: True

Information

  • One of the scripts in the examples/ folder of LeRobot
  • My own task or dataset (give details below)

Reproduction

First of all, thanks for open-sourcing the amazing pi0 codebase.

To reproduce my error:

  1. First, instantiate the policy via policy = PI0Policy.from_pretrained("lerobot/pi0")

  2. Then, modify the config.json it downloaded to replace the empty input features with

    "input_features": {
        "observation.image.top": {
          "shape": [
            3,
            224,
            224
          ],
          "type": "VISUAL"
        },
        "observation.image.left": {
          "shape": [
            3,
            224,
            224
          ],
          "type": "VISUAL"
        },
        "observation.image.right": {
          "shape": [
            3,
            224,
            224
          ],
          "type": "VISUAL"
        },
        "observation.state": {
          "shape": [
            7
          ],
          "type": "STATE"
        }
      },
  3. I then acquired images and robot state from a simulator (Simpler + ManiSkill, to be precise, though I don't think this matters much) and packed them into a dictionary to feed to select_action.
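For concreteness, a minimal sketch of how such a dictionary can be packed (key names follow the input_features above; the uint8-to-float image conversion and the "task" language key are assumptions about the expected format, not confirmed behavior):

```python
import torch

def make_observation(top_img, left_img, right_img, state, device="cuda"):
    """Pack simulator outputs into a batch for policy.select_action.

    top_img/left_img/right_img: (H, W, 3) uint8 numpy arrays
    state: (7,) numpy array
    Shapes follow the input_features config above; scaling images to
    [0, 1] float is an assumption about the expected preprocessing.
    """
    def to_image_tensor(img):
        # (H, W, 3) uint8 -> (1, 3, H, W) float in [0, 1]
        t = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
        return t.unsqueeze(0).to(device)

    return {
        "observation.image.top": to_image_tensor(top_img),
        "observation.image.left": to_image_tensor(left_img),
        "observation.image.right": to_image_tensor(right_img),
        "observation.state": torch.from_numpy(state).float().unsqueeze(0).to(device),
        "task": ["pick up the object"],  # hypothetical language instruction key
    }
```
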

I then got the following traceback:

Traceback (most recent call last):
  File "/scratch/zf540/pi0/aqua-vla/experiments/envs/simpler/test_ckpts_in_simpler.py", line 222, in <module>
    eval_simpler()
  File "/scratch/zf540/pi0/aqua-vla/.venv/lib/python3.11/site-packages/draccus/argparsing.py", line 225, in wrapper_inner
    response = fn(cfg, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/aqua-vla/experiments/envs/simpler/test_ckpts_in_simpler.py", line 174, in eval_simpler
    action = policy.select_action(observation)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/aqua-vla/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/lerobot/lerobot/common/policies/pi0/modeling_pi0.py", line 276, in select_action
    batch = self.normalize_inputs(batch)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/aqua-vla/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/aqua-vla/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/aqua-vla/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/scratch/zf540/pi0/lerobot/lerobot/common/policies/normalize.py", line 155, in forward
    assert not torch.isinf(mean).any(), _no_stats_error_str("mean")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: `mean` is infinity. You should either initialize with `stats` as an argument, or use a pretrained model.

From reading issue #293, I thought that if you load a model with from_pretrained, you do not need to specify dataset stats?
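One workaround I am considering, sketched below: supplying the normalization statistics manually so the Normalize buffers are finite. This assumes PI0Policy, like other lerobot policies, accepts a dataset_stats dict; the zero-mean/unit-std values here are placeholders, not real dataset statistics, so real values should come from the dataset being evaluated:

```python
import torch

def make_placeholder_stats():
    """Build a per-feature stats dict with finite (placeholder) values.

    Shapes follow the input_features above; the "action" entry is an
    assumption about what the output normalizer expects.
    """
    def image_stats():
        return {"mean": torch.zeros(3, 1, 1), "std": torch.ones(3, 1, 1)}

    return {
        "observation.image.top": image_stats(),
        "observation.image.left": image_stats(),
        "observation.image.right": image_stats(),
        "observation.state": {"mean": torch.zeros(7), "std": torch.ones(7)},
        "action": {"mean": torch.zeros(7), "std": torch.ones(7)},
    }

# Then, assuming from_pretrained forwards keyword arguments to the
# policy constructor (not verified against the lerobot source):
# policy = PI0Policy.from_pretrained("lerobot/pi0",
#                                    dataset_stats=make_placeholder_stats())
```
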

Expected behavior

I expect to see some action output from the model. It doesn't have to be a correct rollout, but I want to see the pipeline working so I can tweak it further.

@yunlongwang996

same problem
