Bug Description
Currently, we pass trt_arg_inputs and trt_kwarg_inputs to compile_module (https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/_compiler.py#L682), but they are never actually used there. The prepare-inputs call also sometimes fails during graph parsing for a dry run. Since all input information is now read from graph metadata, we can consider removing these user inputs from the internal call chain.
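For context, here is a minimal sketch of how input specs can be recovered from graph metadata alone, assuming a torch.export-style GraphModule whose placeholder nodes carry FakeTensor values in node.meta["val"]; the helper name infer_inputs_from_graph is hypothetical and not part of the torch_tensorrt API:

```python
import torch

def infer_inputs_from_graph(gm: torch.fx.GraphModule):
    """Collect (shape, dtype) for each placeholder from node metadata.

    Hypothetical helper: illustrates that user-supplied inputs are
    redundant once the exported graph records input metadata itself.
    """
    specs = []
    for node in gm.graph.nodes:
        if node.op == "placeholder":
            fake = node.meta.get("val")  # FakeTensor recorded at export time
            if fake is not None:
                specs.append((tuple(fake.shape), fake.dtype))
    return specs
```

If compile_module can derive everything it needs this way, the trt_arg_inputs / trt_kwarg_inputs parameters become dead weight in its signature.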
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
Torch-TensorRT Version (e.g. 1.0.0):
PyTorch Version (e.g. 1.0):
CPU Architecture:
OS (e.g., Linux):
How you installed PyTorch (conda, pip, libtorch, source):
Build command you used (if compiling from source):
Are you using local sources or building from archives:
Python version:
CUDA version:
GPU models and configuration:
Any other relevant information:
Additional context