Add preprocessing of list inputs for op by op execution #300
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.
☔ View full report in Codecov by Sentry.
tt_torch/dynamo/backend.py (Outdated)
@@ -336,6 +336,16 @@ def pre_process_inputs(self, *inputs):
            if not inp.is_contiguous():
                inp = inp.contiguous()
            processed_inputs.append(inp)
        elif isinstance(inp, list):
            for ele in inp:
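The snippet above is truncated after the new list branch. As a rough standalone sketch of the preprocessing the diff adds, assuming list elements receive the same contiguity treatment as top-level tensors (illustrative only, not the repository's exact code):

    import torch

    def pre_process_inputs(*inputs):
        # Illustrative sketch: make tensor inputs contiguous, and apply the
        # same handling to tensors nested inside list inputs.
        processed_inputs = []
        for inp in inputs:
            if isinstance(inp, torch.Tensor):
                if not inp.is_contiguous():
                    inp = inp.contiguous()
                processed_inputs.append(inp)
            elif isinstance(inp, list):
                processed_list = []
                for ele in inp:
                    # Assumption: list elements are tensors (anything else is
                    # passed through untouched).
                    if isinstance(ele, torch.Tensor) and not ele.is_contiguous():
                        ele = ele.contiguous()
                    processed_list.append(ele)
                processed_inputs.append(processed_list)
            else:
                processed_inputs.append(inp)
        return processed_inputs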
can you please move this bit into a helper, and call it in both cases (i.e. if tensor or if element in list). Otherwise, looks great.
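One way to read this suggestion, with the contiguity handling pulled into a helper called from both the tensor case and the list-element case (the name _make_contiguous and the free-standing functions are assumptions for illustration):

    import torch

    def _make_contiguous(value):
        # Hypothetical helper (name is an assumption): shared handling for a
        # single input, reusable for top-level tensors and list elements alike.
        if isinstance(value, torch.Tensor) and not value.is_contiguous():
            return value.contiguous()
        return value

    def pre_process_inputs(*inputs):
        processed_inputs = []
        for inp in inputs:
            if isinstance(inp, list):
                processed_inputs.append([_make_contiguous(ele) for ele in inp])
            else:
                processed_inputs.append(_make_contiguous(inp))
        return processed_inputs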
Please address comment, then all is good.
8fe87e0 to e4af9f1
closes #301
Ticket
#301
Problem description
The aten::cat (concatenate) operation fails during op-by-op execution due to a mismatch in the number of inputs.
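For context, torch.cat (which lowers to aten::cat) takes a Python list of tensors as its first argument, so op-by-op execution sees a list-typed input rather than only individual tensors; a generic PyTorch example, not code from this repository:

    import torch

    # torch.cat lowers to aten::cat; its first argument is a list of tensors,
    # so this op's inputs include a list rather than only plain tensors.
    a = torch.randn(2, 3)
    b = torch.randn(2, 3)
    out = torch.cat([a, b], dim=0)
    print(out.shape)  # torch.Size([4, 3])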
What's changed
Input preprocessing for op-by-op execution is updated to handle inputs of type list.
The nightly test report (https://github.com/tenstorrent/tt-torch/actions/runs/13270505478) shows the improvement.
Checklist