Remove unused memory checks to speed up compute
#2719
Conversation
This pull request was exported from Phabricator. Differential Revision: D68995122
Summary: Pull Request resolved: pytorch#2719. The memory checks here add non-trivial overhead to every compute step, since they involve many tensor size calls. In our runs, they accounted for around 20% of the time spent in the rec metric compute step. Given that these checks are no longer used, this diff removes the call from the metric_module. In the next set of diffs, I'll remove the argument from the callsites. Reviewed By: fegin. Differential Revision: D68995122
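A minimal sketch of the kind of change described. All names here are invented for illustration and are not the actual torchrec API: the point is that walking every metric state tensor to total up its size on each compute step costs many per-tensor size calls, which this PR eliminates from the hot path.

```python
# Hypothetical illustration (invented names, not the real torchrec code):
# a per-step memory check that sums tensor sizes, and the compute path
# that no longer runs it unconditionally.
from dataclasses import dataclass

@dataclass
class StateTensor:
    """Stand-in for a metric state tensor; real code would hold a torch.Tensor."""
    numel: int          # number of elements
    element_size: int   # bytes per element

    def size_bytes(self) -> int:
        return self.numel * self.element_size

def memory_usage_bytes(states: list[StateTensor]) -> int:
    # Touching every state tensor's size on every compute step is the overhead
    # the summary measured at roughly 20% of rec metric compute time.
    return sum(s.size_bytes() for s in states)

class MetricModule:
    """Toy module: the check is now opt-in instead of running every step."""

    def __init__(self, states: list[StateTensor], check_memory_usage: bool = False):
        self.states = states
        self.check_memory_usage = check_memory_usage

    def compute(self) -> int:
        if self.check_memory_usage:  # previously ran unconditionally each step
            _ = memory_usage_bytes(self.states)
        return len(self.states)  # placeholder for the real metric aggregation
```

With the flag defaulted off (or the branch deleted outright, as the diff does), `compute()` skips the size sweep entirely; the follow-up diffs would then drop the now-unused argument from callers.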
79d9099 to ecdabfd
ecdabfd to 0d39b2d
0d39b2d to c743013
c743013 to 37938e7
37938e7 to 14b341c