Recreate MRD in case of any error #2970
base: master
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
```
##           master    #2970      +/-   ##
==========================================
- Coverage   75.21%   75.14%   -0.07%
==========================================
  Files         125      125
  Lines       17734    17745      +11
==========================================
- Hits        13339    13335       -4
- Misses       3831     3843      +12
- Partials      564      567       +3
```
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
```go
// @@ -220,6 +232,9 @@ func (mrdWrapper *MultiRangeDownloaderWrapper) Read(ctx context.Context, buf []b
if e != nil && e != io.EOF {
	e = fmt.Errorf("Error in Add Call: %w", e)
	mrdWrapper.muCreateMRD.Lock()
```
Couple of things:
- We shouldn't recreate the MRD on every error; it should be recreated only when the MRD is closed by go-sdk.
- If x callbacks fail on the MRD, we end up recreating it x times even with the muCreateMRD lock. This is exactly what we wanted to avoid in the first place, and why we didn't take this route originally.
Let's work with go-sdk on coming up with the right solution.
These changes ensure the MRD is also recreated when the Add call returns an error. Currently, once the MRD has been successfully created, we do not recreate it when a later Add call fails.
A new variable is used to decide whether to create the MRD (instead of setting the MRD to nil) to simplify the code, i.e. to avoid an additional mutex lock before calling Add.
Performance testing: tested random reads with 50 threads at 1M/128K block sizes; no regression. Also tested the parallel-read scenario (50 threads randomly reading from the same file); no regression there either.
b/394500981