Commit
Try to tune garbage collection a little
sethkor committed Jun 15, 2019
1 parent 29bffef commit dae8701
Showing 2 changed files with 3 additions and 1 deletion.
2 changes: 1 addition & 1 deletion README.md
```diff
@@ -127,7 +127,7 @@ For optimal throughput consider using a S3 VPC Gateway endpoint if you are execu
 
 And remember the performance of the source storage device is important, you don't want to choke it reading lots of data at once. Use an optimized IOPS device or SAN.
 
-For S3 to S3 with different AWS profiles, object must be downloaded from the source and the uploaded. To cater for large objects and to limit memory usage this is done utilizing multi parts by relies on GO garbage collector to tidy things up promptly which doesn't always happen Some further optimizations are WIP but expect this operation to consume a bit of memory.
+For S3 to S3 with different AWS profiles, objects must be downloaded from the source and then uploaded. To cater for large objects and to limit memory usage this is done using multiparts, and will attempt to limit in-memory storage to approximately 200MB. This relies on the Go garbage collector to tidy things up promptly, which doesn't always happen. Some further optimizations are WIP, but expect this operation to consume a bit of memory.
 
 ### Multiparts
 Multipart chunk size for Upload or Download is set to 5MB. Any objects greater than 5MB in size shall be sent in multiparts. `s3kor` will send up to 5 parts concurrently per object.
```
2 changes: 2 additions & 0 deletions dest.go
```diff
@@ -5,6 +5,7 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
+	"runtime"
 	"sync"
 
 	"github.com/aws/aws-sdk-go/aws/awsutil"
@@ -259,6 +260,7 @@ func (rp *RemoteCopy) remoteCopyObject() (func(object *s3.Object) error, error)
 
 	err = rp.uploadChunks(params.Bucket, aws.String(rp.cp.target.Path+"/"+(*object.Key)[len(rp.cp.source.Path):]), resp.UploadId, chunks)
 
+	runtime.GC()
 	return err
 }, nil
 }
```
