
Commit

Fix a bug where metadata was not copied for large objects greater than 5GB (#16)
sethkor authored Nov 2, 2020
1 parent c4bfdba commit d287e78
Showing 2 changed files with 3 additions and 0 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -108,6 +108,8 @@ For optimal throughput consider using a S3 VPC Gateway endpoint if you are execu

And remember, the performance of the source storage device is important; you don't want to choke it by reading lots of data at once. Use an optimized IOPS device or a SAN.

Metadata is copied when copying from S3 to S3.

### Supports different AWS account credentials for source and destination buckets

If you ever need to copy objects to an account which you don't own and need a separate set of AWS credentials to access it, s3kor is perfect for the job. For S3 to S3 with different AWS credentials, objects must be downloaded from the source first and then uploaded. To cater for large objects and to limit memory usage, this is done using multipart chunks of 5MB in size, and it attempts to limit in-memory storage. This relies on the Go garbage collector to tidy things up promptly, which doesn't always happen. Some further optimizations are WIP, but expect this operation to consume a fair amount of memory.
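The 5MB part size bounds the memory held per in-flight part, but it also caps the size of an object a single multipart upload can carry, since S3 allows at most 10,000 parts per upload. A minimal sketch of that arithmetic (the `partsNeeded` helper and constants are illustrative, not part of s3kor):

```go
package main

import "fmt"

const (
	partSize = 5 * 1024 * 1024 // 5 MiB, the minimum S3 multipart part size
	maxParts = 10000           // S3's hard limit on parts per multipart upload
)

// partsNeeded returns how many parts a multipart transfer of size bytes
// requires at the fixed partSize, rounding the final partial part up.
func partsNeeded(size int64) int64 {
	n := size / partSize
	if size%partSize != 0 {
		n++
	}
	return n
}

func main() {
	const fiveGiB = int64(5) * 1024 * 1024 * 1024
	fmt.Printf("a 5 GiB object needs %d parts of 5 MiB (limit %d)\n",
		partsNeeded(fiveGiB), maxParts)
}
```

At 5 MiB parts, a 5 GiB object fits comfortably within the 10,000-part limit; the ceiling at this part size is 10,000 × 5 MiB ≈ 48.8 GiB.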
1 change: 1 addition & 0 deletions multicopy.go
@@ -444,6 +444,7 @@ func (u *multicopyer) copy() (*CopyOutput, error) {

params := &s3.CreateMultipartUploadInput{}
awsutil.Copy(params, u.in.Input)
params.Metadata = u.head.Metadata

// Create the multipart
resp, err := u.cfg.S3.CreateMultipartUploadWithContext(u.ctx, params, u.cfg.RequestOptions...)

