Replies: 2 comments
-
There are a few options for this: we could include it in our code/APIs, or include it as additional configuration.

Using code:

```typescript
import { bucket } from '@nitric/sdk';

const files = bucket('files', {
  class: 'standard',
});
```

This could also be configured at an environment level (e.g. dev/staging/prod) using ENV variables:

```typescript
const STORAGE_CLASS = process.env.DEFAULT_STORAGE_CLASS || 'standard';

const files = bucket('files', {
  class: STORAGE_CLASS,
});
```

Or we could support provider-specific configuration if necessary (rather than an abstract mapping):

```typescript
const files = bucket('files', {
  aws: {
    storageClass: 'INTELLIGENT_TIERING',
  },
  gcp: {
    storageClass: 'NEARLINE',
  },
});
```

Using config (nitric stack file):

```yaml
buckets:
  default:
    class: standard
  my-bucket-name:
    class: cold
```

Extra things to consider:
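One extra consideration is how abstract class names would translate to each provider's storage classes. Below is a minimal sketch of such a mapping; the abstract names (`infrequent`, `archive`) and the provider values chosen for them are illustrative assumptions, not an established Nitric mapping:

```typescript
// Hypothetical mapping from abstract storage classes to provider-specific names.
// The abstract names and the chosen provider values are assumptions for illustration.
type StorageClass = 'standard' | 'infrequent' | 'cold' | 'archive';
type Provider = 'aws' | 'gcp';

const storageClassMap: Record<StorageClass, Record<Provider, string>> = {
  standard: { aws: 'STANDARD', gcp: 'STANDARD' },
  infrequent: { aws: 'STANDARD_IA', gcp: 'NEARLINE' },
  cold: { aws: 'GLACIER', gcp: 'COLDLINE' },
  archive: { aws: 'DEEP_ARCHIVE', gcp: 'ARCHIVE' },
};

// Resolve the provider-specific storage class for a bucket's abstract class.
function resolveStorageClass(cls: StorageClass, provider: Provider): string {
  return storageClassMap[cls][provider];
}
```

A mapping like this keeps application code portable, at the cost of hiding provider-specific tiers (e.g. S3 Intelligent-Tiering) behind the abstraction.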
-
With new provider configuration available, this should be relatively straightforward to implement for named buckets.
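For example, per-bucket provider configuration in a stack file might look something like this; the exact keys are an assumption for illustration, not confirmed Nitric stack file syntax:

```yaml
# Hypothetical stack file fragment; key names are illustrative assumptions.
config:
  buckets:
    my-bucket-name:
      aws:
        storage-class: INTELLIGENT_TIERING
      gcp:
        storage-class: NEARLINE
```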
-
It is a very common requirement to configure storage classes on buckets based on the performance and access requirements of the objects stored in that bucket.

Ultimately, storage class is determined at the object/file level, but being able to set a default for a bucket when creating new objects makes sense.
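Since the class ultimately applies per object, a bucket-level default would simply act as the fallback whenever a write doesn't specify one. A sketch of that resolution order, using a hypothetical helper that is not part of the Nitric SDK:

```typescript
// Hypothetical resolution: an explicit per-object class wins, otherwise the
// bucket's configured default applies, otherwise a global 'standard' fallback.
interface WriteOptions {
  storageClass?: string; // optional per-object override
}

function effectiveStorageClass(
  bucketDefault: string | undefined,
  opts: WriteOptions = {},
): string {
  return opts.storageClass ?? bucketDefault ?? 'standard';
}
```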