Item status check: error on maximum item size exceedance and test with specific identifier/access key #485
base: master
Conversation
Fixes jjjake#293
Also replaces the abandoned PR #297
Thanks again @JustAnotherArchivist! This looks good, but I'm actually going to look into what it'd take to get `item_size`/`files_count` limit info from s3.us.archive.org rather than hard-coding it here. I'll keep you posted.
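(For context, a hedged sketch of what such a lookup might look like. The `check_limit` query against s3.us.archive.org mirrors what the library's existing overload check does; the parameter names (`check_limit`, `bucket`, `accesskey`) and the `over_limit` response field are observed behaviour rather than a documented API, so treat this as an illustration, not a drop-in implementation.)

```python
# Sketch: ask s3.us.archive.org about limits instead of hard-coding them.
# Endpoint parameters and JSON fields are assumptions based on the existing
# overload check, not a documented, stable API.
import requests


def fetch_s3_limit_info(identifier, access_key=None):
    params = {'check_limit': 1, 'bucket': identifier}
    if access_key:
        params['accesskey'] = access_key
    r = requests.get('https://s3.us.archive.org', params=params, timeout=30)
    try:
        return r.json()
    except ValueError:
        return None  # unparsable response: no limit info available


# Example usage with a placeholder identifier:
info = fetch_s3_limit_info('nasa')
if info and info.get('over_limit'):
    print('warning: item is over limit, expect 503 SlowDown errors')
```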
```python
        print(f'warning: {args["<identifier>"]} is over limit, and not accepting requests. '
              'Expect 503 SlowDown errors.',
              file=sys.stderr)
        sys.exit(1)
    elif item.item_size >= MAX_ITEM_SIZE:
```
Suggested change:

```diff
-    elif item.item_size >= MAX_ITEM_SIZE:
+    elif item.item_size > MAX_ITEM_SIZE:
```
This would require some testing of whether IA's servers still accept any upload (including an empty file) if the item is exactly 1 TiB. That might be tricky, though, since I think the metadata files, which get modified after every upload, also count towards the item size.
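(A minimal illustration of the boundary the two comparisons treat differently, assuming the limit constant is 1 TiB as discussed above; the constant name and value are taken from this conversation, not from the final code.)

```python
MAX_ITEM_SIZE = 1024 ** 4  # 1 TiB, assumed to match the hard-coded limit in this PR

item_size = MAX_ITEM_SIZE  # an item that sits exactly at the limit

print(item_size >= MAX_ITEM_SIZE)  # True:  `>=` already rejects the item
print(item_size > MAX_ITEM_SIZE)   # False: `>` still lets the status check pass
# Whether `>` is correct therefore depends on whether IA accepts any further
# upload (even an empty file) to an item that is exactly at the limit.
```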
```diff
@@ -160,19 +163,22 @@ def main(argv, session):
         sys.exit(1)

     # Status check.
     if args['<identifier>']:
         item = session.get_item(args['<identifier>'])
```
Do we really want to get an item that could be 1TB or more before we do a status-check?
Well, we need the `Item` object for both the size status check and the actual upload. While this structure means we needlessly fetch the item metadata when S3 is overloaded, it avoids more complicated conditions (e.g. first run the S3 overload check if `--status-check` is present, then fetch the item metadata if the identifier is present, then check the item size if both are present, then exit successfully if `--status-check` is present), which in my opinion would lead to less readable code. The alternatives are two lines with `get_item` calls, which is just as ugly, or some sort of lazy evaluation, which is somewhat complicated to implement. So I found this to be the least awkward solution.
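(A rough sketch of the single-fetch structure described above, for readers following along. Names like `MAX_ITEM_SIZE`, `args`, `session`, `get_item`, and `s3_is_overloaded` follow the quoted diff and the internetarchive library, but the exact warning wording and surrounding code in `ia upload` are assumptions, not quotes from this PR.)

```python
import sys

MAX_ITEM_SIZE = 1024 ** 4  # 1 TiB; assumed value of the hard-coded limit


def status_check(args, session):
    """Sketch: fetch the Item once and reuse it for both the size status
    check and the later upload.

    `args` is the docopt dict and `session` the ArchiveSession, as in
    `ia upload`'s main(); details outside the quoted diff are assumptions.
    """
    item = None
    if args['<identifier>']:
        item = session.get_item(args['<identifier>'])

    if args['--status-check']:
        if session.s3_is_overloaded(identifier=args['<identifier>']):
            print(f'warning: {args["<identifier>"]} is over limit, and not accepting requests. '
                  'Expect 503 SlowDown errors.',
                  file=sys.stderr)
            sys.exit(1)
        elif item is not None and item.item_size > MAX_ITEM_SIZE:
            # Message wording is illustrative, not the PR's exact text.
            print(f'warning: {args["<identifier>"]} has exceeded the maximum item size '
                  'and is not accepting uploads.',
                  file=sys.stderr)
            sys.exit(1)
        sys.exit(0)

    return item  # the same object is reused later for the actual upload
```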