As it stands, we take up a lot of space (~4 MB) each day with unzipped copies of the results of a scrape. We were considering zipping that data before @nwinklareth left, if I'm recalling correctly. Or am I wrong?
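For reference, here is a minimal sketch of what per-day compression could look like. The directory names (`results/`, `snapshots/`) are placeholders, not the project's actual layout; the idea is just to gzip each scrape result before it gets archived. Text-heavy output like JSON or HTML usually compresses well, so the daily ~4 MB should shrink substantially.

```python
# Sketch: gzip each day's scrape output before archiving it.
# Paths here are hypothetical, not the repo's actual layout.
import gzip
import shutil
from pathlib import Path

def compress_snapshot(src: Path, dest_dir: Path) -> Path:
    """Gzip a single scrape result file into dest_dir, keeping the original name plus .gz."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / (src.name + ".gz")
    with src.open("rb") as fin, gzip.open(dest, "wb") as fout:
        shutil.copyfileobj(fin, fout)
    return dest

if __name__ == "__main__":
    for path in Path("results").glob("*.json"):
        print(compress_snapshot(path, Path("snapshots")))
```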
Storing 10 years of snapshots would take up 65% or so of the remaining 31G of free disk space, so it is not that pressing a concern.

Upon reflection, I think the snapshots should be moved to Amazon S3; that would minimize the storage and delivery cost for that data.

Norbert
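A minimal sketch of what the S3 side could look like, assuming `boto3` is available and credentials come from the usual AWS config; the bucket name and key prefix below are hypothetical placeholders, not anything already set up:

```python
# Sketch: push a compressed snapshot to S3 so it no longer lives on the server's disk.
# Bucket name and key prefix are placeholders; boto3 reads credentials from standard AWS config.
import boto3
from pathlib import Path

BUCKET = "example-scrape-snapshots"  # hypothetical bucket name

def upload_snapshot(path: Path, bucket: str = BUCKET) -> str:
    """Upload one gzipped snapshot and return its S3 key."""
    key = f"snapshots/{path.name}"
    s3 = boto3.client("s3")
    s3.upload_file(str(path), bucket, key)
    return key

if __name__ == "__main__":
    for path in Path("snapshots").glob("*.gz"):
        print(upload_snapshot(path))
```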
Hey, that's a good idea. Hosting cost IS a concern even where storage space is not, as you point out. Should I make that a future issue?