pyexcelerate consumes a lot of memory when huge data needs to be written #57
If you're constrained on memory, have you considered writing to csv instead?
I did not use csv because we have some headers with different colors etc. Are you aware of how I can convert csv to xlsx in a Pythonic way? Thanks for the suggestion. I can try that.
If you need header styles, then you're out of luck and cannot go via csv.
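For plain, unstyled data, though, the conversion itself is short. A minimal sketch, with placeholder file paths (the `new_sheet` call is the same one quoted later in this thread, and `save` is pyexcelerate's standard write method):

```python
import csv

from pyexcelerate import Workbook

def csv_to_xlsx(csv_path, xlsx_path, sheet_name="Sheet1"):
    # Read the whole CSV into a list of rows. Note that this still
    # materializes everything in memory before the xlsx write, so it
    # does not by itself solve the memory problem discussed here.
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    wb = Workbook()
    wb.new_sheet(sheet_name, data=rows)
    wb.save(xlsx_path)

csv_to_xlsx("report.csv", "report.xlsx")
```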
6GB. Report generation is our last step. All the steps before it (there are lots of them) take 4GB. Report generation alone takes 2GB (for 3 lakh records in each of 2 reports), which just overshoots 6GB, and we get an Out of Memory error.
What sorts of data are you writing? The library shouldn't be using that much memory unless you're passing objects to it. If you're doing that, you might want to serialize everything to strings before passing it to PyExcelerate. Also, could you try the latest dev branch? It includes some performance optimizations that may help here.
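As an illustration of that pre-serialization step, a rough sketch (`stringify` is a hypothetical helper, not part of the PyExcelerate API):

```python
def stringify(rows):
    # Convert every cell to a plain string so PyExcelerate only has
    # to hold lightweight str objects instead of arbitrary Python
    # objects while building the sheet.
    return [["" if cell is None else str(cell) for cell in row]
            for row in rows]

# e.g. work_book.new_sheet(sheet_name, data=stringify(sheet_rows))
```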
I convert everything to strings before we write the report. But we have a lot of columns (from A to AB). The overall report size is 84MB.
Yeah, I would say try the latest dev branch and see if it helps.
Just checking in: have you had a chance to try the dev branch?
Nope, very busy with regular work. Will definitely try and let you know.
I'd have the same issue here, under the same situation. I think constant memory is a really important option, so it would be better to give users the choice of which strategy they want.
Did you find any solution to this "problem"?
I have a business need where I have huge data, more than 3 lakh (300,000) rows per worksheet. Writing to an xlsx file using pyexcelerate is very fast, but it consumes a lot of memory while writing. For me, memory is a constraint, so I'm forced to use XlsxWriter, which has a constant_memory=True option. It would be great if you could add a feature like that to pyexcelerate.
Or can I do something to optimize memory?
I use the call work_book.new_sheet(sheet_name, data=sheet_rows) to write to the xlsx file. sheet_rows can be a very large data structure.
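For comparison, the XlsxWriter workaround mentioned above looks roughly like this (a sketch with placeholder file, sheet, and data names; in constant_memory mode rows are flushed to a temporary file as they are written, so they must be written in ascending row order):

```python
import xlsxwriter

sheet_rows = [["header1", "header2"], ["a", "b"]]  # placeholder data

# constant_memory mode keeps memory usage roughly constant by
# streaming each finished row to a temp file instead of keeping
# the whole worksheet in memory.
wb = xlsxwriter.Workbook("report.xlsx", {"constant_memory": True})
ws = wb.add_worksheet("data")
for row_num, row in enumerate(sheet_rows):
    ws.write_row(row_num, 0, row)
wb.close()
```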