
Stream files line by line when parsing to avoid memory limits #463

Open
wants to merge 1 commit into master
Conversation

lasalvavida
@lasalvavida lasalvavida commented Oct 20, 2016

These changes allow NeDB to load large databases without hitting Node's string length limit, by streaming the file line by line instead of loading it all at once.

Tested with a randomly generated ~256 MB database, which loads successfully after these changes.

I don't know what the implications are for the browser build, since this removes Persistence.prototype.treatRawData, but I'd be happy to help resolve them if you could point me in the right direction.

@pi0

pi0 commented Nov 30, 2016

Merged into nedb-core

@gschier

gschier commented Dec 29, 2016

Any chance this will be merged into NeDB?

@pi0

pi0 commented Dec 30, 2016

@gschier Is there any reason not to switch to the nedbhq releases at this point? They are fully compatible, include many more fixes, and we will try to keep up to date with upstream if it becomes actively maintained again :)

@gschier

gschier commented Dec 30, 2016

@pi0 I took a look but couldn't figure out whether it was affiliated with this project, how it differed, or what its purpose was. An introduction section in the README explaining the purpose, roadmap, etc. would be very useful! 👍

@JamesMGreene
Contributor

Seems like a symmetrical method for writing as a stream would also be necessary for the same reason.

@lasalvavida
Author

@JamesMGreene, it shouldn't be. I'm pretty sure NeDB writes in append mode, which is why it is possible to generate a large database that NeDB then cannot close and reopen.

@JamesMGreene
Contributor

JamesMGreene commented Aug 14, 2017

@lasalvavida Not quite accurate. Although NeDB does do its updates in append mode, it also does a full datafile rewrite at the end of the initial loading process and during every compaction operation.

@JamesMGreene
Contributor

To be more clearly linked, we believe this would fix:

@FightSCJP

Will this be merged into NeDB? What is the conclusion? How should users solve this problem in the meantime? Should we abandon NeDB and look for another database solution?

@xeoshow

xeoshow commented Aug 29, 2018

Hope this feature could be merged into the master branch...

@huynguyend191

How can I actually load 1 GB of data? I use the following, but it's not working:

```js
var Datastore = require('nedb');
var db = new Datastore({ filename: './data.db', autoload: true });
```

7 participants