Design flaw in the Raft <-> BadgerDB implementation #49

Open
christian-roggia opened this issue Sep 21, 2020 · 1 comment

christian-roggia commented Sep 21, 2020

While investigating why the size of our database kept growing endlessly, even when no data was being written to cete, we found an important design flaw in how BadgerDB and Raft interact.

The flaw is explained as follows:

  1. The cete server is started
  2. Data is sent to the server
  3. Raft generates new snapshots at regular intervals
  4. Badger writes new vlog files with the logs related to the incoming data
  5. The server is shut down

This is where the issue starts:

  6. The server is restarted
  7. Raft restores the latest snapshot, which contains all key-value pairs snapshotted up to this point
  8. All pairs are replayed through a call to Set(), which stores the data in Badger (see the sketch after this list)
  9. Badger writes all pairs from the snapshot again, generating new logs which are stored in the vlog files
  10. The server is shut down - go to 6 and repeat
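
For illustration, here is a minimal sketch of the kind of Restore loop that produces this behavior, assuming an FSM backed by Badger; the pair wire format and all names below are hypothetical, not cete's actual code:

```go
// Sketch only: the Restore method of a hashicorp/raft FSM backed by Badger.
// The kvPair wire format and the type names here are hypothetical.
package store

import (
	"bufio"
	"encoding/json"
	"io"

	badger "github.com/dgraph-io/badger/v2"
)

type kvPair struct {
	Key   []byte `json:"key"`
	Value []byte `json:"value"`
}

type fsm struct {
	db *badger.DB
}

// Restore is invoked by Raft on startup whenever a snapshot exists.
// Every snapshotted pair is written again via Set, so each restart
// appends duplicate entries to Badger's value log even if nothing changed.
func (f *fsm) Restore(rc io.ReadCloser) error {
	defer rc.Close()

	scanner := bufio.NewScanner(rc)
	for scanner.Scan() {
		var p kvPair
		if err := json.Unmarshal(scanner.Bytes(), &p); err != nil {
			return err
		}
		// Each Set is a brand-new write: Badger appends it to a vlog file
		// regardless of whether the same key/value is already stored.
		if err := f.db.Update(func(txn *badger.Txn) error {
			return txn.Set(p.Key, p.Value)
		}); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```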

TL;DR: every time the server is restarted, all KV pairs are replayed into Badger, causing a massive increase in the size of the database and eventually leading to a full disk.

Please note that while KV pairs are being replayed, the garbage collector does not help. The replay also causes massive consumption of resources (CPU, RAM, I/O) at startup time. The situation is even worse in a Kubernetes environment, where probes may kill the process if it takes too long to start, causing the issue to grow exponentially.

The three options I could think of to solve this issue are the following (a sketch of the first two follows the list):

  • Snapshot restore is disabled on start via config.NoSnapshotRestoreOnStart = true but can still be executed manually in order to recover from disasters (this is what we use, since we are running on a single node)
  • Badger is cleaned completely at startup via db.DropAll() and the snapshot is used to re-populate the database (RAM, CPU, I/O intensive)
  • Snapshots carry an index, and only the records whose index is greater than what is already available in Badger are replayed (aka incremental snapshots)
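
For concreteness, here is a minimal sketch of the first two options, assuming hashicorp/raft and Badger v2; the function names are mine, only config.NoSnapshotRestoreOnStart and db.DropAll() come from the libraries themselves:

```go
// Sketch only: the first two options applied at startup.
package store

import (
	badger "github.com/dgraph-io/badger/v2"
	"github.com/hashicorp/raft"
)

// Option 1: skip the automatic snapshot restore on start. A snapshot can
// still be restored manually when recovering from a disaster.
func newRaftConfig() *raft.Config {
	config := raft.DefaultConfig()
	config.NoSnapshotRestoreOnStart = true
	return config
}

// Option 2: wipe Badger before the snapshot is replayed, so the restore
// re-populates an empty database instead of appending duplicates to the
// value log (at the cost of extra RAM, CPU, and I/O on every start).
func resetBeforeRestore(db *badger.DB) error {
	return db.DropAll()
}
```

The third option would additionally require persisting the last applied Raft index alongside the data, so it is not sketched here.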