
Question about ephemeral clone with max_open_files !=-1 #341

Open
jecaro opened this issue Oct 2, 2024 · 0 comments
jecaro commented Oct 2, 2024

I'm trying to implement a read-only replica using the ephemeral clone feature. This comment essentially describes what I do; the only difference is that I don't have a destination bucket, but otherwise the setup is the same.
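For reference, here is a minimal sketch of the kind of setup I mean, loosely based on the rocksdb-cloud examples. Exact class and function names vary between rocksdb-cloud versions (older releases use `CloudEnv`, newer ones `CloudFileSystem`), and the bucket name and paths below are illustrative:

```cpp
// Hedged sketch of opening an ephemeral clone with rocksdb-cloud.
// API names follow the older CloudEnv-style examples and may differ
// in your version; "my-src-bucket" and "/data" are placeholders.
#include "rocksdb/cloud/db_cloud.h"

int main() {
  rocksdb::CloudEnvOptions cloud_env_options;
  // Only a source bucket is configured. With no destination bucket,
  // the database is an ephemeral clone: local changes are never
  // uploaded back to cloud storage.
  cloud_env_options.src_bucket.SetBucketName("my-src-bucket");
  cloud_env_options.src_bucket.SetObjectPath("/data");

  rocksdb::CloudEnv* cenv = nullptr;
  rocksdb::Status s = rocksdb::CloudEnv::NewAwsEnv(
      rocksdb::Env::Default(), cloud_env_options,
      nullptr /* info_log */, &cenv);
  if (!s.ok()) return 1;

  rocksdb::Options options;
  options.env = cenv;
  // Bounded table cache instead of -1 (unlimited). Combined with the
  // missing destination bucket, this is the configuration that produces
  // the "sst files ... are not copied into local dir" log line.
  options.max_open_files = 16000;

  rocksdb::DBCloud* db = nullptr;
  s = rocksdb::DBCloud::Open(options, "/data",
                             "" /* persistent_cache_path */,
                             0 /* persistent_cache_size_gb */, &db);
  if (!s.ok()) return 1;

  // ... serve read-only traffic from the clone ...
  delete db;
  return 0;
}
```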

Looking at my log I can see:

2024/10/02-09:40:36.036061 7fcdaac78000 [cloud_env_impl] SanitizeDirectory info.   No destination bucket specified and options.max_open_files != -1  so sst files from src bucket /data are not copied into local dir /data at startup

We set a limit for this option (max_open_files = 16000) because we saw high memory consumption with -1. However, this log line raises some questions:

  • When using a value other than -1, not all files are copied when we open the DB. But some of them are, and all of them will be eventually, right?
  • How does that work? Is there some background job doing it? I couldn't find this in the code. If someone can point me to the right place, that'd be helpful 🙏
  • When using -1, are all the files copied before the DB is ready to serve requests? If so, opening a DB with a large amount of data will take a very long time, right?

Thanks in advance for any response, remark, or feedback.
