

cache-large-write doesn't work occasionally #5577

Closed
zhoucheng361 opened this issue Jan 18, 2025 · 2 comments
@zhoucheng361 Contributor

What happened:
https://github.com/juicedata/juicefs/actions/runs/12847215619/job/35823266995
Test script:

./juicefs mount $META_URL /tmp/jfs -d --cache-large-write 
dd if=/dev/zero of=/tmp/jfs/test1 bs=1M count=200
./juicefs warmup /tmp/jfs/test1 --check 2>&1 | tee warmup.log

Log:

Start Test: test_cache_large_write
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        73G   57G   16G  79% /
umount_jfs /tmp/jfs sqlite3://test.db
-r-------- 1 root root 2515 Jan 18 20:35 /tmp/jfs/.config
umount is /tmp/jfs, pids is 19380
wait mount process 19380 exit in 60 seconds
1, mount process count is 1
wait mount process to be killed...
2, mount process count is 0
mount process is killed
start flush meta: sqlite3://test.db
remove meta file test.db succeed
flush meta succeed
Added `myminio` successfully.
mc: <ERROR> Failed to remove `myminio/test` recursively. The specified bucket does not exist
2025/01/18 20:36:08.678539 juicefs[20580] <INFO>: Meta address: sqlite3://test.db [[email protected]:515]
2025/01/18 20:36:08.679327 juicefs[20580] <INFO>: Data use file:///var/jfs/myjfs/ [[email protected]:509]
2025/01/18 20:36:08.730135 juicefs[20580] <INFO>: Volume is formatted as {
  "Name": "myjfs",
  "UUID": "7e3bd777-ceab-4173-9439-4431ea2bc4f1",
  "Storage": "file",
  "Bucket": "/var/jfs/",
  "BlockSize": 4096,
  "Compression": "none",
  "EncryptAlgo": "aes256gcm-rsa",
  "TrashDays": 1,
  "MetaVersion": 1,
  "MinClientVersion": "1.1.0-A",
  "DirStats": true,
  "EnableACL": false
} [[email protected]:546]
2025/01/18 20:36:09.457209 juicefs[20588] <INFO>: Meta address: sqlite3://test.db [[email protected]:515]
2025/01/18 20:36:09.458898 juicefs[20588] <INFO>: Data use file:///var/jfs/myjfs/ [[email protected]:640]
2025/01/18 20:36:10.961922 juicefs[20588] <INFO>: OK, myjfs is ready at /tmp/jfs [checkMountpoint@mount_unix.go:225]
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.22063 s, 951 MB/s
..2025/01/18 20:36:12.184854 juicefs[20632] <INFO>: check cache: 1 files checked, 0 B of 200 MiB (0.0%) cached [[email protected]:302]
2025/01/18 20:36:12.918966 juicefs[20645] <INFO>: Meta address: sqlite3://test.db [[email protected]:515]
2025/01/18 20:36:12.920694 juicefs[20645] <INFO>: Data use file:///var/jfs/myjfs/ [[email protected]:640]
2025/01/18 20:36:13.421702 juicefs[20645] <INFO>: OK, myjfs is ready at /tmp/jfs [checkMountpoint@mount_unix.go:225]
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.212155 s, 988 MB/s
2025/01/18 20:36:15.291544 juicefs[20674] <INFO>: check cache: 1 files checked, 0 B of 200 MiB (0.0%) cached [[email protected]:302]
cache ratio(0) should be more than expected_ratio(90) after warmup
test_cache_large_write exit with 1
Test Failed: test_cache_large_write(.github/scripts/cache.sh) in 10 seconds

What you expected to happen:
The cached size should not be 0 after writing a 200 MiB file when the filesystem is mounted with the --cache-large-write option.
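A minimal sketch (not the actual CI script) of how the cache ratio could be asserted after warmup: parse the percentage out of the "check cache" log line and compare it against the expected threshold. The log line is copied from the failing run above; the 90% threshold matches the error message in the log.

```shell
#!/bin/sh
# Sample "check cache" line from the failing run above.
log_line='2025/01/18 20:36:15.291544 juicefs[20674] <INFO>: check cache: 1 files checked, 0 B of 200 MiB (0.0%) cached'
expected_ratio=90

# Extract "0.0" from the "(0.0%)" token.
ratio=$(printf '%s\n' "$log_line" | grep -oE '\([0-9.]+%\)' | tr -d '(%)')

# awk handles the floating-point comparison; exit 0 means ratio >= threshold.
if awk -v r="$ratio" -v e="$expected_ratio" 'BEGIN { exit (r >= e) ? 0 : 1 }'; then
    echo "cache ratio($ratio) ok"
else
    echo "cache ratio($ratio) should be more than expected_ratio($expected_ratio) after warmup"
fi
```

On the log line from this run, the extracted ratio is 0.0, so the failure branch fires, matching the error reported by the test.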

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?

Environment:

  • JuiceFS version (use juicefs --version) or Hadoop Java SDK version:
  • Cloud provider or hardware configuration running JuiceFS:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Object storage (cloud provider and region, or self maintained):
  • Metadata engine info (version, cloud provider managed or self maintained):
  • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
  • Others:
@zhoucheng361 zhoucheng361 added the kind/bug Something isn't working label Jan 18, 2025
@jiefenghuang jiefenghuang self-assigned this Jan 21, 2025
@jiefenghuang jiefenghuang removed the kind/bug Something isn't working label Jan 21, 2025
@jiefenghuang Contributor

Increasing the buffer size may solve this.
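A hedged sketch of that suggestion, not a verified fix: JuiceFS mount accepts a --buffer-size option (in MiB, default 300) that sizes the in-memory read/write buffer. If the buffer is being exhausted by the large sequential write, remounting with a larger buffer before rerunning the test might let --cache-large-write keep the blocks. The value 1024 is illustrative.

```shell
# Unmount, then remount with a larger buffer (1024 MiB is an illustrative value).
./juicefs umount /tmp/jfs
./juicefs mount $META_URL /tmp/jfs -d --cache-large-write --buffer-size 1024

# Rerun the failing sequence from the test script.
dd if=/dev/zero of=/tmp/jfs/test1 bs=1M count=200
./juicefs warmup /tmp/jfs/test1 --check 2>&1 | tee warmup.log
```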

@jiefenghuang Contributor

Caused by #5619.
