
MacOS: Context leak detected, msgtracer returned -1 #23

Open
altunenes opened this issue Oct 19, 2024 · 11 comments
Labels
bug Something isn't working

Comments

@altunenes

Hello! Thank you for this work again! I tried to run the example at https://github.com/thewh1teagle/sherpa-rs/blob/main/examples/diarize.rs exactly as written, with:

```rust
let segment_model_path = "model.onnx"; // latest seg. model from sherpa...
let embedding_model_path = "wespeaker_en_voxceleb_CAM++.onnx";
let wav_path = "normalized_audio.wav";
```

```
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
🗣️ Diarizing... 0% 🎯
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
🗣️ Diarizing... 0% 🎯
🗣️ Diarizing... 0% 🎯
🗣️ Diarizing... 1% 🎯
🗣️ Diarizing... 1% 🎯
🗣️ Diarizing... 1% 🎯
🗣️ Diarizing... 1% 🎯
🗣️ Diarizing... 2% 🎯
🗣️ Diarizing... 2% 🎯
```

The code works without any problems. However, I wanted to bring this warning message to your attention.
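Not a fix from the thread, just a possible workaround while the warning persists: the message goes to stderr, so it can be filtered out of the combined output (a sketch assuming a POSIX shell; the message text is taken verbatim from the log above):

```shell
# Merge stderr into stdout, then drop the CoreML warning lines.
cargo run --example diarize 2>&1 | grep -v 'Context leak detected'
```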

@thewh1teagle
Owner

@altunenes
I can't reproduce on Windows 11.
Does it happen with the latest version of sherpa-rs?

@altunenes
Author

Yes, it still persists. I also see it with pyannote, and it may be a whisper-rs-related thing too.

I've never seen it on Windows either. It's like a "warning" message that only appears on macOS, and its origin is very difficult to trace.

Note: since it doesn't prevent the code from working and doesn't cause a major "leak" (at least as far as I can see), I didn't worry about it too much, but I wanted to let you know anyway. :-)

@feelingsonice

Same here on macOS, except it happens when running the tts example using Piper English (commands from the first comment).

Something notable: the first run gave me "Context leak detected, msgtracer returned -1" followed by "Segmentation fault". I ran it again without changing anything and it produced the proper audio.wav file instead of the segmentation fault.

@thewh1teagle
Owner

thewh1teagle commented Dec 18, 2024

> Something notable: the first run gave me "Context leak detected, msgtracer returned -1" followed by "Segmentation fault". I ran it again without changing anything and it produced the proper audio.wav file instead of the segmentation fault.

Try setting the provider to CPU with `--provider cpu`.
The error should happen only when CoreML is enabled.
By the way, I created another great Rust library especially for TTS and it works great: piper-rs

@feelingsonice

```shell
$ cargo run --example tts --features="tts" -- --provider cpu --text 'liliana, the most beautiful and lovely assistant of our team!' --output audio.wav --tokens "vits-piper-en_US-amy-low/tokens.txt" --model "vits-piper-en_US-amy-low/en_US-amy-low.onnx" --data-dir "vits-piper-en_US-amy-low/espeak-ng-data"

    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.10s
     Running `target/debug/examples/tts --provider cpu --text 'liliana, the most beautiful and lovely assistant of our team'\!'' --output audio.wav --tokens vits-piper-en_US-amy-low/tokens.txt --model vits-piper-en_US-amy-low/en_US-amy-low.onnx --data-dir vits-piper-en_US-amy-low/espeak-ng-data`
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
Context leak detected, msgtracer returned -1
Created audio.wav
```

Same thing as the second run unless I'm doing something wrong?

@thewh1teagle
Owner

> Same thing as the second run unless I'm doing something wrong?

The provider argument wasn't actually used in the example, so it still used CoreML. I've fixed it now, so you can `git pull` and run again.
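For illustration only (this is not the actual patch, and the names here are hypothetical): the kind of fix described above amounts to reading an optional `--provider` flag from the command line and forwarding it into the engine config instead of a hard-coded provider. A minimal self-contained sketch:

```rust
// Hypothetical sketch: parse an optional `--provider <name>` flag,
// defaulting to "cpu". In the real example, the returned string would be
// passed into the sherpa-rs TTS config rather than printed.
fn parse_provider(args: &[String]) -> String {
    args.iter()
        .position(|a| a == "--provider")
        .and_then(|i| args.get(i + 1))
        .cloned()
        .unwrap_or_else(|| "cpu".to_string())
}

fn main() {
    // Skip the binary name; inspect only the user-supplied arguments.
    let args: Vec<String> = std::env::args().skip(1).collect();
    println!("provider = {}", parse_provider(&args));
}
```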

@feelingsonice

Yeah that worked :)

@thewh1teagle
Owner

> Yeah that worked :)

Great! I also noticed that it runs faster on CPU on an M1 Mac. Weird.

@feelingsonice

Yeah, I noticed it too. It was faster for the whisper example as well. Maybe it's because the models are small, and offloading to the GPU introduces a lot of overhead?

@thewh1teagle
Owner

> Yeah, I noticed it too. It was faster for the whisper example as well. Maybe it's because the models are small, and offloading to the GPU introduces a lot of overhead?

I think it's related to the onnxruntime operators. The platform claims to "support" many backends, but in reality they didn't run fast for me (e.g. CoreML, DirectML).

@altunenes
Author

I don't get any warning message on Windows 11, with either CUDA or CPU.
