(This is no longer just a command line wrapper...) Now it's BARK INFINITY!
Bark Infinity started as a humble command line wrapper, a CLI. Built from simple keyword commands, it was a proof of concept, a glimmer of potential.
Bark Infinity evolved, expanding across dimensions. Infinite Length, Infinite Voices, and a true high point in human history: Infinite Awkwardness.
But for some people, the time-tested command line interface was not a good fit. Many couldn't even try Bark, struggling with the CUDA gods and being left with cryptic error messages and a chaotic computer. Many people felt very… UN INFINITE.
Bark Infinity was born in the command line, and Bark Infinity grew within the command line. We live in an era where old-fashioned command line applications are wrapped in fancy Gradio UIs and One-Click Installers. We all must adapt to a changing world, right?
Or do we?
Is this solution an abomination? Or is it actually a neat compromise between ease of use and power when you need it? Is this just a horrible shortcut because the programmer had never used Gradio before and didn't want to add menus for a billion parameters that rarely get used? FIND OUT THIS WEEK. At the very least, the loaded Bark model stays in memory, so there's very little lag between generations. (Sorry, I ran out of emojis for the last few paragraphs.)
For real though, I also tested a One-Click installer. It successfully installed ONE TIME on my other computer. And as we all know, 1 is infinitely more than 0. So:
- Ramshackle Gradio App
- One-Click installer with a 100% success rate so far.
I'll probably post to a dev branch Tuesday night and try to find a few brave volunteers who can test the one-click installer and confirm it doesn't eat all their files.
I did take a quick look at 'voice cloning' to see if it's worth building in, because a lot of people ask. Maybe I'm missing something, but I couldn't get a single good example better than what I could get in a few minutes in Tortoise TTS.
Discover cool new voices and reuse them: performers, musicians, sound effects, two-party dialog scenes. Save and share them. Every audio clip saves a speaker .npz file with the voice. To reuse a voice, move the generated speaker .npz file (named the same as the .wav file) to the "prompts" directory inside "bark" where all the other .npz files are.
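A minimal sketch of that reuse step in Python (the clip name here is hypothetical, and the exact prompts path depends on your checkout; in the upstream repo layout the preset .npz files live under bark/assets/prompts):

```python
import shutil
from pathlib import Path

# A speaker .npz written next to a generated .wav from an earlier run
# (the name is just an example).
generated_voice = Path("bark_samples/my_favorite_clip.npz")

# The "prompts" directory inside "bark" described above; adjust if your
# checkout keeps the preset .npz files under bark/assets/prompts instead.
prompts_dir = Path("bark") / "prompts"

# Copy it in under a memorable name, then reuse it on the next run with:
#   --history_prompt "my_custom_voice"
shutil.copy(generated_voice, prompts_dir / "my_custom_voice.npz")
```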
With random celebrity appearances!
(I accidentally left a bunch of voices in the repo, and some of them are pretty good. Use --history_prompt 'en_fiery' for the same voice as the audio sample right after this sentence.)
whoami.mp4
Any length prompt and audio clips. Sometimes the final result is seamless, sometimes it's stable (but usually not both!).
Now with Slowly Morphing Rick Rolls! Can you even spot the seams in the most earnest Rick Rolls you've ever heard in your life?
but_are_we_strangers_to_love_really.mp4
Can your text-to-speech model stammer and stall like a student answering a question about a book they didn't read? Bark can. That's the human touch. The semantic touch. You can almost feel the awkward silence through the screen.
Are you tired of telling your TTS model what to say? Why not take a break and let your TTS model do the work for you? With enough patience and Confused Travolta Mode, Bark can finish your jokes for you.
almost_a_real_joke.mp4
Truly we live in the future. It might take 50 tries to get a joke, and it's probably an accident, but all 49 failures are also very amusing, so it's a win/win. (That's right, I set a single function flag to False in Bark and raved about the amazing new feature. Everything here is small potatoes, really.)
reaching_for_the_words.mp4
BARK INFINITY is possible because Bark is such an amazingly simple and powerful model that even I could poke around easily.
For music, I recommend using --split_by_lines and making sure you use a multiline string as input. You'll generally get better results if you manually split your text, which I neglected to provide an easy way to do because I stayed up too late listening to 100 different Bark versions of a scene from Andor and failed 'Why was 6 afraid of 7' jokes.
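If you prefer to split the text yourself in a script rather than through --split_by_lines, here is a rough sketch using the plain Bark Python API shown later in this README (the speaker preset is just an example, and seamlessness between segments is not guaranteed):

```python
import numpy as np
from scipy.io.wavfile import write as write_wav
from bark import SAMPLE_RATE, generate_audio

lyrics = """\
♪ In the jungle, the mighty jungle ♪
♪ the lion barks tonight ♪
"""

# Generate each non-empty line as its own short clip, reusing one
# history prompt so the voice stays roughly consistent.
segments = [
    generate_audio(line, history_prompt="en_speaker_1")
    for line in lyrics.splitlines()
    if line.strip()
]

# Stitch the clips back together and save the result.
write_wav("song.wav", SAMPLE_RATE, np.concatenate(segments))
```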
--text_prompt Text prompt. If not provided, a set of default prompts will be used defined in this file.
--history_prompt Optional. Choose a speaker from the list of languages. Use --list_speakers to see all available options.
--text_temp Text temperature. Default is 0.7.
--waveform_temp Waveform temperature. Default is 0.7.
--filename Output filename. If not provided, a unique filename will be generated based on the text prompt and other parameters.
--output_dir Output directory. Default is 'bark_samples'.
--list_speakers List all preset speaker options instead of generating audio.
--use_smaller_models Use for GPUs with less than 10GB of memory, or for more speed.
--less_gpu To use the CPU for step 1 (text to semantic tokens) and a bit of final work, even if a GPU is present, to reduce VRAM requirements.
--iterations Number of iterations. Default is 1.
--split_by_words Breaks text_prompt into <14 second audio clips every x words.
--split_by_lines Breaks text_prompt into <14 second audio clips every x lines.
--stable_mode Choppier and not as natural sounding, but much more stable for very long audio files.
--confused_travolta_mode Just for fun. Try it, and you'll understand.
--prompt_file Optional. The path to a file containing the text prompt. Overrides the --text_prompt option if provided.
--prompt_file_separator Optional. The separator used to split the content of the prompt_file into multiple text prompts.
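For example, combining a few of those options into a single run (the file name, speaker, and values are placeholders):

python bark_perform.py --prompt_file "my_scene.txt" --split_by_lines 4 --history_prompt "en_fiery" --output_dir "bark_samples"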
- Clone the Bark repository:
git clone https://github.com/JonathanFly/bark.git
- Install the required package:
pip install soundfile
- Run the example command:
python bark_perform.py --text_prompt "It is a mistake to think you can solve any major problems just with potatoes... or can you? (and the next page, and the next page...)" --split_by_words 35
If you can't get Bark installed, you might try this one-click installer: https://github.com/Fictiverse/bark/releases - but you'll still need to clone or copy all the files in this specific bark repo into the bark directory because I don't know what I'm doing.
I haven't posted much lately, but I've dipped my toes back into Twitter a bit: twitter.com/jonathanfly
Original Bark README:
Examples | Model Card | Playground Waitlist
Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference.
from bark import SAMPLE_RATE, generate_audio
from IPython.display import Audio
text_prompt = """
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
"""
audio_array = generate_audio(text_prompt)
Audio(audio_array, rate=SAMPLE_RATE)
pizza.webm
To save audio_array as a WAV file:
from scipy.io.wavfile import write as write_wav
write_wav("/path/to/audio.wav", SAMPLE_RATE, audio_array)
Bark supports various languages out-of-the-box and automatically determines language from input text. When prompted with code-switched text, Bark will even attempt to employ the native accent for the respective languages in the same voice.
text_prompt = """
Buenos dΓas Miguel. Tu colega piensa que tu alemΓ‘n es extremadamente malo.
But I suppose your english isn't terrible.
"""
audio_array = generate_audio(text_prompt)
miguel.webm
Bark can generate all types of audio, and, in principle, doesn't see a difference between speech and music. Sometimes Bark chooses to generate text as music, but you can help it out by adding music notes around your lyrics.
text_prompt = """
♪ In the jungle, the mighty jungle, the lion barks tonight ♪
"""
audio_array = generate_audio(text_prompt)
lion.webm
Bark has the capability to fully clone voices - including tone, pitch, emotion and prosody. The model also attempts to preserve music, ambient noise, etc. from input audio. However, to mitigate misuse of this technology, we limit the audio history prompts to a limited set of Suno-provided, fully synthetic options to choose from for each language. Specify following the pattern: {lang_code}_speaker_{number}.
text_prompt = """
I have a silky smooth voice, and today I will tell you about
the exercise regimen of the common sloth.
"""
audio_array = generate_audio(text_prompt, history_prompt="en_speaker_1")
sloth.webm
Note: since Bark recognizes languages automatically from input text, it is possible to use, for example, a German history prompt with English text. This usually leads to English audio with a German accent.
You can provide certain speaker prompts such as NARRATOR, MAN, WOMAN, etc. Please note that these are not always respected, especially if a conflicting audio history prompt is given.
text_prompt = """
WOMAN: I would like an oatmilk latte please.
MAN: Wow, that's expensive!
"""
audio_array = generate_audio(text_prompt)
latte.webm
pip install git+https://github.com/suno-ai/bark.git
or
git clone https://github.com/suno-ai/bark
cd bark && pip install .
Bark has been tested and works on both CPU and GPU (pytorch 2.0+, CUDA 11.7 and CUDA 12.0).
Running Bark requires running >100M parameter transformer models.
On modern GPUs and PyTorch nightly, Bark can generate audio in roughly realtime. On older GPUs, default colab, or CPU, inference time might be 10-100x slower.
If you don't have new hardware available or if you want to play with bigger versions of our models, you can also sign up for early access to our model playground here.
Similar to Vall-E and some other amazing work in the field, Bark uses GPT-style models to generate audio from scratch. Different from Vall-E, the initial text prompt is embedded into high-level semantic tokens without the use of phonemes. It can therefore generalize to arbitrary instructions beyond speech that occur in the training data, such as music lyrics, sound effects or other non-speech sounds. A subsequent second model is used to convert the generated semantic tokens into audio codec tokens to generate the full waveform. To enable the community to use Bark via public code we used the fantastic EnCodec codec from Facebook to act as an audio representation.
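That staged design is also visible in the Python API. Here is a rough sketch of running the stages separately, assuming the text_to_semantic and semantic_to_waveform helpers exported by the bark package (generate_audio wraps the same pipeline):

```python
from scipy.io.wavfile import write as write_wav
from bark import SAMPLE_RATE, text_to_semantic, semantic_to_waveform

text_prompt = "Semantic tokens first, then codec tokens, then a waveform."

# Stage 1: text -> high-level semantic tokens (no phonemes involved).
semantic_tokens = text_to_semantic(text_prompt, history_prompt="en_speaker_1")

# Stage 2: semantic tokens -> audio codec tokens -> full waveform (via EnCodec).
audio_array = semantic_to_waveform(semantic_tokens, history_prompt="en_speaker_1")

write_wav("staged_example.wav", SAMPLE_RATE, audio_array)
```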
Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know if you find patterns that work particularly well on Discord!
[laughter]
[laughs]
[sighs]
[music]
[gasps]
[clears throat]
— or ... for hesitations
♪ for song lyrics
capitalization for emphasis of a word
MAN/WOMAN: for bias towards speaker
Supported Languages
| Language | Status |
| --- | --- |
| English (en) | ✅ |
| German (de) | ✅ |
| Spanish (es) | ✅ |
| French (fr) | ✅ |
| Hindi (hi) | ✅ |
| Italian (it) | ✅ |
| Japanese (ja) | ✅ |
| Korean (ko) | ✅ |
| Polish (pl) | ✅ |
| Portuguese (pt) | ✅ |
| Russian (ru) | ✅ |
| Turkish (tr) | ✅ |
| Chinese, simplified (zh) | ✅ |
| Arabic | Coming soon! |
| Bengali | Coming soon! |
| Telugu | Coming soon! |
- nanoGPT for a dead-simple and blazing fast implementation of GPT-style models
- EnCodec for a state-of-the-art implementation of a fantastic audio codec
- AudioLM for very related training and inference code
- Vall-E, AudioLM and many other ground-breaking papers that enabled the development of Bark
Bark is licensed under a non-commercial license: CC-BY 4.0 NC. The Suno models themselves may be used commercially. However, this version of Bark uses EnCodec as a neural codec backend, which is licensed under a non-commercial license.
Please contact us at [email protected] if you need access to a larger version of the model and/or a version of the model you can use commercially.
We're developing a playground for our models, including Bark.
If you are interested, you can sign up for early access here.