🐶 Bark


Examples | Model Card

Bark is a transformer-based text-to-audio model created by Suno. It can generate highly realistic, multilingual speech as well as other audio, including music and background noise, and nonverbal expressions such as laughing, sighing, and crying. To support the community, we provide access to pretrained model checkpoints ready for inference.

🤖 Usage


from bark import SAMPLE_RATE, generate_audio
from IPython.display import Audio

text_prompt = """
     Hello, my name is Suno. And, uh — and I like pizza. [laughs] 
     But I also have other interests such as playing tic tac toe.
"""
audio_array = generate_audio(text_prompt)
Audio(audio_array, rate=SAMPLE_RATE)
pizza.webm
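
If you are not running in a notebook, you can write the generated array to a WAV file instead of playing it inline. Below is a minimal sketch; it assumes scipy is installed, and the output filename is arbitrary.

from bark import SAMPLE_RATE, generate_audio
from scipy.io.wavfile import write as write_wav

# generate audio from text and save it to disk
audio_array = generate_audio("Hello, my name is Suno.")
write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)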

🌎 Foreign Language

Bark supports various languages out of the box and automatically determines the language from the input text. When prompted with code-switched text, Bark will even keep the same voice and apply the appropriate accent.

text_prompt = """
    Buenos días Miguel. Tu colega piensa que tu alemán es extremadamente malo. 
    But I suppose your english isn't terrible.
"""
audio_array = generate_audio(text_prompt)
miguel.webm

🎶 Music

Bark can generate all types of audio, and in principle doesn't see a difference between speech and music. Sometimes it chooses to generate text as music, but you can help it out by adding notes around your lyrics.

text_prompt = """
    ♪ In the jungle, the mighty jungle, the lion barks tonight ♪
"""
audio_array = generate_audio(text_prompt)
lion.webm

👥 Speaker Prompts

You can provide certain speaker prompts such as NARRATOR, MAN, WOMAN, etc. (Note that these are not always respected, especially if a conflicting audio history prompt is given.)

text_prompt = """
    WOMAN: I would like an oatmilk latte please.
    MAN: Wow, that's expensive!
"""
audio_array = generate_audio(text_prompt)
latte.webm

🎤 Voice/Audio Cloning

Bark can fully clone voices, as well as pick up music, ambience, etc. from input clips. However, to avoid misuse of this technology, we restrict audio history prompts to a limited set of Suno-provided, fully synthetic options to choose from.

text_prompt = """
    I have a silky smooth voice, and today I will tell you about 
    the exercise regimen of the common sloth.
"""
audio_array = generate_audio(text_prompt, history_prompt="speech_0")
sloth.webm
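
To audition several of the provided synthetic voices, you can loop over history prompt names and save each result. This is a rough sketch: it assumes scipy is installed, and only "speech_0" is taken from the example above; the other prompt names are placeholders, so check the package's assets for the actual list.

from bark import SAMPLE_RATE, generate_audio
from scipy.io.wavfile import write as write_wav

text_prompt = "I have a silky smooth voice."
# "speech_0" comes from the example above; the other names are placeholders
for name in ["speech_0", "speech_1", "speech_2"]:
    audio_array = generate_audio(text_prompt, history_prompt=name)
    write_wav(f"{name}.wav", SAMPLE_RATE, audio_array)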

💻 Installation

pip install git+https://github.com/suno-ai/bark.git

or

git clone https://github.com/suno-ai/bark
cd bark && pip install . 
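
After installing, you can download and cache the model checkpoints ahead of time so the first call to generate_audio does not block on downloads. A small sketch using bark's preload_models helper (its arguments and exact behavior may differ between versions):

from bark import preload_models

# download and cache all model checkpoints used for inference
preload_models()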

🛠️ Hardware and Inference Speed

Bark has been tested and works on both CPU and GPU (PyTorch 2.0+, CUDA 11.7 and CUDA 12.0). Running Bark means running transformer models with more than 100M parameters. On modern GPUs with PyTorch nightly, Bark can generate audio in roughly real time. On older GPUs, the default Colab runtime, or CPU, inference can be 10-100x slower.
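
To check up front whether inference will run on the GPU or fall back to the CPU, you can query PyTorch before generating. A minimal sketch:

import torch

# if no CUDA device is visible to PyTorch, the models will run on CPU
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; expect significantly slower inference on CPU")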

If you don't have new hardware available or if you want to play with bigger versions of our models, you can also sign up for early access to our Studio here.

⚙️ Details

Similar to Vall-E and some other amazing work in the field, Bark uses GPT-style models to generate audio from scratch. Unlike Vall-E, the initial text prompt is embedded into high-level semantic tokens without the use of phonemes. Bark can therefore generalize to arbitrary instructions beyond speech that occur in the training data, such as music lyrics, sound effects, or other non-speech sounds. A second model then converts the generated semantic tokens into audio codec tokens to produce the full waveform. To enable the community to use Bark via public code, we use the fantastic EnCodec codec from Facebook as the audio representation.
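
For illustration, the two stages can be invoked separately. The function names below mirror helpers exposed by the bark package in some releases; treat them as an assumption and check this repository's code for the exact interface.

from bark import text_to_semantic, semantic_to_waveform  # assumed public helpers

text_prompt = "Hello, my name is Suno."
# stage 1: text prompt -> high-level semantic tokens (no phonemes involved)
semantic_tokens = text_to_semantic(text_prompt)
# stage 2: semantic tokens -> EnCodec codec tokens -> full waveform
audio_array = semantic_to_waveform(semantic_tokens)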

Below is a list of some known non-speech sounds, but we are finding more every day; a combined example follows the list. Please let us know on Discord if you find patterns that work particularly well!

  • [laughter]
  • [laughs]
  • [sighs]
  • [music]
  • [gasps]
  • [clears throat]
  • — or ... for hesitations
  • ♪ for song lyrics
  • capitalization for emphasis of a word
  • MAN/WOMAN: for bias towards speaker
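
As a hedged illustration, several of these cues can be combined in a single prompt; how strongly each cue is rendered varies between generations.

from bark import SAMPLE_RATE, generate_audio
from IPython.display import Audio

text_prompt = """
    MAN: I REALLY wasn't expecting that... [laughs]
    WOMAN: [sighs] You never are.
"""
audio_array = generate_audio(text_prompt)
Audio(audio_array, rate=SAMPLE_RATE)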

Supported Languages

Language              Status
Chinese (Mandarin)    Supported
English               Supported
French                Supported
German                Supported
Hindi                 Supported
Italian               Supported
Japanese              Supported
Korean                Supported
Polish                Supported
Portuguese            Supported
Russian               Supported
Spanish               Supported
Turkish               Supported
Arabic                Coming soon!
Bengali               Coming soon!
Telugu                Coming soon!

🙏 Appreciation

  • nanoGPT for a dead-simple and blazing-fast implementation of GPT-style models
  • EnCodec for a state-of-the-art implementation of a fantastic audio codec
  • AudioLM for very related training and inference code
  • Vall-E, AudioLM and many other ground-breaking papers that enabled the development of Bark

© License

Bark is licensed under a non-commercial license: CC-BY 4.0 NC. The Suno models themselves may be used commercially. However, this version of Bark uses EnCodec as its neural codec backend, and EnCodec is released under a non-commercial license.

Please contact us at [email protected] if you need access to a larger version of the model and/or a version of the model you can use commercially.

📱 Community

🎧 Suno Studio (Early Access)

We’re developing a web interface for our models, including Bark.

You can sign up for early access here.
