# Whisper
[Blog] [Paper] [Model card] [Colab example]
Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitask model that can perform multilingual speech recognition, speech translation, and language identification.
## Approach
A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
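As a concrete illustration of the multitask format, the tokenizer in this codebase exposes the special-token sequence that conditions the decoder. A minimal sketch (assuming the `whisper` package is installed; `get_tokenizer` and `sot_sequence` come from this repository's `tokenizer.py`):

```python
from whisper.tokenizer import get_tokenizer

# Build a multilingual tokenizer configured for English transcription.
tokenizer = get_tokenizer(multilingual=True, language="en", task="transcribe")

# sot_sequence holds the token IDs for
# <|startoftranscript|><|en|><|transcribe|>, the special tokens that
# tell the decoder which task to perform and in which language.
print(tokenizer.sot_sequence)
```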
## Setup
We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.10 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files. You can download and install (or update to) the latest release of Whisper with the following command:
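```bash
pip install -U openai-whisper
```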
Alternatively, the following command will pull and install the latest commit from this repository, along with its Python dependencies:
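```bash
pip install git+https://github.com/openai/whisper.git
```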
To update the package to the latest version of this repository, please run:
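```bash
pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git
```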
It also requires the command-line tool `ffmpeg` to be installed on your system, which is available from most package managers:
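```bash
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
```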
You may need `rust` installed as well, in case tokenizers does not provide a pre-built wheel for your platform. If you see installation errors during the `pip install` command above, please follow the Getting started page to install the Rust development environment. Additionally, you may need to configure the `PATH` environment variable, e.g. `export PATH="$HOME/.cargo/bin:$PATH"`. If the installation fails with `No module named 'setuptools_rust'`, you need to install `setuptools_rust`, e.g. by running `pip install setuptools-rust`.

## Available models and languages
There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed.
| Size   | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
| tiny   |    39 M    |     `tiny.en`      |       `tiny`       |     ~1 GB     |      ~32x      |
| base   |    74 M    |     `base.en`      |       `base`       |     ~1 GB     |      ~16x      |
| small  |   244 M    |     `small.en`     |      `small`       |     ~2 GB     |      ~6x       |
| medium |   769 M    |    `medium.en`     |      `medium`      |     ~5 GB     |      ~2x       |
| large  |   1550 M   |        N/A         |      `large`       |    ~10 GB     |       1x       |
For English-only applications, the `.en` models tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models.

Whisper's performance varies widely depending on the language. The figure below shows a WER (Word Error Rate) breakdown by language on the Fleurs dataset, using the `large-v2` model; smaller WER is better. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D of the paper.

## Command-line usage
The following command will transcribe speech in audio files, using the `medium` model:
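```bash
whisper audio.flac audio.mp3 audio.wav --model medium
```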
The default setting (which selects the `small` model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the `--language` option:
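```bash
whisper japanese.wav --language Japanese
```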
Adding `--task translate` will translate the speech into English:
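```bash
whisper japanese.wav --language Japanese --task translate
```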
Run the following to view all available options:
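```bash
whisper --help
```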
See tokenizer.py for the list of all available languages.
## Python usage
Transcription can also be performed within Python:
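```python
import whisper

# "audio.mp3" is a placeholder path; substitute your own audio file.
model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
```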
Internally, the `transcribe()` method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.

Below is an example usage of `whisper.detect_language()` and `whisper.decode()`, which provide lower-level access to the model:
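```python
import whisper

model = whisper.load_model("base")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)
```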
## More examples
Please use the 🙌 Show and tell category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.
## License
The code and the model weights of Whisper are released under the MIT License. See LICENSE for further details.