
F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching


Original article




F5-TTS: Diffusion Transformer with ConvNeXt V2, offering faster training and inference.

E2 TTS: Flat-UNet Transformer, the closest reproduction of the paper.

Sway Sampling: an inference-time flow-step sampling strategy that greatly improves performance.

Thanks to all the contributors!

News

Installation

Create a separate environment if needed

# Create a python 3.10 conda env (you could also use virtualenv)
conda create -n f5-tts python=3.10
conda activate f5-tts

Install PyTorch matched to your device

NVIDIA GPU

# Install pytorch with your CUDA version, e.g.
pip install torch==2.4.0+cu124 torchaudio==2.4.0+cu124 --extra-index-url https://download.pytorch.org/whl/cu124

AMD GPU

# Install pytorch with your ROCm version (Linux only), e.g.
pip install torch==2.5.1+rocm6.2 torchaudio==2.5.1+rocm6.2 --extra-index-url https://download.pytorch.org/whl/rocm6.2

Intel GPU

# Install pytorch with your XPU version, e.g.
# Intel® Deep Learning Essentials or Intel® oneAPI Base Toolkit must be installed
pip install torch torchaudio --index-url https://download.pytorch.org/whl/test/xpu

# Intel GPU support is also available through IPEX (Intel® Extension for PyTorch)
# IPEX does not require the Intel® Deep Learning Essentials or Intel® oneAPI Base Toolkit
# See: https://pytorch-extension.intel.com/installation?request=platform

Apple Silicon

# Install the stable pytorch, e.g.
pip install torch torchaudio
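
Whichever backend you installed, it is worth confirming that PyTorch can actually see the accelerator before installing F5-TTS. A minimal sanity-check sketch (the XPU check only exists on recent Intel-enabled builds, hence the guard):

# check_device.py - confirm the installed PyTorch build detects an accelerator
import torch

print("PyTorch", torch.__version__)

if torch.cuda.is_available():                              # NVIDIA CUDA or AMD ROCm builds
    print("GPU:", torch.cuda.get_device_name(0))
elif hasattr(torch, "xpu") and torch.xpu.is_available():   # Intel XPU builds
    print("Intel XPU available")
elif torch.backends.mps.is_available():                    # Apple Silicon
    print("Apple MPS available")
else:
    print("No accelerator found, will fall back to CPU")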

Then you can choose one of the options below:

1. As a pip package (if just for inference)

pip install f5-tts

2. Local editable (if also doing training or finetuning)

git clone https://github.com/SWivid/F5-TTS.git
cd F5-TTS
# git submodule update --init --recursive  # (optional, if use bigvgan as vocoder)
pip install -e .
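
Whichever of the two options you pick, a quick hedged check that the package resolves correctly (the import name f5_tts matches the src/f5_tts layout referenced throughout this page):

# Verify the installation (works for both the pip package and the editable install)
from importlib.metadata import version

import f5_tts  # import name per the repository's src/f5_tts layout

print("f5-tts version:", version("f5-tts"))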

Docker usage is also available

# Build from Dockerfile
docker build -t f5tts:v1 .

# Run from GitHub Container Registry
docker container run --rm -it --gpus=all --mount 'type=volume,source=f5-tts,target=/root/.cache/huggingface/hub/' -p 7860:7860 ghcr.io/swivid/f5-tts:main

# Quickstart if you want to just run the web interface (not CLI)
docker container run --rm -it --gpus=all --mount 'type=volume,source=f5-tts,target=/root/.cache/huggingface/hub/' -p 7860:7860 ghcr.io/swivid/f5-tts:main f5-tts_infer-gradio --host 0.0.0.0

Runtime

Deployment solution with Triton and TensorRT-LLM.

Benchmark Results

Decoding on a single L20 GPU, using 26 different prompt_audio & target_text pairs, 16 NFE.

Model               | Concurrency    | Avg Latency | RTF    | Mode
F5-TTS Base (Vocos) | 2              | 253 ms      | 0.0394 | Client-Server
F5-TTS Base (Vocos) | 1 (Batch_size) | -           | 0.0402 | Offline TRT-LLM
F5-TTS Base (Vocos) | 1 (Batch_size) | -           | 0.1467 | Offline PyTorch

See the detailed instructions for more information.
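
RTF (real-time factor) above is the wall-clock synthesis time divided by the duration of the audio produced, so values well below 1 mean faster-than-real-time generation. A small illustrative sketch of the computation (the synthesize callable and names here are placeholders, not repository code):

# Real-time factor = wall-clock synthesis time / duration of generated audio
import time

def real_time_factor(synthesize, text):
    # `synthesize` is any TTS callable returning (samples, sample_rate)
    start = time.perf_counter()
    samples, sample_rate = synthesize(text)
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    return elapsed / audio_seconds  # e.g. 0.04 means ~25x faster than real time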

Inference

1. Gradio App

Currently supported features:

# Launch a Gradio app (web interface)
f5-tts_infer-gradio

# Specify the port/host
f5-tts_infer-gradio --port 7860 --host 0.0.0.0

# Launch a share link
f5-tts_infer-gradio --share

Example docker compose file for NVIDIA devices

services:
  f5-tts:
    image: ghcr.io/swivid/f5-tts:main
    ports:
      - "7860:7860"
    environment:
      GRADIO_SERVER_PORT: 7860
    entrypoint: ["f5-tts_infer-gradio", "--port", "7860", "--host", "0.0.0.0"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  f5-tts:
    driver: local

2. CLI Inference

# Run with flags
# Leaving --ref_text as "" will have an ASR model transcribe the reference audio (extra GPU memory usage)
f5-tts_infer-cli --model F5TTS_v1_Base \
--ref_audio "provide_prompt_wav_path_here.wav" \
--ref_text "The content, subtitle or transcription of reference audio." \
--gen_text "Some text you want TTS model generate for you."

# Run with default setting. src/f5_tts/infer/examples/basic/basic.toml
f5-tts_infer-cli
# Or with your own .toml file
f5-tts_infer-cli -c custom.toml

# Multi voice. See src/f5_tts/infer/README.md
f5-tts_infer-cli -c src/f5_tts/infer/examples/multi/story.toml
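
Besides the CLI, the package also exposes a Python API for programmatic inference. A hedged sketch, assuming the F5TTS class in f5_tts.api and these keyword names match the installed release (check the f5_tts.api module for the authoritative signature):

# Sketch of programmatic inference; class and argument names are assumptions
# based on the repository's f5_tts.api module, not a verified signature.
from f5_tts.api import F5TTS

tts = F5TTS(model="F5TTS_v1_Base")  # assumed to mirror the CLI's --model flag

wav, sr, spec = tts.infer(
    ref_file="provide_prompt_wav_path_here.wav",
    ref_text="The content, subtitle or transcription of reference audio.",
    gen_text="Some text you want TTS model generate for you.",
    file_wave="out.wav",  # optional path to write the generated audio
)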

Training

1. With Hugging Face Accelerate

Refer to the training & finetuning guidance for best practices.

2. With Gradio App

# Quick start with Gradio web interface
f5-tts_finetune-gradio

Read the training & finetuning guidance for more instructions.

Evaluation

Development

Use pre-commit to ensure code quality (it will run linters and formatters automatically):

pip install pre-commit
pre-commit install

When making a pull request, before each commit, run:

pre-commit run --all-files

Note: Some model components have linting exceptions for E722 to accommodate tensor notation.

Acknowledgements

Citation

If our work and codebase are useful for you, please cite as:

@article{chen-etal-2024-f5tts,
      title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching}, 
      author={Yushen Chen and Zhikang Niu and Ziyang Ma and Keqi Deng and Chunhui Wang and Jian Zhao and Kai Yu and Xie Chen},
      journal={arXiv preprint arXiv:2410.06885},
      year={2024},
}

License

Our code is released under the MIT License. The pre-trained models are licensed under the CC-BY-NC license due to the training data Emilia, which is an in-the-wild dataset. Sorry for any inconvenience this may cause.

