Sonic: Shifting Focus to Global Audio Perception in Portrait Animation
👋 Join our QQ Chat Group
🔥🔥🔥 NEWS
- 2025/01/17: Our online Hugging Face demo is released.
- 2025/01/17: Thank you to NewGenAI for promoting Sonic and creating a Windows-based tutorial on YouTube.
- 2024/12/16: Our online demo is released.
🎥 Demo
| Input | Output | Input | Output |
|---|---|---|---|
| ![]() | anime1.mp4 | ![]() | female_diaosu.mp4 |
| ![]() | hair.mp4 | ![]() | leonnado.mp4 |
For more visual demos, please visit our project page.
🧩 Community Contributions
If you develop or use Sonic in your projects, please let us know.
📑 Updates
- 2025/01/14: Our inference code and weights are released. Stay tuned; we will continue to polish the model.
📜 Requirements
- An NVIDIA GPU with CUDA support is required.
- The model has been tested on a single 32 GB GPU.
- Tested operating system: Linux
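To confirm your machine meets these requirements before downloading anything, a quick check along these lines can help (a minimal sketch using PyTorch's CUDA API; not part of this repo):

```python
import torch

# Verify that a CUDA-capable GPU is visible and report its memory,
# since the model is tested on a single 32 GB GPU.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA GPU detected; Sonic requires an NVIDIA GPU.")

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / 1024**3
print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
if vram_gb < 32:
    print("Warning: less than 32 GB of VRAM; inference may run out of memory.")
```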
🔑 Inference
Installation
- Install PyTorch, then install the remaining dependencies:

```bash
pip3 install -r requirements.txt
```
- All models are stored in `checkpoints` by default, and the file structure is as follows:
```
Sonic
├── checkpoints
│   ├── Sonic
│   │   ├── audio2bucket.pth
│   │   ├── audio2token.pth
│   │   ├── unet.pth
│   ├── stable-video-diffusion-img2vid-xt
│   │   ├── ...
│   ├── whisper-tiny
│   │   ├── ...
│   ├── RIFE
│   │   ├── flownet.pkl
│   ├── yoloface_v5m.pt
├── ...
```
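Once the weights are downloaded (instructions below), a short script can sanity-check that your tree matches this layout. This is a minimal sketch based on the tree above; the SVD and whisper folders contain many files, so only the directories themselves are checked:

```python
from pathlib import Path

# Paths taken from the checkpoint tree above.
ckpt = Path("checkpoints")
expected_files = [
    ckpt / "Sonic" / "audio2bucket.pth",
    ckpt / "Sonic" / "audio2token.pth",
    ckpt / "Sonic" / "unet.pth",
    ckpt / "RIFE" / "flownet.pkl",
    ckpt / "yoloface_v5m.pt",
]
expected_dirs = [
    ckpt / "stable-video-diffusion-img2vid-xt",
    ckpt / "whisper-tiny",
]

missing = [p for p in expected_files if not p.is_file()]
missing += [d for d in expected_dirs if not d.is_dir()]
for p in missing:
    print(f"missing: {p}")
if not missing:
    print("All checkpoints found.")
```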
Download with huggingface-cli by running the following:

```bash
python3 -m pip install "huggingface_hub[cli]"
huggingface-cli download LeonJoe13/Sonic --local-dir checkpoints
huggingface-cli download stabilityai/stable-video-diffusion-img2vid-xt --local-dir checkpoints/stable-video-diffusion-img2vid-xt
huggingface-cli download openai/whisper-tiny --local-dir checkpoints/whisper-tiny
```
Or manually download the pretrained model, svd-xt, and whisper-tiny into `checkpoints/`.
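Alternatively, the same repos can be fetched from Python with `huggingface_hub.snapshot_download`, equivalent to the CLI commands above:

```python
from huggingface_hub import snapshot_download

# Mirrors the huggingface-cli download commands above.
snapshot_download("LeonJoe13/Sonic", local_dir="checkpoints")
snapshot_download("stabilityai/stable-video-diffusion-img2vid-xt",
                  local_dir="checkpoints/stable-video-diffusion-img2vid-xt")
snapshot_download("openai/whisper-tiny", local_dir="checkpoints/whisper-tiny")
```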
Run demo

```bash
python3 demo.py \
  '/path/to/input_image' \
  '/path/to/input_audio' \
  '/path/to/output_video'
```
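To run the demo over several audio clips programmatically, a thin wrapper like the one below can help. This is a hypothetical helper, not part of the repo: it assumes `demo.py` accepts exactly the three positional arguments shown above, and the `examples/` paths are placeholders to replace with your own files.

```python
import subprocess
from pathlib import Path

# Hypothetical batch driver: animate one portrait with every .wav in a folder.
image = Path("examples/portrait.png")   # placeholder input image
audio_dir = Path("examples/audio")      # placeholder folder of .wav clips
out_dir = Path("outputs")
out_dir.mkdir(exist_ok=True)

for audio in sorted(audio_dir.glob("*.wav")):
    out_video = out_dir / f"{audio.stem}.mp4"
    subprocess.run(
        ["python3", "demo.py", str(image), str(audio), str(out_video)],
        check=True,  # stop on the first failure
    )
    print(f"wrote {out_video}")
```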
🔗 Citation
```bibtex
@misc{ji2024sonicshiftingfocusglobal,
  title={Sonic: Shifting Focus to Global Audio Perception in Portrait Animation},
  author={Xiaozhong Ji and Xiaobin Hu and Zhihong Xu and Junwei Zhu and Chuming Lin and Qingdong He and Jiangning Zhang and Donghao Luo and Yi Chen and Qin Lin and Qinglin Lu and Chengjie Wang},
  year={2024},
  eprint={2411.16331},
  archivePrefix={arXiv},
  primaryClass={cs.MM},
  url={https://arxiv.org/abs/2411.16331},
}
```