This repository contains the official implementation of ACDiT, an innovative model combining the strengths of autoregressive modeling and diffusion transformers. ACDiT introduces a flexible blockwise generation mechanism, achieving superior performance in both image and video generation tasks.
ACDiT (Autoregressive Conditional Diffusion Transformer) interpolates between token-wise autoregressive modeling and full-sequence diffusion by introducing a blockwise paradigm. Its inherent advantages include:
- Simultaneously learns the causal interdependence across blocks with autoregressive modeling and the non-causal dependence within blocks with diffusion modeling.
- Operates directly on clean, continuous visual inputs rather than discrete tokens.
- Makes full use of the KV-Cache for flexible autoregressive generation (see the sketch below).
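To make the blockwise paradigm concrete, here is a minimal, hypothetical sketch of the inference loop: each block starts from noise and is denoised as a whole, conditioned on the cached representations of previously generated clean blocks. The `denoiser` stand-in, block counts, and dimensions are illustrative placeholders, not the actual ACDiT API:

```python
import torch

# Illustrative sizes (assumptions, not the repository's configuration).
NUM_BLOCKS, BLOCK, DIM, STEPS = 8, 64, 1152, 50

def denoiser(x_t, t, kv_cache):
    """Placeholder for one ACDiT denoising step, conditioned on the
    cached keys/values of all previously generated clean blocks."""
    return x_t  # identity stand-in

kv_cache = []   # reused across blocks instead of re-encoding the context
blocks = []
for i in range(NUM_BLOCKS):
    x = torch.randn(1, BLOCK, DIM)       # each block starts from pure noise
    for t in reversed(range(STEPS)):     # all tokens in the block are denoised jointly
        x = denoiser(x, t, kv_cache)
    blocks.append(x)
    kv_cache.append(x)  # in the real model, the clean block's K/V are cached
sample = torch.cat(blocks, dim=1)
```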
Figure: the generation process of ACDiT, where pixels in each block are denoised simultaneously, conditioned on previously generated clean contexts. Panels: (a) SCAM for training; (b) inference process; (c) 3D view of ACDiT.
ACDiT is easy to implement: it is as simple as adding a Skip-Causal Attention Mask (SCAM) to the current DiT architecture during training, as shown in (a), where each noised block can only attend to previous clean blocks and to itself. During inference, ACDiT utilizes the KV-Cache for efficient autoregressive generation.
To implement the SCAM with both customization and efficiency, we use FlexAttention, introduced in PyTorch 2.5. The training code will be released soon.
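As a rough illustration of how such a mask can be expressed with FlexAttention, here is a minimal sketch. The sequence layout (all clean blocks followed by their noised counterparts), the block size, and all tensor dimensions are assumptions for illustration, not the repository's actual configuration:

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

# Assumed layout: N clean blocks of BLOCK tokens, then N noised blocks,
# so seq_len = 2 * N * BLOCK. This mirrors the SCAM idea, not the exact code.
N, BLOCK = 4, 64
SEQ_LEN = 2 * N * BLOCK

def skip_causal_mask(b, h, q_idx, kv_idx):
    # Block index within each half of the sequence.
    q_blk = (q_idx % (N * BLOCK)) // BLOCK
    kv_blk = (kv_idx % (N * BLOCK)) // BLOCK
    q_noised = q_idx >= N * BLOCK
    kv_noised = kv_idx >= N * BLOCK
    # Clean tokens attend causally to clean blocks; each noised block
    # attends to strictly earlier clean blocks and to itself.
    clean_to_clean = ~q_noised & ~kv_noised & (kv_blk <= q_blk)
    noised_to_clean = q_noised & ~kv_noised & (kv_blk < q_blk)
    noised_to_self = q_noised & kv_noised & (kv_blk == q_blk)
    return clean_to_clean | noised_to_clean | noised_to_self

block_mask = create_block_mask(skip_causal_mask, B=None, H=None,
                               Q_LEN=SEQ_LEN, KV_LEN=SEQ_LEN, device="cuda")
q = k = v = torch.randn(1, 8, SEQ_LEN, 64, device="cuda")  # assumes a GPU
out = flex_attention(q, k, v, block_mask=block_mask)
```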
We provide model weights for the ACDiT-XL and ACDiT-H variants (image and video) through the download links below.
Model Name | Image | Video |
---|---|---|
ACDiT-XL | ACDiT-XL-img | ACDiT-XL-vid |
ACDiT-H | ACDiT-H-img | ACDiT-H-vid |
To set up the runtime environment for this project, install the required dependencies using the provided requirements.txt file:
pip install -r requirements.txt
After downloading the checkpoints, you can use the following scripts to generate images or videos:
python3 sample_img.py --ckpt ACDiT-H-img.pt
python3 sample_vid.py --ckpt ACDiT-H-vid.pt
Following the evaluation protocol of DiT, we use ADM's evaluation suite to compute FID, Inception Score, and other metrics.
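For reference, a minimal sketch of preparing samples for that suite, assuming the DiT/ADM convention of packing generated images into a single .npz (key `arr_0`, uint8, NHWC) and comparing it against a reference batch; the file names and sample source here are placeholders:

```python
import numpy as np

# Replace with real uint8 HWC images produced by sample_img.py; the
# random arrays below are placeholders so the snippet is self-contained.
samples = [np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
           for _ in range(8)]
np.savez("acdit_samples.npz", arr_0=np.stack(samples))
# Then run ADM's evaluator against a reference batch, e.g.:
#   python evaluator.py VIRTUAL_imagenet256_labeled.npz acdit_samples.npz
```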
This code is mainly built upon the DiT repository.
This project is licensed under the MIT License.
If our work assists your research, feel free to give us a star ⭐ or cite us using:
@article{ACDiT,
  title={ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer},
  author={Hu, Jinyi and Hu, Shengding and Song, Yuxuan and Huang, Yufei and Wang, Mingxuan and Zhou, Hao and Liu, Zhiyuan and Ma, Wei-Ying and Sun, Maosong},
  journal={arXiv preprint arXiv:2412.07720},
  year={2024}
}