
REFace

This repository gives the official implementation of Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models (WACV 2025)

[Figure: example face-swapping results]

Sanoojan Baliah, Qinliang Lin, Shengcai Liao, Xiaodan Liang, and Muhammad Haris Khan

Abstract

Despite promising progress in the face swapping task, realistic swapped images remain elusive, often marred by artifacts, particularly in scenarios involving high pose variation, color differences, and occlusion. To address these issues, we propose a novel approach that better harnesses diffusion models for face swapping by making the following core contributions. (a) We propose to re-frame the face-swapping task as a self-supervised, train-time inpainting problem, enhancing identity transfer while blending with the target image. (b) We introduce multi-step Denoising Diffusion Implicit Model (DDIM) sampling during training, reinforcing identity and perceptual similarities. (c) We introduce CLIP feature disentanglement to extract pose, expression, and lighting information from the target image, improving fidelity. (d) Further, we introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping, with the additional feature of head swapping. Our method can swap hair and even accessories, beyond traditional face swapping. Unlike prior works reliant on multiple off-the-shelf models, ours is a relatively unified approach, and so it is resilient to errors in other off-the-shelf models. Extensive experiments on the FFHQ and CelebA datasets validate the efficacy and robustness of our approach, showcasing high-fidelity, realistic face swapping with minimal inference time. Our code is available at https://github.com/Sanoojan/REFace.
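
The mask shuffling in (d) can be pictured with a short sketch: randomly union a subset of semantic face-region masks to form the inpainting mask, so a single model learns face-, hair-, and head-level swaps. The region names and the code below are our illustrative assumptions, not the repository's implementation:

# Illustrative sketch of the mask shuffling idea (our assumption, not the
# repository's implementation): randomly union a subset of semantic
# face-region masks so one inpainting model sees face-only, hair-inclusive,
# and full-head masks during training.
import random
import numpy as np

def shuffled_inpainting_mask(region_masks):
    """region_masks: dict mapping a hypothetical region name (e.g. 'skin',
    'hair', 'ears') to a binary HxW numpy array."""
    extras = [r for r in region_masks if r != "skin"]
    # Always keep the core face region; add a random subset of the rest.
    chosen = ["skin"] + random.sample(extras, k=random.randint(0, len(extras)))
    mask = np.zeros_like(next(iter(region_masks.values())))
    for name in chosen:
        if name in region_masks:
            mask = np.maximum(mask, region_masks[name])
    return mask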

News

  • 2024-09-10: Released training code.
  • 2024-09-10: Released test benchmark.
  • 2024-09-14: Released checkpoints and other dependencies.

Requirements

A suitable conda environment named REFace can be created and activated with:

conda env create -f environment.yaml
conda activate REFace

Pretrained model

Download our trained model here.

Other dependencies

Download the following models from the provided links and place them in the corresponding paths to perform face swapping and quantitative evaluation.

Face parsing model (segmentation)

Other_dependencies/face_parsing/79999_iter.pth

Arcface ID retrieval model

Other_dependencies/arcface/model_ir_se50.pth

Landmark detection model

Other_dependencies/DLIB_landmark_det/shape_predictor_68_face_landmarks.dat

Expression model (For quantitative analysis only)

Other_dependencies/face_recon/epoch_latest.pth
eval_tool/Deep3DFaceRecon_pytorch_edit/BFM/*.mat

Pose model (For quantitative analysis only)

Other_dependencies/Hopenet_pose/hopenet_robust_alpha1.pkl
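
Before running anything, it can help to confirm that the files above landed in the right places. A small check like this (our convenience script, not part of the repository) does it:

# Our convenience sketch (not part of the repo): verify the downloaded
# dependency files sit at the paths listed above.
from pathlib import Path

expected = [
    "Other_dependencies/face_parsing/79999_iter.pth",
    "Other_dependencies/arcface/model_ir_se50.pth",
    "Other_dependencies/DLIB_landmark_det/shape_predictor_68_face_landmarks.dat",
    "Other_dependencies/face_recon/epoch_latest.pth",
    "Other_dependencies/Hopenet_pose/hopenet_robust_alpha1.pkl",
]

missing = [p for p in expected if not Path(p).is_file()]
# The BFM files are a glob, so check that the directory holds at least one .mat.
if not list(Path("eval_tool/Deep3DFaceRecon_pytorch_edit/BFM").glob("*.mat")):
    missing.append("eval_tool/Deep3DFaceRecon_pytorch_edit/BFM/*.mat")

print("All dependencies found." if not missing else f"Missing: {missing}")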

Testing

To test our model on a dataset with facial masks (see Data preparation under Training), you can use scripts/inference_test_bench.py. For example,

CUDA_VISIBLE_DEVICES=${device} python scripts/inference_test_bench.py \
    --outdir "${Results_dir}" \
    --config "${CONFIG}" \
    --ckpt "${CKPT}" \
    --scale 3.5 \
    --n_samples 10 \
    --device_ID ${device} \
    --dataset "CelebA" \
    --ddim_steps 50

or simply run:

sh inference_test_bench.sh

To perform face swapping on a chosen folder of source and target images, run:

sh inference_selected.sh
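
If you prefer driving the test bench from Python rather than the shell scripts, a thin wrapper that simply forwards the flags shown above works. This is our convenience sketch, not a repo API:

# Our convenience sketch (not a repo API): invoke the test bench from Python
# by forwarding the same flags as the shell example above.
import os
import subprocess

def run_test_bench(device, outdir, config, ckpt, dataset="CelebA"):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(device))
    subprocess.run(
        ["python", "scripts/inference_test_bench.py",
         "--outdir", outdir, "--config", config, "--ckpt", ckpt,
         "--scale", "3.5", "--n_samples", "10",
         "--device_ID", str(device), "--dataset", dataset,
         "--ddim_steps", "50"],
        env=env, check=True,
    )

# Hypothetical example paths:
# run_test_bench(0, "results/CelebA", "configs/train.yaml", "models/REFace/last.ckpt")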

Training

Data preparation

  • Download the CelebAMask-HQ dataset

The data structure should look like this:

dataset/FaceData
├── CelebAMask-HQ
│  ├── CelebA-HQ-img
│  │  ├── 0.png
│  │  ├── 1.png
│  │  ├── ...
│  ├── CelebA-HQ-mask
│  │  ├── Overall_mask
│  │  │   ├── 0.png
│  │  │   ├── ...

Download the pretrained model of Stable Diffusion

We utilize the pretrained Stable Diffusion v1-4 as initialization. Please download the pretrained model from Hugging Face and save it to the directory pretrained_models. Then run the following script to add zero-initialized weights for the 5 additional input channels of the UNet (4 for the encoded masked image and 1 for the mask itself):

python scripts/modify_checkpoints.py
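
Conceptually, the script zero-pads the UNet's first convolution from 4 to 9 input channels, so the pretrained weights behave identically at initialization. A minimal sketch of that operation follows; the state-dict key and file names are our assumptions, and scripts/modify_checkpoints.py is the authoritative version:

# Conceptual sketch of the checkpoint modification; see
# scripts/modify_checkpoints.py for the authoritative version.
import torch

ckpt = torch.load("pretrained_models/sd-v1-4.ckpt", map_location="cpu")  # assumed filename
sd = ckpt["state_dict"]
key = "model.diffusion_model.input_blocks.0.0.weight"  # assumed key for the first UNet conv
w = sd[key]                                     # [out_ch, 4, 3, 3] in SD v1-4
pad = torch.zeros(w.shape[0], 5, *w.shape[2:])  # 4 masked-image + 1 mask channels, zero-init
sd[key] = torch.cat([w, pad], dim=1)            # now [out_ch, 9, 3, 3]
torch.save(ckpt, "pretrained_models/sd-v1-4-modified-9channel.ckpt")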

Training REFace

To train a new model on CelebAHQ, you can use main_swap.py. For example,

python -u main_swap.py \
--logdir models/REFace/ \
--pretrained_model pretrained_models/sd-v1-4-modified-9channel.ckpt \
--base configs/train.yaml \
--scale_lr False 

or simply run:

sh train.sh

Test Benchmark

We build a test benchmark for quantitative analysis.

Quantitative Results

By default, we assume the original dataset images, the selected source and target images, and the corresponding swapped images have been generated. To evaluate the face swapping in terms of FID, ID retrieval, Pose, and Expression, simply run:

bash inference_test_bench.sh
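
For orientation, the ID retrieval metric boils down to cosine similarity between ArcFace embeddings of the swapped image and the source. A minimal sketch, assuming an embed() callable wrapping the model at Other_dependencies/arcface/model_ir_se50.pth (this is our reading of the metric, not the repo's evaluation code):

# Minimal sketch of ArcFace identity similarity (our reading of the metric,
# not the repo's evaluation code). `embed` stands in for a forward pass
# through the ArcFace model listed under "Other dependencies".
import torch
import torch.nn.functional as F

def identity_similarity(embed, swapped, source):
    """embed: callable mapping face crops [B,3,112,112] to embeddings [B,512]."""
    with torch.no_grad():
        e_swap = F.normalize(embed(swapped), dim=-1)
        e_src = F.normalize(embed(source), dim=-1)
    return (e_swap * e_src).sum(dim=-1).mean().item()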

Citing Us

@article{baliah2024realisticefficientfaceswapping,
  title={Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models},
  author={Sanoojan Baliah and Qinliang Lin and Shengcai Liao and Xiaodan Liang and Muhammad Haris Khan},
  journal={arXiv preprint arXiv:2409.07269},
  year={2024}
}

Acknowledgements

This code borrows heavily from Paint-By-Example.

Maintenance

Please open a GitHub issue for any help. If you have any questions regarding the technical details, feel free to contact us.

License

MIT (see the LICENSE file).
