SARU-Net: A Self Attention ResUnet to generate synthetic CT images for MRI-only BNCT treatment planning



Note: This project is not yet complete.

Todo List:

  • SARU++
  • SARU
  • VNet/Unet/Resnet/pix2pix

Table of Contents

Preparation

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Environment setup

We recommend creating a new conda environment containing all required packages. The repository includes a requirements file; create and activate the environment with

conda env create -f requirements.yml
conda activate attngan

Dataset preparation

Prepare your data so that the directory structure looks like the following:

root
  datasets
    MRICT
      train
        patient_001_001.png
        ...
        patient_002_001.png
        ...
        patient_100_025.png
      test
        patient_101_001.png
        ...
        patient_102_002.png
        ...
        patient_110_025.png
      val
        patient_111_001.png
        ...
        patient_112_002.png
        ...

Our pretrained model was trained on 130+ patient cases, about 4,500 image pairs in total, using data augmentation such as random flipping, random scaling, and random cropping.
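For reference, here is a minimal sketch of how such paired slices could be loaded and jointly augmented. The `MRICTDataset` class, the assumption that each PNG stores the MRI and CT slices side by side (pix2pix-style), and all parameter values are illustrative, not this repository's actual loader.

```python
# Minimal sketch of a paired-slice dataset with the augmentations
# mentioned above (random flip, random scale, random crop).
# The side-by-side MRI|CT layout and all values are assumptions.
import os
import random

from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF


class MRICTDataset(Dataset):
    """Loads side-by-side MRI|CT PNGs and applies joint augmentation."""

    def __init__(self, root, phase="train", crop_size=256):
        self.dir = os.path.join(root, "datasets", "MRICT", phase)
        self.files = sorted(f for f in os.listdir(self.dir) if f.endswith(".png"))
        self.train = phase == "train"
        self.crop_size = crop_size

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        img = Image.open(os.path.join(self.dir, self.files[idx])).convert("L")
        w, h = img.size
        mri = img.crop((0, 0, w // 2, h))    # left half: MRI slice
        ct = img.crop((w // 2, 0, w, h))     # right half: CT slice

        if self.train:
            # Random scaling: resize both slices to the same random size.
            load = int(self.crop_size * random.uniform(1.0, 1.25))
            mri, ct = TF.resize(mri, [load, load]), TF.resize(ct, [load, load])

            # Random crop: identical window for both slices.
            top = random.randint(0, load - self.crop_size)
            left = random.randint(0, load - self.crop_size)
            mri = TF.crop(mri, top, left, self.crop_size, self.crop_size)
            ct = TF.crop(ct, top, left, self.crop_size, self.crop_size)

            # Random horizontal flip, applied jointly.
            if random.random() < 0.5:
                mri, ct = TF.hflip(mri), TF.hflip(ct)

        # Map to [-1, 1], the usual input range for pix2pix-style models.
        return {"A": TF.to_tensor(mri) * 2 - 1, "B": TF.to_tensor(ct) * 2 - 1}
```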

Pretrained weights

We release a set of pretrained weights so our results can be reproduced. The weights can be downloaded from Google Drive (or Baidu Netdisk). Once downloaded, unpack the archive in the root of the project and test the weights with the inference notebook.

All models were trained on 2× NVIDIA TITAN V GPUs (12 GB each).

Training

The training routine of Attn-GAN is based on the pix2pix codebase; details are available in the official repository.

To launch a default training, run

python train.py --data_root path/to/data --gpu_ids 0,1,2 --netG attnunet --netD basic --model pix2pix --name attnunet-gan
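After training, inference on the test split would typically go through the codebase's test script. The command below assumes a test.py that mirrors train.py's flags, following the pix2pix convention; adjust it to the repository's actual interface.

python test.py --data_root path/to/data --netG attnunet --model pix2pix --name attnunet-gan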

Results on 13 test patients (MAE / ME / RMSE)

| Backbone | Params | MAE (mean) | MAE (std) | ME (mean) | ME (std) | RMSE (mean) | RMSE (std) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| UNet | 13.395 M | 124.2 | 36.56 | 64.07 | 30.77 | 338.66 | 68.39 |
| ResNet | 11.371 M | 71.28 | 13.34 | -3.27 | 8.48 | 202.82 | 37.78 |
| DeepUNet | 41.823 M | 73.8 | 15.77 | -2.2 | 13.35 | 212.8 | 40.86 |
| Pix2Pix | 44.588 M | 89.72 | 19.42 | 5.07 | 16.68 | 238.08 | 42.47 |
| DenseUNet | 49.518 M | 130.32 | 31.42 | 58.57 | 33.8 | 348.42 | 73.07 |
| SARU-Net (ours) | 16.212 M | 62.61 | 11.26 | 9.77 | 7.84 | 183.04 | 30.26 |
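For reference, the three reported metrics can be computed per patient as in the minimal NumPy sketch below; this illustrates the standard definitions, not the repository's evaluation script. Per-patient values are then averaged across the 13 test patients to obtain the mean and std columns.

```python
# Standard definitions of MAE, ME, and RMSE between a synthetic CT
# and the ground-truth CT of one patient (typically HU volumes).
import numpy as np

def mae(sct, ct):
    return np.mean(np.abs(sct - ct))           # mean absolute error

def me(sct, ct):
    return np.mean(sct - ct)                   # mean (signed) error

def rmse(sct, ct):
    return np.sqrt(np.mean((sct - ct) ** 2))   # root mean squared error

# Toy usage with random volumes standing in for real patient data.
rng = np.random.default_rng(0)
ct = rng.normal(0, 300, (25, 256, 256))        # toy ground-truth CT
sct = ct + rng.normal(5, 60, ct.shape)         # toy synthetic CT
print(mae(sct, ct), me(sct, ct), rmse(sct, ct))
```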

[Figures: qualitative sCT vs. CT comparisons in the sagittal (SAG), coronal (COR), and transverse (TRANS) planes]

CBAM Modules

[Figures: spatial attention module, channel attention module, and the attention ResBlock]
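For readers unfamiliar with CBAM, a compact PyTorch sketch of the two attention modules and a residual block combining them is given below. It follows the original CBAM paper (Woo et al., 2018); layer sizes are illustrative, and this is not necessarily identical to the modules used in SARU-Net.

```python
# Compact CBAM-style attention sketch, following Woo et al. (2018).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze spatial dims, weight channels via a shared MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))     # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))      # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Squeeze channels, weight spatial positions via a 7x7 conv."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # channel-wise average
        mx = x.amax(dim=1, keepdim=True)       # channel-wise max
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class AttentionResBlock(nn.Module):
    """Residual block with CBAM applied before the skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.ca(self.body(x)))   # channel, then spatial attention
        return torch.relu(out + x)
```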

Code structure

To help users understand and extend our code, we give a brief overview of the functionality and implementation of each package and module here.

Pull Request

You are always welcome to contribute to this repository by sending a pull request. Please run flake8 --ignore E501 . and python ./scripts/test_before_push.py before you commit the code. Please also update the code structure overview accordingly if you add or remove files.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{
}

Other Languages

简体中文 (Simplified Chinese)

Related Projects

contrastive-unpaired-translation (CUT) | CycleGAN-Torch | pix2pix-Torch | pix2pixHD | BicycleGAN | vid2vid | SPADE/GauGAN | iGAN | GAN Dissection | GAN Paint

Acknowledgments

Our code is inspired by pytorch-CycleGAN-and-pix2pix and pytorch-CBAM.
