A framework for pre- and intra-operative view fusion (PIVF) in augmented reality laparoscopic partial nephrectomy (AR-LPN). It uses rendering probes to store information about the preoperative 3D model from different viewpoints, trains a deep neural network (ProFEN) to distinguish 2D renderings from different viewpoints, and exploits prior knowledge to select the best-matching probe from a restricted area.
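As a rough sketch of the matching step described above (not the repository's actual implementation; the function name, feature shapes, and the cosine-similarity measure are assumptions), probe selection can be viewed as a nearest-neighbor search in feature space, restricted to the prior-knowledge area:

```python
import numpy as np

def select_probe(frame_feat, probe_feats, allowed_mask):
    """Pick the best-matching probe within the prior-restricted area.

    frame_feat:   (D,)   feature of the intraoperative frame (e.g. from ProFEN)
    probe_feats:  (N, D) features of the N pre-rendered probe images
    allowed_mask: (N,)   boolean, True for probes inside the restricted area
    """
    # Cosine similarity between the frame feature and every probe feature.
    sim = probe_feats @ frame_feat
    sim = sim / (np.linalg.norm(probe_feats, axis=1) * np.linalg.norm(frame_feat) + 1e-8)
    # Exclude probes outside the prior-knowledge area, then take the best.
    sim[~allowed_mask] = -np.inf
    return int(np.argmax(sim))
```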
The dataset directory should be organized as follows:

─ 2d3d_dataset
├─ .mask (the masks in the intraoperative views)
│ ├─ mask1.png
│ └─ ...
├─ restrictions.json (specifies the prior-knowledge restricted area of each type)
├─ case 0
│ ├─ label (for evaluation)
│ │ ├─ 00001.png
│ │ └─ ...
│ ├─ orig.nii.gz (preoperative view, segmented CT volume)
│ ├─ clip.mp4 (intraoperative view, laparoscopic video)
│ ├─ prior.json (specifies which type this case belongs to)
│ ├─ mesh.gltf (will be generated by `prepare_dataset.py`)
│ ├─ 00001.png (will be generated by `prepare_dataset.py`)
│ └─ ...
└─ ...
Alternatively, specify each file or directory name in `paths.py`.
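For illustration, the two JSON files might interact as in the sketch below. The keys shown (`type`, `azimuth`, `elevation`) are hypothetical; the actual schema is defined by the repository:

```python
import json

# Hypothetical prior.json for case 0, e.g. {"type": "left-kidney"}.
with open('2d3d_dataset/case 0/prior.json') as f:
    case_type = json.load(f)['type']

# Hypothetical restrictions.json mapping each type to a probe search area,
# e.g. {"left-kidney": {"azimuth": [-60, 60], "elevation": [-30, 30]}}.
with open('2d3d_dataset/restrictions.json') as f:
    area = json.load(f)[case_type]

print(area)  # the prior-knowledge restricted area for this case
```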
Run `bash ./fast_run.sh`. This command will do the following steps:
- Install the dependencies.
- Prepare the dataset. Run `prepare_dataset.py` to generate continuous image sequences and 3D models.
- Generate probes. Run `probe.py` to generate probes surrounding the 3D mesh model (see the viewpoint-sampling sketch after this list).
- Train the models. Run `train.py` to train the ProFEN and the TrackNet.
- Do the fusion. Run `fusion.py` to perform the fusion.
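The probe-generation step can be pictured as sampling camera viewpoints around the mesh and rendering the model from each one. Below is a minimal sketch of the viewpoint sampling, assuming an even spherical (Fibonacci lattice) layout; the function name and parameters are assumptions and may differ from what `probe.py` actually does:

```python
import numpy as np

def sphere_viewpoints(n=500, radius=1.5):
    """Evenly distribute n camera positions on a sphere around the mesh;
    each position would later yield one rendered probe image."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2            # golden ratio
    z = 1 - 2 * (i + 0.5) / n              # uniform spacing in height
    theta = 2 * np.pi * i / golden         # golden-angle longitude steps
    r = np.sqrt(1 - z ** 2)
    return radius * np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
```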
Fusion demo videos: Case 1 (`case1.mp4`); Case 4.