PyTorch implementation of GPNet.
- Ubuntu 16.04
- pytorch 0.4.1
- CUDA 8.0 or CUDA 9.2
Our depth images are saved as .exr files. Please install the OpenEXR library, then run pip install OpenEXR.
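If you have not worked with OpenEXR before, the sketch below shows one way to load a depth map from an .exr file. The channel name 'R' is an assumption and may differ in our files; check the file header to confirm.

```python
# Minimal sketch for reading a single-channel depth map from an .exr file.
# The channel name 'R' is an assumption; inspect exr.header()['channels'] if unsure.
import numpy as np
import OpenEXR
import Imath

def read_exr_depth(path, channel='R'):
    exr = OpenEXR.InputFile(path)
    dw = exr.header()['dataWindow']
    width = dw.max.x - dw.min.x + 1
    height = dw.max.y - dw.min.y + 1
    pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
    raw = exr.channel(channel, pixel_type)          # raw bytes of the channel
    depth = np.frombuffer(raw, dtype=np.float32).reshape(height, width)
    return depth
```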
cd lib/pointnet2
mkdir build && cd build
cmake .. && make
Our dataset is available at Google Drive. A backup is also available (extraction code: 2qln).
The simulation environment is built on PyBullet. You can use pip to install the python packages:
pip install pybullet
pip install attrdict
pip install joblib
(collections and gc are part of the Python standard library and do not need to be installed.)
Please refer to the simulator directory for details of our simulation configuration.
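If you want to verify that PyBullet is installed correctly before running the simulator, a minimal headless check (not part of our test pipeline) looks like this:

```python
# Quick sanity check that PyBullet works; uses only assets shipped with pybullet_data.
import pybullet as p
import pybullet_data

cid = p.connect(p.DIRECT)                           # headless physics server; use p.GUI for a window
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
plane = p.loadURDF("plane.urdf")                    # ground plane bundled with pybullet_data
for _ in range(100):
    p.stepSimulation()
p.disconnect(cid)
```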
CUDA_VISIBLE_DEVICES=0,1 python train.py --tanh --grid --dataset_root path_to_dataset
The pretrained model is available here.
CUDA_VISIBLE_DEVICES=0,1 python test.py --tanh --grid --dataset_root path_to_dataset --resume pretrained_model/checkpoint_440.pth.tar
This will generate the predicted grasps as .npz files in pretrained_model/test/epoch440/view0. The file pretrained_model/test/epoch440/nms_poses_view0.txt contains the predicted grasps after NMS.
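For a quick look at the raw predictions, you can list the arrays stored in each .npz file. The sketch below makes no assumption about the array names and simply prints whatever keys are present.

```python
# Inspect the predicted grasp files; key names depend on the test script,
# so we only enumerate whatever arrays each archive contains.
import glob
import numpy as np

for path in sorted(glob.glob('pretrained_model/test/epoch440/view0/*.npz')):
    data = np.load(path)
    print(path)
    for key in data.files:
        print('  {}: shape {}'.format(key, data[key].shape))
```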
You can use the following script to obtain the success rate and coverage rate.
CUDA_VISIBLE_DEVICES=0 python topk_percent_coverage_precision.py -pd pretrained_model/test/epoch440/view0 -gd path_to_gt_annotations
To test the predicted grasps in the simulation environment:
cd simulator
python -m simulateTest.simulatorTestDemo -t pretrained_model/test/epoch440/nms_poses_view0.txt
@article{wu2020grasp,
title={Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps},
author={Wu, Chaozheng and Chen, Jian and Cao, Qiaoyu and Zhang, Jianchi and Tai, Yunxin and Sun, Lin and Jia, Kui},
journal={arXiv preprint arXiv:2009.12606},
year={2020}
}
The pointnet2 code is borrowed from Pointnet2_PyTorch.