I’m an autonomous-driving & RL developer focused on mobile-robot navigation and manipulation in unstructured environments.
I specialize in Sim2Real transfer and in spatiotemporal synchronization & sensor fusion (LiDAR / IMU / GNSS).
- RL & AI — PyTorch, Gymnasium, Stable-Baselines3 (or other RL toolkits)
- Simulation & Dev — Gazebo, Docker, CMake, ROS-Middleware / Bridging
- SLAM & Perception — ROS / ROS2, LIO-SAM, FAST-LIO2, GTSAM, PCL, Ceres
| Project | Description |
|---|---|
| LIO-SAM-Robosense-Adapter | Tightly coupled LiDAR-inertial odometry adapted for Robosense LiDARs, with robust timestamp handling plus NaN-point and missing-ring recovery (see the first sketch below). |
| Autonomous Forklift Navigation & Manipulation (RL) | Hierarchical-RL forklift pipeline covering navigate → pick → place. Supports long-horizon tasks in simulation (ROS2 + Gazebo + PyTorch + PPO + behavior tree), with a Sim2Real transfer pipeline (second sketch below). |
| Sensor Calibration & Tools | Multi-sensor extrinsic calibration, ROS1/ROS2 bridging, and a GPS → NavSatFix conversion & time-sync toolset (FAST-LIO2 + Ceres + bridging nodes); see the third sketch below. |
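For flavor, here is a minimal sketch of the kind of point-cloud cleanup the adapter performs, assuming the scan arrives as a NumPy structured array with `x`/`y`/`z`/`ring` fields. The field names, the out-of-range ring sentinel, and the ±15° vertical FOV heuristic are illustrative, not the adapter's actual code:

```python
import numpy as np

def clean_robosense_scan(points: np.ndarray, n_rings: int = 32) -> np.ndarray:
    """Drop NaN points and re-derive missing ring indices from elevation angle.

    `points` is assumed to be a structured array with 'x', 'y', 'z', 'ring'
    fields (field names are illustrative; real drivers differ).
    """
    xyz = np.stack([points['x'], points['y'], points['z']], axis=1)
    valid = np.isfinite(xyz).all(axis=1)   # reject NaN / Inf returns
    points = points[valid]
    xyz = xyz[valid]

    missing = points['ring'] >= n_rings    # assumed sentinel for a lost ring index
    if missing.any():
        # Recover the ring from the vertical (elevation) angle, assuming the
        # beams are evenly spaced over a hypothetical ±15 deg vertical FOV.
        elev = np.arctan2(xyz[missing, 2],
                          np.linalg.norm(xyz[missing, :2], axis=1))
        fov_lo, fov_hi = np.radians(-15.0), np.radians(15.0)
        ring = (elev - fov_lo) / (fov_hi - fov_lo) * (n_rings - 1)
        points['ring'][missing] = np.clip(np.round(ring), 0, n_rings - 1)
    return points
```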
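The forklift project's skill decomposition can be sketched with Gymnasium + Stable-Baselines3: one PPO policy per sub-task, sequenced by a behavior tree. The environment IDs below are hypothetical stand-ins for the ROS2 + Gazebo wrappers:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Hypothetical sub-task environments; the real project wraps ROS2 + Gazebo
# behind Gymnasium interfaces, one per skill.
SKILLS = ["Forklift-Navigate-v0", "Forklift-Pick-v0", "Forklift-Place-v0"]

def train_skill(env_id: str, steps: int = 1_000_000) -> PPO:
    """Train one low-level skill policy with PPO."""
    env = gym.make(env_id)
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=steps)
    return model

if __name__ == "__main__":
    # A behavior tree then ticks the frozen skill policies in order
    # (navigate -> pick -> place), re-planning on failure.
    policies = {env_id: train_skill(env_id) for env_id in SKILLS}
```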
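And a rough sketch of the bridging/sync side, assuming rclpy and message_filters; the topic names, frame ID, and 50 ms slop are illustrative, not the toolset's actual configuration:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu, NavSatFix, PointCloud2
from message_filters import ApproximateTimeSynchronizer, Subscriber

class GnssBridge(Node):
    def __init__(self):
        super().__init__('gnss_bridge')
        self.fix_pub = self.create_publisher(NavSatFix, '/gps/fix', 10)
        # Time-align LiDAR and IMU streams: match messages whose header
        # stamps differ by less than 50 ms (slop is illustrative).
        self.sync = ApproximateTimeSynchronizer(
            [Subscriber(self, PointCloud2, '/rslidar_points'),
             Subscriber(self, Imu, '/imu/data')],
            queue_size=10, slop=0.05)
        self.sync.registerCallback(self.on_synced)

    def publish_fix(self, lat_deg: float, lon_deg: float, alt_m: float):
        """Wrap a raw (lat, lon, alt) reading as sensor_msgs/NavSatFix."""
        msg = NavSatFix()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'gnss_link'
        msg.latitude, msg.longitude, msg.altitude = lat_deg, lon_deg, alt_m
        msg.position_covariance_type = NavSatFix.COVARIANCE_TYPE_UNKNOWN
        self.fix_pub.publish(msg)

    def on_synced(self, cloud: PointCloud2, imu: Imu):
        # Downstream odometry (e.g. FAST-LIO2) consumes aligned pairs here.
        pass

def main():
    rclpy.init()
    rclpy.spin(GnssBridge())

if __name__ == '__main__':
    main()
```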
- Continually improving robotics navigation / perception under challenging real-world conditions
- Exploring advanced RL-based manipulation and multi-sensor fusion systems
- Open to collaboration, discussion, or any interesting robotics / AI projects
- GitHub: @chan-yuu
“Translating Sensor Data into Intelligent Action.”

