A replacement for the aging [ssl-vision](https://github.com/RoboCup-SSL/ssl-vision). The shape-based blob detector and decentralized software architecture are intended to minimize setup time and improve detection rates under uneven illumination. It currently supports Teledyne FLIR (Spinnaker), Matrix Vision (Bluefox3/mvIMPACT) and OpenCV camera backends.
The `vision_processor` is the image processing component: it processes a camera feed and multicasts the detected robot and ball positions together with a debug video livestream.
The geometry publisher `geom_publisher.py` publishes the field geometry for all `vision_processor` instances, the teams and the game controller.
`cam_viewer.py` opens the mpv video player with the camera streams from the `vision_processor` instances.
The following dependencies are required only for the geometry publisher and the camera viewer; mpv is optional and only needed for the camera viewer.

- Debian/Ubuntu based distributions: `apt install protobuf-compiler python3-protobuf python3-yaml mpv`
- Arch based distributions: `pacman -S make python-protobuf python-yaml mpv`
- Installation with pip: `pip install protobuf pyyaml`
Setup script (Debian/Ubuntu/Arch Linux/Manjaro):

- Install the camera SDK required for your camera type (from the Arch User Repository: `mvimpact-acquire` for mvIMPACT, `spinnaker-sdk` for Spinnaker).
- Run `./setup.sh`. If the script wants to install an OpenCL driver for the wrong GPU (e.g. an integrated graphics card), or if you want, need or already have a different OpenCL driver, skip the driver installation with `SKIP_DRIVERS=1 ./setup.sh`.
Manual installation:

- Install the camera SDK required for your camera type (from the Arch User Repository: `mvimpact-acquire` for mvIMPACT, `spinnaker-sdk` for Spinnaker).
- Install the required dependencies:
- cmake
- Eigen3
- ffmpeg
- gcc
- OpenCL (GPU based runtime recommended)
- OpenCV
- protobuf
- yaml-cpp
- Compile `vision_processor`:

```
cmake -B build .
make -C build vision_processor
```
- Complete the dependency installation and compilation section.
- Configure one `config-minimal.yml` or `config.yml` for each camera; skip the `geometry` section for now (a hedged config sketch follows this list). The camera ids are assigned like in ssl-vision:
- Start `build/vision_processor config[X].yml` for each camera.
- Tune the orientation and position of each camera. You can view the camera feeds with `python/cam_viewer.py --cameras <X>`.
- Restart the `vision_processor`s to generate a new sample image `img/sample.[X].png` and complete the `geometry` section of each camera config with it. A visual explanation of how to determine `line_corners`:
- Restart all `vision_processor`s to reload the config.
- Modify `geometry[X].yml` to match your field geometry; for simple use cases, configuring the field size, penalty area and goal will suffice (a geometry sketch follows this list).
- Start `python/geom_publisher.py geometry[X].yml`.
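For orientation, here is a minimal sketch of a per-camera config. Only the key names `gain`, `exposure`, `white_balance` and `geometry` are taken from this README; `camera_id`, the nesting and the value formats are assumptions, so treat the shipped `config-minimal.yml` as the authoritative template.

```yaml
# Hypothetical sketch -- consult the shipped config-minimal.yml for
# the real key names and nesting.
camera_id: 0        # assumed key; ids are assigned like in ssl-vision
gain: 10.0          # tune with the "raw camera data" livestream view
exposure: 8.0       # assumed unit (e.g. milliseconds)
white_balance: auto # assumed value format
# geometry:            leave this section out until the sample image
#   line_corners: ...  img/sample.[X].png has been generated
```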
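Similarly, a hedged sketch of a `geometry[X].yml` covering only the simple case named above (field size, penalty area, goal). The parameter names are borrowed from the standard SSL geometry protobuf messages; whether the YAML uses exactly these names is an assumption, so check the geometry files shipped with the repository.

```yaml
# Hypothetical sketch with division B dimensions (millimeters);
# the shipped geometry[X].yml files are authoritative.
field_length: 9000
field_width: 6000
goal_width: 1000
goal_depth: 180
penalty_area_depth: 1000
penalty_area_width: 2000
```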
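Once both files exist, a complete session uses only the commands from the list above: one `build/vision_processor config[X].yml` per camera, a single `python/geom_publisher.py geometry[X].yml` for the whole field, and optionally `python/cam_viewer.py --cameras <X>` to inspect the streams.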
If no OpenCL platform can be found despite an installed OpenCL driver, check the user's groups and permission to compute on the GPU (e.g. for Intel: ).
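As a rule of thumb (driver and distribution specific, so treat the group name as an assumption): `clinfo` shows which OpenCL platforms and devices are visible to the current user, and on Intel's runtime compute access is commonly gated behind the `render` (or older `video`) group, i.e. `sudo usermod -aG render $USER` followed by a re-login.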
The video livestream cycles through 4 different views:
- Raw camera data
  If the image is very bright, dark or miscolored, consider adjusting the camera `gain`, `exposure` and `white_balance` in your `config[X].yml`.
- Reprojected color delta
  If the visible extent of the reprojected image does not match the field boundary, the geometry calibration has issues.
  If color blobs are desaturated in the center, your image might be overexposed/too bright (`gain`, `exposure`).
- Gradient dot product
  All color blobs should be visible here as black and white checkered rings.
- Blob circularity score
  If the blob score of some undetected blobs is too faint, consider reducing the `circularity` threshold.
If blobs are attributed the wrong color (or balls seemingly go undetected despite high blob scores), adjust the reference colors under `color` (sketched below).
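As a rough illustration of those two knobs: only the key names `circularity` and `color` come from this README; the nesting, the color names and the value format are assumptions, so the shipped configs are authoritative.

```yaml
# Hypothetical excerpt of config[X].yml -- only the top-level key
# names come from this README, everything else is an assumption.
circularity: 0.6          # lower this if valid blobs score too faintly
color:
  orange: [255, 96, 0]    # assumed reference color for the ball
  yellow: [255, 255, 32]  # assumed reference color for yellow bots
```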
If nothing helps:
Activate `stream: raw_feed: true` in your `config[X].yml` and record the video livestream with `ffmpeg -protocol_whitelist file,rtp,udp -i python/cam[X].sdp cam[X].mp4`.
Publish the resulting video together with your `config[X].yml` and `geometry[X].yml` for further remote analysis.
