Tags: MASKOR/mapit
This is the first release of mapit.
It is a framework for storing and managing 3D (or robotic) data and keeping the
history of its manipulation in a repository. The data is stored in Entities and
is structured in Layers. It comes with a network interface and a QML-based GUI
which features a multiview approach. With multiview, multiple users can display
the same data over the network, including on a dedicated VR system.
*Tools*
Tools are programs that work on the repository without direct manipulation of
the data. The most common tools are listed below.
checkout: checkout a commit or branch to a new workspace.
execute: executes operators on a workspace (this can manipulate the data
         indirectly by executing operators).
gui: a QML-based GUI to execute operators and display entities.
visualization: a library with common controls to develop user
interfaces/tools for mapit.
visualization_vr: a renderer for entities in virtual reality. It can be synced
over network with the gui.
ls: list all files of a workspace.
commit: commits a workspace to the repository.
diff: shows the diff between two commits.
log: displays the commit history starting at a commit.
mapitd: scripts to start mapit in daemon mode for network connections.
export_workspace: copies the data of a workspace into the local file system;
         a generic export of data out of a mapit repository.
stream_to_ros: playback of a workspace to ROS in a "bag play" like fashion.
Currently supported layers are pointcloud and tf.
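The tools above follow a git-like workspace/commit cycle (checkout, execute, commit, log). As a minimal in-memory sketch of that cycle, under the assumption of git-like semantics; all class and method names below are invented for illustration and are not mapit's actual API:

```python
# Illustrative in-memory model of the workspace/commit cycle the tools
# above operate on. Names are invented for this sketch, not mapit's API.

class Repository:
    def __init__(self):
        self.commits = {}   # commit id -> (parent id, snapshot, message)
        self.next_id = 0

    def checkout(self, commit_id=None):
        """Check out a commit to a new workspace (empty if commit_id is None)."""
        snapshot = dict(self.commits[commit_id][1]) if commit_id is not None else {}
        return Workspace(self, commit_id, snapshot)

class Workspace:
    def __init__(self, repo, parent, entities):
        self.repo, self.parent, self.entities = repo, parent, entities

    def execute(self, operator):
        """Run an operator; operators manipulate the data, tools do not."""
        operator(self.entities)

    def ls(self):
        """List all entities of the workspace."""
        return sorted(self.entities)

    def commit(self, message):
        """Store the workspace state as a new commit in the repository."""
        cid = self.repo.next_id
        self.repo.next_id += 1
        self.repo.commits[cid] = (self.parent, dict(self.entities), message)
        self.parent = cid
        return cid

    def log(self):
        """Walk the commit history starting at the latest commit."""
        cid, history = self.parent, []
        while cid is not None:
            parent, _, message = self.repo.commits[cid]
            history.append((cid, message))
            cid = parent
        return history
```

Checking out an older commit yields a fresh workspace with that commit's entities, mirroring how checkout creates a new workspace from a commit or branch.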
*Layertypes*
Layertypes represent the data types (serialization and deserialization) that can
be stored in mapit and are easily extendable. Currently supported are:
pointcloud2, las, tf (including a port of the tf2 library), asset, octomap,
openvdb, primitive, pose_path and boundingbox.
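Because a layertype only has to provide serialization and deserialization, adding a new one amounts to registering a pair of functions. A minimal sketch of such a pluggable registry, assuming a JSON-backed "primitive" layertype for illustration (the registry and names are assumptions, not mapit's plugin interface):

```python
# Sketch of a pluggable layertype registry: each layertype provides
# serialize/deserialize, so new data types are easy to add. This is an
# illustration of the idea, not mapit's actual extension mechanism.
import json

LAYERTYPES = {}

def register_layertype(name):
    """Class decorator that registers a layertype under a name."""
    def wrap(cls):
        LAYERTYPES[name] = cls()
        return cls
    return wrap

@register_layertype("primitive")
class PrimitiveLayer:
    def serialize(self, data):
        return json.dumps(data).encode()

    def deserialize(self, blob):
        return json.loads(blob.decode())

def store(layertype, data):
    """Serialize data with the named layertype before writing it."""
    return LAYERTYPES[layertype].serialize(data)

def load(layertype, blob):
    """Deserialize a stored blob with the named layertype."""
    return LAYERTYPES[layertype].deserialize(blob)
```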
*Operators*
Operators are able to change the data in mapit.
For loading data we have load_bags, load_pointcloud, load_primitive, load_tfs,
load_asset, load_path, load_las and load_las_from_csv.
For simple manipulation we have edit_entity, copy, delete, write_raw, las2pcd,
levelset_to_mesh, scale_pointcloud and tf_add_random.
And for working with entities and their data we have
filter_approximate_voxel_grid, filter_radius_outlier_removal,
filter_statistical_outlier_removal, filter_uniform_sampling,
moving_least_squares, normalestimation, voxelgridfilter, reg_local,
surfrecon_openvdb, surface_moving_least_squares, surface_reconstruction,
ovdb_smooth.
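To give an idea of what such operators do, here is the technique behind voxel-grid downsampling (as in filter_approximate_voxel_grid) in a standalone, pure-Python form: points falling into the same voxel are replaced by their centroid. mapit's operators work on real point cloud data; this is only a sketch of the underlying idea.

```python
# Minimal voxel-grid downsampling: bucket points by voxel index, then
# replace each bucket with its centroid. Illustrative only; not the
# implementation used by mapit's filter operators.
from collections import defaultdict
import math

def voxel_grid_filter(points, leaf_size):
    """Downsample a list of (x, y, z) tuples with a given voxel edge length."""
    buckets = defaultdict(list)
    for p in points:
        # integer voxel coordinates of the point
        key = tuple(math.floor(c / leaf_size) for c in p)
        buckets[key].append(p)
    # one centroid per occupied voxel
    return [
        tuple(sum(c) / len(ps) for c in zip(*ps))
        for ps in buckets.values()
    ]
```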
*Next milestone features*
metadata: The design of mapit plans for deleting the data of intermediate
          steps. E.g. only the raw sensor data and the end result have to be
          stored physically on disk. The intermediate data should not be
          lost, however, so the metadata keeps track of all operators
          executed on the data. In case the intermediate data needs to be
          recovered, the operators can be executed again.
          Furthermore, this could enable a rebase-like mechanism where a set
          of operators (or a pipeline) from one project is taken and executed
          on a different project.
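The replay idea above can be sketched in a few lines: keep only the raw data plus a log of the operators, and recompute intermediate results on demand. All names here are illustrative assumptions, not the planned mapit metadata format.

```python
# Sketch of operator-log replay: intermediate results are never stored,
# only the raw data and the operators that produced the end result.
# Purely illustrative; not mapit's planned metadata mechanism.

def apply_pipeline(raw_data, operators):
    """Re-execute the recorded operators on the raw data."""
    data = raw_data
    for op in operators:
        data = op(data)
    return data

# Two toy "operators" standing in for real mapit operators:
scale2 = lambda pts: [p * 2 for p in pts]
drop_neg = lambda pts: [p for p in pts if p >= 0]

raw = [-1, 2, 3]
operator_log = [scale2, drop_neg]
result = apply_pipeline(raw, operator_log)  # intermediate [-2, 4, 6] never stored
```

Applying the same operator_log to a different project's raw data is exactly the rebase-like mechanism described above.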
REFs and branches: The system currently only knows workspaces and commits. The
          next step would be to introduce REFs and branches pointing to
          these REFs.
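Assuming git-like semantics for the planned feature, a ref is just a named pointer to a commit id, and a branch is a ref that advances with new commits. The naming scheme below is borrowed from git as an assumption:

```python
# Git-like sketch of refs and branches: a ref maps a name to a commit
# id; a branch ref moves forward when a new commit is made on it.
# Illustrative assumption, not mapit's planned design.
refs = {"refs/heads/master": 0}

def resolve(ref_name):
    """Resolve a ref to the commit id it points to."""
    return refs[ref_name]

def advance(ref_name, new_commit_id):
    """Move a branch ref forward after a commit on that branch."""
    refs[ref_name] = new_commit_id
```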
merging: When different users come together to work on the same project, a
          concept of merging is needed. While a complex mechanism to merge
          different data might not be possible, or would have to be
          implemented for each data type separately, we want to support at
          least conflict-free or user-selected merging in the next release.
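The conflict-free / user-selected split can be sketched as a three-way merge over entity maps: entities changed on only one side merge automatically, while entities changed on both sides are reported back for the user to pick. Purely illustrative; not mapit's planned merge implementation.

```python
# Three-way merge sketch: base is the common ancestor, ours/theirs are
# the two diverged states. Single-sided changes merge conflict-free;
# double-sided changes become conflicts for user selection.

def merge(base, ours, theirs):
    merged, conflicts = {}, {}
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:            # identical on both sides (or deleted on both)
            if o is not None:
                merged[key] = o
        elif o == b:          # only theirs changed it
            if t is not None:
                merged[key] = t
        elif t == b:          # only ours changed it
            if o is not None:
                merged[key] = o
        else:                 # changed differently on both sides: conflict
            conflicts[key] = (o, t)
    return merged, conflicts
```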
push/pull: A mechanism to communicate between different mapit repositories
is planned.