# ORB-SLAM3 & RTAB-Map

## ORB-SLAM3 (visual SLAM, widely used)

[ORB-SLAM3](https://github.com/UZ-SLAMLab/ORB_SLAM3) (Campos et al., 2020) is a widely used open-source **visual SLAM** system: it tracks the camera (or camera rig), estimates egomotion, and builds a **sparse 3D map** from ORB feature tracks. It supports **monocular**, **stereo**, and **RGB-D** input, and includes **loop closing** and **multi-map** handling.

**Why it’s relevant for Konnex:** the [SLAM 3D map](/subnets-workload-classes/slam-3d-map.md) subnet is about **mesh / geometry / semantics** and **verifiable** sensor data. A miner can run ORB-SLAM3 (or a derivative) to produce **trajectories**, **keyframe poses**, and **3D landmarks** for validators to compare against ground truth or scoring rules, alongside the [Proof-of-Physical-Work](/understand-konnex/contracts-and-popw.md) bundle.

| Resource | URL                                                                                                                                                |
| -------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Code     | [github.com/UZ-SLAMLab/ORB\_SLAM3](https://github.com/UZ-SLAMLab/ORB_SLAM3)                                                                        |
| Paper    | *ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM* — [arXiv:2007.11898](https://arxiv.org/abs/2007.11898) |

**Outputs:** keyframe poses, **MapPoints** (sparse 3D landmarks), and the estimated trajectory. The map is sparse by design; a **dense** mesh usually requires an extra **fusion** step (TSDF integration, Poisson reconstruction, etc.) over depth maps or MVS output, depending on your pipeline.
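ORB-SLAM3 can save trajectories in the TUM format (one `timestamp tx ty tz qx qy qz qw` line per pose, e.g. via `SaveKeyFrameTrajectoryTUM`). A minimal sketch of reading such a file back, for downstream scoring or comparison — `Pose` and `parse_tum_trajectory` are illustrative names, not part of any Konnex or ORB-SLAM3 API:

```python
# Minimal parser for a TUM-format trajectory, the text format ORB-SLAM3
# uses when saving keyframe trajectories: each non-comment line is
# "timestamp tx ty tz qx qy qz qw".
from typing import NamedTuple

class Pose(NamedTuple):
    t: float      # timestamp (seconds)
    xyz: tuple    # translation (tx, ty, tz)
    quat: tuple   # orientation quaternion (qx, qy, qz, qw)

def parse_tum_trajectory(text: str) -> list:
    poses = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        vals = [float(v) for v in line.split()]
        poses.append(Pose(vals[0], tuple(vals[1:4]), tuple(vals[4:8])))
    return poses

sample = "1403636579.76 0.1 0.0 0.3 0.0 0.0 0.0 1.0\n"
print(parse_tum_trajectory(sample)[0].xyz)  # -> (0.1, 0.0, 0.3)
```

A validator-side script could load both a miner's submitted trajectory and the reference trajectory this way before computing an error metric.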

**Hardware:** runs on CPU; GPU acceleration depends on the build. **Visual-inertial** modes require a calibrated IMU in the sensor stack; see the upstream documentation for calibration details.

***

## RTAB-Map (dense 3D maps, popular in ROS)

[RTAB-Map](http://introlab.github.io/rtabmap/) is a common choice in **ROS / ROS2** when teams want **online** mapping with **loop closure** and **export** to occupancy grids, point clouds, or meshes for navigation and inspection. It is often a practical complement to “research” VSLAM stacks: more turnkey for **room-scale** RGB-D / stereo mapping (and many LiDAR–RGB setups), with extensive **ROS** integration tutorials.
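RTAB-Map's exports include point clouds and meshes in standard formats such as PLY. As a sketch of consuming a small ASCII PLY export with only the standard library — assuming `x`, `y`, `z` are the first three per-vertex properties, and with `read_ascii_ply_vertices` as an illustrative helper name:

```python
def read_ascii_ply_vertices(text: str) -> list:
    """Read (x, y, z) rows from a small ASCII PLY string.
    Assumes x, y, z are the first three per-vertex properties."""
    lines = iter(text.splitlines())
    n_vertices = 0
    # Scan the header for the vertex count, stop at end_header.
    for line in lines:
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        elif line.strip() == "end_header":
            break
    # The iterator now points at the first data row.
    verts = []
    for _ in range(n_vertices):
        vals = next(lines).split()
        verts.append(tuple(float(v) for v in vals[:3]))
    return verts

ply = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 0.0 0.0
1.0 2.0 3.0
"""
print(read_ascii_ply_vertices(ply))  # [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
```

For large binary PLY files or meshes with faces, a dedicated library is the practical choice; this only illustrates the shape of the exported data.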

| Resource | URL                                                                |
| -------- | ------------------------------------------------------------------ |
| Project  | [introlab.github.io/rtabmap](http://introlab.github.io/rtabmap/)   |
| Code     | [github.com/introlab/rtabmap](https://github.com/introlab/rtabmap) |

**Picking a stack:** use ORB-SLAM3 as a **reference** VIO/VSLAM baseline; use RTAB-Map when you need **out-of-the-box** mapping exports in a **ROS**-centric 3D mapping workflow.

***

## Konnex alignment

* Miners deliver reconstructions, trajectories, and signed sensor payloads as the subnet contract specifies.
* Validators check fidelity and consistency (geometry, semantics, or both) per subnet design.
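One standard trajectory-fidelity metric a validator could use is absolute trajectory error (ATE). A simplified stdlib sketch — it assumes the two trajectories are already time-associated and expressed in the same frame, whereas real scoring rules are defined by the subnet and would typically include SE(3)/Sim(3) alignment first:

```python
import math

def ate_rmse(estimated: list, ground_truth: list) -> float:
    """RMSE of translational error between two pose lists of (x, y, z)
    tuples, assumed time-associated and already aligned (simplified)."""
    assert len(estimated) == len(ground_truth) and estimated
    sq = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        sq += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq / len(estimated))

est = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0)]  # miner-submitted positions
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # reference positions
print(round(ate_rmse(est, gt), 4))  # -> 0.0707
```

A validator could threshold or score this value; geometry and semantic checks on the reconstruction itself would be separate, per subnet design.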

## See also

* [SLAM 3D map](/subnets-workload-classes/slam-3d-map.md)
* [PI-0 (manipulation)](/supported-ai-models/pi0.md) — different modality, same “miner–validator” idea
* [AI models overview](/supported-ai-models/ai.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.konnex.world/supported-ai-models/orb-slam3.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
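With Python's standard library, the request above can be composed as follows; the helper name `build_ask_url` is illustrative, and only the base URL and `ask` parameter come from this page:

```python
from urllib.parse import urlencode

def build_ask_url(question: str) -> str:
    """Build the documentation-query URL for this page's `ask` parameter."""
    base = "https://docs.konnex.world/supported-ai-models/orb-slam3.md"
    return base + "?" + urlencode({"ask": question})  # percent-encodes the question

url = build_ask_url("What outputs does ORB-SLAM3 produce?")
print(url)
# Any HTTP client (e.g. urllib.request.urlopen(url)) can then perform the GET.
```

`urlencode` handles spaces and punctuation in the question, so free-form natural-language queries are safe to pass.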
