## 01. Installation
OpenEyes requires Python 3.10+; accelerated inference additionally requires suitable edge hardware, such as an NVIDIA Jetson (see the optional step below).
```bash
# 1. Clone the repository
git clone https://github.com/mandarwagh9/openeyes
cd openeyes

# 2. Install dependencies
pip install -r requirements.txt

# 3. Optional: enable Jetson performance mode
sudo bash scripts/jetson_perf.sh
```
## 02. Quick Start
Run OpenEyes locally using Python's module execution syntax:
```bash
# Minimal run using camera device 0
python -m src.main --camera 0

# Enable the face, gesture, and pose pipelines
python -m src.main --camera 0 --enable-face --enable-gesture --enable-pose

# Run inference through the DeepStream pipeline
python -m src.main --deepstream --camera 0
```
## 03. CLI Reference
| Flag | Description |
|---|---|
| --camera [ID] | Camera device index. |
| --video [FILE] | Process video file instead of camera stream. |
| --enable-face | Enable face detection pipeline. |
| --enable-gesture | Enable gesture recognition pipeline. |
| --enable-pose | Enable body pose estimation. |
| --deepstream | Enable DeepStream inference pipeline. |
| --api | Start the REST API server. |
| --world-model | Enable predictive world model mode. |
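Flags can be combined in a single invocation. The sketches below use only the flags documented above; `clip.mp4` is a placeholder filename:

```bash
# Process a recorded clip with face detection enabled
python -m src.main --video clip.mp4 --enable-face

# Run from a live camera and expose results over the REST API
python -m src.main --camera 0 --api
```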
## 04. ROS2 Telemetry
OpenEyes can publish detection and state outputs to ROS2 topics.
```bash
ros2 launch openeyes openeyes.launch.py device:=cuda ros2:=true
```
| Topic | Payload |
|---|---|
| /vision/detections | Object detections |
| /vision/depth | Depth image or map output |
| /vision/faces | Face detections |
| /vision/gestures | Gesture status |
| /vision/poses | Pose detections |
| /vision/status | System state |
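Once the launch file is running, the topics above can be inspected with the standard ROS2 CLI (requires a sourced ROS2 environment):

```bash
# Confirm the OpenEyes topics are being published
ros2 topic list | grep /vision

# Stream detection messages to the terminal
ros2 topic echo /vision/detections
```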
## 05. Troubleshooting
- Camera not found: verify camera permissions and selected index.
- Slow startup: first-run model initialization can take additional time.
- No ROS2 data: ensure ROS2 dependencies are installed and sourced, then allow startup to finish.
- DeepStream issues: verify the NVIDIA driver and CUDA stack, and that the installed DeepStream runtime library versions match.
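For the camera issue above, on Linux the available capture devices can be listed directly (assumes V4L2 device nodes; the trailing number maps to the `--camera` index):

```bash
# Show capture device nodes and their permissions
ls -l /dev/video*
```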