Live Capabilities

Demos & Use Cases

Experience the computational power of OpenEyes across different hardware platforms, from edge-native world models to real-time ROS2 multi-topic nodes. Run the commands below to deploy these vision capabilities instantly.

demo_capabilities.py
$ python -m src.main --test-suite all
[SYS] Verifying capabilities...
[OK] YOLOv10n INT8 Object Detection
[OK] MediaPipe Gesture & FaceMesh
[OK] LeWorldModel Spatial Awareness
[OK] Headless FastAPI Server
[OK] ROS2 Multithreaded Executor
[SYS] All sub-systems ready. 60 FPS verified.

Framework Configurations

Drop these commands into your terminal to spin up specific OpenEyes configurations.

👁️ Core Vision Pipeline

Spins up the DeepStream appsink pipeline with YOLO inference, FaceMesh (max 3 faces), and 33-point body pose estimation via MediaPipe at an aggregated 40-60 FPS on edge GPUs.

python -m src.main --camera 0 \
--enable-face --enable-gesture \
--enable-pose --debug
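The parser inside `src.main` isn't shown here; the following is a minimal argparse sketch of how the flags above might be wired. Only the flag names come from the documented commands; the defaults and help strings are assumptions.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of the CLI surface used above;
    # only the flag names are taken from the documented commands.
    p = argparse.ArgumentParser(prog="src.main")
    p.add_argument("--camera", type=int, default=0, help="camera device index")
    p.add_argument("--enable-face", action="store_true", help="FaceMesh (max 3 faces)")
    p.add_argument("--enable-gesture", action="store_true", help="MediaPipe gestures")
    p.add_argument("--enable-pose", action="store_true", help="33-point body pose")
    p.add_argument("--debug", action="store_true", help="verbose overlays and logging")
    return p

args = build_parser().parse_args(
    ["--camera", "0", "--enable-face", "--enable-gesture", "--enable-pose", "--debug"]
)
print(args.camera, args.enable_face, args.enable_pose)  # → 0 True True
```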

🤖 Autonomous Person Following

Injects the LeWorldModel spatial tracker. Evaluates bounding box height ratios dynamically to issue forward/stop/backward motor commands.

python -m src.main --world-model lewm \
--follow --turbo
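The height-ratio logic described above can be sketched in a few lines. The thresholds and hysteresis OpenEyes actually uses are not documented here; the values below are illustrative.

```python
def follow_command(bbox_height: int, frame_height: int,
                   near: float = 0.6, far: float = 0.35) -> str:
    """Map a person's bounding-box height ratio to a motor command.

    `near` and `far` thresholds are assumptions for illustration,
    not OpenEyes' actual tuning.
    """
    ratio = bbox_height / frame_height
    if ratio > near:   # person fills the frame: too close, back off
        return "backward"
    if ratio < far:    # person appears small: close the gap
        return "forward"
    return "stop"      # within the comfortable following band

print(follow_command(300, 720))  # ratio ≈ 0.42 → "stop"
```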

ROS2 Distributed Publisher

Launches the VisionPublisher node with a MultiThreadedExecutor, broadcasting 10 JSON payload topics (detections, depth, poses, plans) simultaneously across your ROS network.

ros2 launch openeyes openeyes.launch.py \
device:=cuda ros2:=true
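The node's message definitions aren't shown above; as a sketch, per-stream results could be serialized into JSON payloads keyed by topic name. The topic names and payload schema below are assumptions, and only four of the ten streams are named in the description.

```python
import json

# Hypothetical topic set: the real node publishes 10 topics, of which
# the description names detections, depth, poses, and plans.
STREAMS = ["detections", "depth", "poses", "plans"]

def build_payloads(frame_id: int, results: dict) -> dict:
    """Serialize each stream's result into a JSON string, keyed by topic."""
    return {
        f"/openeyes/{name}": json.dumps({"frame": frame_id,
                                         "data": results.get(name, [])})
        for name in STREAMS
    }

payloads = build_payloads(42, {"detections": [{"cls": "person", "conf": 0.91}]})
print(sorted(payloads))
```

In a real node, each JSON string would be handed to a per-topic publisher from the executor's callback threads.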

🌐 Headless REST API

Bypasses all GUI rendering. Boots a high-performance FastAPI server exposing the vision engine's state and detection arrays over standard HTTP.

python -m src.main --api \
--api-port 8000 --api-host 0.0.0.0
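The API's response schema is not documented here; the sketch below shows how a client might poll the server started above and parse its state, assuming a hypothetical `/state` endpoint returning a `{"detections": [...]}` body.

```python
import json
from urllib.request import urlopen

def parse_state(raw: bytes) -> list:
    """Extract the detection array from a raw JSON state response.

    The {"detections": [...]} shape is an assumption; the real schema
    is defined by the OpenEyes API.
    """
    state = json.loads(raw)
    return state.get("detections", [])

def get_detections(host: str = "127.0.0.1", port: int = 8000) -> list:
    # Requires the --api server above to be running; endpoint path assumed.
    with urlopen(f"http://{host}:{port}/state", timeout=2.0) as resp:
        return parse_state(resp.read())

print(parse_state(b'{"detections": [{"cls": "person", "conf": 0.88}]}'))
```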

🎬 Batch Video Processing

Feeds an offline MP4 file into the vision engine, overlays hardware-accelerated detection boxes, and writes the result to a new hardware-encoded output file.

python -m src.main \
--video path/to/input.mp4 \
--output result.mp4
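How OpenEyes invokes its hardware encoder isn't shown; for illustration, here is a sketch of assembling an equivalent ffmpeg command using NVIDIA's NVENC H.264 encoder. The ffmpeg flags are real, but pairing them with OpenEyes' output is an assumption, and the codec must match your hardware.

```python
def encode_cmd(src: str, dst: str, codec: str = "h264_nvenc") -> list:
    """Build an ffmpeg argv that re-encodes `src` to `dst` on the GPU.

    h264_nvenc is NVIDIA's hardware H.264 encoder; swap in a codec
    your platform supports (e.g. hevc_nvenc).
    """
    return ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-preset", "p4", dst]

# prints: ffmpeg -y -i result.mp4 -c:v h264_nvenc -preset p4 result_hw.mp4
print(" ".join(encode_cmd("result.mp4", "result_hw.mp4")))
```

Run the returned argv with `subprocess.run` once ffmpeg with NVENC support is installed.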