Live Capabilities

Demos & Use Cases

Run the command profiles below to exercise OpenEyes capabilities for edge deployments, automation workflows, and distributed robotics pipelines.

$ python -m src.main --info
[SYS] Verifying capabilities...
[OK] YOLO TensorRT object detection
[OK] Gesture and face processing
[OK] World model runtime
[OK] Headless API service
[OK] ROS2 publishing mode

Framework Configurations

Use these command templates for common deployment scenarios.

Core Vision Pipeline

DeepStream appsink pipeline with detection, face, gesture, and pose output.

$ python -m src.main --camera 0 --enable-face --enable-gesture --enable-pose --debug

Autonomous Person Following

Enables LeWM-based tracking and a follow mode that responds to the target's distance.

$ python -m src.main --world-model lewm --follow --turbo

ROS2 Distributed Publisher

Publishes detections, depth, and state payloads for ROS2-enabled systems.

$ ros2 launch openeyes openeyes.launch.py device:=cuda ros2:=true
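
The launch arguments follow the standard ROS2 `key:=value` convention. Below is a minimal sketch of composing that command programmatically, e.g. from a deployment script; the `build_launch_cmd` helper is illustrative and not part of OpenEyes:

```python
# Sketch: compose a `ros2 launch` argv with key:=value launch arguments.
# Package and launch-file names come from the command above; the helper
# itself is illustrative, not an OpenEyes API.
import subprocess


def build_launch_cmd(package: str, launch_file: str, **launch_args: str) -> list[str]:
    """Build the argv for `ros2 launch`, formatting args as key:=value."""
    cmd = ["ros2", "launch", package, launch_file]
    cmd += [f"{key}:={value}" for key, value in launch_args.items()]
    return cmd


cmd = build_launch_cmd("openeyes", "openeyes.launch.py", device="cuda", ros2="true")
print(" ".join(cmd))
# → ros2 launch openeyes openeyes.launch.py device:=cuda ros2:=true

# To actually start the publisher from Python, uncomment:
# subprocess.run(cmd, check=True)
```

Building the argv as a list (rather than a shell string) avoids quoting pitfalls when argument values contain spaces.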

Headless REST API

Runs OpenEyes as an API service for external orchestration and monitoring.

$ python -m src.main --api --api-port 8000 --api-host 0.0.0.0
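
Once the service is up, external tooling can poll it over plain HTTP. The sketch below uses only the Python standard library; the `/status` route and its JSON payload are assumptions for illustration, not documented OpenEyes endpoints — check the API reference for the real routes:

```python
# Sketch: poll a headless OpenEyes API instance from an external monitor.
# NOTE: the /status endpoint and its JSON shape are assumed for
# illustration; substitute the routes your deployment actually exposes.
import json
import urllib.request


def status_url(host: str, port: int, endpoint: str = "status") -> str:
    """Build the URL for a (hypothetical) status endpoint."""
    return f"http://{host}:{port}/{endpoint}"


def fetch_status(host: str = "127.0.0.1", port: int = 8000) -> dict:
    """GET the status endpoint and decode its JSON payload."""
    with urllib.request.urlopen(status_url(host, port), timeout=5) as resp:
        return json.load(resp)


print(status_url("127.0.0.1", 8000))  # → http://127.0.0.1:8000/status
```

Binding the server to `0.0.0.0` exposes it on all interfaces, so a monitor on another host would poll the machine's LAN address rather than `127.0.0.1`.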

Batch Video Processing

Processes offline media for reproducible benchmarking and demo generation.

$ python -m src.main --video path/to/input.mp4 --output result.mp4
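
For benchmarking runs over many clips, the offline command can be driven from a small wrapper. A sketch assuming a flat directory of `.mp4` inputs; the directory layout and `_result` output naming are illustrative choices, not OpenEyes conventions:

```python
# Sketch: batch-drive the offline video mode over a directory of clips.
# The clips/ and results/ layout and the *_result.mp4 naming are
# illustrative assumptions.
import subprocess
from pathlib import Path


def build_cmd(src: Path, dst: Path) -> list[str]:
    """Compose the offline-processing command for a single clip."""
    return ["python", "-m", "src.main", "--video", str(src), "--output", str(dst)]


def process_dir(in_dir: Path, out_dir: Path) -> None:
    """Process every .mp4 in in_dir, writing <name>_result.mp4 to out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for clip in sorted(in_dir.glob("*.mp4")):
        dst = out_dir / f"{clip.stem}_result.mp4"
        subprocess.run(build_cmd(clip, dst), check=True)


# Example invocation (e.g. process_dir(Path("clips"), Path("results")))
print(build_cmd(Path("clips/demo.mp4"), Path("results/demo_result.mp4")))
```

Sorting the glob results keeps run order deterministic, which helps when comparing benchmark outputs across runs.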