Rendezvous, Proximity Operations, and Docking (RPOD) require precise, reliable visual feedback across all lighting conditions. Infinity Avionics cameras combined with the BRAIN edge processor provide the high-confidence situational awareness needed for fully autonomous in-space operations, from final approach and docking through to in-space manufacturing and active debris removal.
Self-awareness imaging improves understanding of the host spacecraft and its immediate surroundings, while long-range space surveillance extends that awareness to resident space objects (RSOs) at greater distances.
Docking & Rendezvous
High frame-rate cameras with illumination enable precise relative navigation during final approach and docking. Real-time video streaming via Aquila supports both autonomous GNC algorithms and ground-in-the-loop operations. The Leo2MP's wide FoV and built-in LED illumination handle the extreme lighting transitions encountered during approach.
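The relative-navigation feedback described above can be sketched as a simple tracking filter: noisy camera range fixes are smoothed into range and closing-rate estimates that a GNC loop could consume. This is a generic alpha-beta (g-h) filter, not an Infinity Avionics algorithm; the frame interval, gains, and units below are illustrative assumptions.

```python
def alpha_beta_track(measurements, dt=0.1, alpha=0.3, beta=0.05):
    """Track relative range and closing rate from noisy camera range fixes.

    Minimal alpha-beta (g-h) filter; dt and the gains are illustrative only
    and would be tuned against the camera's actual frame rate and noise.
    """
    r_est = measurements[0]   # range estimate (m), seeded from first fix
    v_est = 0.0               # closing-rate estimate (m/s)
    out = []
    for z in measurements:
        # predict forward one frame with the current rate estimate
        r_pred = r_est + v_est * dt
        # correct both states with the new camera measurement
        residual = z - r_pred
        r_est = r_pred + alpha * residual
        v_est = v_est + (beta / dt) * residual
        out.append((r_est, v_est))
    return out
```

With constant-closing-rate data the filter converges to the true rate; in practice the gains trade smoothing against lag and would be chosen from the sensor's measured noise characteristics.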
In-Space Manufacturing & Robotics
Small, low-power cameras with LED illumination monitor robotic arm movements, material handling, and manufacturing processes in the microgravity environment, enabling the emerging in-space manufacturing economy. The Leo2MP is specifically designed for monitoring spacecraft structures, deployment activities, space manufacturing, and rover applications.
Debris Removal
Vision-based characterisation of tumbling or uncooperative targets is critical for active debris removal missions. Multi-camera configurations combined with BRAIN-powered sensor fusion enable robust pose estimation of non-cooperative objects, a key enabler for grappling and removal.
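One building block of such multi-camera fusion can be illustrated with inverse-variance weighting: independent per-camera estimates of the same quantity (e.g. range to the tumbling target) are combined into a single estimate whose variance is lower than any input. This is a minimal sketch with hypothetical values, not a description of BRAIN's actual fusion pipeline.

```python
def fuse_measurements(measurements):
    """Inverse-variance fusion of independent sensor estimates.

    measurements: list of (value, variance) pairs, e.g. one per camera.
    Returns (fused value, fused variance); lower-variance inputs dominate.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return fused, 1.0 / total
```

For example, fusing a coarse fix (10.0 m, variance 1.0) with a sharper one (10.4 m, variance 0.25) yields an estimate pulled toward the sharper camera, with variance 0.2.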
Landing Systems
Wide-angle cameras with on-board image processing support terrain-relative navigation, hazard detection, and precision landing for lunar or planetary landers. The compact form factor of Leo2MP and Aquila suits constrained lander architectures.
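As an illustration of on-board hazard detection, the sketch below flags cells of a small elevation grid whose local slope to any 4-neighbour exceeds a threshold. The grid, cell size, and slope limit are hypothetical; real terrain-relative navigation and hazard-detection pipelines are considerably more involved.

```python
import math

def hazard_map(heights, cell_size=1.0, max_slope=0.26):
    """Flag landing hazards on a small elevation grid.

    Marks a cell hazardous when the slope (rise over run, in radians via
    atan) to any 4-neighbour exceeds max_slope. Thresholds are illustrative.
    """
    rows, cols = len(heights), len(heights[0])
    hazards = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    slope = abs(heights[nr][nc] - heights[r][c]) / cell_size
                    if math.atan(slope) > max_slope:
                        hazards[r][c] = True
    return hazards
```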
Recommended Products
| Product | Specification | Use Cases |
|---|---|---|
| Aquila | FHD/HD 30 FPS capture and save<br>Video streaming for real-time situational awareness and improved autonomy<br>Multiple lens options to support different FoVs and target object distances<br>JPEG or RGB565 encoding | RPOD visual feedback<br>Robotic arm visual feedback<br>Combine multiple cameras with different FoVs for space debris detection and capture<br>In-space manufacturing feedback |
| Leo2MP | Multiple resolution modes up to 2 MP<br>110-degree FoV<br>LED illumination | RPOD visual feedback<br>Robotic arm visual feedback<br>In-space manufacturing feedback |
| Lynx4MP-70 | 4 MP sensor<br>9-degree FoV<br>Image capture and storage | Non-real-time situational awareness |
| Lynx4MP-10 | 4 MP sensor<br>58-degree FoV<br>Image capture and storage | Non-real-time situational awareness |
| Draco | Neuromorphic sensing<br>Multiple lens options<br>High dynamic range | Real-time situational awareness<br>RPOD feedback with enhanced dynamic range and lower data volume |
| BRAIN Edge Processor | 100 TOPS processing power<br>NVIDIA Jetson ecosystem<br>Available with add-on radiation shields<br>High-speed interfaces | Central controller for proximity operations<br>Sensor fusion<br>Combine with Aquila, Leo2MP, or Draco for real-time visual feedback processing and decision making<br>Combine with Lynx4MP for non-real-time image processing and decision making |
KEY CAPABILITIES
- Rendezvous, Proximity Operations, and Docking (RPOD)
- Active Debris Removal (ADR)
- In-space manufacturing and robotics
- Autonomous GNC and vision-based navigation
- Pose estimation of non-cooperative targets
- Lunar and planetary precision landing
- TRL9 products
BRAIN Edge Processor
BRAIN acts as the central nervous system for RPOD missions, fusing data from multiple Infinity Avionics cameras and external sensors (IMU, LiDAR, star tracker) to produce real-time, high-confidence relative navigation solutions for autonomous docking and proximity operations.
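The gyro-plus-camera part of such a fusion scheme can be illustrated with a single-axis complementary filter: high-rate IMU integration is continuously corrected by slower, drift-free camera attitude fixes. The sensor rates and blend factor below are illustrative assumptions, not BRAIN interfaces.

```python
def complementary_fuse(gyro_rates, cam_angles, dt=0.02, k=0.98):
    """Fuse a high-rate gyro with drift-free camera attitude fixes.

    Single-axis complementary filter sketch: each step propagates the
    angle with the gyro rate, then blends (1 - k) of the camera fix in.
    dt and k are illustrative, not tuned values.
    """
    angle = cam_angles[0]  # initialise from the first camera fix (rad)
    history = []
    for w, cam in zip(gyro_rates, cam_angles):
        # integrate the gyro, then pull the estimate toward the camera
        angle = k * (angle + w * dt) + (1.0 - k) * cam
        history.append(angle)
    return history
```

The high-pass side (k near 1) trusts the gyro over short horizons, while the low-pass side removes its long-term drift using the camera, which is the usual motivation for this structure.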