Abstract:
Event-based (neuromorphic) cameras depart from frame-based sensing by reporting asynchronous per-pixel brightness changes. This produces sparse, low-latency data streams with very high temporal resolution but demands new processing paradigms. In this survey, we systematically examine neuromorphic vision along three main dimensions. First, we highlight the technological evolution and distinctive hardware features of neuromorphic cameras from their inception to recent models. Second, we review image-processing algorithms developed explicitly for event-based data, covering work on feature detection, tracking, optical flow, depth and pose estimation, and object recognition. These techniques, drawn from classical computer vision and modern data-driven approaches, illustrate the breadth of applications enabled by event-based cameras. Third, we present practical case studies demonstrating how event cameras have been successfully used across various scenarios. Distinct from prior reviews, our survey provides a broader overview by integrating hardware developments, algorithmic advances, and real-world applications into a structured, cohesive framework. It is aimed at researchers entering the field and at those seeking a balanced synthesis of foundational and recent advances, without overspecializing in niche areas. Finally, we analyze the challenges limiting widespread adoption, identify research gaps relative to standard imaging techniques, and outline promising directions for future developments.
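As a minimal, hypothetical sketch of the event data described above (not taken from the survey): each event can be represented as a tuple of pixel coordinates, a timestamp, and a polarity, and summing the polarities over a time window yields a 2D "event frame" that frame-based vision algorithms can consume. The array layout and the accumulate helper below are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Illustrative event stream: (x, y, t, p), where (x, y) is the pixel,
# t a timestamp (e.g., microseconds), and p the polarity of the
# brightness change (+1 increase, -1 decrease).
events = np.array(
    [(12, 40, 1_000, +1), (12, 41, 1_250, +1), (80, 5, 1_900, -1)],
    dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "i1")],
)

def accumulate(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over a time window to form a 2D
    event frame (one common bridge to frame-based processing)."""
    frame = np.zeros((height, width), dtype=np.int32)
    window = events[(events["t"] >= t_start) & (events["t"] < t_end)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

frame = accumulate(events, width=128, height=128, t_start=0, t_end=2_000)
print(frame.sum())  # net brightness-change count in the window
```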