MC-VEO: A Visual-Event Odometry with Accurate 6-DoF Motion Compensation

Jiafeng Huang1,  Lin Zhang1,  Tianjun Zhang1,  and Shengjie Zhao1

1School of Software Engineering, Tongji University, Shanghai, China


Introduction

This is the website for our paper MC-VEO: A Visual-Event Odometry with Accurate 6-DoF Motion Compensation.

Nowadays, robust and accurate odometry, as a foundational technology of navigation systems, is gaining significance in autonomous driving and robotic navigation. Although odometry, especially visual odometry (VO), has made substantial progress, its application scenarios are still limited by the frame-rate constraints of conventional cameras and by low robustness to motion blur. The event camera, a recently developed bionic sensor, seeks to tackle these challenges and offers new possibilities for VO solutions in extreme environments. However, integrating event cameras into VO faces challenges such as the RGB-event modality gap and the need for efficient event processing. To address these research gaps, we propose a novel visual-event odometry, namely MC-VEO (Motion-Compensated Visual-Event Odometry).


Overall Framework

The overall pipeline of our proposed MC-VEO is shown in the following figure. The events obtained from the event camera are divided into groups and, after motion compensation, form sharp event frames. The images obtained from the color camera go through keyframe judgment and candidate-point selection to predict brightness increments. The event generative model is used to correlate the measurements from events and images. The front-end estimates camera motion by minimizing the brightness-increment error between the two kinds of measurements. At the back-end, photometric bundle adjustment refines the camera poses and velocities as well as the depths of the sparse candidate points to sustain the VO system's performance.

Figure 1. The overall pipeline of the MC-VEO.
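The motion-compensation step described above can be sketched as follows. This is a minimal illustration rather than the paper's actual implementation: it assumes a pinhole camera model, a known depth map at the reference time, a first-order (small-angle) rotation update, and constant velocity over the event window; the function and variable names are our own.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector (for the rotation update)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def motion_compensate(events, depth, K, omega, v, t_ref, shape):
    """Warp each event to the reference time t_ref under a constant-velocity
    6-DoF motion model and accumulate signed polarities into a compensated
    event frame.

    events: (N, 4) array of (x, y, t, polarity)
    depth:  (H, W) depth map, assumed valid at the event pixels
    K:      3x3 pinhole intrinsics
    omega:  angular velocity (rad/s); v: linear velocity (m/s)
    """
    K_inv = np.linalg.inv(K)
    frame = np.zeros(shape, dtype=np.float64)
    for x, y, t, p in events:
        dt = t_ref - t
        # Back-project the event pixel using the local depth.
        Z = depth[int(y), int(x)]
        P = Z * (K_inv @ np.array([x, y, 1.0]))
        # First-order rotation update plus translation over dt
        # (sign conventions depend on the chosen pose parameterization).
        R = np.eye(3) + skew(omega) * dt
        P_ref = R @ P + v * dt
        # Re-project into the reference view and accumulate the polarity.
        u = K @ (P_ref / P_ref[2])
        xi, yi = int(round(u[0])), int(round(u[1]))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            frame[yi, xi] += p  # correctly warped events sharpen edges
    return frame
```

With an accurate motion estimate, events triggered by the same scene edge at different times warp to the same reference pixels, so the accumulated frame is sharp; a wrong estimate leaves it blurred, which is what ties compensation quality to the motion parameters.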

Performances

- Qualitative Results

Figure 2. Qualitative comparison on four test sequences. Each row depicts the pseudo-colored inverse depth maps generated by the corresponding methods (red indicates near, blue far). It is worth mentioning that, since the timestamps of the event frames formed by different methods are not perfectly aligned, we show results from relatively close viewpoints.

- Quantitative Results

Table 1. Absolute Translation Errors (cm) of MC-VEO and compared VOs.

Table 2. Rotation Errors (deg) of MC-VEO and compared VOs.

Considering all metrics comprehensively, the performance of MC-VEO is the best among all compared schemes.


Source Codes

MC-VEO Code


Demo Videos

The following demo video demonstrates the performance of our MC-VEO on several representative sequences.