Online Correction of Camera Poses for the Surround-view System: A Sparse Direct Approach

Tianjun Zhang, Hao Deng, Lin Zhang and Shengjie Zhao, School of Software Engineering, Tongji University, China

Xiao Liu, College of Information and Computer Sciences, University of Massachusetts Amherst, USA

Yicong Zhou, Department of Computer and Information Science, University of Macau, Macau, China


Introduction

This is the website of our project "Online Correction of Camera Poses for the Surround-view System: A Sparse Direct Approach".

The surround-view system is an indispensable component of a modern advanced driver assistance system. It helps the driver monitor the road conditions around the vehicle and make better decisions. A typical surround-view system consists of four to six fisheye cameras. Once the intrinsics and extrinsics of the system have been calibrated accurately, a top-down surround-view can be generated from the raw fisheye images. However, the poses of these cameras sometimes change due to bumps, collisions, or changes of tire pressure while driving. If the pose representations are not updated accordingly, observable geometric misalignment will appear in the generated surround-view image. At present, when such misalignment appears, drivers have to take the vehicle to an auto service store for re-calibration. How to correct the camera poses of the system online, without re-calibration, is still an open issue. To settle this problem, we introduce the sparse direct framework into the camera pose calibration field and propose a novel optimization scheme with a cascade structure. By minimizing the photometric errors in overlapping regions of adjacent bird's-eye-view images, our method can correct the misalignment of the surround-view image online and obtain high-precision camera poses without re-calibration. The method comprises two levels of optimization. The first level is faster, but it suffers from a loss of degrees of freedom (DOF). When the first-level optimization does not perform satisfactorily, the second-level optimization compensates for the lost DOF. Experiments show that our method can effectively eliminate the misalignment caused by moderate camera pose changes in the surround-view system. More importantly, our online camera pose correction scheme relies entirely on minimizing photometric errors, without the need for additional physical equipment or calibration sites.
Therefore, it can be easily integrated into pipelines of existing surround-view systems to improve their robustness and stability.
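As a rough illustration of the objective described above, the following minimal C++ sketch computes the sum of squared photometric residuals between two adjacent bird's-eye-view images over their overlap region. It is a simplified sketch under assumed data layouts (grayscale images flattened into arrays, with the overlap given as a binary mask); the function name `photometricError` and the mask representation are illustrative and not part of the released code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sum of squared photometric residuals between two bird's-eye-view images,
// accumulated only over pixels inside the overlap mask. Images are grayscale,
// flattened row-major into equally sized arrays (an assumed layout).
double photometricError(const std::vector<double>& viewA,
                        const std::vector<double>& viewB,
                        const std::vector<std::uint8_t>& overlapMask) {
    double err = 0.0;
    for (std::size_t i = 0; i < overlapMask.size(); ++i) {
        if (overlapMask[i]) {
            const double r = viewA[i] - viewB[i];  // photometric residual
            err += r * r;
        }
    }
    return err;
}
```

In the actual pipeline, this error would be re-evaluated after warping the fisheye images with candidate camera poses, and the poses would be iteratively adjusted (e.g., by a non-linear least-squares solver such as g2o) until the error is minimized.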


Main idea of the proposed approach

  1. The optimization objective of the proposed scheme is to minimize the photometric errors of the common-view areas between adjacent bird's-eye-views. To make it work, the user only needs to park the vehicle on a normal flat surface with relatively rich texture. Beyond this requirement, no additional physical tools or special calibration sites are needed. The proposed scheme is therefore easy to use, places few demands on the operating site, and is suitable for ordinary non-professional end-users.
  2. Instead of re-calibrating from scratch, our scheme makes full use of the initial camera poses of the previously calibrated surround-view system, so the optimization converges quickly. This strategy also helps guarantee the high accuracy of the correction results.
  3. Our scheme follows a sparse direct framework, meaning it does not depend on visual feature points and thus imposes fewer requirements on its working conditions. Within this framework, a novel pixel selection strategy is proposed, which effectively eliminates noise and unmatched objects between images captured by adjacent cameras. Photometric errors are then computed only at the selected positions. This pixel selection strategy improves both the speed and the robustness of the whole pipeline.
  4. The proposed scheme has a cascade structure comprising two different models, the "ground model" and the "ground-camera model". The ground model is simpler and more efficient than the ground-camera model, but it suffers from a loss of degrees of freedom (DOF), which the ground-camera model does not. In actual use, our scheme first tries the ground model; if the result is unsatisfactory, it switches to the ground-camera model, which is theoretically more sophisticated and effective.
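The pixel selection strategy in point 3 can be sketched roughly as follows. This is a hedged illustration rather than the authors' implementation: it keeps only pixels with sufficient image gradient (rich texture) and discards pixels whose photometric residual between the two adjacent bird's-eye-views is large (likely noise or objects visible to only one camera). The function name `selectPixels`, both thresholds, and the flat-array image layout are assumptions made for illustration.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pixel { int x, y; };

// Select candidate pixels on a W x H grayscale image pair (row-major arrays):
// keep a pixel only if its gradient magnitude in view A is large enough
// (enough texture to constrain the optimization) and the photometric residual
// between the two views is below a rejection threshold (otherwise the pixel
// likely belongs to noise or an unmatched object).
std::vector<Pixel> selectPixels(const std::vector<double>& imgA,
                                const std::vector<double>& imgB,
                                int W, int H,
                                double gradThresh, double residThresh) {
    std::vector<Pixel> selected;
    for (int y = 1; y + 1 < H; ++y) {
        for (int x = 1; x + 1 < W; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * W + x;
            // Central-difference gradient in view A.
            const double gx = imgA[i + 1] - imgA[i - 1];
            const double gy = imgA[i + W] - imgA[i - W];
            if (std::sqrt(gx * gx + gy * gy) < gradThresh)
                continue;  // low texture: uninformative, skip
            if (std::fabs(imgA[i] - imgB[i]) > residThresh)
                continue;  // large residual: likely an unmatched object, skip
            selected.push_back({x, y});
        }
    }
    return selected;
}
```

Computing photometric errors only at the selected positions keeps the optimization sparse, which is what makes the direct approach fast enough for online use.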

Source Codes

1. Online_Correction.zip

This is the code of the proposed online camera pose correction approach for the surround-view system. To run the code, read the following notes:

1) Prerequisites
We have tested the code on Ubuntu 14.04, but it should be easy to compile on other platforms.

C++11 or C++0x Compiler
We use the new thread and chrono functionalities of C++11.

OpenCV
We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. We use 3.4.1, but it should also work with other versions above 3.0.

Eigen3
Download and install instructions can be found at: http://eigen.tuxfamily.org.

g2o
We use the g2o library to perform non-linear optimizations. More info can be found at http://openslam.org/g2o.html.

Sophus
Sophus is a Lie algebra library. More info can be found at https://strasdat.github.io/Sophus/.

2) Building the project
We use CMake to build the project on Ubuntu 14.04:

cd Online_Correction/
mkdir build
cd build
cmake ..
make
cd ..
There is no "make install" rule yet, so if you want to use the library in another project, copy the headers and the library file to a system path by hand.

3) Run the project
After building the project, the executables are placed in ./bin/ .

We have prepared one test sample set. For the version without pixel selection:
./bin/pose_adjustment_v2

For the version which follows a sparse direct framework:
./bin/pose_adjustment_v3

After an image appears, press Enter to see the whole optimization process.


Sample Results

For each pair in the following table, the left image is generated with disturbed extrinsics and the right one is synthesized with corrected camera poses obtained by our approach. It can be observed that in all the examined cases, applying our camera pose optimization approach greatly reduces the geometric misalignment between adjacent bird's-eye-views, corroborating the efficacy of the proposed approach.


Demo Videos

The following are two demo videos demonstrating the capability of the proposed camera pose correction approach for the surround-view system.


Last update: Feb. 7, 2020