In real-world scenarios, numerous approaches have been proposed to achieve accurate and robust perception. In particular, LiDAR-camera fusion has been actively studied, and calibration is essential to accurately leverage information from both sensors. Consequently, many studies have focused on estimating extrinsic parameters in a targetless manner. However, these approaches often require scenes to contain geometric features common to both modalities, a condition that is difficult to satisfy in complex environments. Deep learning-based methods have demonstrated high accuracy and robustness, but they typically rely on ground-truth data and degrade under domain shift. To address these limitations, we propose a targetless calibration method that constructs geometry-aware superpixels from 2D priors. Our approach formulates an objective function based on normal variance, normal consistency, and depth discontinuity, and optimizes the extrinsic parameters iteratively. Experimental results on two datasets of different types demonstrate the accuracy and robustness of the proposed method.
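As a rough illustration of the kind of objective described above (a minimal sketch, not the paper's actual formulation), the snippet below scores one extrinsic hypothesis by combining per-superpixel normal variance, normal consistency, and depth discontinuity terms. All names, weights, and the grouping of LiDAR points by superpixel are assumptions introduced here for illustration.

```python
import numpy as np

def superpixel_objective(normals_per_sp, depths_per_sp,
                         w_var=1.0, w_cons=1.0, w_disc=1.0):
    """Hypothetical cost for one extrinsic guess; lower is better.

    normals_per_sp: list of (N_i, 3) unit normals of LiDAR points
                    projecting into each superpixel.
    depths_per_sp:  list of (N_i,) depths of those same points.
    """
    cost = 0.0
    for normals, depths in zip(normals_per_sp, depths_per_sp):
        if len(normals) < 2:
            continue
        # Normal variance: normals within a geometry-aware superpixel
        # should agree if the projection is well aligned.
        mean_n = normals.mean(axis=0)
        mean_n /= np.linalg.norm(mean_n) + 1e-12
        var_term = np.mean(np.sum((normals - mean_n) ** 2, axis=1))
        # Normal consistency: penalize deviation from the mean direction.
        cons_term = 1.0 - np.mean(normals @ mean_n)
        # Depth discontinuity: a large depth spread inside one superpixel
        # suggests the projection straddles a misaligned boundary.
        disc_term = float(depths.max() - depths.min())
        cost += w_var * var_term + w_cons * cons_term + w_disc * disc_term
    return cost
```

In an iterative scheme of this shape, the optimizer would re-project the LiDAR points under each candidate extrinsic, regroup them by superpixel, and keep the hypothesis with the lowest cost.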