High-order deep infomax-guided deformable transformer network for efficient lane detection

Rong Gao, Siqi Hu, Lingyu Yan, Li Zhang, Hang Ruan, Yonghong Yu, Zhiwei Ye

Research output: Contribution to journal › Article › peer-review


Abstract

With the development of deep learning, lane detection models based on deep convolutional neural networks have been widely used in autonomous driving systems and advanced driver assistance systems. However, in harsh and complex environments, the performance of detection models degrades greatly because long-range lane points are difficult to merge with global context and important higher-order information is excluded. To address these issues, we propose a new learning model that better captures lane features, called the Deformable Transformer with high-order Deep Infomax (DTHDI) model. Specifically, we propose a Deformable Transformer neural network model based on segmentation techniques for high-accuracy detection, in which local and global contextual information is seamlessly fused and more information about the diversity of lane line shape features is retained, resulting in the extraction of rich lane features. Meanwhile, we introduce a mutual information maximization approach that mines higher-order correlations among the global shape, local shape, and position of lane lines to learn more discriminative lane representations. In addition, we employ a row classification approach to further reduce the computational complexity for robust lane line detection. Our model is evaluated on two popular lane detection datasets. The empirical results show that the proposed DTHDI model outperforms state-of-the-art methods.
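To make two of the ideas named in the abstract concrete, the sketch below illustrates (a) a row-wise classification head, where each predefined row anchor is classified into a horizontal grid cell rather than segmented pixel-wise, and (b) a Deep InfoMax-style Jensen-Shannon mutual-information objective that ties a local feature to a global summary feature. This is a minimal illustrative sketch under our own assumptions: the module names, feature dimensions, grid layout, and loss form below are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only; dimensions and module names are assumptions,
# not the architecture described in the DTHDI paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RowClassificationHead(nn.Module):
    """Row-wise lane classification (hypothetical layout).

    For each of `num_rows` predefined row anchors and each of `num_lanes`
    lanes, predict which of `num_cols` horizontal grid cells the lane
    passes through; the extra class means "no lane in this row".
    """

    def __init__(self, feat_dim=512, num_rows=56, num_cols=100, num_lanes=4):
        super().__init__()
        self.num_rows, self.num_cols, self.num_lanes = num_rows, num_cols, num_lanes
        self.fc = nn.Linear(feat_dim, num_rows * (num_cols + 1) * num_lanes)

    def forward(self, global_feat):                      # (B, feat_dim)
        logits = self.fc(global_feat)
        return logits.view(-1, self.num_cols + 1, self.num_rows, self.num_lanes)


class InfomaxDiscriminator(nn.Module):
    """Scores how well a local feature matches a global summary vector."""

    def __init__(self, local_dim=256, global_dim=512, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(local_dim + global_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, local_feat, global_feat):
        return self.net(torch.cat([local_feat, global_feat], dim=-1))


def jsd_mi_loss(disc, local_feat, global_feat):
    """Jensen-Shannon mutual-information lower bound, Deep InfoMax style.

    Positive pairs come from the same sample; negatives pair each local
    feature with a global feature from another sample in the batch.
    """
    pos = disc(local_feat, global_feat)
    neg = disc(local_feat, global_feat[torch.randperm(global_feat.size(0))])
    # Maximizing the bound corresponds to minimizing this loss.
    return F.softplus(-pos).mean() + F.softplus(neg).mean()


if __name__ == "__main__":
    B = 8
    global_feat = torch.randn(B, 512)   # e.g. pooled transformer output (assumed)
    local_feat = torch.randn(B, 256)    # e.g. a local lane-shape descriptor (assumed)
    head, disc = RowClassificationHead(), InfomaxDiscriminator()
    row_logits = head(global_feat)      # (B, 101, 56, 4)
    mi_loss = jsd_mi_loss(disc, local_feat, global_feat)
    print(row_logits.shape, mi_loss.item())
```

The row-classification formulation reduces per-image prediction to a set of small classification problems over row anchors, which is the usual reason such heads are cheaper than dense segmentation; the JSD estimator is one common way to maximize mutual information without computing it exactly.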
Original language: English
Journal: Signal, Image and Video Processing
Early online date: 4 Apr 2023
DOIs
Publication status: E-pub ahead of print - 4 Apr 2023
