Video-based Lane Detection Using a Fast Vanishing Point Estimation Method


Benligiray B., Topal C., Akinlar C.

14th IEEE International Symposium on Multimedia (ISM), Irvine, United States, 10 - 12 December 2012, pp.348-351

  • Publication Type: Conference Paper / Full Text Paper
  • DOI: 10.1109/ism.2012.70
  • City of Publication: Irvine
  • Country of Publication: United States
  • Page Numbers: pp.348-351
  • Keywords: intelligent vehicle systems, lane detection, lane tracking, image processing
  • Anadolu University Affiliated: Yes

Abstract

Lane detection algorithms constitute a basis for intelligent vehicle systems such as lane tracking and involuntary lane departure detection. In this paper, we propose a simple, video-based lane detection algorithm that uses a fast vanishing point estimation method. The first step of the algorithm is to extract and validate line segments from the image with a recently proposed line detection algorithm. In the next step, an angle-based elimination of line segments is performed according to the perspective characteristics of lane markings. This basic operation removes many line segments that belong to irrelevant details in the scene and greatly reduces the number of features to be processed afterwards. The remaining line segments are extrapolated and superimposed to detect the image location where the majority of the linear edge features converge. The location found by this efficient operation is assumed to be the vanishing point. Subsequently, an orientation-based removal is performed by eliminating the line segments whose extensions do not pass through the vanishing point. The final step is clustering the remaining line segments such that each cluster represents a lane marking or a boundary of the road (e.g., sidewalks, barriers, or shoulders). The properties of the line segments that constitute each cluster are fused to represent the cluster with a single line. The two clusters nearest to the vehicle are chosen as the lines that bound the lane being driven in. The proposed algorithm runs in an average of 12 milliseconds per 640x480 frame on a 2.20 GHz Intel CPU. This result shows that the algorithm can be deployed on minimal hardware and still provide real-time performance.
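
The central step described in the abstract is a voting operation: extrapolated line segments are superimposed and the image location where most of them converge is taken as the vanishing point. The Python sketch below is a minimal, illustrative reconstruction of that idea, not the paper's implementation; the `estimate_vanishing_point` helper, the angle thresholds, and the sampling density are assumptions for the example, and the line segment extraction, orientation-based filtering against the found vanishing point, and clustering stages are omitted.

```python
# Minimal sketch: angle-based pre-filtering of line segments followed by
# accumulator-based vanishing point voting (assumed parameters, not from the paper).
import numpy as np


def estimate_vanishing_point(segments, width, height,
                             min_angle_deg=15.0, max_angle_deg=75.0):
    """Accumulate the extensions of line segments and return the (x, y)
    image location where most of them converge."""
    accumulator = np.zeros((height, width), dtype=np.uint16)

    for (x1, y1), (x2, y2) in segments:
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            continue

        # Angle-based elimination: keep only segments whose slope is
        # plausible for lane markings seen in perspective (assumed range).
        angle = abs(np.degrees(np.arctan2(dy, dx)))
        if not (min_angle_deg <= angle <= max_angle_deg or
                min_angle_deg <= 180.0 - angle <= max_angle_deg):
            continue

        # Extrapolate the segment well beyond its endpoints and rasterize
        # the resulting line into the accumulator (one vote per pixel).
        t = np.linspace(-4.0, 5.0, num=6 * max(width, height))
        xs = np.round(x1 + t * dx).astype(int)
        ys = np.round(y1 + t * dy).astype(int)
        inside = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
        pixels = np.unique(np.stack([ys[inside], xs[inside]], axis=1), axis=0)
        accumulator[pixels[:, 0], pixels[:, 1]] += 1

    # The cell with the most votes is assumed to be the vanishing point.
    vp_y, vp_x = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    return int(vp_x), int(vp_y)


if __name__ == "__main__":
    # Two synthetic "lane boundary" segments whose extensions meet at (320, 200).
    segments = [((100, 460), (210, 330)),
                ((540, 460), (430, 330))]
    print(estimate_vanishing_point(segments, width=640, height=480))
```

In the toy example, the extensions of the two synthetic segments intersect at (320, 200), so that accumulator cell receives the highest vote count and is returned as the vanishing point. A real pipeline would feed in the validated line segments from the detection step and then discard the segments whose extensions miss the estimated point before clustering.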