Lane Detection

Before autonomous public transport and personal vehicles can operate reliably, many challenges remain to be tackled. Here we look at the most fundamental one on the way from a start point to an end point autonomously: detecting the lane the vehicle needs to be in.

Demonstration and Discussion

The aim of this task is to segment the current driving lane. We employ a U-Net architecture, whose encoder-decoder structure outputs a segmented mask for a given input image. After training the model locally, we find that it predicts very well on unseen images. The mean Intersection over Union (IoU) on 6.7k test images is approximately 92%, which suggests that the model performs sufficiently well on most of the roads in our dataset. This is also reflected in the above video.
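The full training code lives in the linked repository; as an illustration, mean IoU over a batch of binary lane masks can be computed as below (the function name, threshold, and epsilon are our own choices, not taken from the repository):

```python
import numpy as np

def mean_iou(preds, masks, threshold=0.5, eps=1e-7):
    """Mean Intersection over Union across a batch of binary lane masks.

    preds, masks: arrays of shape (N, H, W) with values in [0, 1].
    Predictions are binarized at `threshold`; `eps` guards against
    empty unions (no lane pixels in either mask).
    """
    preds = np.asarray(preds) >= threshold
    masks = np.asarray(masks).astype(bool)
    intersection = np.logical_and(preds, masks).sum(axis=(1, 2))
    union = np.logical_or(preds, masks).sum(axis=(1, 2))
    return float(np.mean((intersection + eps) / (union + eps)))
```

A perfect prediction scores 1.0, and a mask overlapping the ground truth on 4 of 12 pixels scores roughly 0.33; averaging this per-image score over the 6.7k test images yields the reported mean IoU.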

Challenges and Future Work

Analyzing this particularly challenging video, we see multiple issues in predicting the current lane. The U-Net model is only able to predict accurately over short distances when the road contains extreme bends, and it struggles to make continuous predictions when the road switches between shade and sunlight. When the sun causes flashes of light, the onboard camera takes a second to adjust, overexposing the footage.

We can make improvements in the future to tackle these issues. To name a few, we can augment the dataset with incorrectly exposed footage for robustness, and we can incorporate a temporal block into the architecture to predict changes in road curvature from its recent history.
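The exposure augmentation could be sketched as a random gamma shift applied to training frames; this is a minimal illustration of the idea, not code from the project (the function name and gamma range are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_exposure(image, low=0.4, high=2.5):
    """Simulate mis-exposed camera frames via a random gamma shift.

    image: float array in [0, 1], shape (H, W, C).
    Gamma < 1 brightens the frame (mimicking overexposure from sun
    flare); gamma > 1 darkens it (deep shade). The same transform
    would be applied to the image only, leaving the mask untouched.
    """
    gamma = rng.uniform(low, high)
    return np.clip(image ** gamma, 0.0, 1.0)
```

Mixing such frames into the training set would expose the model to the lighting transitions that currently break its predictions.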

Technologies: Python, Jupyter Notebooks, TensorFlow, OpenCV, NumPy

GitHub Code Repository: Link