An interactive workshop and tutorial to discuss the role of image quality, image enhancement, and machine-learning-based perception for Assisted and Automated Driving
One of the main challenges in achieving safe and reliable assisted and automated driving (AAD) functions is designing them to cope with the unavoidable measurement uncertainty and degradation of perception sensor data in dynamic, ever-changing, and noisy driving scenarios. One of the most widely used sensors is the camera, but cameras, like our eyes, are affected by numerous noise factors, such as environmental luminosity, adverse weather, poorly illuminated areas, and obstructions. In the machine learning and computer vision community there has been a significant amount of research on how to de-noise and enhance sensor data quality in general. However, image quality metrics have mainly been created to evaluate a ‘better perceived’ quality for human consumption. In this context, the tutorial will discuss and demonstrate the application of traditional image quality metrics, and then move on to downstream machine learning perception tasks such as detection and segmentation. Is it possible to design and optimise enhancement algorithms and deep neural networks for these perception tasks? How can one measure quality for machine learning? The tutorial will present the results of this novel machine learning paradigm, opening the discussion for future directions in the research and deployment of AI-based data enhancement and perception.
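To make the notion of a traditional, human-oriented image quality metric concrete, the sketch below computes PSNR (peak signal-to-noise ratio), one of the simplest full-reference metrics, between a clean frame and a noise-degraded copy. This is an illustrative assumption of ours, not tutorial material: the synthetic image, noise level, and helper function `psnr` are invented for demonstration only.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and a degraded copy."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(data_range ** 2 / mse)

# Simulate a degraded camera frame: additive Gaussian noise on a synthetic image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8).astype(np.float64)
noisy = np.clip(clean + rng.normal(0.0, 10.0, clean.shape), 0, 255)

print(f"PSNR of noisy frame: {psnr(clean, noisy):.1f} dB")
```

Note that a higher PSNR only means the pixels are numerically closer to the reference; as the tutorial argues, this need not translate into better performance on downstream tasks such as detection or segmentation.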