STEMMER IMAGING News

Formula Student: autonomous driving using 3D imaging

Universities from all over the world participate in the Formula Student Driverless, a challenging engineering competition to develop the best driverless racing car. The municHMotorsport team relies on machine vision from STEMMER IMAGING to identify and evaluate track markings.

There is a hint of Formula 1 in the air when watching the Formula Student racing team of the Munich University of Applied Sciences diligently working together to develop and optimise their driverless racing car. Since 2005, students from various study programmes, including automotive, mechanical and business engineering as well as computer science, design, business administration and many more, have been preparing for their future professional lives in a fun, hands-on challenge.

“There are currently around 120 students involved, and many of us are investing a great deal of time and effort in this project”, explains Timo Socher, a computer science student who has been part of the project for three years. In his role as CTO Driverless for the current season, he is responsible for all technical aspects of the driverless racing car that municHMotorsport will enter in 2020. By the end of the project, which is organised completely independently, a six-figure sum will have been invested in the racer, which navigates a race track autonomously at speeds of up to 65 km/h.

Unknown circuit

Lap times and top speed are not the only criteria in the battle for the winner's podium. Nevertheless, the newly developed cars have to prove themselves on the race track in three dynamic disciplines. The first evaluates acceleration on a straight 75-metre track, from a standing start to a complete stop. In the second, the so-called skidpad, the cars drive a figure-of-eight course whose layout is known in advance. The top discipline is the track drive: here, the cars have to complete ten laps on an unknown circuit of up to 500 metres in length.

The course for all three disciplines is marked by blue and yellow traffic cones placed no more than 5 metres apart along the left and right boundary lines. "The autonomous vehicles follow these markings using the latest hardware and software technologies. They control the acceleration, braking and steering movements of the racing cars to optimise the entire system for the best possible driving performance," explains Timo Socher.

"Machine vision is one of the key components in our system and plays a decisive role in detecting the cones. As a sensor, it provides the subsequent evaluation systems with the basic data for the vehicle’s reactions. Hence, the quality of the environment and cone detection forms the basis for all other vehicle modules and is essential for the entire system’s stability and performance."
Timo Socher, CTO Driverless, municHMotorsport

Demanding object recognition

In their current racing car, the Munich students have integrated two vision systems that form the basis for track detection. One is attached to the safety bar and detects cones at greater distances, in a range of five to 20 metres. For the upcoming season, this range is to be extended to up to 30 metres in order to have more time for the required calculations and to be able to react even more proactively.

The second system consists of two Intel RealSense cameras mounted under the front nose of the racing car, each with an 80-degree field of view and overlapping coverage. They record the images used to evaluate the boundary markers at close range, between 1.5 and 8 metres.
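
The article does not go into how the cameras are read out, but as a rough illustration the snippet below sketches how synchronised colour and depth frames could be grabbed from one such RealSense camera using Intel's pyrealsense2 library; the resolution and frame rate shown are placeholders, not the team's actual configuration.

    import pyrealsense2 as rs

    # Minimal sketch: open one RealSense camera and grab a synchronised pair of
    # colour and depth frames. Resolution and frame rate are placeholder values,
    # not the settings actually used by municHMotorsport.
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 60)
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)
    pipeline.start(config)

    try:
        frames = pipeline.wait_for_frames()      # blocks until a frame set arrives
        color_frame = frames.get_color_frame()   # colour image for cone detection
        depth_frame = frames.get_depth_frame()   # depth map for range estimation
        # Depth at the image centre, in metres
        centre_distance_m = depth_frame.get_distance(424, 240)
    finally:
        pipeline.stop()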

The image data is transferred to the processing unit at the rear of the racer, a Jetson Xavier embedded computer, where it is processed together with other sensor data. The system relies on techniques such as sensor data fusion and visual odometry, which calculates the vehicle’s position and orientation by analysing successive camera images. Visual odometry is often used in industry as the basis for robot positioning, further proof of the close connection between the municHMotorsport project and real industrial applications.
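
The article does not describe the team's visual odometry in detail; purely as an illustration of the principle, the sketch below uses OpenCV to match features between two consecutive frames and recover the camera's relative rotation and translation from the essential matrix. The intrinsic matrix K contains placeholder values, not a real calibration.

    import cv2
    import numpy as np

    # Placeholder camera intrinsics (focal lengths and principal point in pixels);
    # a real system would use calibrated values.
    K = np.array([[615.0,   0.0, 424.0],
                  [  0.0, 615.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def relative_pose(prev_gray, curr_gray):
        """Estimate the camera's rotation R and translation direction t
        between two consecutive greyscale frames."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Essential matrix with RANSAC to reject bad matches,
        # then decompose it into rotation and translation.
        E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                    prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        return R, t   # t is known only up to scale from a single camera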

A racing car should, of course, be as fast as possible, so the images from the camera systems must be processed quickly and efficiently. The task of the machine vision system is to locate and classify the coloured cones in the images and to estimate their position relative to the camera. Object recognition is hampered by a number of unpredictable factors such as weather and lighting conditions, as well as the actual state of the race track, which can deviate from the ideal due to potholes, unevenness or the elevation profile. The background may also vary from race to race because of spectator stands and other objects.
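
To make the position estimate concrete: once a cone has been found as a bounding box in the image, its position relative to the camera can be obtained by back-projecting the box centre with the pinhole camera model and the depth measured at that pixel. The sketch below is illustrative only; the intrinsics are placeholder values, not the team's calibration.

    import numpy as np

    # Placeholder intrinsics of the close-range camera (pixels).
    fx, fy = 615.0, 615.0   # focal lengths
    cx, cy = 424.0, 240.0   # principal point

    def cone_position_from_detection(bbox, depth_m):
        """Back-project the centre of a detected cone's bounding box into
        3D camera coordinates using the pinhole model.

        bbox    : (x_min, y_min, x_max, y_max) in pixels
        depth_m : depth at the box centre in metres (e.g. from a depth stream)
        Returns (X, Y, Z) in metres in the camera frame.
        """
        u = 0.5 * (bbox[0] + bbox[2])
        v = 0.5 * (bbox[1] + bbox[3])
        X = (u - cx) * depth_m / fx
        Y = (v - cy) * depth_m / fy
        return np.array([X, Y, depth_m])

    # Example: a cone detected around pixel (600, 300) at 4.2 m depth
    print(cone_position_from_detection((580, 280, 620, 320), 4.2))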

Best results with Deep Learning

In order to be optimally prepared for all the eventualities that can arise in object recognition, the team uses state-of-the-art techniques, explains Socher: "There is a large number of algorithms and neural networks for object recognition and classification in colour images. In addition, there are, of course, more conventional imaging methods, such as edge detection using special colour filters. However, we chose a deep learning approach, which proved to be particularly robust to different weather and environmental conditions and therefore promised the best results for our purpose."
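
The article does not name the network architecture or framework the team trained, so the sketch below only shows the general shape of such a deep-learning detection step, with a generic pretrained detector from torchvision standing in for the team's own model; in the real car the classes would be the team's blue and yellow cone labels.

    import torch
    import torchvision

    # Generic pretrained detector as a stand-in for the team's trained network.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_cones(image_tensor, score_threshold=0.5):
        """Run the detector on a single [3, H, W] float image in [0, 1] and
        return bounding boxes and class labels above the confidence threshold."""
        with torch.no_grad():
            prediction = model([image_tensor])[0]
        keep = prediction["scores"] > score_threshold
        return prediction["boxes"][keep], prediction["labels"][keep]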

Timo Socher's colleagues captured the required images of traffic cones in front of different backgrounds and environments under varying conditions, and further enriched the training data sets using data augmentation techniques. The main idea behind this approach: the more varied the training data, the better the accuracy of the trained models.

With data augmentation, existing images are easily modified in software, for example by adding random pixel values, softening them, rotating them slightly or changing contrasts, in order to generate a larger number of training images. “In addition, we exchanged training data with other teams in the competition and used about 3,000 images from the so-called KITTI data set, which provides images of roads without traffic cones”, Timo Socher explains.
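
A minimal sketch of such an augmentation pipeline, assuming torchvision is used (the article does not say which library the team chose) and with placeholder parameters:

    import torch
    from torchvision import transforms

    def add_pixel_noise(img, sigma=0.02):
        """Add small random values to every pixel of a [0, 1] float tensor."""
        return (img + sigma * torch.randn_like(img)).clamp(0.0, 1.0)

    # The steps mirror the techniques mentioned above: random pixel values,
    # softening, slight rotation and contrast changes. Parameter values are
    # placeholders, not the ones used by municHMotorsport.
    augment = transforms.Compose([
        transforms.ToTensor(),                    # PIL image -> [0, 1] float tensor
        transforms.Lambda(add_pixel_noise),       # add random pixel values
        transforms.GaussianBlur(kernel_size=5),   # soften the image
        transforms.RandomRotation(degrees=5),     # rotate slightly
        transforms.ColorJitter(contrast=0.3),     # change contrast
    ])

    # Applying `augment` to each original photo several times yields many
    # additional, slightly different training images.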

The entire training data set now includes several thousand images and serves as the basis for the simulations used by the student team to save time in optimising the autonomous racing car.

www.stemmer-imaging.com
