StradVision Unveils New Solution for Vehicle 360-degree Vision at NVIDIA’s GPU Technology Conference 2020


SEOUL, South Korea, Oct. 5, 2020 /PRNewswire/ — StradVision will reveal its new Advanced Driver-Assistance Systems (ADAS) solution for automotive surround view monitoring at NVIDIA’s 11th GPU Technology Conference (GTC) 2020.

The annual event, held online from October 5 to 9 this year due to COVID-19, has previously attracted more than 45,000 registered attendees. It brings together industry leaders, specialists, developers, researchers, engineers, and innovators looking to enhance their skills, exchange ideas, and gain a deeper understanding of how AI will transform their work.

Available to GTC’s registered attendees as an on-demand presentation, StradVision Platform Engineer Kukhyun Cho’s session will explain how the company’s flagship product SVNet works with Surround View Monitors (SVMs) to form an accurate, 360-degree visualization of a vehicle’s environment. Through a process called Edge Blending, the image edges from front, rear, left, and right cameras are seamlessly fused into one combined image.
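The Edge Blending step described above can be illustrated with a minimal sketch. This is not StradVision's actual SVNet/SVM pipeline; it is a generic cross-fade over the overlapping strip where two adjacent camera views cover the same region, with hypothetical placeholder images and overlap width:

```python
import numpy as np

def blend_edges(left_img: np.ndarray, right_img: np.ndarray, overlap: int) -> np.ndarray:
    """Fuse two horizontally adjacent camera views whose last/first
    `overlap` columns depict the same region of the scene.

    Pixels in the shared strip are cross-faded with linearly varying
    weights so no visible seam remains between the two images.
    """
    # Weight fades from 1 -> 0 for the left image across the overlap strip.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # shape (1, overlap, 1)
    strip = (alpha * left_img[:, -overlap:] +
             (1.0 - alpha) * right_img[:, :overlap])
    return np.concatenate(
        [left_img[:, :-overlap], strip, right_img[:, overlap:]], axis=1)

# Two dummy 4x6 RGB views overlapping by 2 columns -> one 4x10 fused image.
a = np.full((4, 6, 3), 200.0)
b = np.full((4, 6, 3), 100.0)
fused = blend_edges(a, b, overlap=2)
print(fused.shape)  # (4, 10, 3)
```

A full 360-degree view would apply the same idea around all four seams (front/right, right/rear, rear/left, left/front) after the camera images have been warped into a common top-down coordinate frame.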

This vision solution enables ADAS functions such as Automated Valet Parking (AVP) or Advanced Parking Assist (APA), using object detection, distance estimation, free space detection, and parking space detection.

Cho will also expand on how StradVision integrates six SVNet networks with SVM on NVIDIA’s Jetson Xavier system-on-chip (SoC) using TensorRT, a software development kit for deep learning inference. Known for computing capabilities well suited to deep learning networks, the Xavier AI platform enables SVNet to run advanced automotive Level 2 features while maintaining a small compute footprint that does not overwhelm a vehicle’s ADAS.

SVNet is lightweight software that allows vehicles to accurately detect and identify objects, such as other vehicles, lanes, pedestrians, animals, free space, traffic signs, and traffic lights, even in harsh weather conditions or poor lighting.

The software relies on deep learning-based embedded perception algorithms, which, compared with those of its competitors, are more