A BETTER PERCEPTION
PARADIGM FOR
AUTONOMOUS DRIVING

VAYAVISION provides environmental perception software solutions for autonomous vehicles, supporting SAE autonomy levels L3, L4, and L5.

Passenger cars
Robo-taxis
Autonomous shuttles
Autonomous trucks

Our VayaDrive Software Solution

Our VayaDrive software solution combines state-of-the-art AI and computer vision technologies with computational efficiency, scaling up the performance of the AV sensors and hardware essential for planning the driving path.

Unmatched Environmental Perception Technology

Our raw-data sensor fusion with upsampling generates the most complete 3D environmental model available, including an occupancy grid, a list of classified static and dynamic objects with their velocities and motion vectors, object tracking, and more.

Better perception
Fewer false alarms
Higher detection rates

Precise Object Detection

No single type of sensor is enough to detect objects reliably. Cameras don’t perceive depth, while distance sensors such as LiDAR and radar offer far lower resolution. VayaVision’s solution is raw data fusion with upsampling, sketched after the list below.

Detection of small objects absent from training sets
Depth data assigned to every pixel in the camera image
Accurate shape definition of vehicles, humans, and any other object
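
The sketch below shows the core of the idea under simplifying assumptions: a pinhole camera model, a known LiDAR-to-camera calibration, and plain nearest-neighbour interpolation standing in for the production upsampling algorithm. All names here (fuse_and_upsample, T_cam_lidar, and so on) are illustrative, not part of the VayaDrive API.

```python
# Minimal sketch of raw-data fusion with upsampling: project sparse LiDAR
# points into the camera image and propagate depth to every pixel.
import numpy as np
from scipy.interpolate import griddata

def fuse_and_upsample(points_lidar, image, K, T_cam_lidar):
    """Return an RGBd image: the input RGB image plus a dense depth channel.

    points_lidar: (N, 3) xyz points in the LiDAR frame
    image       : (h, w, 3) camera image
    K           : (3, 3) camera intrinsic matrix (assumed pinhole model)
    T_cam_lidar : (4, 4) LiDAR-to-camera extrinsic calibration
    """
    h, w = image.shape[:2]

    # Transform LiDAR points into the camera frame (homogeneous coordinates).
    pts = np.c_[points_lidar, np.ones(len(points_lidar))] @ T_cam_lidar.T
    pts = pts[pts[:, 2] > 0.1]            # keep points in front of the camera

    # Pinhole projection onto the image plane.
    proj = pts[:, :3] @ K.T
    uv = proj[:, :2] / proj[:, 2:3]
    depth = pts[:, 2]

    # Keep only projections that land inside the image.
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, depth = uv[ok], depth[ok]

    # Upsample: interpolate the sparse depth samples to every pixel
    # (nearest-neighbour here; production systems use edge-aware schemes).
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    dense_depth = griddata(uv, depth, (grid_u, grid_v), method="nearest")

    return np.dstack([image, dense_depth])  # (h, w, 4) RGBd
```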

Creating the Most Precise Environmental Model

To create the most accurate environmental model for safe and reliable autonomous driving, multiple processes must work in sync. Below is a high-level view of our environmental perception software framework. The process starts with raw data received directly from the vehicle sensors via a software API and ends with complete environmental model data that is passed to the AV driving software module.

Raw data fusion & upsampling algorithms construct a high-definition 3D RGBd model

The calibration, unrolling, and matching module receives raw sensor data, then synchronizes and fuses it into a unified 3D model. Upsampling increases the effective resolution of the distance sensors, resulting in a dense RGBd model in which every pixel carries both color and depth information. Localization and motion tracking determine the vehicle’s own position and velocity. A minimal sketch of the unrolling step follows.
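
As an illustration of unrolling: a spinning LiDAR accumulates its sweep over tens of milliseconds, so each point must be motion-compensated to a common reference time before fusion. The sketch below assumes constant ego velocity and yaw rate over one sweep and uses a first-order approximation; unroll_sweep and its parameter names are hypothetical, not VayaDrive’s implementation.

```python
# Minimal sketch of "unrolling": re-express every LiDAR point at a common
# reference time, compensating for ego-motion during the sweep.
import numpy as np

def unroll_sweep(points, timestamps, ego_velocity, ego_yaw_rate, t_ref):
    """Motion-compensate a sweep to the reference time t_ref.

    points      : (N, 3) xyz in the vehicle frame at each point's capture time
    timestamps  : (N,)  per-point capture times, seconds
    ego_velocity: (3,)  vehicle velocity in m/s (assumed constant over the sweep)
    ego_yaw_rate: yaw rate in rad/s (assumed constant over the sweep)
    """
    dt = t_ref - timestamps                 # (N,) how far each point lags t_ref
    dyaw = ego_yaw_rate * dt                # ego rotation accumulated over dt

    # First-order compensation: remove the ego translation, then rotate the
    # points into the reference-time vehicle frame about the vertical axis.
    shifted = points - ego_velocity[None, :] * dt[:, None]
    cos_y, sin_y = np.cos(-dyaw), np.sin(-dyaw)
    x, y, z = shifted.T
    x_ref = cos_y * x - sin_y * y
    y_ref = sin_y * x + cos_y * y
    return np.stack([x_ref, y_ref, z], axis=1)
```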


Applying context-aware algorithms to the 3D RGBd model to achieve perception

Frame-based object detection and road segmentation cover obstacles, vehicles, pedestrians, bicycles, motorcycles, lane borders, available free space, and more.

Detection by classification is performed with DNN-based algorithms that require training. In parallel, detection without classification is performed by a separate set of algorithms, enabling the detection of unexpected obstacles; the sketch below illustrates the idea. Multi-frame object tracking includes 3D modeling, motion tracking, and sizing of each object.
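
A minimal sketch of running both paths in parallel, assuming a dense RGBd frame as produced earlier and a flat ground plane: anything protruding above the ground in the depth model is flagged as an obstacle, whether or not the network has ever seen its class. dnn_detector stands in for any trained network, and all names here are hypothetical placeholders rather than the VayaDrive implementation.

```python
# Minimal sketch of classification-free detection running alongside a DNN.
import numpy as np

def detect_without_classification(dense_depth, camera_height, K, min_height=0.3):
    """Mask pixels whose 3D height above a flat ground plane exceeds a threshold.

    dense_depth  : (h, w) per-pixel depth from the upsampled RGBd model, metres
    camera_height: camera height above the (assumed flat) ground, metres
    K            : (3, 3) camera intrinsic matrix
    """
    h, w = dense_depth.shape
    v = np.arange(h).reshape(-1, 1)              # pixel row indices

    # Back-project each pixel's vertical coordinate: y = (v - cy) * z / fy,
    # measured downward from the optical axis in the camera frame.
    y_cam = (v - K[1, 2]) * dense_depth / K[1, 1]
    height_above_ground = camera_height - y_cam  # flat-ground assumption
    return height_above_ground > min_height      # boolean obstacle mask

def perceive(frame_rgbd, dnn_detector, camera_height, K):
    """Run classified and classification-free detection on one RGBd frame."""
    rgb, depth = frame_rgbd[..., :3], frame_rgbd[..., 3]
    classified = dnn_detector(rgb)               # boxes with class labels
    unclassified_mask = detect_without_classification(depth, camera_height, K)
    return classified, unclassified_mask         # both feed object tracking
```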


Creation of the 3D Environmental Model

The resulting environmental model data is accessed via our software API and includes an occupancy grid plus a list of parameters for every tracked object: localization, orientation, motion vector, and more.
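
What such output might look like is sketched below; the field names, types, and grid layout are assumptions for illustration, not VayaVision’s actual interface.

```python
# Illustrative sketch of the environmental-model output shape.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TrackedObject:
    object_id: int
    classification: str          # e.g. "vehicle", "pedestrian", or "unknown"
    localization: np.ndarray     # (3,) position in the vehicle frame, metres
    orientation: float           # heading, radians
    motion_vector: np.ndarray    # (3,) velocity, m/s
    size: np.ndarray             # (3,) bounding-box extents, metres

@dataclass
class EnvironmentalModel:
    timestamp: float
    occupancy_grid: np.ndarray               # (rows, cols), occupancy in [0, 1]
    objects: list[TrackedObject] = field(default_factory=list)

def free_ahead(model: EnvironmentalModel, threshold: float = 0.2) -> bool:
    """True if the cells directly ahead of the vehicle are unoccupied.

    Assumes the vehicle sits at the bottom-centre of the grid with the
    forward direction toward row 0.
    """
    rows, cols = model.occupancy_grid.shape
    ahead = model.occupancy_grid[: rows // 2, cols // 3 : 2 * cols // 3]
    return bool((ahead < threshold).all())
```

A downstream consumer such as the driving-path planner could query the grid and object list directly, along the lines of the free_ahead helper above.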
