Multiple neural networks process raw camera and radar inputs to develop an accurate, real-time understanding of relevant vehicles, obstacles, and the road configuration.
A universal video network combines stereo disparity and mono motion to calculate the distance, absolute velocity, and trajectory of objects both near and far, without requiring object recognition.
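The geometry behind stereo disparity ranging can be illustrated with the standard pinhole-camera relation Z = f·B/d (depth equals focal length times baseline over pixel disparity). This is a minimal sketch of that textbook formula, not Ghost's actual network or calibration; the parameter values are invented for illustration.

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth from stereo disparity: Z = f * B / d.

    disparity_px: pixel shift of a point between left and right images.
    focal_px:     focal length expressed in pixels.
    baseline_m:   physical separation between the two cameras, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_px * baseline_m / disparity_px

# Nearer objects produce larger disparities. With an illustrative
# f = 1000 px and B = 0.3 m, a 30 px disparity corresponds to 10 m depth.
depth_m = stereo_depth(30, 1000.0, 0.3)
```

Note that no object classifier appears anywhere in this computation, which is the point of the blurb above: per-pixel disparity alone yields range, independent of recognition.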
A novel approach to processing raw HD radar outputs extends perception distance and provides resiliency in inclement weather, poor lighting, and occlusion scenarios as an independent sensor modality.
Interprets the environment in real time and identifies drivable paths, detecting road markers and lane semantics and combining them with vision and radar data.
The driving program analyzes the outputs from Perception and performs the driving task, executing standard and defensive maneuvers to optimize safety, comfort, and routing.
From lane centering and distance keeping to merging and changing lanes, Ghost executes driving maneuvers by precisely actuating the steering, accelerator, and brake.
Aided by 360° perception and reaction times three times faster than a human driver's, Ghost can brake and swerve aggressively to avoid dangerous obstacles and events on the road.
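Why reaction time matters can be shown with the standard stopping-distance formula: distance traveled during the reaction interval plus braking distance v²/(2a). The figures below are illustrative physics, not measured Ghost or human values.

```python
def stopping_distance(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
    """Total stopping distance: reaction-interval travel plus braking distance.

    speed_mps:  initial speed in meters per second.
    reaction_s: delay before braking begins.
    decel_mps2: constant braking deceleration.
    """
    reaction_travel = speed_mps * reaction_s
    braking_travel = speed_mps ** 2 / (2 * decel_mps2)
    return reaction_travel + braking_travel

# Illustrative numbers: at 30 m/s (~108 km/h) with 8 m/s^2 braking,
# a 1.5 s reaction stops in 101.25 m, a 0.5 s reaction in 71.25 m --
# cutting reaction time to a third saves 30 m of travel before standstill.
slow_reactor = stopping_distance(30, 1.5, 8)
fast_reactor = stopping_distance(30, 0.5, 8)
```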
Ghost will pass slow vehicles, drive in eligible HOV lanes, and use OTA connectivity and real-time traffic data to continuously optimize routing decisions.
Multiple layers of software and hardware redundancy enable Ghost to drive safely without relying on a human backup, even in the event of failures or occlusions.
With high-availability software and redundant sensors, sensor modalities, and processors, Ghost continues driving safely through sensor occlusions, dropped frames, or even full sensor or compute failure.
In the most severe failures, Ghost achieves a minimal risk condition: an independent driving computer is designed to bring the car to a safe stop without human intervention, even if the primary system is compromised.
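The degradation policy described above can be sketched as a small mode-selection rule: run the primary system while it is healthy, degrade on sensor trouble, and hand over to the independent safe-stop computer when the primary fails. This is a hypothetical policy shape for illustration, not Ghost's actual failover logic.

```python
from enum import Enum


class Mode(Enum):
    PRIMARY = "primary"              # full capability, primary computer driving
    DEGRADED = "degraded"            # reduced sensing; continue with remaining redundancy
    MINIMAL_RISK = "minimal_risk"    # independent computer executes a safe stop


def select_mode(primary_ok: bool, sensors_ok: bool) -> Mode:
    """Hypothetical failover policy (illustrative only).

    A compromised primary computer is the most severe case and always
    wins: the independent computer brings the car to a safe stop.
    Sensor occlusion or dropout alone degrades but keeps driving.
    """
    if not primary_ok:
        return Mode.MINIMAL_RISK
    if not sensors_ok:
        return Mode.DEGRADED
    return Mode.PRIMARY
```

The key design point the blurb makes is independence: the minimal-risk path must not share the failed component, which is why it is a separate computer rather than a branch inside the primary stack.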
The approved operational design domain (ODD) is actively enforced and can be modified via OTA updates to expand driving areas and use cases and execute targeted restrictions if ever necessary.
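An OTA-updatable ODD can be pictured as a lookup table consulted before any feature activates: the table, not the code, defines where each capability is allowed, so an update can expand or restrict it without a software release. The table shape and names here are invented for illustration.

```python
def within_odd(region: str, feature: str, odd_table: dict) -> bool:
    """Return True if `feature` is approved for `region` in the current ODD.

    odd_table maps region identifiers to the set of permitted features.
    Unknown regions permit nothing (deny by default).
    """
    return feature in odd_table.get(region, set())


# Hypothetical ODD table; an OTA update would replace this mapping at
# runtime to add regions, enable new maneuvers, or apply restrictions.
odd_table = {
    "highway": {"lane_keep", "lane_change", "hov_lane"},
}
```

Deny-by-default is the natural choice here: a region absent from the table is outside the approved domain until an update explicitly adds it.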
Using an in-cabin camera equipped with artificial intelligence, Ghost is developing sophisticated driver intent perception models capable of transforming the interaction between car and driver.
Ghost monitors steering wheel and pedal inputs as well as the driver's hand positions, head and skeletal posture, and eye gaze to understand when to transition driving responsibilities to and from the human driver.
As an L4 system, Ghost does not require driver attention or intervention when engaged. Equipped with driver intent perception that can disambiguate accidental input on the car controls from deliberate driving activity, Ghost seamlessly begins driving when the driver simply lets go of the wheel and pedals. Likewise, the driver can take back control at any time by taking the wheel or pressing the pedals.
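The handover rule described above reduces to two signals: is there input on the wheel or pedals, and does the intent model judge it deliberate? The sketch below is a hypothetical decision rule with invented names; in practice the deliberateness judgment would come from the in-cabin perception models, not a boolean flag.

```python
def responsible_party(input_active: bool, input_deliberate: bool,
                      engaged: bool = True) -> str:
    """Hypothetical handover rule (illustrative only).

    input_active:     any touch of the wheel or pressure on a pedal.
    input_deliberate: the intent model's judgment (e.g. from hand
                      position, posture, and gaze) that the input is
                      intentional driving activity, not an accidental tap.
    """
    if not engaged:
        return "driver"
    if input_active and input_deliberate:
        return "driver"          # deliberate input takes back control
    return "ghost"               # no input, or input judged accidental
```

The point of disambiguation is the middle case: an accidental brush of the wheel (`input_active` without `input_deliberate`) does not disengage the system.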
A real-time operating environment responsible for high-performance execution of the Autonomy Software.
Verified to be bug-free using formal software verification methods adapted from aerospace and defense.
Works seamlessly with a range of sensor and compute configurations, enabling OEMs to customize hardware across models and trim levels and deliver future hardware upgrades.
Implements best practices for hardening, securing communications, and intrusion detection to manage cybersecurity risk.
Enables secure update and rollback of software versions and facilitates unique feature entitlement per subscriber.
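Secure update with rollback typically means verifying a signature before accepting a new version and keeping the prior version available to revert to. This is a minimal sketch of that pattern with invented class and method names; it is not Ghost's OS API, and signature checking is reduced to a boolean for brevity.

```python
class UpdateManager:
    """Sketch of signature-gated update with one-step rollback
    (hypothetical; illustrative of the pattern, not a real API)."""

    def __init__(self, initial_version: str):
        # history[-1] is always the active version
        self.history = [initial_version]

    @property
    def current(self) -> str:
        return self.history[-1]

    def update(self, version: str, signature_valid: bool) -> None:
        """Accept a new version only if its signature verified."""
        if not signature_valid:
            raise ValueError("update rejected: invalid signature")
        self.history.append(version)

    def rollback(self) -> str:
        """Revert to the previous version; the initial version is a floor."""
        if len(self.history) > 1:
            self.history.pop()
        return self.current
```

Retaining the prior image rather than overwriting it is what makes rollback safe: a bad update can be undone without re-downloading anything.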
Sends unusual road scenes and system exceptions to Autonomy Cloud for training and analysis.
Securely directs in-car data flow between sensors, driving computer, and vehicle controls.