
Ghost Sees Where Humans, Cameras, and LiDAR Can't

Learn how Ghost's software-defined radar sees around vehicles on the road.

By Matt Kixmoeller

November 30, 2022

5 minute read

One of the advantages of using multiple sensing modalities is that each modality has different super-powers. By combining modalities you not only get redundancy (in Ghost’s case, both camera and radar can independently identify objects and measure distance and velocity), but also unique capabilities that the other sensors don’t offer.

Radar, in particular, has a number of super-powers. First, it’s exceptionally good at measuring the relative velocity of objects, providing the fastest answer to the question “how fast is that object moving compared to me?” It also has the super-power of seeing through weather and lighting conditions that are challenging or impossible for cameras or LiDAR. Finally, and the subject of this blog, the reflective nature of radar allows it to see around things in ways that humans, cameras, and LiDAR cannot.
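To make that first super-power concrete: a radar measures relative velocity directly from the frequency shift of the return. Here’s a minimal sketch of the math, assuming a typical 77 GHz automotive carrier; the constants and function names are illustrative, not Ghost’s signal chain:

```python
# A minimal sketch (not Ghost's implementation) of recovering relative
# velocity from a radar return's Doppler shift.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # typical automotive radar carrier frequency, Hz

def relative_velocity(doppler_shift_hz: float) -> float:
    """Radial (closing) speed implied by a measured Doppler shift.

    The factor of 2 accounts for the round trip of the reflected wave.
    """
    return doppler_shift_hz * C / (2 * F_CARRIER)

# A 5 kHz Doppler shift at 77 GHz implies ~9.7 m/s of relative motion.
print(f"{relative_velocity(5_000):.1f} m/s")
```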

Traditional auto-grade radar units found on most cars today have this ability to some extent, but they produce a lot of noisy returns that are difficult to interpret and trust, so radar is often combined with cameras to verify that a particular return is indeed an actionable object. This is feasible for the vehicle immediately in front, which can be seen and visually verified, but how do you validate returns from two or three cars ahead that are obscured from the camera’s view? Ghost uses a next-generation imaging radar that operates at higher sensitivity and resolution, is mounted low to the ground, and does not depend as heavily on vision to compensate for the noise common in last-generation, low-resolution radars. This enables Ghost to trust radar returns in many use cases where visual confirmation is not available.

Imagine this scenario: you are stuck driving behind a semi truck. What’s in front of it? If you are a human, a camera, or a LiDAR, you have no idea.

[Image: ego camera view blocked by a truck]


But advanced radars like Ghost’s can see around this truck (using the phenomena of multipath reflection and diffraction; more on this below), detecting the presence of a vehicle or object in front of the obscuring vehicle. This is helpful when traffic ahead is slowing and the vehicle in front of Ghost is slow to react: Ghost can begin taking action even before the lead vehicle brakes or moves out of the way to make the full scene visible.

Radar’s ability to see around (and under) vehicles is a byproduct of the fact that radar reflections are much more specular than diffuse. This means that when radar hits a flat surface, the reflection is highly directional, much like a mirror and very much unlike, for example, a piece of cloth. When a radar illuminates a scene of vehicles on a road at the correct angle, that wave of illuminating energy can bounce between the road and the undersides of the vehicles in the scene, ping-ponging between the two until it finally "exits" from under each vehicle. Interestingly, the wave behaves after exiting from under each vehicle much as it did when it was initially transmitted, meaning that it can "continue on" to reflect off of (and underneath, again!) other vehicles. Ultimately this results in a final radar signal that shows not only the vehicle directly in front, but the one in front of that, and sometimes (amazingly) the one in front of that second vehicle as well.
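To build intuition for this ping-ponging, here’s a toy 2D sketch of the geometry. The gap height, trailer length, and grazing angle are made-up values, and the model ignores diffraction and signal loss entirely:

```python
import math

def bounces_under_vehicle(gap_height_m: float, vehicle_length_m: float,
                          grazing_angle_deg: float) -> int:
    """Count specular bounces as a ray crosses the gap under a vehicle.

    Each reflection off the road (y = 0) or the flat underside (y = h)
    flips the ray's vertical direction, while the horizontal advance
    between reflections stays constant, so the ray exits the far side
    on the same heading it entered with.
    """
    advance_per_bounce = gap_height_m / math.tan(math.radians(grazing_angle_deg))
    return int(vehicle_length_m // advance_per_bounce)

# A 0.3 m ground clearance, a 16 m trailer, and a 2-degree grazing ray:
print(bounces_under_vehicle(0.3, 16.0, 2.0), "reflection(s) before exiting")
```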

Check this out in action: we’re following a truck at ~22m, and even before the truck moves to the adjacent lane Ghost knows there’s a second vehicle in front of it at 51m. In this case that vehicle is moving slower, and Ghost is able to reduce speed seamlessly to manage the situation. Furthermore, because Ghost’s KineticFlow visual neural network doesn’t depend on object recognition, we are able to visually confirm the leading car before it is fully revealed, adding confidence to decision-making.
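One way to see why this early, radar-only detection matters: braking effort grows quickly the later you react. A back-of-the-envelope sketch reusing the ranges from the scene above; the closing speed, standoff gap, and "late reveal" range are assumed values, and this is not Ghost’s planner:

```python
def required_decel(range_m: float, closing_speed_mps: float,
                   standoff_m: float = 10.0) -> float:
    """Constant deceleration (m/s^2) needed to shed the closing speed
    before eating into a desired standoff gap."""
    usable_gap = max(range_m - standoff_m, 0.1)
    return closing_speed_mps ** 2 / (2 * usable_gap)

# Closing at an assumed 4 m/s on the occluded vehicle seen at 51 m needs only
# a gentle lift; waiting until it's revealed at, say, 25 m roughly triples it.
print(f"early: {required_decel(51, 4):.2f} m/s^2")  # ~0.20 m/s^2
print(f"late:  {required_decel(25, 4):.2f} m/s^2")  # ~0.53 m/s^2
```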


Here's the same scene, in a top-down view showing Ghost’s radar detections and visual detections working together:


In the bottom half of this view, the clustered yellow point clouds are visual detections, and the larger dot point clouds are radar detections (pink dots are moving at about the same speed as the ego vehicle; red dots are stationary items like road barriers). Large blue dots indicate objects Ghost has inferred to be relevant in the ego and surrounding lanes. You’ll notice that the first blue dot in the ego lane tends to have both radar and visual detections, while detections further in the distance (and blocked by closer vehicles) have only radar detections.
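The pink/red coloring described above amounts to comparing each detection’s Doppler-derived speed against the ego vehicle’s own speed. A simplified sketch of that classification, with assumed names and thresholds (and ignoring the angular geometry a real system would account for):

```python
from dataclasses import dataclass

@dataclass
class RadarPoint:
    range_m: float
    range_rate_mps: float  # Doppler-measured, relative to ego; negative = closing

def classify(point: RadarPoint, ego_speed_mps: float,
             tol_mps: float = 1.0) -> str:
    # Approximate ground-frame speed along the boresight: a stationary
    # object dead ahead closes at exactly the ego speed.
    ground_speed = ego_speed_mps + point.range_rate_mps
    if abs(ground_speed) < tol_mps:
        return "stationary"        # rendered red above (barriers, debris)
    if abs(ground_speed - ego_speed_mps) < tol_mps:
        return "moving with ego"   # rendered pink above
    return "other mover"

print(classify(RadarPoint(40.0, -27.0), ego_speed_mps=27.0))  # stationary
print(classify(RadarPoint(51.0, -0.5), ego_speed_mps=27.0))   # moving with ego
```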

This ability to “see around” objects enables multiple layers of detections in both the ego lane and neighboring lanes, helping Ghost react smoothly to cut-ins and complex traffic patterns.

Consider another example: while following a car at 64m, a second vehicle cuts across the ego lane. Ghost is able to continue tracking the 64m vehicle even while the cutting-across car completely obscures it visually for a period, enabling us to drive this scenario without jerky reactions or having to “re-discover” the lead vehicle.
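Riding out an occlusion like this comes down to track persistence: when confirming detections drop out, coast the track forward on its last known motion for a bounded time instead of deleting it. A simplified, fixed-gain sketch; the names, gains, and thresholds are assumptions, not Ghost’s tracker:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    range_m: float
    range_rate_mps: float
    frames_since_update: int = 0

MAX_COAST_FRAMES = 30  # ~1 s at 30 Hz before giving up on a stale track

def step(track: Track, dt_s: float,
         measured_range_m: Optional[float]) -> Optional[Track]:
    """Advance one frame; measured_range_m is None while the target is occluded."""
    predicted = track.range_m + track.range_rate_mps * dt_s
    if measured_range_m is None:
        track.frames_since_update += 1
        if track.frames_since_update > MAX_COAST_FRAMES:
            return None  # too stale to trust the prediction any longer
        track.range_m = predicted  # coast on the last known velocity
    else:
        # Blend prediction and measurement with a fixed gain.
        track.range_m = 0.5 * predicted + 0.5 * measured_range_m
        track.frames_since_update = 0
    return track
```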


While we’re big believers in AI-driven visual perception, the inclusion of radar in the Ghost Autonomy Engine gives Ghost higher confidence, redundancy, and the ability to “see” objects that we’d miss with vision alone.
