
"In the future, I want every single car to be much smarter"

Highlights from CEO John Hayes on The Road to Autonomy Podcast

By Ghost

May 18, 2023


Ghost Autonomy CEO John Hayes joined host Grayson Brulte on The Road to Autonomy podcast to discuss the origins of Ghost, how the mobile phone revolution unlocked foundational hardware and AI capabilities for the next generation of cars, and how the automotive industry might adapt to decreasing hardware costs and increasing software differentiation.  

Some highlights from the discussion as well as the full transcript can be found below.  

Highlights

The idea for Ghost came from the massive wave of innovations in cameras, chips and AI in mobile phones.

“The conclusion we came to was that there were enormous advances in sensing, especially cameras, in consumer technology. And there were enormous changes in AI, which pointed to a new self-driving stack. One that isn't based on the DARPA Urban Challenge stack, something that's based on consumer technology that could be distributed everywhere.  
That was the foundation - let's look at what emerging trends are out there in hardware, where can we make smart software, and what industry can we go into?”

Mobile phones have made real-time AI applications mainstream, used by billions of people every day. Ghost is bringing transformational AI capabilities originally developed for mobile into the automotive industry.

“One of the things that we saw early on was that AI was going to be such an important movement that the mobile chip makers would be putting AI specific technology into their chips. When we started out, we used ordinary GPUs, ordinary mobile GPUs, and since then the new generations of chips indeed do incorporate neural network accelerators in various forms.”
“I think the difference though is that today you can embrace low power with general purpose CPUs and GPUs. The platform that we started on, which was a mobile chip, is a two-watt chip, that is low power, and there's a whole industry trying to optimize high performance compute for mobile devices. What we want to do is take that optimization, all that work that's been done in the mobile world, and put it into the automotive world.”

Cameras have dramatically improved since the first self-driving cars were developed more than 15 years ago – today they can not only provide rich scene information, but are also resilient to lighting inconsistencies and reliability challenges of the past.

“In a sense, the camera people have solved all natural lighting conditions. They've basically bested people in terms of dynamic range. And that was a huge area of concern for autonomy, because you can't have a car just stop working because you're pointing at the sun, or be confused by a dark tunnel. Now that particular class of problem has gone away. That means that you can now build algorithms on very, very reliable cameras that are shipped in the billions. That's a great foundation to build any product on.”
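As a rough back-of-envelope illustration (not from the interview): the transcript mentions a jump from a 10-bit to an 18-bit Sony sensor, and under the common approximation that each bit of linear precision contributes about 6 dB, that is roughly 48 dB of additional dynamic range. A minimal Python sketch of that arithmetic, assuming an idealized linear sensor whose noise floor is one quantization step:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of an idealized linear sensor.

    Assumes the noise floor is one quantization step, so the
    usable ratio is 2**bits and DR ~= 20 * log10(2**bits),
    i.e. about 6.02 dB per bit.
    """
    return 20 * math.log10(2 ** bits)

# The 10-bit camera vs. the 18-bit successor mentioned above:
print(round(dynamic_range_db(10), 1))  # ~60.2 dB
print(round(dynamic_range_db(18), 1))  # ~108.4 dB
print(round(dynamic_range_db(18) - dynamic_range_db(10), 1))  # ~48.2 dB gained
```

That extra ~48 dB is why Hayes can say exposure control becomes largely unnecessary: a single capture spans both the dark tunnel and the sunlit exit.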

Tesla has optimized their model for maximal software revenue and profitability by making Autopilot and FSD Beta available on every single car in their fleet with a standardized and cost effective hardware stack.

“When I look at what's interesting with the Tesla model, it's that they put the hardware for this in every single car. If you want to buy a car with Super Cruise, you get additional Super Cruise hardware to make that work. In Tesla's case, every car has the same cameras, same computer. The only difference between a car with FSD Beta or various levels of Autopilot is a software change. That's where I see more of the industry going, which is saying, "Look, let's just put a great computer in every single car. Put great sensors, great sensors are not expensive. Then after they buy the car, we won't create hardware variations in the car in order to deliver driving features."
In Tesla's case, it's been very, very successful for them; a very large number of their customers buy Autopilot and a surprising number buy FSD Beta, which isn't even released.”

Autonomous driving is now possible for the volume market, bringing advanced capabilities to $20,000 cars. Specialized independent software companies can help automakers deliver extraordinary value with minimal R&D investment.

“The future is I want every single car to be just much smarter than it is… I think that the future is we're looking at how do we get more advanced autonomy that's currently available only in very, very high end cars, call it $80,000 plus. I want to put that in a $20,000 car…I think that the value for consumers in autonomy shouldn't be something that is limited to a high-end car because of hardware constraints.
I think it's also an area where auto companies benefit from having an independent software provider because having the fleet breadth will concentrate R&D and allow us to make a better product than could be made within a single automotive company.”

Complete Transcript

Grayson Brulte:

Hello and welcome to The Road to Autonomy. I'm your host, Grayson Brulte. On today's episode we're absolutely honored to welcome John Hayes, founder and CEO of Ghost Autonomy. Welcome to the podcast, John.

John Hayes:

Hi, great to be here.

Grayson Brulte:

I'm excited to have you here because personally-owned autonomous vehicles are the future. The industry's shifting there based on consumer demand. But before we get into autonomy, John, you started in data storage, you took Pure Storage public, you did the whole IPO and now you're working on autonomy. Why? What was the path?

John Hayes:

The path to data storage was also somewhat interesting, because our data storage company was actually founded on the basis of trends in consumer technology. We were seeing that Flash was taking over the low end of the market, like iPods and $100 laptops. It was also taking over the highest end of the market; that was when the MacBook Air first came out. And everything in the middle, we thought, "Oh my god, this new data storage technology is going to just take over absolutely everything." We set up a company, and there were trends changing in the enterprise space, especially around virtualization. Ultimately what we saw was that there was a new software-defined stack for data storage. That company is still out there selling lots of data storage technology, but for the next company we were looking for something similar: what's changing in the consumer space?

The conclusion we came to was that, first, there were enormous advances in sensing, especially cameras, in consumer technology. There were enormous changes in AI, and a lot of that pointed to a new self-driving stack. One that isn't based on the DARPA Urban Challenge stack, something that's based on consumer technology that could be distributed everywhere. That was the foundation: let's look at what emerging trends are out there in hardware, where can we make smart software, and what industry can we go into?

Grayson Brulte:

You've built your whole career on trends. Is that a fair statement here?

John Hayes:

I hope everyone does, but I would say built it on the good work of other people and trying to figure out what the next step is.

Grayson Brulte:

You always have to figure out the next step, you always have to stay ahead of trends. I pride myself on that, and investors also have to stay ahead of trends or they get left at the station and the returns aren't very good. You mentioned the breakthrough in sensing. How much of that was the development that Sony did around the CMOS sensor, especially in camera technology? Sony basically, with the Alpha series, changed cameras forever from a professional photographer's standpoint. Then Sony very quietly built up a huge automotive business around camera sensing. Was it those inventions and that innovation in Japan that really drove the big camera breakthroughs?

John Hayes:

That's what I think. If you look at the presentations from autonomy companies, they would often make claims about cameras, that they would get confused in bright sunlight or they'd get confused in dark scenarios, and you could take an iPhone, take a picture, and show that that clearly was not true, that the information was there. When we started out, we started with a perfectly ordinary off-the-shelf Sony camera, and we've actually made that work in the dark. It's, call it, a 10-bit camera, and you have to deal with the sensing that you get. The next generation of camera we're going to do is also a Sony camera, where they've gone from 10 bits of precision to 18 bits of precision. That's such a big leap that it basically says you don't have to do exposure control anymore.

In a sense, the camera people have solved all natural lighting conditions. They've basically bested people in terms of dynamic range. And that was a huge area of concern for autonomy, because you can't have a car just stop working because you're pointing at the sun, or be confused by a dark tunnel. Now that particular class of problem has gone away. That means that you can now build algorithms on very, very reliable cameras that are shipped in the billions. That's a great foundation to build any product on.

Grayson Brulte:

It's scalable. What you're describing is scalable. The DARPA, it's called the traditional stack or the DARPA stack as you referred to it, is very cumbersome and very, very expensive. In some cases it's half a million dollars a vehicle, and if you get to production, maybe you're down to $300,000 or $250,000 a vehicle. If you go to an 18-wheeler, perhaps you're at $750K, $700K. Your stack, what you're describing with the Sony camera and the breakthroughs in AI that we're all seeing and reading about every day, what does the Ghost Autonomy stack look like? Are you putting Lidar in there? Are you putting 4D radar in there? How are you completing the Ghost Autonomy stack?

John Hayes:

We started out with only cameras, in pairs so that we could get stereo. Then, starting in the middle of 2021, we built up a radar team, because radar is just a really well-proven technology in the automotive space. There are millions of radars shipped, but the direction we took with radar was to go in a software-defined direction. If you look at the automotive radar business, they build very specific detections into chips, and that's missing most of the richness of the radar signal itself. The last part is we connected it all to a pretty lightweight computer. One of the things that we saw early on was that AI was going to be such an important movement that the mobile chip makers would be putting AI specific technology into their chips. When we started out, we used ordinary GPUs, ordinary mobile GPUs, and since then the new generations of chips indeed do incorporate neural network accelerators in various forms.

We've focused on staying on that mobile platform. If you look inside our trunk, there's a pretty small computer. The trunk, actually, you can still fit a bag of golf clubs in the trunk and that's okay. And that's another key factor for making the stack work in a wide range of cars. You have to get the cost down, you have to get the power down, you have to create an envelope for your product that fits in a wide variety of cars. We've tried to make our stack almost invisible, as invisible as we can make it. We've never used a roof rack, we've never used an alternate power source, we've never used a thousand-dollar IMU.

Instead we've used the sensors that are built into the car: cameras that we add that are about five millimeters tall, very small consumer-grade cameras; radars that are nearly invisible as well; and a computer that, if you're as old as I am, you'd remember CD changers in your trunk, it's about half the size of a CD changer because there's a perfect spot for it to mount. From the beginning, we focused on how do you shrink it down and work within those constraints.

Grayson Brulte:

I remember I had a Jeep Grand Cherokee way back in the day. I had a 10-disc CD changer in the back and I thought I was the coolest person in the world because I could cycle through the 10, and the worst thing ever was having to load in new discs, but it didn't take up very much space and I was able to put my golf clubs in the back of that car. Years and years ago, I was invited to Mercedes-Benz R&D in Sunnyvale to see the early, early prototypes of what they were working on, the S500, and this is very interesting. They said, "Open the trunk." I opened the trunk and it was all compute. And he said, "The biggest problem is we cannot commercialize this vehicle because our customers that buy the S500 cannot put their golf clubs in. Until we can solve the golf club problem, Mercedes cannot sell a self-driving car."

That's very similar to what you're discussing there. Low power is becoming a big trend. It's really being accelerated by the breakthroughs from Arm, with all their IP licensing, and Apple, with all their new chips. Do you see low power continuing to be, let's call it, a trend in autonomy? Here's Ghost working on a low-power solution; do you see other companies starting to embrace low power as well?

John Hayes:

I think that low power has always been embraced by the level two providers. No one would argue that Mobileye is not a low power chip. There are other companies that have embraced that low power philosophy. I think the difference though is that today you can embrace low power with general purpose CPUs and GPUs. The platform that we started on, which was a mobile chip, is a two-watt chip, that is low power, and there's a whole industry trying to optimize high performance compute for mobile devices. What we want to do is take that optimization, all that work that's been done in the mobile world, and put it into the automotive world.

In the automotive world, it translates directly to mileage, especially in electric cars. I think the other thing, aside from having a trunk large enough that you can fit luggage for a family of four, is that no one wants to lose 25% of their battery when they turn on autonomy, or pay the extra weight cost of carrying more battery and the compounding cost that you get from more power use.

Grayson Brulte:

The other interesting thing you mentioned is no roof rack, and you read these stories of lidars being stolen off of vehicles. You say, "No roof rack, five-millimeter-tall cameras," and one, the designer in me says, "Yes, yes, yes, yes. You're going to blend into the existing design." The security part of me says, "I'm not going to become a target for the potential Lidar that some individual might think, right or wrong, has copper in it and decide to rip off and take down to the pawn shop to make a few bucks." What are the designers saying when you come in and say, "Hey, we don't need this roof rack. We have low power. By the way, you can put your golf clubs in the trunk still."

John Hayes:

I think that that's something that, when we've engaged with automotive companies, is almost the first thing they've noticed, because what they're used to seeing is a small company like us coming to them and showing them an elaborate prototype system. The problem is that auto companies haven't seen many of those translate into something that could actually be put into a production car. But to solve the roof rack problem, it's a category of issue that is unique. If you put a camera behind a windshield, well, a windshield is glass, and glass is also known as a lens. It means that you have to start doing pretty sophisticated correction to actually make the camera operate behind that glass.

You put it behind a side window, like a rear window, it's also tinted. You put it in a rear window and there's often a line in it, a line for the heater. You end up having to solve all of these problems that are necessary to bring a product to production. That's one of the things that they notice. It's not just that it's small, but it's been incorporated into the design. The rule that you could take is the smaller you make it, the easier it is to incorporate it into any design.

Grayson Brulte:

For a consumer, it just works. That's what a consumer wants. A consumer doesn't want this big bulkiness; they want it to look beautiful and work. How many cameras and radars are you running on a Ghost system?

John Hayes:

We're running four camera pairs. A camera pair is a stereo pair; we can also access them individually. They point forwards, backwards, left and right, and we run one pretty high-resolution radar pointing forwards. With that, you get depth in all directions, you get velocity in all directions, and forwards, the radar gives you extended range and super accurate velocities. That made a big difference in controlling the car; having a really accurate radar tells you, right down to 0.1 meters per second, how fast other cars are going.

Grayson Brulte:

You eliminate the rear-end collision. That's a huge issue where consumers, today in a traditionally driven car, don't pay attention. Thank goodness we have emergency braking that stops a lot of that. What type of ODD do you see the Ghost system initially rolling out in, and in what type of vehicle? Will it be personally owned vehicles? Is it going to be a level two system or is it going to be a level four system? What type of system, and how do you envision it being rolled out?

John Hayes:

This is something that is in the control of the auto companies who integrate it, and they will probably go through phases where they start by marketing it as level two+, then they upgrade to level three, and then they upgrade to level four. What we engineer to is a level four design on the highway and probably level two everywhere else. Our intention is to roll out level four. The way I think about it is going from fast to slow, because the faster environments, environments with high speed limits, mean that the environmental complexity has to be low. If you have people driving, people have constant-rate brains; they can only take in a certain number of events per second. Imagine you start from freeways, then you go to highways, then expressways, then arterial roads. I think it's a rolling ODD where you increase the competence at slower and slower speeds over time.

Grayson Brulte:

In order to increase that competence, do you need HD maps?

John Hayes:

We don't use HD maps. The main reason is because we tried; everyone should try to use HD maps, and they just are not accurate enough. Sometimes it's not a little bit inaccurate, it's very inaccurate. The other thing is that to find out where you are on an HD map, you're often using GPS, which is also not that accurate. You're compounding problems. We do use ordinary navigation maps, because those are good enough, and placing yourself on an ordinary navigation map is a well-solved problem. But it's not a surprise to me that the autonomous companies that are running taxis and delivery services almost always end up creating their own maps and maintaining their own maps. That's just a massive undertaking that makes it pretty hard to scale from region to region.

Grayson Brulte:

It's hard to scale, and when you're running level four on a highway, it's not as complex an environment as driving in downtown Philadelphia or downtown San Francisco. That's a very complex environment. I want to give you two scenarios. I have a level two+ system, the Mercedes Drive Pilot. I've been driving it for a while, and I drove it from my house in South Florida to Disney World and it was great, but every 30 seconds or so I had to move the wheel so it knew I was there. Then I drove it down to the Florida Keys and it didn't do very well, because it was a more complex highway driving environment; it was on the highways, and I had to do multiple interchanges until I got to the turnpike.

Your system, the Ghost system, hypothetically level four, could I just sit there and talk to my daughter or just relax on the way to Disney World and then the way to the Florida Keys? Or do I have to stay engaged during that highway part of the journey, not going from my house to the highway, but when I'm on the highway?

John Hayes:

We believe you should engage with the people in the car. You have to be awake and aware. But I think that a real system means that you have to be able to just not pay attention to the road, and the system has to have enough wherewithal to understand its self-diagnostics, its redundancies, as to whether it's making a decision or not. It starts with the system redundancies that are necessary to make that happen. But there are also more subtle things. You probably noticed in Drive Pilot that you're adjusting your target speeds and your following distances relative to the traffic. In our system, when you're on the highway, you actually don't set any goals for it. Its goal is to keep following the highway, but you don't set a speed, you don't set a following distance. What we do is actually determine that from the entirety of the surrounding traffic.

You actually pick your speed indirectly by putting yourself in a lane that's going the speed you want, and then it speeds up to the speed of that lane. We found that that created a lot of subtle safety benefits from looking at the surrounding traffic. If you have one column of cars that's really slow, it creates a bit of friction on the system where you slow down. Often those small changes in velocity can make a huge difference in harm reduction. If you are just going a few miles per hour slower, the severity of harm is dramatically reduced. We use a lot of cues from the surrounding traffic to figure out what the right thing is to do at any given point in time. We think about it in terms of: the system has to be redundant. We use our two cameras, we use our radar; there's a lot of overlap in those signals. But it also has to make smart decisions even in non-emergency scenarios. That's been a real focus of our development for the last few months.

Grayson Brulte:

Smart decisions around speed. Can I give you an example? When I was going down the Keys, I had to go through a construction zone. The speed didn't adjust. I'm like, "Holy smokes, I can't go into a construction zone at this speed." I had to manually adjust it, and at certain times it has issues in construction zones. Will your system automatically adjust for that circumstance, a construction zone on a highway?

John Hayes:

We haven't implemented that today, but I think that that's a great idea, because if you have closed lanes and, say, no shoulder, that should create friction. Part of what we want to look at very closely, and indeed build models off directly, is what would a person do in that scenario? I know that when I drive next to a construction zone, it creates a bit of anxiety to be really close to a bunch of bollards on one side, and I'm paying very, very close attention to it. We see the same thing around here often in carpool lanes, where you can have a carpool lane that is going 60 miles an hour right next to stopped traffic. The correct behavior, the one that isn't alarming, is to pay close attention to that stopped traffic and slow down. If you're two lanes over, you can go fast, but if you're just a little closer, you slow down.

Then a lot of what's built on top of that is micro-adjustments. This is something the side cameras contribute to. Do you have something looming on your side? Is there someone crowding your lane; can you make a small adjustment? Are you going exactly the same speed as someone right next to you? That makes me uncomfortable; it makes a lot of people uncomfortable. You shouldn't linger in someone's blind spot. Those micro-adjustments contribute to, I think, a feeling of increased safety. But I also think they contribute to actual safety by taking the intuitions that people have and converting them into driving.

Grayson Brulte:

Which is healthy. That's a healthy environment. In the carpool lanes in South Florida, when you get there, especially when you get to Miami-Dade, people used to jerk out into the middle and cut you off. Now the state DOT and the county got smart; they put in dividers. The car physically can't get through. You're going to operate the level four system on the highway, engaging with your fellow passengers in the vehicle, because at that point in level four you are a passenger. What role does driver monitoring play? If you're going to downtown Orlando or you're getting into the Keys, you're going to have to take over at some point. Do you put a driver monitoring system in there so that when the takeover point comes, you know the driver's paying attention?

John Hayes:

I think driver monitoring is really important, and there are two ways we would use it. One is we want to know clearly if someone actually intends to drive at this moment. In that way, the driver monitoring works in the inverse of most other driver monitoring, where mostly they're trying to make you pay attention so that the system continues operating. In the level four world, you want to say, "If the person is not paying attention, the car should be driving," because even if they make very small movements against the controls, they're not intending to drive. But if the system wants you to take over, then it becomes more of a positive driver monitoring system, where your choices become: either you take over, in that you're looking at the road and, by paying attention to the road, you're agreeing with the decisions that the system is making; or you're not paying attention, at which point the system goes and executes an MRC.

When I talked earlier about increasing the competence of the system over time, I think you can fluidly move between it clearly communicating, "Okay, I'm going to keep making progress towards your destination even if you're not paying attention," and a system that says, "If you're not paying attention, then my response is to not make progress towards your destination." I think it's actually that simple: are you paying attention or not? The car reacts to that, and that's part of the interface of the car. Just as much as the steering wheel is an interface and the pedals are an interface, your attention to the environment is an interface. Another place that I think it would be a valuable input is in emergency features. You have emergency features like automatic emergency braking. If you look at products that are out on the market today, it's often tuned to only activate at the very last moment, if it's very certain.

I think the behavior of that feature should be relative to driver monitoring. If someone's looking out the window and they're aware of the environment, it indeed should activate at the last moment. But if they're not looking out the window, then it should activate earlier because that would be much safer for everyone else around. And I think it's just part of the mix of signals you use to try and figure out what is the driver intending to do at this moment.

Grayson Brulte:

That opens up a really interesting question around driver behavior historically and driver comfort and creature comfort. Everybody has their seat a certain way and their mirrors a certain way. How do we get drivers used to driver monitoring where it's not a handicap in the system, it's a feature that can enhance the system?

John Hayes:

I think that's the key: right now, driver monitoring feels like it's trying to punish you. It's slapping your wrist every single time, and no one likes that. That's not a feature that's going to attract people. I think that it has to be driver monitoring for the benefit of the person in the car, so that they feel like the monitoring is part of the input and helps them more fluidly interact with the car and express what they want the car to do. If you think about a level two feature in a suburban environment, I'd say, "It's level two, it's less competent," but the car should just drive along the road. That should be a natural behavior of the car. Here the driver monitoring is making the car easier to drive, making it more automated the more you're paying attention and less automated the less you're paying attention. And it becomes a choice. It's not a punishment; it's just a choice of what you want the car to do at this point.

Grayson Brulte:

You're rewarding the consumer; it could be framed that way. As an example, you see some of the stuff on YouTube, the videos around Ford's BlueCruise and people trying to game the system; they turn their head for a quick second and they come back and the system goes bonkers. There has to be that fine middle line: if you're making a quick look or a quick dash down, it just doesn't overreact. I think that's something that we have to accomplish. Is that something that you're looking into? What is the right balance between optimal performance, safety and driver comfort?

John Hayes:

We are looking into it, and the first reading is that it's an incredibly complex, context-dependent problem. It depends on the environment, because if the environment is truly not something you're interacting with, if you're the only car on the road, then we should tolerate a little less attention, because that's what people naturally do. If there isn't the possibility of the world changing that much, it should be variable. I think that comes into why you need to make a deployed system updateable, because I think you have to learn from actual experience what the right level of attention is. It's not going to be one function, it's not going to be one time. It's going to be a very, very complex model, relative to the complexity of the environment.

Grayson Brulte:

Let's use the term fleet for a moment. There's a fleet of Ghost vehicles driving on highways, or motorways as a matter of fact, around the world. Will you get that data and put it into some neural net to keep learning, to improve the system and eventually make a better Ghost driver?

John Hayes:

Today we get everything back. I think in the future we want to be very selective about what we transfer, but I think it's essential that, to improve the safety of the system, you figure out, "How do I not make the same mistake twice?" To do that, you have to return data, and you have to return the original sensory data, like video segments of what was actually seen at every single camera, because that will give you a deep understanding of what the system will do. I think that any system that aspires to be level four has to have this level of feedback, because the idea that we're going to imagine every single requirement ahead of time is not plausible. What you need is a feedback system so you can rapidly react, so that once you know there's a problem, you have less exposure to making the same mistake over and over again.

Grayson Brulte:

You mentioned you want to be selective about the data you transfer. That's a very bold statement. Why?

John Hayes:

Most data is really not interesting. That's what it comes down to. If you're going to transfer video, in theory, yes, we could transfer every video from every car, and that would be several megabytes a minute, but in practice the amount of new information is very, very sparse. Even when we do test driving, we mark individual points in time. An hour of test driving probably yields a minute or two of interesting data. That's going to decrease over time, so that most cars will have nothing interesting happening. The other thing is, for a consumer product, I think you want to be very selective about what you transfer because you don't want identifying information. You want to isolate it down to the situation. You want an interesting situation that's worth the time of your people to look at, and you want it short enough that it's not revealing exactly where someone lives. People drive their cars to all sorts of places, and it's like, "That's not our business." We're just interested in the quality of driving in the end and making that continuous improvement.

Grayson Brulte:

Bingo, bingo, bingo, bingo. You hit the nail on the head. It's the privacy issue. Why are more companies not talking about this? Your competitors, the L2 systems out there, gather every piece of data, but you're saying, "No, we don't need all the data." That's very similar to the Kodiak approach, with what Andreas, their CTO, is doing with their data, and it's working for Kodiak. I respect that approach, but more importantly, you're taking a privacy-first approach to this. There are cameras on cars everywhere, and consumers get worried. If I'm a litigator or a trial attorney: "You had an affair and we're going through a divorce. Bingo, my client's going to win." But on the Ghost system, you're protecting my client's privacy.

John Hayes:

The selfish motive is that we just don't want to transfer that much data. If you think about scaling to millions of consumer cars, we don't want that capacity and it's not going to be that interesting. I assure you our motives for doing this are selfish. I think consumer privacy is a very good side effect of just trying to minimize the amount of data you're transferring anyways.

Grayson Brulte:

Let's call it a win-win situation here. To listeners sitting here listening, the Ghost system sounds very similar to a higher-performing L2 system. How would you compare and contrast the Ghost Autonomy system with a traditional L2 system that a listener might have in their vehicle today?

John Hayes:

The first thing, from the user experience point of view, is that we focus very much on a concept called collaborative driving, where there isn't a button that you push to activate it. You're on the highway, and the car says, "I can drive anytime you want," by turning an indicator blue. You let go of the steering wheel, it turns green, and you don't set anything. The car just goes: it picks a reasonable speed and a reasonable following distance, knows of course where the lanes are, and just keeps driving. It's a real change in behavior, and that becomes important because if you ask the person what speed they want to go, you have to try to honor the question you asked them. Often those systems will just try to always accelerate to some set speed, and it's like, "No, we don't want to do that." We want to be completely fluid, from a stop-and-go situation all the way up to fast traffic, without anyone ever touching anything.

On the inside, the system is built with considerable redundancy. We use the cameras as stereo pairs to determine distance. We also use the radar to determine distance. We also do object recognition. There's a lot of redundancy in that forward direction. The other thing is, when we do maneuvers like a lane change, that's based on actual measurements of distance. We're not relying on radars that can sometimes be unreliable because they're very, very tiny corner radars, so we can do maneuvers on the highway that have been fully vetted for not colliding with something and not disturbing the traffic around us. What you get is a much more fluid experience where you can just turn it on and it goes. If I drive a commute along a highway here, I will often go from very high speed to stop-and-go to very high speed to stop-and-go, depending on the traffic.

You don't have to touch a thing, and it just navigates you through. It's the product you want. My fundamental belief is that advanced technology is invisible, in a way. That's why we don't have any buttons or knobs: the system just does the right thing, and whenever there's a situation where it doesn't seem to do the right thing, it's on us to fix it and make the system better every single release.
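The stereo-pair distance measurement Hayes mentions rests on a standard geometric relationship: for a calibrated stereo pair, depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A minimal sketch of that calculation, plus the kind of cross-check against an independent radar range that a redundant design implies. The function names and tolerance are illustrative assumptions, not Ghost's code:

```python
def stereo_depth_m(disparity_px, focal_px, baseline_m):
    """Depth from a calibrated stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def ranges_agree(stereo_m, radar_m, tolerance=0.15):
    """Cross-check two independent range estimates (hypothetical 15% tolerance).
    Disagreement would flag one sensing path as suspect."""
    return abs(stereo_m - radar_m) / radar_m <= tolerance
```

For example, a 50-pixel disparity with a 1000-pixel focal length and a 0.5 m baseline gives a range of 10 m; a radar return near that value would confirm the estimate, while a large mismatch would indicate a fault in one of the redundant paths.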

Grayson Brulte:

I like that: advanced technology is invisible. You're right. Very simply, I'll put it this way, in layman's terms: you just want it to work. You don't want to have to think about what button to hit. Think about an older individual: "What button do I have to hit?" And then the older individual saying to his wife, "You told me how to do this. Well, I've got to call the dealer." You eliminate all that traditional stuff we've all experienced, especially with older parents.

John Hayes:

We pushed this to a limit at my last company, Pure Storage. In the enterprise storage industry, people would have stacks and stacks of manuals on how to configure a system, and we replaced all of those with a single business card of the commands you needed, because there was no optimization to do. But we pushed it so far that we even removed the power button from the hardware, because the possibility of having a power button means it's possible someone could turn it off. Every button is the option of someone doing something they didn't intend to do, as opposed to figuring out, "No, the system should just run all the time." There's a bit of that philosophy here, where there's no on/off button. The system should figure out the right time to run.

Grayson Brulte:

It's simple, simple as simple does. I really like that. You're in there, the system says, "Okay, John, ready to go?" You just take your hands off the wheel and it goes. How does it know how to change lanes? Do you have to do what I have to do today, where I hit the clicker and then, when it feels it's safe to go, the car goes? How does that work?

John Hayes:

That's the first level: yes, you hit the clicker and it'll change the lane. The next phase, which will probably be this year, is to build navigation where it'll just automatically change the lane, but that is built on a foundation. The other thing I wanted was to make the system extremely scalable, so that you wouldn't have to enter a destination to activate it. You just start driving. If you just want to let go of the wheel for 30 seconds to send a text, that's a perfectly valid way to interact with the system. If you want to do a navigation, like you said, down to the Florida Keys, then you can enter that and have the system predict what to do. But my belief is that you'd always do the first before you do the second. Before you do a complex data entry task, the first thing you want is the car already in a self-driving mode, and then you enter a navigation destination, so you build up those layers as you proceed through the trip.

Grayson Brulte:

That's a very interesting approach. It reminds me, on the back half of that, of EV charging, how charging is already built into the car: if you're going on a long trip, it tells you where to charge. I can see it in the future with the navigation systems built into vehicles: "You're running level two here; be prepared to take over the vehicle." The consumer can see that when they're route planning. On a long journey, that becomes very interesting, because now they're well in tune: they know where they have to take over, they get on the highway, they don't have to hit a button, and it just takes over. What you're creating is the simple system that just works.

John Hayes:

That's always my goal. Maybe I don't like technology that much, but I want stuff to just work. Isn't that what everyone wants in their own life? Make my life easier and make my car smarter.

Grayson Brulte:

Nobody's going to write the dummies book for Ghost Autonomy, because it's very simple. You don't need a dummies book for it. You're getting there. You're building a very simple system, a scalable system with redundancy and a big focus on safety. From a business perspective, how are you going to commercialize the technology?

John Hayes:

We look at commercializing in two parts. One of them is hardware. It's not our goal to make money selling hardware. In fact, we build a compute system for our own uses, but there are no unique chips in there. Anyone can build it. Our goal is that the automakers who want to integrate it have options on how to get that constructed, and that's something they're very good at. Our business is software: what is the software that powers all of that and just makes your car much smarter, encompassing everything from conventional safety features all the way up to level four driving.

Grayson Brulte:

Are you hardware-agnostic then?

John Hayes:

We're not agnostic. The hardware has to be good enough. The good news is that for the biggest components, there aren't that many choices. There are only three makers of cameras, only two makers of radar chips, and two market leaders for compute chips, Qualcomm and Nvidia. The choices are finite, but we believe you should have choices on every axis.

Grayson Brulte:

Say an OEM or a tier one integrates you into a production vehicle; you're getting paid, you have your contractual agreement with them. Now let's fast-forward to the consumer. How do you see the consumer paying for this? With a Tesla, for example, you can currently pay $15,000 for FSD. If you have an older one, you can upgrade for a monthly fee between $99 and $199 a month, which is interesting from a Tesla investor standpoint: on a lease, that's an additional $1,200 to $2,400 a year in revenue per upgraded vehicle, and between $3,600 and $7,200 in revenue over the life of a three-year lease. That's good revenue. That's profitable revenue. Do you see the OEMs saying, "You want autonomy? You're going to have to subscribe to it. We're going to follow the Tesla model"?

John Hayes:

I think consumers will have a variety of ways they want to pay, just like they do with cell phones. Some people pay cash for them; some put them in their monthly payments. When I look at what's interesting about the Tesla model, it's that they put the hardware for this in every single car. If you want to buy a car with Super Cruise, you get additional Super Cruise hardware to make that work. In Tesla's case, every car has the same cameras and the same computer. The only difference between a car with FSD Beta or the various levels of Autopilot is a software change. That's where I see more of the industry going, which is saying, "Look, let's just put a great computer in every single car. Put in great sensors; great sensors are not expensive. Then, after they buy the car, we won't create hardware variations in the car in order to deliver driving features."

That's the part that's very interesting for other auto companies to take from Tesla: stop hyper-optimizing your hardware, trying to manage inventory, and trying to make a car whose lifetime is set at the moment the consumer buys it. Say, "Hey, give me a few more square millimeters of compute power and we'll be able to fit software features, and let's just take a risk on whether or not the customer buys it." In Tesla's case, it's been very, very successful for them. A very large number of their customers buy Autopilot, and a surprising number buy FSD Beta, which isn't even released.

Grayson Brulte:

That's true. Tesla's a brand. I'll say that again: Tesla's a brand, and they have a very loyal following that's willing to buy beta stuff. When you look at your more traditional OEMs, whether it's a large German OEM or an American OEM, they're not going to do that. I like the cellphone analogy, where some consumers have it built into their bills and some consumers pay cash. Do we get to a point where, let's say you're leasing a vehicle from a dealer and it's equipped with a Ghost system, and let's say your lease payment is, I don't know, $900 a month: for an additional hundred dollars a month, the Ghost system could be activated, built into the lease payment at the time of signing? Because that eliminates the friction of, "Here, look at this, they're nickel-and-diming me." No, it's just part of your lease payment.

John Hayes:

I'm no expert in what you can put in a lease payment. I think that car dealers are very creative and they will find ways to sell things because that's their job and they'll do whatever financial engineering is necessary that fits the customer's payment goals.

Grayson Brulte:

John, you're clearly on your way to autonomy. You're clearly building a future with Ghost. What does that future hold for Ghost?

John Hayes:

The future is that I want every single car to be much smarter than it is. Just like with flash, we looked at what was in the lowest-end product and what's in the highest-end product. The future is figuring out how we take the more advanced autonomy that's currently available only in very, very high-end cars, call it $80,000-plus cars, and put it in a $20,000 car, and maybe a $300,000 car for a different reason. But the value of autonomy for consumers shouldn't be limited to a high-end car because of hardware constraints. I think it's also an area where auto companies benefit from having an independent software provider, because having the fleet breadth will concentrate R&D and allow us to make a better product than could be made within a single automotive company.

Grayson Brulte:

That's when it becomes more cost-effective, and it gets back to the scalability point we mentioned earlier. When do you plan on commercializing the Ghost stack?

John Hayes:

We're looking at automotive releases between 2025 and 2026.

Grayson Brulte:

You're close, you're not far away.

John Hayes:

It's closer than it seems. It's very close.

Grayson Brulte:

You're building the stack, you're preparing to commercialize, and as I said earlier, Ghost is on its way. John, this has been really insightful, and I can't thank you enough for coming on The Road to Autonomy today to share insight into Ghost. There are a lot of individuals talking about Ghost, but you set the record straight on what Ghost is, and it's clear that Ghost is building the future of autonomy. As we look to wrap up this insightful conversation, what would you like our listeners to take away with them today?

John Hayes:

What I'd like listeners to take away is that almost everyone has agreed for a long time that there's a path to commercial autonomy. But a lot of companies have thought, "Hey, let's build essentially the mainframe before we build the PC. Let's build the robot that goes everywhere before we build for the consumer." We started out by saying, "Let's build for the consumer first," because bottom-up innovation is just an incredibly important force in technological advancement. That's what we've focused on the entire time, and I think we're going to meet everyone in 2030 with full autonomy, through a completely different path than the DARPA stack.

Grayson Brulte:

It's very clear you've followed the trends. The commercialization of personally owned autonomous vehicles is here; the future is bright, the future is autonomous, the future is Ghost. John, thank you so much for coming on The Road to Autonomy today.

John Hayes:

Thank you.

Grayson Brulte:

If you've enjoyed listening, please kindly rate, review, and subscribe on your favorite podcast platform. Want to get in touch? Follow us on Twitter, Instagram, and YouTube @roadtoautonomy or email podcast@roadtoautonomy.com. The Road to Autonomy podcast is produced by The Road to Autonomy, LLC. The views and opinions expressed on The Road to Autonomy podcast do not necessarily reflect the views of The Road to Autonomy, its subsidiaries, shareholders, directors, investors, or partners. The content discussed on this podcast is for informational purposes only. It should not be taken as legal, investment, tax, or business advice. Nothing is a recommendation that you purchase, sell, or hold any security or other investment, or that you pursue any investment style or strategy.

The content of this podcast is presented on an as-is basis, without warranties, express or implied, of any kind. Mentions of companies in The Road to Autonomy Index, and discussions about The Road to Autonomy indexes, are for informational purposes only and should not be relied upon when making any investment decision. Furthermore, the inclusion of a security within The Road to Autonomy Index is not a recommendation by The Road to Autonomy Indices, LLC to buy, sell, or hold that security, nor is it considered to be investment advice.
