There are many reasons why I believe trucks are the sensible first step for autonomous vehicles. The trucking industry accounts for nearly 4% of the US economy, with a quarter of that going towards labor costs. There's currently a shortage of that labor (which has the effect of increasing the cost of every physical thing bought or sold), but the biggest argument for automation is that trucks are just disproportionately dangerous.
Truck driving is one of the country's most deadly occupations, and fatal accidents are common. One in four drivers of these 80,000 lb vehicles report having fallen asleep behind the wheel in the previous month, and many survive on 5 hours of sleep a night, so it's hardly a surprise that of the more than 32,000 fatal accidents in 2015, nearly 3,900 involved a large truck or bus (more than one in ten).
This isn't because drivers are daredevils, but because they work in a system where they're only paid per freight mile hauled. This can force them to choose between driving safely and paying rent.
My team at Starsky Robotics is working day and night to make unmanned regular service a reality soon. Which means that, unlike many in the space, we can't think about safety "later." Safety needs to happen now.
Starsky Robotics' first truck: Rosebud, in 2017

Automotive Safety (or, How I Learned to Stop Worrying and Love ISO 26262)
While relatively unknown in Silicon Valley, Safety Engineering has been one of the core disciplines of automotive engineering for over 100 years. And with good reason: when you build things that can hurt people, it's important to develop processes that allow your team to raise concerns, understand the risk, and design for safety.
The first of these is important. At Mapbox's Locate conference the other week, I was asked if autonomous engineers should swear an oath akin to the Hippocratic one. The question has some basis: as an engineer building a self-driving truck, you can be paralyzed with worry that a bad line of your code could hurt someone. It's incredibly important that we give our team the opportunity to voice any concerns they might have. If we choose to move forward anyway, Starsky's leadership does so while taking the responsibility from those who voiced the concerns (and who developed the system).
While perhaps over-referenced, ISO 26262 (the automotive safety bible) remains as relevant as ever when it comes to designing safe automotive products. ISO 26262 defines the risk of a (sub)system as the product of three values: Severity (of a failure), Exposure (to the failure), and Controllability (in the case of failure).
Risk Score = Severity × Exposure × Controllability
To spare us the lengthy (and seemingly inevitable) pontifications around the different scenarios of an automotive accident, we can judge the severity of a system-wide failure to be a constant. If our system fully fails there will be significant irreparable harm.
Exposure is easy to understand. How much of your drive would that failure affect? You always need your brakes, but how often do you need your left turn signal? What is the likelihood that a particular (sub)system fails: whether it's your front right tire or your perception system?
Controllability is more nuanced. Essentially, controllability is how skilled of a driver you need to be to safely deal with a failure. Almost anyone can safely manage getting a flat tire at speed on the freeway.
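To make the three factors concrete, here is a minimal sketch of the simplified risk model described above. The 1–4 rating scale and the example values are hypothetical illustrations, not the actual ASIL classification tables from ISO 26262.

```python
# Illustrative sketch of the simplified risk model:
# risk = severity * exposure * controllability.
# The 1-4 scales and example ratings below are hypothetical,
# not the real ISO 26262 S/E/C classes.

def risk_score(severity: int, exposure: int, controllability: int) -> int:
    """Higher score = higher risk. Each factor is rated 1 (low) to 4 (high)."""
    for value in (severity, exposure, controllability):
        if not 1 <= value <= 4:
            raise ValueError("each factor must be rated 1-4")
    return severity * exposure * controllability

# A flat tire: a severe outcome is possible and tires are always in use,
# but almost any driver can handle it, so controllability burden is low.
flat_tire = risk_score(severity=4, exposure=4, controllability=1)

# A full perception failure: severe, always in use, and very hard
# to recover from without a human driver.
perception_failure = risk_score(severity=4, exposure=4, controllability=4)

assert perception_failure > flat_tire
```

Under this toy model, the perception failure scores far higher than the flat tire, which mirrors the argument that follows: tolerable tire failures versus mandatory perception redundancy.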
Putting this all together, we see that the risk score of a tire failure is low enough that occasionally getting a flat tire is acceptable. The uncontrollability of an outright perception failure, by contrast, is why almost every autonomous team builds in a lot of perception redundancy.
The easiest way to "cheat" controllability for an autonomous vehicle is to always have a trained driver behind the wheel… which is what most of the autonomous industry is doing. That's why it's such a big deal that we've done a fully unmanned test.
In February 2018, Starsky Robotics completed a 7-mile fully driverless trip in Florida without a single human in the truck

Safety: an AV's Most Important Feature
To recap: safety is highly important for self-driving trucks. At Starsky we want to quickly get unmanned trucks on the road. It's really hard to design a system that's safe without a physically-present human as backup.
Which is why it quickly became apparent that our first senior hire wasn't going to be a controls vet or a machine learning pro, but a Safety Lead.
We've built a robot that can drive a truck. We've built teleoperation capable of parking a 53' trailer in between two others with a foot of clearance on each side. We've built a highway autopilot capable of keeping a 45,000 lb trailer in its lane through high winds and heavy rain.
But making sure that system is safe to regularly go out amongst the motoring public without a physically-present human is a real challenge.
A Weak Bench
When we started meeting with safety engineers in and around the autonomous space we noticed something: while everyone and their brother wants to do machine learning for autonomous vehicles, almost no one is working on safety. And many of those tasked with safety roles are looking in the wrong direction.
At one point, we even had a "big deal" safety guy ask us why we even needed to design a system that was safe without a physically-present person in it, because "what's the point of a self-driving car with no one in it?"
We met people who were willing to audit the work of others but not set safety policies (and vice versa). Folks who had good thoughts about hardware but stumbled when it came to software (and vice versa). And many who didn't know how to start in an industry where for some time we'll be our own strictest critics.
And then we met Walter.
Walter Stockwell: Starsky Robotics' Director of Safety Policy
The clear exception to all of the above was Walter Stockwell, who I'm incredibly pleased to announce has joined Starsky as our Director of Safety Policy.
Walter Stockwell, Director of Safety Policy at Starsky Robotics
From our first conversations with him, Walter has helped shift our view of "safety" from the naive-yet-common concept of safety as an absolute to thinking about safety as a process, and as a series of qualified statements.
A system is not definitively safe after it completes n tests, whether n is one test drive or RAND's 11 billion miles. A system becomes safer as it is designed for safety and rigorously validated. A system will never be absolutely safe against all conceivable threats, but it needs to be able to operate free of unacceptable risk within a specified operational design domain.
Walter brings not only this level of engineering maturity, but also years of hardware/software experience, experience in safety system engineering, practice in developing an organization to raise safety concerns, and leadership experience setting national safety policy in his last role at DJI.
"Starsky Robotics has become the frontrunner in the autonomous vehicle space. It has managed to solve some of the most complex challenges for driverless trucks. It's evident that everyone at Starsky has been focused on safety from the beginning and they are years ahead of the competition. I'm excited to join this immensely innovative and forward-thinking team," says Walter.
With Walter on our team, I'm incredibly excited to see what's just down the road.
Keepin' on Truckin'
-Stefan
(P.S. We are still looking for a slew of team members to help us get our unmanned trucks on the road, not least in Controls engineering and Machine Learning, among many other roles. You can apply at starsky.io.)
Making (Autonomous) Trucking Safe was originally published in Hacker Noon on Medium.