Hands off the wheel


March 19, 2015

It’s been a fast-paced year for the automotive industry on its way towards autonomous driving (AD). That’s what more than 150 industry experts agreed on during the annual Tech.AD conference in Berlin. Independent consultant and moderator of the conference Richard Bishop remembers that a year ago it was revolutionary for a car OEM to show a self-driving car (SDC) and some shiny videos of the test track. This year, cars like the Mercedes F015 drove themselves to the CES venue in Las Vegas. Audi gambled even higher and covered over 500 highway miles from San Francisco to the glittering show floor. And that was not their only bold move. They flexed their muscles by letting media people take the driver seat in the A7 show-off and giving them a hands-free ride. BMW pulled out its magic wand and used a Samsung Gear S smartwatch to teach an i3 some smart moves: the car dutifully found its parking space and came back again. But as exciting as the times seem, there’s still a lot of work to be done before we can snooze or type away on our daily commute: safety and security have to be ensured, legal boundaries set, privacy concerns cleared up and hearts won over. All this doesn’t come easy. So first things first, and that is hands-on before hands-free: extensive trials with more than a handful of cars need to be conducted to collect the data and experience that help define the parameters for safe SDCs. The big question that remains hanging in the air is how to go about it. Do it the Google way and jump right into Level 4 (full self-driving, with human driving as an alternative), or play it safe the car industry way and move in iterations from the current Level 2 to Level 3 (some autopilot functions only)?

Play it safe

One company that has safety embedded in its automotive genes is Volvo Cars, and its Jonas Ekmark acknowledges that the future of mobility is all about freedom and emotion once safety is ensured. In his view, public transportation is a major competitor to autonomous driving, as one can keep an eye on one’s PC instead of on the road. That’s why Volvo plans to gather tons of hard data in one of today’s most ambitious hands-on trials, to stay ahead and competitive: the DriveMe project is already attracting numerous volunteers to be part of a fleet of 100 SDCs roaming certified routes on public roads around a true Swede spot: the city of Gothenburg. The maximum speed of 70 km per hour does not seem to diminish the anticipated fun. During the drive, multiple sensors and cameras will constantly compare their values with the landmarks of a cloud-based high-density 3D map for accurate positioning. Three tunnel sections provide an extra thrill. It’s a Level 3 trial with ASIL D, so once the car leaves its lane or the certified road the driver will need to take over. If, despite all blinking lights, warning tones and seat vibration, he fails to do so, the car will bring itself to a safe stop. Safety has two souls; therefore the host vehicles will be equipped with redundant systems for sensing, signaling, controlling and actuating, including hydraulic steering and braking. The eyes are made up of a forward-looking trifocal camera, corner radars and a long-range radar. The brain is a second, dedicated AD ECU. Besides clarifying the necessary legal aspects, Volvo is bound to boost its learning on how to improve traffic efficiency and safety, what customers expect from self-driving cars, and how outsiders react to this new breed of vehicles.
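The handover choreography Volvo describes — warn, wait, then stop safely — is essentially a small state machine. Here is a minimal sketch of that logic; the state names, the grace period and the input signals are illustrative assumptions, not Volvo’s implementation:

```python
from enum import Enum, auto

class AdState(Enum):
    AUTONOMOUS = auto()
    HANDOVER_REQUESTED = auto()   # blinking lights, tones, seat vibration active
    SAFE_STOP = auto()            # driver failed to respond in time
    MANUAL = auto()

HANDOVER_TIMEOUT_S = 10.0  # hypothetical grace period for the driver

class HandoverSupervisor:
    """Escalates a take-over request and falls back to a safe stop."""

    def __init__(self):
        self.state = AdState.AUTONOMOUS
        self.request_elapsed = 0.0

    def update(self, dt, on_certified_route, driver_has_control):
        if self.state == AdState.AUTONOMOUS and not on_certified_route:
            # Leaving the certified road or lane: ask the driver to take over.
            self.state = AdState.HANDOVER_REQUESTED
            self.request_elapsed = 0.0
        elif self.state == AdState.HANDOVER_REQUESTED:
            if driver_has_control:
                self.state = AdState.MANUAL
            else:
                self.request_elapsed += dt
                if self.request_elapsed > HANDOVER_TIMEOUT_S:
                    # Despite all warnings the driver did not respond.
                    self.state = AdState.SAFE_STOP
        return self.state
```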

Future transport

Vehicles are not the only ones to change their role, according to Daimler. In their Future Truck 2025 case study, presented by Markus Kirschbaum, they envision a new breed of people who will turn from simple drivers into transport managers. Not an unrealistic role considering a worldwide increase in road transportation of 39% by 2030. Instead of the monotony of hands on the wheel, the truck’s occupants will run their logistical office and control the tasks of when and where to take on and deliver goods, all the while sitting comfortably as the truck gobbles up the miles or loads and unloads goods automatically at the destination. There’s little doubt Daimler’s truck division sees autonomous driving as a major enabler for future transport with a high potential for optimizing the flow of goods. AD can facilitate extended service hours with the much-anticipated side effect of preventing accidents caused by tired or bored drivers. After first tests, Daimler’s approach is pragmatic. A simple flip of a switch activates the autopilot that keeps the truck on track at a safe distance from others. With a stereo camera looking out 100 meters ahead, two front radar sensors covering 250 meters at an angle of 18° and 70 meters at 130°, and side radar sensors sweeping 60 meters at 170°, the system’s blind spot is currently the trailer. Their testing shows that this is okay, and Daimler doesn’t want to complicate the system by making it full circle; this would involve working with multiple trailer manufacturers who would need to provide the extra eyes. Connectivity is not the most important feature either. Nevertheless, the vehicle’s ability to communicate with its peers and the infrastructure will become a major focus once available. Japan’s got a head start here. But the questions pondered at Daimler are not only of a technical nature. What about design? Do we need external signs to tell other traffic participants what mode the vehicle is in, like a chameleon changing its colors? How much information needs to be displayed to the driver internally? Can we reduce the dashboard and throw out the mirrors? Replace the driver seat with a swivel chair equipped with a tablet and cup holder? It will take a while to find out. In the meantime, Daimler plans to introduce new features globally every two years to support the goal of autonomous driving.
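For readers who like to see coverage numbers at work, the quoted sensor set can be expressed as a list of range and field-of-view zones and queried for gaps. The sketch below does just that; only the ranges and radar angles come from the talk, while the camera’s opening angle and all zone headings are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SensorZone:
    name: str
    range_m: float        # how far the sensor sees
    fov_deg: float        # opening angle of the field of view
    heading_deg: float    # zone center relative to the vehicle nose (0 = ahead)

# Ranges and radar angles as quoted; camera FOV and headings are assumed.
ZONES = [
    SensorZone("stereo camera",     100.0,  45.0,   0.0),
    SensorZone("long-range radar",  250.0,  18.0,   0.0),
    SensorZone("short-range radar",  70.0, 130.0,   0.0),
    SensorZone("side radar left",    60.0, 170.0,  90.0),
    SensorZone("side radar right",   60.0, 170.0, -90.0),
]

def covered(bearing_deg: float, distance_m: float) -> bool:
    """True if any sensor zone sees a point at the given bearing/distance."""
    for z in ZONES:
        off = (bearing_deg - z.heading_deg + 180.0) % 360.0 - 180.0
        if abs(off) <= z.fov_deg / 2.0 and distance_m <= z.range_m:
            return True
    return False

# Directly behind the tractor unit -- where the trailer sits -- stays uncovered:
print(covered(180.0, 10.0))  # False: the trailer is the blind spot
```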

Drive in style

But there are other facets to autonomous driving. What about our personal driving style? Does it make sense to reflect this in personal AD modes? Start with a generic AD mode that can be gradually trained to our personal liking? Ibeo’s Dr. Lange sees a future with intuitive driving where vehicle and driver get an equal share in the act. Predictive information is supplied by the car to support the driver in his decisions. The information is based on actual laser sensor values that are compared with a reference system, enabling the vehicle to define objects around it and their position relative to its own. Ibeo calls this OELA – Object to Ego Lane Association. They envision setting a benchmark with this system, as laser scanners are very robust against other sources of light and disturbances like darkness and rain. Their test project at a roundabout in the UK proves it: laser scanning generates more detail than a video camera. With one car blocking the view of other vehicles or lane markings, for example, position detection can be problematic for a video system. Recent activities of Ibeo include participation in BMW’s self-parking i3, Audi’s self-driving A7 trip to CES, and the Rinspeed concept car “Budii” presented at the Geneva show.
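The core idea of OELA — relating a scanned object’s lateral position to the ego lane’s geometry from a reference system — can be illustrated in a few lines. This is a toy sketch; Ibeo’s actual algorithm is not public, and the lane width is an assumed standard value:

```python
LANE_WIDTH_M = 3.5  # assumed standard lane width

def ego_lane_offset(obj_y: float, lane_center_y: float) -> int:
    """Return -1, 0 or +1 for left-of, in, or right-of the ego lane.

    obj_y: lateral position of a laser-scanner object in vehicle coordinates.
    lane_center_y: lateral position of the ego lane center, taken from the
                   reference system the scanner values are compared against.
    """
    offset = obj_y - lane_center_y
    if abs(offset) <= LANE_WIDTH_M / 2:
        return 0          # object occupies the ego lane
    return 1 if offset > 0 else -1

# Example: an object 0.8 m off lane center is still associated with the ego lane.
print(ego_lane_offset(obj_y=0.8, lane_center_y=0.0))  # -> 0
```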

Basics and ethics

But before we can let robo-buddies into our lives we need to get the basics right. SBD’s head of safe car, Alain Dunoyer, reminds us that driving is still a highly dynamic scenario, and the complexity of autonomous driving, despite making some things easier, adds to the challenge. The car industry’s approach is a step-by-step one where repetitive tasks in cruising scenarios are supported or taken over by automatic car functions. But how does the driver fit in? As the supervisor monitoring the machine? This does not really make sense, as humans are easily distracted when inactive. As the backup pulled into action when the car can’t compute the situation? This makes sense in emergency situations but requires special training for drivers to keep reaction time to a minimum. Take a bottle that’s thrown out of the window of a car ahead of you. Or a blown tire from a truck soaring through the air and bound to crash into your windshield. How is the car to know what’s coming? Current sensing cannot determine the texture or density of objects. The third option is a completely new field of research where driver and car have a cooperative relationship. But who is the boss in this scenario? Who will win the argument when both disagree, and will there be time for arguments at all?

There are many questions that need to be addressed, and many affect the whole industry. A unification of the HMI is in order, which means standardization across different car brands. The use of graphics, audio, text and haptic measures should be consistent across all types of cars. Green or red lights in the car should always carry the same message, and controls, buttons and switches should always activate the same features. The dashboard as we know it is bound to change anyway, as the driver needs to know the ADAS status at all times and safety-related messages need priority above all other functions. A warning sound must not be mistakable for an anomaly in the latest Spotify music stream. It’s paramount to remove confusion for the driver. It’s also paramount to find ways to prevent the driver from cheating the system, like attaching a can to the steering wheel.

It’s in human DNA to try and trick systems: out of curiosity, out of laziness, out of mischief or worse. A can on the steering wheel is a minor nuisance compared to recent cyber security breaches, and that’s why the car industry needs to embrace a mindset similar to the IT industry’s. But security is not the only pothole in the road to autonomous driving. The increasing complexity of systems used in the car will affect the whole ecosystem. What happens when you need to replace your windshield? If the repair shop around the corner doesn’t have the equipment to accurately calibrate the camera and sensors, your whole positioning is out of whack. But what dealer will volunteer to invest in a 5,000-euro kit when it doesn’t align with his cash flow? There’s only so much that a dealership wants to deal with, and ADAS systems are not on their wish list. Yet there are more important challenges to be met than processes that can be put into place with the right money or by twisting some arms.

How will the self-driving car of the future cope with unexpected scenarios? A reference system for possible objects to be encountered, roads to be driven on or traffic participants to be met can never be complete, especially as it depends on local driving habits. People who have been to Rome or Vietnam, with their many scooters and seemingly reckless drivers, can relate to this. How should an autonomous vehicle react in case of an anticipated crash? Do we need to program human behavior into the cars? And if so, what morality will it reflect? Protect others, or protect the occupants of the car? These are ethical thoughts similar to those pondered in the so-called trolley dilemma. Is it acceptable to endanger one life to protect many? It seems that sense and sensing play one of the most vital roles for the future of AD.

Content and Context

Equally important is to make sense of what the car sees and to put this into context. Dietmar Rabel from HERE, the Nokia-owned provider of map-related solutions, points out that the innovation cycle in automotive map applications keeps accelerating and has long since passed the pure map-providing stage. Highly precise positioning and localization is necessary to enable accurate planning ahead for SDCs, including maneuvers beyond sensor visibility. HERE’s intelligent car division sees the solution in a connected, cloud-based service with three different layers. The first provides LIDAR-created road geometries, expected to stay fairly constant except for construction work. Real-time information is available through the second layer, designed as a streaming service that delivers the respective map info as necessary. To provide the driver with his own comfort level in the SDC, the third layer contains information on how a human would handle the car on the particular stretch of road. It is important to define different profiles for this humanized driving, as elderly people might not enjoy the burn-rubber style of a hot-headed youth in a tight curve. Tests also showed that even a person sporting a speedy driving style might freak out in certain situations. Envision a narrow medieval gate that you would pass slowly in order not to brush the walls with your mirrors, whereas the SDC might drive through at the full allowed speed, computing the necessary distance in millimeters and adjusting in the blink of an eye. The cloud functions as an extended sensor to provide the car with aggregated real-time information. It’s crowdsourcing that allows reports from single cars to be confirmed: the road does not have to be icy just because one car reports it to be slippery in winter. The car could simply have bad tires. HERE is working with Nokia Network Services to perform some of the calculations directly within the cell towers to get reaction times down to milliseconds. This will also help achieve a reliable density level, which is necessary for vehicle-to-vehicle communication (V2V) and by far not there yet. Trucks, however, are excellent for piloting V2V sensor data. In the future, once there is a solid database in the car, more effort will go into keeping the data connection up and running than into ensuring the data is downloaded correctly.
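To make the three-layer idea concrete, here is a toy sketch of how a planner might query such a service, including the crowd-confirmation rule for single-car hazard reports. All class and method names are hypothetical, not HERE’s API:

```python
class RoadGeometryLayer:
    """Layer 1: LIDAR-derived geometry, fairly static except for construction."""
    def geometry(self, tile_id):
        return {"tile": tile_id, "lanes": 2, "curvature": 0.01}

class LiveLayer:
    """Layer 2: streamed real-time information (hazards, traffic, weather)."""
    def stream(self, tile_id):
        yield {"tile": tile_id, "hazard": "slippery", "reports": 1}

class HumanizedDrivingLayer:
    """Layer 3: how a human would drive this stretch, per driver profile."""
    def comfort_speed(self, tile_id, profile="elderly"):
        return {"elderly": 40, "sporty": 70}.get(profile, 50)

def plan_segment(tile_id, profile):
    geo = RoadGeometryLayer().geometry(tile_id)
    # Crowd-sourced confirmation: act on a hazard only when several cars
    # report it, since a single report may just mean bad tires.
    hazards = [h for h in LiveLayer().stream(tile_id) if h["reports"] >= 3]
    speed = HumanizedDrivingLayer().comfort_speed(tile_id, profile)
    return {"geometry": geo, "hazards": hazards, "target_speed_kmh": speed}

print(plan_segment("tile_42", profile="elderly"))
```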

The luxury of driving

For Audi, the connection between car and driver is all about fun and excitement, in a secure and safe way of course. Dr. Bjoern Giesler points out that robot taxis are not on the agenda for the luxury brand. Level 3 piloted driving in dense traffic or for parking at home, however, is only a few years away. A pressing question is how to ensure the fun and alertness of the driver during an otherwise dull commute. Of course you can monitor the interior and stop the car in case the driver falls asleep and can’t take back control when necessary. But who wants his car to tell him to take a break? Audi did a study on average take-over times in different scenarios. The less the driver was occupied, the longer the reaction time. But how do you occupy yourself while driving? Monitor the car? Who would buy a system that doesn’t allow you to take your eyes off the controls? Best is to have a little something going on, to give the driver a side task and thus keep his alertness level up and running. The study’s results show that the take-over time of 4.2 seconds while monitoring the car can be reduced to 1.9 seconds if the driver undertakes productive tasks offered by the integrated infotainment system.

Eclipse of human reflexes

What are the chances that occupants of cars can detach themselves completely from any driving task? Can the self-driving car beat the human body to ensure autonomous driving in any condition and scenario? Questions asked by NXP’s Meindert van den Beld, who thinks that cars need to be more intelligent than aircraft. In comparison, vehicles on the road operate in a narrower space, require more decisions per second, face many unexpected circumstances, and have less physical space and fewer financial resources available for integrating highly redundant systems. There are cases, like IBM’s victory in chess in 1997 and in Jeopardy in 2011, or Microsoft’s image recognition based on deep convolutional neural networks (CNNs) as demonstrated in the ImageNet 1000 challenge, where machines have eclipsed human abilities. But recognition and the calculation of possibilities are not the only capabilities that make up a human body.

What about reflexes that are triggered involuntarily? Some reflexes are already mimicked by vehicle technology. The movement of the eyes in the direction opposite to the head’s turn, the vestibulo-ocular reflex, can be seen as a model for ESP and traction systems or adaptive headlights. Seat belts and airbags or pre-crash and emergency brake functions are similar to the Moro reflex, where you suddenly spread your arms when you have the feeling of falling. And the change of pupil size with the intensity of light, the pupillary light reflex, is reflected in automatic white balance correction. And just like the human body will focus all its energy on the most important organs and gradually shut down support functions in an emergency, the SDC needs to monitor external and internal sources closely to start a graceful degradation when systems malfunction.
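Graceful degradation, stripped to its essence, means shedding functions in reverse order of vitality as resources fail. A minimal sketch, where the subsystem names, priorities and power costs are assumptions for illustration:

```python
SUBSYSTEMS = [
    # (name, priority: lower number = more vital, kept longest)
    ("braking",       0),
    ("steering",      0),
    ("perception",    1),
    ("navigation",    2),
    ("infotainment",  3),
]

def degrade(available_power: float) -> list:
    """Keep the most vital subsystems within the remaining power budget."""
    kept = []
    budget = available_power
    for name, _prio in sorted(SUBSYSTEMS, key=lambda s: s[1]):
        cost = 1.0  # assume one unit of power per subsystem for simplicity
        if budget >= cost:
            kept.append(name)
            budget -= cost
    return kept

# With power left for only three subsystems, comfort functions are shed first:
print(degrade(3.0))  # ['braking', 'steering', 'perception']
```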

The cars of the future will have more human-based functions. Take the myotatic reflex, which automatically regulates our skeletal muscles so our body can maintain its balance. Multiple ADAS sensors, fast computers that are able to learn, and precise actuators will make up the fine clockwork that makes the SDC tick. Human emotional signals like the reddening of the face will be reflected in visualized communication between vehicles and their infrastructure using lights of multiple colors, and sounds. But can an SDC like a robot taxi ever compete with the aura of a human being? Much can be learned from the aircraft industry, where despite redundant technical systems there are two pilots to exude a feeling of trust. Would you board a plane with no captain? Difficult. Probably only if your life or that of your loved ones depended on it. Would you get into a taxi without a driver? Maybe. It depends on how it’s presented to you, as videos of Google’s self-driving car taking volunteers for a ride around the block show. As long as there are no major accidents with casualties, everything is likely to be fine. But what happens if the car behaves differently from how you would react in a certain situation? And all bizarre feelings aside, what about moral decisions? Can cars really be designed to learn and judge situations based on ethics?

Graphical recording of the we.CONECT conference Automotive Tech.AD Berlin 2015 by Susanne Asheuer

Pave the legal way

Therefore it’s paramount to create a legal framework for autonomous driving. Professor Dr. Eric Hilgendorf of the University of Wuerzburg outlines the current situation. The legal aspects of driving have been governed by the Vienna Convention on Road Traffic since 1968. Its articles 8 (there must be a driver) and 13 (the driver must be in control of the vehicle at all times) stood in direct contradiction to AD. Amendments pushed by Germany, Italy and France, and their luxury car brands, were made in spring 2014. Drivers are now allowed to take their hands off the steering wheel but must monitor the self-driving car at all times. This paves the way in contracting countries but still poses challenges of national vs. international law. In the US, for example, the legal boundaries are regulated by the National Highway Traffic Safety Administration (NHTSA); its Federal Motor Vehicle Safety Standards (FMVSSs), to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles. The regulations can be analyzed and decided upon by the US states individually, defining how their vehicle codes would or should apply to automated vehicles, including those that have an identifiable human operator and those that do not.

Above legal technicalities, liability is a pressing question to be answered. First, there is civil liability governed by tort law, considering contractual and strict liability. Second, there is criminal liability, defined as taking responsibility for any illegal behavior that caused harm or damage to someone or something. And this is the dilemma: SDCs relieve the driver of his driving tasks and, at the highest level of integration, also of monitoring the autonomous system. But what about checking the car’s functions before the commute to ensure all systems are go? Take a toddler who goes after a toy that fell behind the car and is injured because the rear-looking sensors don’t work properly. This would be a case of negligence. The same argument applies to malfunctions during the drive; therefore, changes to the duty of care can only be made once the autonomous systems have proven themselves reliable. Until then, consequently, drivers would not be allowed to conduct side tasks during the ride.

What about the liability of others? Cars are becoming increasingly interconnected. Exchanging or downloading data in driving cars will become a standard process that needs to be made secure to prevent any malware from being installed. The European E-Commerce Directive provides a system of provider liability. What about the car makers? Can employees responsible for designing and programming systems be held liable? How can they prevent potential causes of liability? One way would be to include autonomous accident avoidance systems. But how should those be programmed, and what rules need to apply? Is it the decision of one life against many? Germany’s dogma is that one life is as valuable as any number of lives. What about human body versus property? Can we, and do we really want to, let cars make these decisions? From a legal perspective, the human driver decides in a split second during the live event and cannot anticipate the collateral damage, and therefore might not be liable. Systems, however, are designed and programmed in advance. The programmers can think of possible scenarios in advance and design accordingly; hence, they can be held liable. A tough call for employees in the thick of the action.

Privacy please

But liability is not the only passenger in autonomous vehicles. There’s privacy in the back seat as well, reminds us Freyja Van den Boom, a researcher in the field of law and ICT at KU Leuven in Belgium. While politicians are pushing for a change to the Belgian Highway Code to make testing of SDCs on public roads possible, major concerns remain regarding performance in poor driving conditions and liability, security and, not to forget, privacy, as data is said to be the new oil. Autonomous doesn’t equal anonymous. Quite the opposite is true, as examples like Tesla, which has extensive knowledge of its drivers’ behavior, prove. Automated processing of personal data, defined as information relating to an identified or identifiable person, the latter clinically referred to as the data subject, falls under the Data Protection Directive (95/46/EC).

There’s a difference between the controller, who collects the data, and the processor, who acts on the controller’s behalf. The rules say the controller needs to treat data lawfully and fairly, meaning he needs the unambiguous consent of the data subject, in the form of a contract, to use data that is collected or captured (think of dash cams or GPS trackers). The controller also needs to determine for which task (which must be adequate) and how he will use the data BEFORE processing it. Re-use requires additional legal consent of the data subject, who has the right to receive the evaluated information and also to request access, correction and deletion of the raw data. It is understood that the controller has to implement appropriate technical and organizational measures to ensure data security. The car industry would be well advised to consider a privacy-by-design approach. They are also obliged to let the user know with whom his data is shared to enable autonomous driving. Another twist is added by emerging new concepts where we don’t own vehicles anymore but just buy mobility.

Google’s mobility to go

A concept that Google seems to follow, judging from their SDC activities and investments in mobility networks and apps. Egil Juliussen, Ph.D., Director of Research for Infotainment and ADAS at IHS Automotive, sheds light on the question that currently shakes up the industry: will Google take it up with the OEMs? You bet. Why? Autonomous transportation at its most developed level, with no human driving at all (IHS defines this as Level 5 rather than the NHTSA’s Level 4), will require massive computing power as well as data storage and processing capability. There are surroundings to be scanned and evaluated to determine the exact position; all functions of the driving system need to be adjusted meticulously to comply with traffic rules and avoid crashes while navigating to the desired location; available data must be current and the vehicle software up and running to ensure passenger safety, emergency maneuvers and communication with the environment. Tons of data with complex structures will need to be handled, mapped, categorized and incrementally distributed.

Guess what Google is good at. And this affects all kinds of self-driving vehicles anticipated for the future, independent of ownership, purpose or cargo. Will Google muck up every car maker’s turf? Probably not, at least in the beginning, as the respective business models differ. Luxury brands still appreciate their customers’ desire for private ownership that gives them control over their car. Therefore they target Level 4 vehicles with both a self-driving and a human driving mode. Concept cars like the Mercedes F015 additionally cater to space and privacy as the luxury goods of the future. Google, however, wants to tap into the honey pot by addressing the volume market, which is prone to turn into a car-as-a-service field enabling flexible mobility for anyone. That’s why they take a shortcut and jump-start Level 5 vehicles with a self-driving mode only. First targets are fleets, taxi and transportation services, with solutions out on the road by 2020, whispers say.

But slow and steady wins the race, right? The automotive industry has decades’ worth of experience in its trunk, and it shows that incremental development is the way to go. Be cautious and play it safe; that’s better than being sorry. Continue down your R&D path, add a playground in Silicon Valley to test some new toys, and leverage your suppliers’ expertise. If they want to lean on Google and other newbies in the field, fine, but those won’t get to take our reins. Marketing can act the crowdsourcing daredevil and pick the customers’ brains. That’ll do the job and likely lift autonomous driving in luxury cars to Level 3 by 2016 (Tesla), 2017/18 (Ford, Volvo, GM, Honda and Toyota) or 2020 (Mercedes, Audi, BMW), and to dual-mode Level 4 by 2025; whereas Google might have Level 4 vehicles out on the road starting in 2017 and Level 5 as early as 2020 if they get their stuff right. In comparison, the auto industry anticipates emerging growth in the L5 segment a decade later.

Google’s wiring

So what is it that makes Google, despite their lateral entry, roll faster than the pros? Is it their mission to organize the world’s information and make it accessible for everybody to use? Since their foundation in 1998, experience has taught them a few things that matter most: focus on the user, first and foremost; whatever you do, do it fast and extremely well; information is needed everywhere, by everybody. Google’s success, reflected in a comfortable yearly net income growth of over 20 percent, in large part owed to advertising, allows them to invest heavily in R&D to become technically sophisticated and to acquire knowledge where necessary. More than 160 firms have been scooped up by the search giant, which has expanded its reach into many segments of daily life like a kraken. Recent acquisitions have focused on companies good at maps and locating, home automation, machine learning and robotics.

Google’s well-filled war chest also allows them to lavishly invest in businesses that have the potential to disrupt whole markets or are already doing so. So far, more than 250 companies have gotten lucky. However, Google is not afraid of competing with its formerly fostered friends, as the search giant’s start of its own activities in the field of ride-sharing services shows, despite the heavy investment in Uber ($258M in 2013 plus additional $$ in 2014). The internal research department, dubbed Google X, is encouraged to think beyond the blue sky and shoot for the moon. That’s what they did in 2008 with the first Level 4 self-driving cars, which have so far covered more than 700,000 miles, and what they do now with their goal to go purely electric. No gas for Google. Other prominent projects include Google Glass, which could be used with in-car apps and support augmented reality output for head-up displays, or Project Loon with its network of balloons to provide internet access to remote locations. Catering to the ever-increasing amount of transported data and the demand for higher bandwidth, Google has started to roll out its own fiber-optic network with speeds of 1 Gbit/s in several US cities.

The main driver behind activities like these, however, is their core competency: software development. Paired with their strong and fast innovation mentality, this has turned them into one of the most desirable companies to work for. Google has proven their ability to build up the infrastructure needed for organizing and delivering vast amounts of data while leading in efficiency by aiming for the use of sustainable technology. Developing software for SDCs and building up the system necessary to bring clean mobility services to billions of people seems nothing but an entertaining yet rewarding challenge for them. In addition to already existing revenues from the Android Auto OS and Google map services, software licenses and annual fees for updates are estimated to be a multi-billion dollar market over the next two decades. The value of software built into a Level 3 car is around $60, plus some yearly $15 for updates, and will likely double for Level 4 and 5 vehicles. Not to forget the hoard of advertising money to be yielded from web searches during the hands-free commute. Hence, Google’s strategy will most likely not include building cars themselves overnight, but rather working with someone who already does, or buying them. Although a little byte says they may have learned from the Motorola stunt. In any case, there’s huge potential for agile or ailing car makers and Tier 1 suppliers with manufacturing capability, like Magna Steyr, or with a large vendor’s tray, like Bosch, to hop onto the bandwagon and join the Google gold rush. The ones that do will walk a tightrope nevertheless. The luring rewards are huge; so are the ties that bind.

Farmers market

An area where close ties are particularly desired is agriculture, at least between harvester and tractor. The better the two vehicles are dovetailed, the higher the yield. The potential for increasing output offered by autonomous driving is unmatched by what humans could achieve. But what are the variables? What autonomy levels are needed or make sense in farming? Prof. Dr. Noack from the University of Applied Sciences Weihenstephan-Triesdorf challenges his peers, among them Thilo Steckel of agricultural engineering company CLAAS and David Brunner, who researches autonomous vehicles in vineyards for Geisenheim University. Which requirements are common to on-road and off-road use in agriculture? Can the two even be compared?

In numbers, not by a long shot, that’s for sure. Also, farming areas have a much higher complexity compared to the mostly well-defined public roads with structured lanes. Requirements regarding steering, traction control, robustness, reliability, exact cooperation with other machines and the ability to learn differ widely as well. What about safety? Normally, there shouldn’t be many unexpected incidents interfering with the harvesting process. The chance of a bicyclist darting out of a dark alley is slim, and the same goes for a plastic bottle thrown from a car in front causing an emergency maneuver because the vehicle can neither define the object nor compute its impact. Strollers and joggers can clearly see the machines in action and stay out of the way. If they don’t, that’s their own fault, isn’t it? The operator of the machine regularly announces his presence and undivided attention via the dead man’s switch, lest the machine go around in circles. No harm to be expected there either.

What about positioning? Although it doesn’t make much sense to switch into autonomous gear on public roads, the positioning capabilities of agricultural vehicles are more accurate than those of cars. How else should one apply the seeds exactly on the fertilizer tape that was laid down months ago, before the winter? What about scanning the environment? Nothing overly fancy; mid-range radar sensors are just fine for the job: looking out far enough and unaffected by dirt and dust. Drones might help in the overall planning process if it weren’t for the line-of-sight regulation rendering their use void. More important are cameras to keep an eye on the fill level. What about security? A tricky topic. Harvesting data on soil consistency, nutrition, the maturity level of the crops and the work hours spent reaping the field provides farmers with valuable details for future planning. For smaller farmers who need to outsource to subcontractors, however, this turns into a double-edged sword, as their farming assets become an open book. The challenge is to find the right balance between exploiting the yield increase through high yet affordable degrees of automation and securing data integrity.

Driving away distraction

What about the integrity of drivers in automated cars? Humans are easily distracted in dull environments that don’t challenge their attention. How does the interface between car and driver need to be designed? A question asked by BMW’s Dr. Lutz Lorenz, who adds that it might be necessary to change driver’s license requirements for autonomous vehicles. Cars are not quite comparable with planes, but pilots are trained for automated flying and for the take-over scenarios that might become necessary. How would those need to be defined in cars? Is it enough to put your hands on the steering wheel? Does the touch need to happen with a certain level of force? What happens when you brush the steering wheel by mistake? One thing is for sure: there need to be multi-mode warnings employing audio, visual and vibration tools to shorten the time the driver needs to take over to a minimum. Good vibes, a flashing head-up display and an unmistakable warning sound are in order. After all, it’s not a living room you are in. It’s a car, and it’s dynamic.

Dr. Dietrich Manstetten, chief HMI expert at Robert Bosch, follows suit and acknowledges that the role of the driver is changing. The implications for HMI requirements need to be derived from three human behavioral categories: skill-based, knowledge-based and rule-based. The good driver of the future is still aware of the situation around him despite the shift of his visual attention. This sounds a little like a Catch-22, unless the vehicle helps with the interplay of driving and non-driving moments. One viable option is to offer side tasks while monitoring the driver, sensing possible states of distraction and drowsiness or changes in health that trigger adaptive assistance from the vehicle controls (read: warning bells) or an emergency maneuver like a controlled stop at the side of the road. Maybe the Google way is a better approach than the one the automotive industry is taking?
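The monitor-and-adapt loop Bosch outlines can be caricatured as a simple mapping from estimated driver state to a vehicle response. The sketch below is illustrative only, with hypothetical thresholds and action names:

```python
def assistance_action(drowsiness: float, distraction: float,
                      health_ok: bool) -> str:
    """Map driver-state estimates (each 0..1) to a vehicle response."""
    if not health_ok:
        return "emergency_stop"       # controlled stop at the side of the road
    risk = max(drowsiness, distraction)
    if risk < 0.3:
        return "offer_side_task"      # keep alertness up with light tasks
    if risk < 0.7:
        return "warn"                 # warning bells, pause side tasks
    return "request_takeover"         # prepare handover or a safe stop

print(assistance_action(drowsiness=0.5, distraction=0.2, health_ok=True))
# -> 'warn'
```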

Validating safety

The market will decide, TRW’s Dr. Karl-Heinz Glander is sure. Google’s advantage is that they can benefit from the many assisted driving functions already out on the road. Lane keeping, adaptive cruise control, collision mitigation braking, traffic sign assist, blind spot detection, and emergency braking and steering or lane change are all functions already used in preparation for automated driving. To ensure safety it is mandatory to include a separate safety domain ECU in the functional architecture, designed to cover four areas: observation – perception – decision & planning – realization. Looking at today’s specifications for validating automated functions, e.g. one million miles for emergency braking alone, it’s clear that the rules need to change and a more comprehensive approach is required. Redundancy will play a key role, and the functions to be backed up include sensors, actuators, power supply and communication; not to forget computing capabilities, which should be split into multi-core units for high performance and main computing respectively.
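The four-area architecture — observation, perception, decision & planning, realization — is at heart a checked pipeline. A skeletal sketch follows; the stage internals are placeholders, and only the structure and the idea of a supervising safety domain come from the talk:

```python
def observation(raw_sensors):
    return {"tracks": raw_sensors}            # fuse raw sensor readings

def perception(observed):
    return {"objects": observed["tracks"]}    # classify tracks into objects

def decision_and_planning(scene):
    return {"trajectory": "keep_lane"}        # choose a maneuver

def realization(plan):
    return {"actuators": plan["trajectory"]}  # command steering and braking

def safety_domain_ecu(raw_sensors):
    """Run the chain; a real safety ECU would cross-check each stage
    against redundant sensors, actuators, power and communication paths."""
    data = raw_sensors
    for stage in (observation, perception, decision_and_planning, realization):
        data = stage(data)
        assert data is not None, f"{stage.__name__} failed plausibility check"
    return data

print(safety_domain_ecu(["radar_track_1", "camera_track_2"]))
```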

Software and its development are becoming increasingly important components as well. Functionality is getting more complex by the minute (Toyota’s software reportedly contains 10,000 global variables), and the majority of software bugs are found in the integration stage, not during programming, notes Dan Mender from Green Hills Software. Their experience is that fifty percent of total software development time is consumed by debugging. Therefore, the earlier in the process you start to look for bugs, the better. On average it takes two to three hours to fix a bug during coding. The same bug might occupy as much as sixteen to eighteen hours if found during the integration stage. But not all software is created equal. For the automotive industry it’s vital to look for partners beyond the mainstream database vendors who are entering the stage one by one. Their security evaluations reach only the EAL4+ level and offer no protection against hackers. Green Hills Software claims to be the only company catering to the EAL6+ security level.
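The arithmetic behind those debugging figures is worth spelling out, as the gap compounds quickly. The bug count below is a made-up example; the per-bug hours are the midpoints of the ranges quoted above:

```python
# Back-of-the-envelope math for the quoted debugging figures:
# 2-3 hours per bug during coding vs. 16-18 hours at integration.

bugs = 100                  # hypothetical bug count for a project
hours_coding = 2.5          # midpoint of 2-3 h per bug
hours_integration = 17.0    # midpoint of 16-18 h per bug

saved = bugs * (hours_integration - hours_coding)
print(f"Catching {bugs} bugs early saves ~{saved:.0f} engineer-hours")
# -> Catching 100 bugs early saves ~1450 engineer-hours
```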

Ensuring every conceivable security level in software and hardware is essential for automated driving. However, one thing must not be forgotten: the car owner. He has his own mind and his own taste, and if the personalization trend is anything to go by, he may well decide to change parts of the car, adding just one more dash to the cocktail of complexity.

Cheers!

Author: Britta Muzyk
