Highly automated and autonomous vehicles require high precision maps with centimeter-level accuracy for localization, and prior knowledge for systems like perception, navigation, and control. High Definition maps, with detailed geometry and semantic information about the environment, enable highly automated driving. They can be considered another sensor for autonomous vehicles (AVs), since they aid decision-making and reduce unpredictability.
This paper takes a deep dive into the necessity of High Definition (HD) maps, how they assist highly automated driving, and their current applicability and future scope.
HD maps and highly automated driving
Levels of autonomy in vehicles are categorized by SAE [1] from Level 0 (no automation) to Level 5 (fully autonomous). High precision maps are a prerequisite for Level 3 and above, the high levels of automation with progressively reduced human dependency.
HD maps contain essential information such as the basic geometric profile of the road, including curvature, slope, lane width, and the number of lanes per direction. This information is topped up with the basic decision making that comes “naturally” to human beings. For instance, if a vehicle takes a right turn from the rightmost lane, it usually remains in the rightmost lane after the turn. This knowledge is “coded” into HD maps as “lane connectivity”. Lane boundaries, the centers of the lanes, road barriers, emergency lanes, curbs, and yield relationships at intersections (stop and yield signs) are all “known” to HD maps.
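As an illustration, lane connectivity can be pictured as a directed graph of lane segments. The minimal sketch below is our own hypothetical representation (the class and field names are invented for this example, not any vendor's schema):

```python
from dataclasses import dataclass, field

@dataclass
class Lane:
    """One lane segment in a hypothetical HD map tile."""
    lane_id: str
    width_m: float                                   # lane width in meters
    curvature: float                                 # signed curvature, 1/radius
    successors: list = field(default_factory=list)   # lane_ids legally reachable next

# A right turn from the rightmost lane connects to the rightmost lane of the cross street.
rightmost_in = Lane("road1_lane3", width_m=3.5, curvature=0.0, successors=["road2_lane3"])
rightmost_out = Lane("road2_lane3", width_m=3.5, curvature=0.02)

lanes = {l.lane_id: l for l in (rightmost_in, rightmost_out)}

def reachable_lanes(lane_id: str) -> list:
    """Which lane segments can legally follow the given one?"""
    return [lanes[s] for s in lanes[lane_id].successors if s in lanes]

print([l.lane_id for l in reachable_lanes("road1_lane3")])  # ['road2_lane3']
```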
HD mapping layers
Different providers describe different numbers of layers in their HD maps, but all of them suggest that the level of intelligence (e.g. the analytics behind the information stored in the layer) increases from the base layer upward. In addition, the higher the autonomy level of a vehicle, the higher the number of layers utilized.
For example, an L1 or L2 vehicle would use only the bottommost layer (with reference to HERE Technologies maps [2]) or the bottom three layers (of the Lyft LEVEL5 maps [3]), and the map so used is called an ADAS map. L3 and above levels of autonomy use all 3 layers of the HERE Technologies maps or all 5 layers of the LEVEL5 maps, and these are HD maps. HERE HD maps are logically structured into 3 layers - Road Model, HD Lane Model, and HD Localization Model (see Figure 1). The Road Model layer contains road topology, road center line geometry, and road-level attributes. The HD Lane Model layer includes lane topology data and lane-level attributes. The HD Localization Model layer includes various features to support localization strategies. Intelligent layers sit on top of the HD map. For instance, the activity layer is a dynamic layer recording short-term changes to the road network. The topmost layer is the analytics layer, which describes how human beings drive any given stretch of road.
The five layers of the Lyft maps are the Base map, Geometric map, Semantic map, Map priors, and Real-time knowledge layers. The Base map combined with the Geometric map and Semantic map layers conveys static information. This static portion resembles the commercial maps we access on our cell phones today, with more detail on features, lanes, and driving infrastructure. The dynamic portion, the Map priors layer, covers temporary changes to the static layers, such as accident zones and road construction or maintenance stretches, as well as human behavior data. Temporal, statistical, probabilistic, and analytical inputs to the map, which are gold to an autonomous vehicle, are also part of HD maps, since they add to an AV’s overall prediction capabilities.
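To make the layering concrete, the sketch below models an HD map as a stack of named layers consulted from the top down, so fresher, more dynamic data overrides static data. The layer names follow the LEVEL5 description above; the code structure itself is purely illustrative:

```python
# Illustrative only: a layered HD map where higher layers carry more "intelligence".
STATIC_LAYERS = ["base", "geometric", "semantic"]          # static information
DYNAMIC_LAYERS = ["map_priors", "real_time_knowledge"]     # temporary changes, live data

class LayeredHDMap:
    def __init__(self):
        # each layer maps a feature key to its value
        self.layers = {name: {} for name in STATIC_LAYERS + DYNAMIC_LAYERS}

    def query(self, key):
        """Consult layers top-down so more dynamic data wins over static data."""
        for name in reversed(STATIC_LAYERS + DYNAMIC_LAYERS):
            if key in self.layers[name]:
                return name, self.layers[name][key]
        return None, None

hd_map = LayeredHDMap()
hd_map.layers["semantic"]["segment42.speed_limit"] = 50             # posted limit
hd_map.layers["real_time_knowledge"]["segment42.speed_limit"] = 30  # construction zone
print(hd_map.query("segment42.speed_limit"))  # ('real_time_knowledge', 30)
```

In this picture, an L1/L2 consumer of an ADAS map would read only the static layers, while an L3+ vehicle would consult the full stack.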
HD maps - Analytics layer
The topmost layer in HD maps provides analytics-related information. This covers stochastic details like the probability of a driver stepping out of a car parked roadside at a specific time of the day. Other pieces of information may be purely temporal in nature, for instance, the number of children expected on a road at a certain time of the day in a school district. This layer also includes driving profile information that may be sourced and matched based on the real-time driving behaviors of surrounding vehicles.
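A minimal sketch of how such a prior might be queried by a planner, with invented numbers, segment names, and function names purely for illustration:

```python
from datetime import time

# Hypothetical analytics-layer priors: probability of a driver or pedestrian
# stepping out near parked cars, keyed by (road segment, time-of-day bucket).
# All values here are invented for illustration.
STEP_OUT_PRIOR = {
    ("school_rd_seg7", "morning"): 0.30,   # school drop-off traffic
    ("school_rd_seg7", "night"): 0.02,
}

def bucket(t: time) -> str:
    return "morning" if 7 <= t.hour < 10 else "night"

def step_out_prior(segment: str, t: time, default: float = 0.05) -> float:
    """Look up the analytics-layer prior for this segment and time of day."""
    return STEP_OUT_PRIOR.get((segment, bucket(t)), default)

# A planner could, for example, slow down when the prior is high.
print(step_out_prior("school_rd_seg7", time(8, 30)))  # 0.3
```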
HD map standards
The integration of the autonomous or partially automated vehicle ecosystem with HD maps has two very important interfaces. First, the Advanced Driver Assistance Systems Interface Specification (ADASIS), an open group whose goal is to define a standardized data exchange protocol between the map database and ADAS and automated driving applications. Second, the Sensor Interface Specification (SENSORIS), which defines an interface for requesting and sending sensor data from vehicles to clouds and across clouds. Its use cases include updating HD maps and near real-time collection of sensor information.
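As an illustration of the vehicle-to-cloud direction, the sketch below shows a simplified sensor observation payload that a map cloud could aggregate into an HD map update. The field names are invented for this example and do not represent the actual SENSORIS schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorObservation:
    """Simplified, hypothetical vehicle-to-cloud report (not the real SENSORIS format)."""
    vehicle_id: str
    timestamp_utc: str
    lat: float
    lon: float
    observation_type: str   # e.g. "lane_marking_faded", "traffic_cone"
    confidence: float       # detector confidence, 0..1

obs = SensorObservation(
    vehicle_id="veh-001",
    timestamp_utc="2020-06-01T08:30:00Z",
    lat=48.1351, lon=11.5820,
    observation_type="traffic_cone",
    confidence=0.87,
)
# Serialized payload that the map cloud could fold into an HD map update.
print(json.dumps(asdict(obs)))
```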
The Navigation Data Standard (NDS), ADASIS v2, and ADASIS v3 are the most widely adopted standards for maps used in the navigation of L0 to L5 automated vehicles. The NDS is a format for automotive-grade navigation databases, jointly developed by automotive OEMs, map data providers, and navigation device and software application providers. It is a format for standard navigation maps, meant for use by human drivers.
ADASIS v2 provides road-level data and caters to ADAS functions whose attributes depend on maps, such as adaptive headlights. For adaptive headlights, the electronic horizon supplies information on road elevation/terrain and curvature, thereby ensuring the elevation of the beam stays aligned with the road angles. ADASIS v3 is the standard for HD maps: lane-level accuracy maps with the dynamic and analytical/probabilistic details a completely autonomous vehicle needs to move. HD maps provide dynamic information that is updated regularly. This information is needed for all L3+ automation features like platooning, automated valet parking, robo-taxis, and others. Refer to Figure 3 for more details.
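A crude sketch of the adaptive-headlight use case: read the road slope a short distance ahead from the electronic horizon and pitch the beam to match, so it neither dazzles oncoming traffic on a crest nor undershoots in a dip. The horizon format and numbers below are simplified stand-ins, not the ADASIS wire format:

```python
import bisect

# Simplified electronic horizon: (distance ahead in m, road slope in degrees).
HORIZON = [(0, 0.0), (50, 1.5), (100, 4.0), (200, -2.0)]

def slope_ahead(look_ahead_m: float) -> float:
    """Return the slope at the horizon point at or just before look_ahead_m."""
    distances = [d for d, _ in HORIZON]
    i = bisect.bisect_right(distances, look_ahead_m) - 1
    return HORIZON[max(i, 0)][1]

def beam_pitch(look_ahead_m: float = 80.0) -> float:
    # Tilt the beam toward the upcoming road angle.
    return slope_ahead(look_ahead_m)

print(beam_pitch())  # 1.5 (degrees), taken from the 50 m horizon point
```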
HD map making companies
HD mapping is a niche space with many players. Companies like HERE, TomTom, Civil Maps, DeepMap, Carmera, Nomoko, and Dynamic Map Platform (DMP) are the mainstream providers. However, companies focusing on ADAS and AV technology are also building their own maps. These include Mobileye (Roadbook with Road Experience Management) [4], Lyft, and Toyota Research Institute Advanced Development's Automated Mapping Platform.
The HD map-making space also has a consortium headed by HERE, known as the OneMap Alliance [5]. Like any other alliance, its aim is to ensure standardization of formats and cross-platform usage.
AD/HAD without HD maps
An alternative approach to autonomy is to avoid the usage of HD maps altogether and build up the required world model through immediate sensing. This approach can be highly scalable, since such vehicles can be deployed anywhere without the tedious process of creating and maintaining HD maps. The first hurdle is the technical challenge of the approach itself: complex intersections, unknown entities, edge cases, and so on. Secondly, several optimizations become possible only with prior information, such as improved vehicle management (and hence fuel efficiency) from advance knowledge of intersections, or efficient navigation management of platoons. Thirdly, many ADAS functions, such as adaptive headlights and intelligent speed adaptation, depend on the electronic horizon, i.e. look-ahead information obtained from these maps. Lastly, live updates to these maps aid in responding to real-time changes such as re-routing owing to accidents, construction zone blockages, or other dynamic issues. An approach without HD maps would need to identify equivalent solutions for these scenarios.
HD mapping for AVPGs
Autonomous Vehicle Proving Grounds (AVPGs) are testing tracks for autonomous vehicles, meant to verify their response to a multitude of scenarios in different driving conditions and environments [6].
The manmade structures in an AVPG include urban and rural roads, highway stretches, parking areas, cloverleaf or other complex intersections, urban canyons, and, notably, a user-defined area. All but the last of these structures are immovable and rigidly defined. The user-defined area is, as the name suggests, one that can be redefined to create certain infrastructure entities according to the envisaged scenario. Certain features [7] or objects, like traffic signals, Portable Variable Message Signs (PVMS), and removable strips for lane markings/partitions, help create any environment in this user-defined area for the AV to be tested in. This poses an interesting problem for HD mapping companies - what is the strategy for a user-defined area? How different is the mapping problem from the introduction of PVMS in variable traffic conditions?
Well, when PVMS are introduced in variable traffic conditions, or traffic cones are placed on a road to mark an accident zone, connected vehicle technology helps: the information is updated on HD maps in the cloud, and the update reaches all vehicles that follow the first vehicle. The case in AVPGs is very different; there is only one AV, and it needs to be tested. For testing, the static elements of the environment need to be reflected in the HD map it refers to.
The proposed solution is a modular “plug and play” approach to the variants of these maps. The features whose positions change in the map are known in advance, as described above. Digital twins of these features or objects have to be created and stored in a library. This library is invoked when updating the HD map before the AV test. The updates to the HD map can be stored as a pluggable layer, called the “Dynamic location layer”, which can be invoked by the end user as applicable. Depending on the complexity of the customization, creating the pluggable layer can be as simple as introducing a new entity into the HD map or as involved as a full-fledged data capture.
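A minimal sketch of this “plug and play” flow, with a hypothetical digital-twin library and layer structure of our own invention:

```python
# Hypothetical digital-twin library for reconfigurable AVPG features.
TWIN_LIBRARY = {
    "pvms":         {"type": "variable_message_sign", "height_m": 3.0},
    "lane_strip":   {"type": "removable_lane_marking", "width_m": 0.15},
    "traffic_cone": {"type": "cone", "height_m": 0.7},
}

def build_dynamic_location_layer(placements):
    """placements: list of (twin_name, (x, y)) pairs for the user-defined area."""
    layer = []
    for name, position in placements:
        twin = dict(TWIN_LIBRARY[name])   # copy the stored digital twin
        twin["position"] = position
        layer.append(twin)
    return layer

# Before a test run, plug the scenario's features into the base HD map.
scenario = [("pvms", (120.0, 4.5)), ("traffic_cone", (135.0, 2.0))]
base_hd_map = {"layers": {"semantic": [], "dynamic_location": None}}
base_hd_map["layers"]["dynamic_location"] = build_dynamic_location_layer(scenario)
print(len(base_hd_map["layers"]["dynamic_location"]))  # 2 plugged-in features
```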
Also, the Analytics layer, which provides cognitive capabilities in real-world deployments, can be removed from HD maps to make decision making more difficult in the AVPG.
Can HD maps extend beyond autonomous vehicles?
One day, autonomous drones will use an advanced version of HD maps for delivery, especially when non-line-of-sight drones get the necessary regulatory approvals. Locations will then not be limited to two dimensions but will also include the building floor for delivery by drones. Eventually, the question we are faced with is just how much mapping information is enough. Delivery by drones will, in essence, imply the usage of HD maps (as they stand today) combined with street views, in a quasi-three-dimensional space.
Along similar lines, if delivery by autonomous terrain robots is to become the norm soon, then one of the concerns that needs to be addressed is the connectivity of pedestrian sidewalks. There is a high chance of a pedestrian sidewalk not being continuous. In such cases, city road infrastructure planners depend on a human being’s cognitive capabilities to figure out “how” to hop from one segment of walkway to the next. Either the plan has to be to code the same intelligence into all machines, or to make pedestrian walkways better and then map well-connected paths for these autonomous machines.
The relevance of HD maps
Despite all the information that HD maps provide and the numerous ways in which they aid decision making for AVs, there is some dispute around whether the usage of HD maps is needed at all. Tesla, for instance, has highlighted the issue of the updates HD maps need and, claiming that they “can’t adapt” [8], turns down the idea of HD maps. Whether Lidar is necessary to create HD maps is also under question, with Bosch, for instance, creating a Radar Road Signature together with TomTom [9].
HD maps aid decision making by being an additional source of information, which is handy in various edge cases (where instant sensing may not suffice). They also help AVs standardize learnings across the various entities using the same map, thus driving uniformity in driving behaviors as well. HD maps are going to stay relevant.
References
Sreesankar R
Sreesankar is a Senior Architect in the Wipro Autonomous Systems and Robotics practice team. He has over 19 years of experience in varied domains like Automotive, Embedded, and Securities. Currently, he leads the Wipro Auto Annotation Studio team, focusing on Computer Vision, Sensor fusion, Machine Learning, and Explainable AI, with the objective of managing and improving the AI system lifecycle. Sreesankar can be reached at sree.sankar@wipro.com.
Garima Jain
Garima is a Global Business Manager in Wipro’s Global 100 leadership program, working across functions and business units in India and the US, with rotations in pre-sales, sales, delivery, and domain consulting. Prior to completing her MBA from the Indian Institute of Management in Bangalore, she worked as an engineer on automotive core chip design for three years. She is passionate about robotics and all things related to autonomous and connected vehicles. Garima can be reached at garima.jain5@wipro.com.