WO2024001554A1 - Vehicle navigation method, apparatus, device, storage medium and computer program product - Google Patents

Vehicle navigation method, apparatus, device, storage medium and computer program product

Info

Publication number
WO2024001554A1
Authority
WO
WIPO (PCT)
Prior art keywords: target, map, road, vehicle, frame
Application number: PCT/CN2023/093831
Other languages: English (en), French (fr)
Inventor: 张洪龙
Original Assignee: Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2024001554A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845: Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the present application relates to the technical field of map navigation, and in particular to a vehicle navigation method, apparatus, computer device, storage medium and computer program product.
  • While the vehicle is driving, the navigation device usually displays the vehicle navigation interface based on the vehicle's driving speed, direction, and location, combined with the navigation route planned for the vehicle, to implement vehicle navigation.
  • However, in conventional methods, the display scale of the navigation map is fixed, which cannot adequately present the road conditions, so the navigation effect is poor.
  • This application provides a vehicle navigation method.
  • the method includes:
  • displaying a vehicle navigation interface for navigating the physical vehicle, the vehicle navigation interface including a map;
  • the device includes:
  • An interface display module used to display a vehicle navigation interface for navigating the physical vehicle, where the vehicle navigation interface includes a map;
  • a map display module configured to display a virtual vehicle on a target road in the map, where the virtual vehicle corresponds to the physical vehicle; to determine, when the physical vehicle travels to the current location and is in the target driving scene, the road range data corresponding to the target driving scene and the current location; and to update the frame and perspective used to display the map to a target frame and target perspective, where the target frame and target perspective are adapted to the road range data.
  • the computer device includes a memory and a processor.
  • the memory stores computer readable instructions.
  • when the processor executes the computer-readable instructions, the steps of the above vehicle navigation method are implemented.
  • This application also provides a computer-readable storage medium.
  • the computer-readable storage medium has computer-readable instructions stored thereon, and when the computer-readable instructions are executed by a processor, the steps of the above vehicle navigation method are implemented.
  • the computer program product includes computer-readable instructions which, when executed by a processor, implement the steps of the above vehicle navigation method.
  • Figure 1 is an application environment diagram of a vehicle navigation method in an embodiment
  • Figure 2 is a schematic diagram of map effects at different scale levels in one embodiment
  • Figure 3 is a schematic diagram of the map range viewed from different viewing angles in one embodiment
  • Figure 4 is a schematic diagram of the map range viewed from different viewing angles in yet another embodiment
  • Figure 5 is a schematic diagram of the relationship between scale and pitch angle in an embodiment
  • Figure 6 is a schematic diagram of a vehicle navigation system in one embodiment
  • Figure 7 is a schematic diagram of the data processing flow of the autonomous driving system in one embodiment
  • Figure 8 is a schematic diagram comparing the rendering effects of standard-definition maps and high-precision maps in one embodiment
  • Figure 9 is a schematic diagram of the jump logic of the driving state of the autonomous vehicle in one embodiment
  • Figure 10 is a schematic flowchart of a vehicle navigation method in one embodiment
  • Figure 11 is a schematic diagram of calculating the pitch angle under different map frames in an embodiment
  • Figure 12 is a schematic flowchart of automatically adjusting graphic effects in an autonomous driving scenario in one embodiment
  • Figure 13 is a schematic diagram of a forward-driving scenario in one embodiment
  • Figure 14 is a schematic diagram of the road transverse observation range in a lane changing scenario in one embodiment
  • Figure 15 is a schematic diagram of a lane changing scenario in an embodiment
  • Figure 16 is a schematic diagram of searching for the second lane in a lane changing scenario in an embodiment
  • Figure 17 is a schematic diagram of calculating the estimated drop-off point position of an entity vehicle in one embodiment
  • Figure 18 is a schematic diagram of an avoidance scenario in an embodiment
  • Figure 19 is a schematic diagram of the positions of the vehicle and obstacles in one embodiment
  • Figure 20 is a schematic diagram of an autonomous driving takeover scenario in one embodiment
  • Figure 21 is a rendered effect of an autonomous driving takeover scene in one embodiment
  • Figure 22 is a schematic diagram of the road observation range in a takeover scenario in one embodiment
  • Figure 23 is a schematic diagram of the maneuvering point scene rendering effect in an autonomous driving scenario in one embodiment
  • Figure 24 is a structural block diagram of a vehicle navigation device in one embodiment
  • Figure 25 is an internal structure diagram of a computer device in one embodiment.
  • Vehicle navigation technology refers to mapping the real-time positional relationship between the vehicle and the road into a visual vehicle navigation interface based on positioning data provided by a satellite positioning system, so as to provide navigation functions to objects in the vehicle (such as the driver or occupants) while the vehicle is driving.
  • Through the vehicle navigation interface, the object can learn the current location of the vehicle, the vehicle's driving route, the vehicle's driving speed, the road conditions in front of the vehicle, the lane the vehicle is in, the driving conditions of other vehicles near the vehicle, road scenes and other information.
  • Autonomous driving domain: a collection of software and hardware in the vehicle used to control autonomous driving.
  • Cockpit domain: a collection of software and hardware, such as the central control screen, instrument screen, and operation buttons in the vehicle, used to interact with users in the cockpit; for example, the navigation map displayed on the central control screen in the cockpit and the interface for user interaction.
  • HD Map: High Definition Map, a high-precision map.
  • SD Map: Standard Definition Map, a standard-definition map.
  • Base map tilt mode: a display mode that can present 3D-like rendering effects such as 3D buildings and 4K bridge effects.
  • ACC: Adaptive Cruise Control. Based on the cruising speed set by the user, the automatic driving system dynamically adjusts the safe distance to the vehicle in front and the speed of the own vehicle: when the vehicle in front accelerates, the own vehicle also accelerates up to the set speed; when the vehicle in front slows down, the own vehicle slows down to maintain a safe distance from the vehicle in front.
  • LCC (Lane Center Control): lane centering assist, a function provided by the autonomous driving system to assist the driver in controlling the steering wheel; it continuously keeps the vehicle centered in the current lane.
  • NOA: Navigate on Autopilot, the automatic assisted navigation driving function, referred to as NOA.
  • This function can guide the vehicle to drive automatically by setting the destination. Under the driver's monitoring, it can complete operations such as lane changing, overtaking, automatic entry and exit of ramps, etc.
  • NOA's driving behaviors include cruising, following, avoiding, yielding, single-rule planned lane-changing behaviors (such as merging into the fast lane, expected exit), and multi-condition decision-making lane-changing behaviors (such as changing lanes during cruising).
  • Maneuvering point: a position on the electronic map that guides the driver to perform maneuvers such as turning, decelerating, merging, and exiting; usually the position of an intersection turn, road divergence, road merge, etc.
  • Drop point: the location where the autonomous driving system predicts the physical vehicle will be when it completes an automatic lane change.
  • the vehicle navigation method provided by the embodiment of the present application can be applied in the application environment as shown in Figure 1.
  • the application environment includes a terminal 102 and a server 104.
  • the terminal 102 communicates with the server 104 through a network.
  • the data storage system can store data that the server 104 needs to process, such as map data, including high-precision map data, standard-definition map data, etc.
  • the data storage system can be integrated on the server 104, or placed on the cloud or other servers.
  • the terminal 102 may include but is not limited to a mobile phone, a computer, an intelligent voice interaction device, a smart home appliance, a vehicle-mounted terminal, etc.
  • the terminal can also be a portable wearable device, such as a smart watch, smart bracelet, etc.
  • the server 104 can be implemented as an independent server or a server cluster composed of multiple servers.
  • the server 104 may be, for example, a server that provides functional services for maps, including positioning services, navigation services, etc.
  • the positioning services and navigation services can obtain positioning data about physical vehicles.
  • the server 104 may receive positioning data about the physical vehicle, perception data of the environment in which the physical vehicle is located, etc., generate a vehicle navigation interface about the physical vehicle based on these data, and display the vehicle navigation interface through the terminal 102 .
  • the terminal 102 can also receive positioning data, sensing data, etc. about the physical vehicle, and generate and display a vehicle navigation interface about the physical vehicle based on these data.
  • Embodiments of this application can be applied to various scenarios, including but not limited to cloud technology, artificial intelligence, smart transportation, assisted driving, automatic driving, etc.
  • the electronic map (hereinafter referred to as the map) in the vehicle navigation interface also has a display scale.
  • the map display scale is also called the scale, which represents the ratio of a distance on the displayed map to the corresponding actual ground distance; for example, 1 centimeter on the map in the vehicle navigation interface may represent 1 kilometer on the actual ground.
  • In the embodiment of the present application, a corresponding relationship is established between the scale level and the size of the actual geographical area (the map frame). The circumference of the earth is about 40,000 kilometers, and the circumference of the earth is used as the minimum scale level 0. The specific correspondence is shown in Table 1; it can be understood that the corresponding relationship between the scale level and the map frame is only illustrative. The scale level can also be a decimal, such as 21.5, with a corresponding map frame of 15 meters.
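  • Table 1 itself is a level-to-frame lookup; the pairs quoted later in this text (level 22 ≈ 10 meters, level 21.5 ≈ 15 meters) are consistent with each level halving the displayed span, starting from the earth's circumference at level 0. A minimal sketch under that assumption:

```python
import math

EARTH_CIRCUMFERENCE_M = 40_000_000  # scale level 0 spans roughly the earth's circumference

def frame_width_m(level: float) -> float:
    """Map frame (displayed span) at a given scale level, assuming each level halves the span."""
    return EARTH_CIRCUMFERENCE_M / (2 ** level)

def level_for_frame(frame_m: float) -> float:
    """Inverse mapping: the (possibly fractional) scale level for a given frame width."""
    return math.log2(EARTH_CIRCUMFERENCE_M / frame_m)

print(round(frame_width_m(22.0), 1))   # ~9.5 m, listed as 10 m in Table 1
print(round(frame_width_m(21.5), 1))   # ~13.5 m, listed as 15 m in Table 1
```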
  • Referring to Figure 2, it is a schematic diagram of map effects at different scale levels in one embodiment. The scale of the map displayed with a 20-meter frame is large and the map range is small, while the scale of the map displayed with a 500-meter frame is small and the map range is large.
  • the perspective of the map in the vehicle navigation interface is the perspective from which the map is viewed.
  • the perspective may be, for example, the pitch angle of the map.
  • Figure 3 is a schematic diagram of the map range viewed from different viewing angles in one embodiment. Referring to Figure 3, at the same scale level, the pitch angles are 40 degrees, 50 degrees, and 65 degrees; it can be seen that at the same scale level, the larger the pitch angle, the greater the visual range, and the smaller the pitch angle, the smaller the visual range.
  • Referring to Figure 4, a schematic diagram of the map range viewed from different viewing angles in yet another embodiment is shown: in order, a vertical viewing angle, a small pitch angle, and a large pitch angle. The images and building effects presented under the different viewing angles are different.
  • Figure 5 is a schematic diagram of the relationship between scale and pitch angle in one embodiment.
  • Referring to Figure 5, at the same pitch angle, the visual range of the 20-meter frame is the smallest and that of the 500-meter frame is the largest; at the same frame, the larger the pitch angle, the greater the visual range, and the smaller the pitch angle, the smaller the visual range.
  • adjusting the pitch angle can adjust the visual range in different directions. By adjusting the pitch angle, the visual range can be expanded, and even a beyond-line-of-sight geographical area can be displayed on the map.
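  • The worked lane-change example given later (a forward range of 83.4 meters shown at a pitch angle of about 80° in a 15-meter frame, 76.5° in a 20-meter frame, and 70.2° in a 30-meter frame) is consistent with a simple geometric reading of the frame/pitch relationship; this is stated here as an assumption rather than a formula the text itself gives:

$$\theta = \arctan\!\left(\frac{D}{W}\right) \qquad\Longleftrightarrow\qquad D = W\,\tan\theta$$

  • where W is the map frame, θ is the pitch angle, and D is the longitudinal ground distance made visible: increasing either the frame or the pitch angle extends the visible longitudinal range.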
  • In conventional methods, the display scale of the map adapts only to speed; this considers the single factor of speed and does not take into account the road range that the physical vehicle actually needs to pay attention to in various driving scenarios, and the map range displayed in the map navigation interface is limited, resulting in a poor navigation effect. The navigation perspective is likewise set in advance and fails to adapt to the driving scene where the physical vehicle is currently located, resulting in poor vehicle navigation efficiency.
  • In view of this, embodiments of the present application provide a vehicle navigation method that considers both the road conditions of the target road where the physical vehicle is currently located and the driving scene at the current location, using them together as factors to adjust the map frame and perspective. This comprehensively adjusts the road range presented in the navigation map interface according to the physical vehicle's current driving scene and the actual situation of the road at its current location, improves the perceptibility of various driving scenarios, focuses on the road range that needs attention in each driving scenario, and improves the navigation effect; it also helps the driver or occupants of the physical vehicle understand the driving system's decisions, increasing trust in the driving system.
  • Specifically, the terminal 102 can display a vehicle navigation interface for navigating a physical vehicle, where the vehicle navigation interface includes a map; a virtual vehicle corresponding to the physical vehicle is displayed on the target road in the map; when the physical vehicle travels to the current position and is in the target driving scene, the road range data corresponding to the target driving scene and the current position is determined, and the frame and perspective used to display the map are updated to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data.
  • In this way, the target frame and target perspective required to update the map are determined by integrating the actual road conditions of the target road where the physical vehicle is currently located and the driving scene where the physical vehicle is currently located. When the physical vehicle is in the target driving scene, the display mode of the map can be adjusted in real time so that it adapts to the road range the user needs to observe in that scene; that is, the road range displayed in the updated map is adapted to the road area that needs to be observed in the driving scenario where the vehicle is located. This improves the perceptibility of changes in the map, greatly improves the quality of the navigation map, speeds up picture reading, and improves the navigation experience. In addition, the target perspective of the updated map can expand the visual range of the map and improve navigation efficiency when the target frame is small.
  • the driving scene includes at least one preset target driving scene, such as a lane changing scene, an avoidance scene, a takeover scene, a maneuvering scene, and so on.
  • the lane-changing scenario refers to a physical vehicle actively changing lanes while driving. In the lane-changing scenario, it is necessary to focus on observing the lane being changed to and the traffic approaching from behind in that lane.
  • Avoidance scenarios refer to situations where a physical vehicle encounters obstacles while driving, such as overtaking by a side vehicle, slowing down of the vehicle in front, or a lane change by the vehicle in front, resulting in poor road conditions in the current lane, so that the vehicle needs to avoid them by slowing down, changing lanes, etc.
  • the takeover scenario refers to the scenario where the autonomous vehicle is about to leave the area supported by the autonomous driving function and is about to switch to manual driving.
  • In the autonomous driving takeover scenario, it is necessary to focus on observing the location of the autonomous driving exit point on the road.
  • Maneuvering point scenes refer to locations where the physical vehicle performs maneuvers such as steering and U-turns while driving. When driving toward a maneuvering point, the road conditions at the maneuvering point ahead need to be observed closely.
  • the target driving scenarios may also include other scenarios. This application does not limit this. It is understandable that the road range that needs to be observed may be different in different driving scenarios.
  • driving scenarios can also include forward-driving scenes, which refer to scenes of going straight ahead without lane changes, U-turns, steering, etc. In the forward-driving scene, the presented map frame and angle of view can be preset values and do not need to change with the target road where the vehicle is currently located.
  • the terminal can represent the driving scene of the physical vehicle through different identifiers, that is, different identifiers represent different types of driving scenes.
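  • As an illustration only, such identifiers could be modeled as an enumeration; the names below are assumptions covering the scenes this application describes, not the patent's own encoding:

```python
from enum import Enum, auto

class DrivingScene(Enum):
    """Hypothetical identifiers for the driving scenes described in this application."""
    FORWARD = auto()        # straight-ahead driving: no lane change, U-turn, or turn
    LANE_CHANGE = auto()
    AVOIDANCE = auto()
    TAKEOVER = auto()
    MANEUVER_POINT = auto()

# e.g. a cross-domain lane-change instruction would map to:
current_scene = DrivingScene.LANE_CHANGE
print(current_scene.name)
```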
  • the vehicle navigation method provided by the embodiment of the present application can be applied in the vehicle navigation process of the automatic driving scene.
  • the automatic driving scenario, that is, the machine driving scenario, refers to the scenario where the vehicle is controlled by the vehicle-mounted automatic driving system.
  • In the vehicle navigation process of the automatic driving scenario, a visual vehicle navigation interface is presented to the driver or occupants of the vehicle so that they can clearly and intuitively understand the road environment in which the vehicle is located.
  • the embodiment of this application combines the road environment of the target road where the autonomous vehicle is currently located and the driving scene at the autonomous vehicle's current location to comprehensively determine the frame and perspective required to update the map in the vehicle navigation interface; the changes thus presented in the map can improve the perceptibility of the scene the vehicle is in under the autonomous driving scenario, and improve the vehicle members' trust in the autonomous driving system and the sense of driving security it provides.
  • the vehicle navigation method provided by the embodiment of the present application can also be applied to the vehicle navigation process of the active driving scenario.
  • the active driving scenario that is, the human driving scenario, refers to the scenario where the vehicle is controlled by the driver.
  • In the vehicle navigation process of the active driving scenario, by presenting a visual vehicle navigation interface to the driver in the vehicle, the driver can clearly and intuitively understand the vehicle, the road environment where the vehicle is located, and the driving status of the vehicle.
  • the embodiment of this application combines the road environment of the target road where the vehicle is currently located with the driving scene where the vehicle is currently located, and jointly adjusts the frame and perspective of the map in the vehicle navigation interface to present changes in the map, which can improve the perceptibility of the scene where the vehicle is located.
  • the driver can make driving decisions based on the presented vehicle navigation interface, which can improve traffic safety during vehicle driving.
  • the vehicle navigation method provided by the embodiment of the present application can also be applied to the vehicle navigation system as shown in Figure 6.
  • the vehicle navigation system includes a physical vehicle 601, a positioning device 602, a sensing device 603, and a vehicle-mounted terminal 604.
  • the positioning device 602, the sensing device 603, and the vehicle-mounted terminal 604 are mounted in the vehicle 601.
  • the positioning device 602 can be used to obtain the position data of the physical vehicle 601 (i.e., the own vehicle) in the world coordinate system, where the world coordinate system refers to the absolute coordinate system of the system.
  • the positioning device 602 can obtain the location data of the physical vehicle 601 through GPS technology, and can send the location data of the physical vehicle 601 in the world coordinate system to the vehicle-mounted terminal 604, so that the vehicle-mounted terminal 604 can obtain the current location of the physical vehicle in real time.
  • the positioning device mentioned in the embodiment of this application may be an RTK (Real Time Kinematic, carrier phase difference technology) positioning device.
  • the RTK positioning device can provide high-precision (for example, centimeter-level) location data of the physical vehicle 601 in real time.
  • the sensing device 603 can be used to sense the environment where the physical vehicle 601 is located and obtain environmental sensing data.
  • the sensing objects can be other vehicles or obstacles on the target road.
  • the environment sensing data may include the position data, in the vehicle coordinate system of the physical vehicle 601, of other vehicles on the target road where the physical vehicle 601 is located (such as overtaking vehicles, leading vehicles in avoidance scenarios, and rear vehicles in lane changing scenarios), that is, the coordinate data of other vehicles relative to the physical vehicle 601.
  • the environment sensing data also includes data that the physical vehicle 601 needs to know in different scenarios, such as the predicted drop-off point in the lane change scenario.
  • the vehicle coordinate system refers to a coordinate system established with the vehicle center of the physical vehicle 601 as the coordinate origin.
  • the sensing device 603 can send the environment sensing data to the vehicle-mounted terminal 604.
  • the sensing device 603 includes a visual sensing device and a radar sensing device.
  • the sensing range of the sensing device 603 for sensing the environment where the physical vehicle 601 is located is determined by the sensors integrated with the sensing device.
  • the sensing device may include but is not limited to at least one of the following sensors: a visual sensor (such as a camera), long-range radar, and short-range radar; the detection distance supported by long-range radar is greater than that supported by short-range radar.
  • the vehicle terminal 604 integrates satellite positioning technology, mileage positioning technology and automobile black box technology, and can be used as terminal equipment for vehicle driving safety management, operation management, service quality management, intelligent centralized dispatch management, electronic stop sign control management, etc.
  • the vehicle-mounted terminal 604 may include a display screen, such as a central control screen, an instrument screen, an AR-HUD (Augmented Reality Head Up Display) display, etc.
  • the vehicle-mounted terminal 604 can convert the position data of a sensing object in the vehicle coordinate system into position data of the sensing object in the world coordinate system, that is, convert the relative position data of the sensing object into absolute position data; the vehicle-mounted terminal 604 can then display a mark representing the sensing object in the navigation interface shown on the display screen according to that position data.
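  • A minimal sketch of this conversion, assuming a 2D vehicle coordinate frame (origin at the vehicle center, x forward, y to the left) and an ego pose (world position plus heading) supplied by the positioning device; the function name and conventions are illustrative:

```python
import math

def vehicle_to_world(rel_x: float, rel_y: float,
                     ego_x: float, ego_y: float, ego_heading_rad: float) -> tuple[float, float]:
    """Rotate a sensed point from the vehicle coordinate system into the world
    frame using the ego heading, then translate by the ego world position."""
    cos_h, sin_h = math.cos(ego_heading_rad), math.sin(ego_heading_rad)
    world_x = ego_x + rel_x * cos_h - rel_y * sin_h
    world_y = ego_y + rel_x * sin_h + rel_y * cos_h
    return world_x, world_y

# A sensed vehicle 30 m ahead and 3.5 m to the left of the ego vehicle:
print(vehicle_to_world(30.0, 3.5, ego_x=1000.0, ego_y=2000.0,
                       ego_heading_rad=math.radians(90)))  # ~(996.5, 2030.0)
```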
  • the vehicle navigation method provided by the embodiment of the present application involves cross-domain communication between the autonomous driving domain and the cockpit domain;
  • the autonomous driving domain refers to the collection of software and hardware in the vehicle used to control autonomous driving; for example, the above-mentioned positioning device 602 and sensing device 603 both belong to the autonomous driving domain.
  • the cockpit domain refers to the collection of software and hardware such as the central control screen, instrument screen, and operation buttons in the physical vehicle that are used to control the interaction with vehicle-related objects in the cockpit.
  • the above-mentioned vehicle-mounted terminal 604 belongs to the cockpit domain.
  • the cockpit domain and the autonomous driving domain are two relatively independent processing systems.
  • the two domains can carry out cross-domain data transmission through TCP (Transmission Control Protocol), UDP (User Datagram Protocol), SOME/IP (Scalable Service-Oriented Middleware over IP, a data transmission protocol based on automotive Ethernet), or other data transmission protocols.
  • automotive Ethernet can achieve relatively high data transmission rates (for example, 1000Mbit/s, etc.), while also meeting the requirements of the automotive industry in terms of high reliability, low electromagnetic radiation, low power consumption, and low latency.
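  • For illustration only, a cross-domain push of packaged positioning and sensing data could look like the sketch below; JSON over UDP is a stand-in transport (the text names SOME/IP and other automotive protocols as the actual options), and the endpoint address is hypothetical:

```python
import json
import socket

# Hypothetical addressing: the cockpit-domain host listening on the in-vehicle network.
COCKPIT_ENDPOINT = ("192.168.1.20", 30501)

def publish_cross_domain(positioning: dict, perception: dict) -> None:
    """Package positioning and environment-sensing data in the autonomous driving
    domain and push the bundle to the cockpit domain in one datagram."""
    payload = json.dumps({"positioning": positioning, "perception": perception}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, COCKPIT_ENDPOINT)

publish_cross_domain(
    {"lat": 22.54, "lon": 114.06, "heading_deg": 90.0},
    {"objects": [{"type": "vehicle", "rel_x": 30.0, "rel_y": 3.5}]},
)
```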
  • FIG 7 it is a schematic diagram of the data processing flow of the autonomous driving system in one embodiment.
  • After the autonomous driving domain collects positioning data and environment sensing data, it packages the data and transmits the packaged data to the cockpit domain through cross-domain communication.
  • After the cockpit domain receives the packaged data, it corrects the positioning data using high-precision map information to obtain the corrected positioning position of the physical vehicle, integrates the sensing objects perceived in the sensing data into the high-precision map based on that position, and finally presents all the integrated information on the display screens in the cockpit (central control screen, instrument screen, AR-HUD and other display devices) in the form of a high-precision map.
  • the map in the vehicle navigation interface displayed on the display screen can be a standard-definition map or a high-precision map.
  • Map data has developed from early standard-definition data to current high-precision data, and the accuracy of map data has increased from the original 5 to 10 meters to the current approximately 50cm.
  • the graphic effect of the navigation base map has also evolved from the original road-level (or path-level) rendering to the current lane-level rendering.
  • the picture effect has been expanded from the early flat perspective to the current 2.5D perspective, which greatly expands the field of view at the same display ratio and displays more over-the-horizon information.
  • Standard-definition maps are usually used to assist drivers in vehicle navigation, and their coordinate accuracy is about 10 meters.
  • In contrast, autonomous vehicles need to know the location of the physical vehicle accurately, and the distance between the physical vehicle and the curb or the adjacent lane is usually only a few dozen centimeters; therefore, the absolute accuracy requirement of high-precision maps is within 1 meter, and the relative accuracy (such as the relative position accuracy of lanes and lane lines) is often even higher.
  • HD maps can present accurate road shapes, and include the slope, curvature, heading, elevation, and roll data of each lane; the type and color of lane lines; the speed limit requirements and recommended speeds of each lane; the width and material of isolation belts; the content and location of arrows and text on the road; and the geographical coordinates, physical dimensions, and specific characteristics of traffic lights, crosswalks, and other traffic elements.
  • Referring to Figure 8, it is a schematic diagram comparing the rendering effects of a standard-definition map and a high-precision map in one embodiment.
  • the map effect has undergone tremendous changes in the upgrade from standard-definition maps to high-precision maps, including changes in scale size (map range), switching from the vertical perspective to a 2.5D perspective, and refinement of the guidance effect (upgraded from path level to lane level); these changes must be adjusted according to actual application scenarios to maximize the value of high-precision map rendering.
  • the automatic driving system includes the switching of multiple driving states (functional states).
  • Function upgrade refers to the gradual upgrade from the full manual driving state to the high-level automatic driving state.
  • the manual driving state can be directly upgraded to ACC, LCC or NOA, or it can be changed to the ACC state first, then to the LCC state, and finally to the NOA state, enabled step by step.
  • Function downgrade is the opposite of function upgrade, and represents the process of gradually downgrading from high-level autonomous driving to full manual driving.
  • the driving scenarios mentioned in the embodiments of this application, in the autonomous driving case, can specifically refer to the automatic lane changing scene, automatic avoidance scene, prompted takeover scene, automatic car-following scene, etc. performed by the automatic driving system in the NOA state.
  • In one embodiment, a vehicle navigation method is provided. The method is explained by taking its application to the terminal 102 in Figure 1 or the vehicle-mounted terminal 604 in Figure 6 as an example, and includes the following steps 1002, 1004 and 1006:
  • Step 1002 Display a vehicle navigation interface for navigating the physical vehicle, where the vehicle navigation interface includes a map.
  • the terminal can display the vehicle navigation interface.
  • the vehicle navigation interface is an interface for performing vehicle navigation for the physical vehicle while the physical vehicle is driving.
  • the vehicle navigation interface can include a map.
  • the map describes the actual road environment at the actual geographical location of the physical vehicle, including the target road where the physical vehicle is located, lanes and lane markings, etc.
  • the map can be a standard definition map or a high-definition map.
  • When the map is a high-precision map, the map is a virtual road environment obtained by three-dimensional modeling of the road environment. When the map is a standard-definition map, the map is obtained by two-dimensional modeling of the road environment; in this case, the virtual road environment may include only road data and not spatial height data.
  • Step 1004 Display a virtual vehicle on the target road in the map, and the virtual vehicle corresponds to the physical vehicle.
  • the vehicle navigation interface displayed by the terminal also includes a virtual vehicle displayed on the target road.
  • the target road and the virtual vehicle here are virtual mappings of the actual target road where the physical vehicle is located and the physical vehicle.
  • the virtual vehicle is displayed on the target road of the virtual map in the navigation interface based on the current location data of the physical vehicle.
  • the position of the virtual vehicle in the virtual map corresponds to the current position obtained by positioning the physical vehicle.
  • the entity vehicle can be any vehicle that performs vehicle navigation through the displayed map.
  • the target road may include at least one lane; that is, the target road may be a multi-lane road.
  • Driving scenarios are scenes of a series of driving behaviors performed by a vehicle to achieve safe driving while driving.
  • the driving scene includes at least one target driving scene, such as a lane change scene, an avoidance scene, a takeover scene, a maneuvering point scene, and so on.
  • target driving scenarios may also include other scenarios, which are not limited by this application. It is understandable that in different driving scenarios, the road range that users need to focus on may vary.
  • driving scenarios can also include forward-driving scenes, which refer to scenes of going straight ahead without lane changes, U-turns, steering, etc. In the forward-driving scene, the presented map frame and angle of view can be preset values and do not need to change with the target road where the physical vehicle is currently located.
  • target driving scenarios can include automatic lane changing scenarios, automatic avoidance scenarios, prompt takeover scenarios, automatic car following scenarios, etc. in the NOA state of the autonomous driving system.
  • the terminal can determine whether the physical vehicle is in a certain target driving scenario at the current location in several ways. For example, in a common vehicle navigation scenario, the terminal can determine the driving scenario based on changes in the vehicle's location data: based on the location data of the physical vehicle and the road data of the current location, it can determine whether the physical vehicle has changed lanes, that is, whether it is in a lane-changing scene; for another example, based on the location data of the physical vehicle and the road data of the current location, it can determine whether the physical vehicle has performed maneuvers such as turning within a certain period of time.
  • the driving scene can be comprehensively determined based on the location data of the physical vehicle and the road data obtained from the electronic map.
  • In an autonomous driving scenario, the driving behavior of the physical vehicle is determined by the autonomous driving domain, and the terminal in the cockpit domain can obtain the current driving behavior information (or driving instructions) of the physical vehicle from the autonomous driving domain through cross-domain communication, thereby obtaining the current driving scene of the physical vehicle.
  • For example, when the autonomous driving domain issues a lane-changing instruction, the terminal in the cockpit domain can receive the instruction and determine that the physical vehicle is currently in the "lane changing scene".
  • Similarly, when the autonomous driving domain issues an avoidance instruction, the terminal in the cockpit domain can receive the instruction and determine that the physical vehicle is currently in an "avoidance scenario", and so on.
  • In addition, the terminal can obtain the data required to update the map in each driving scene from the autonomous driving domain, for example, the steering information of the vehicle in the lane change scene, the location information of obstacles relative to the vehicle in the avoidance scene, and the location information of the autonomous driving exit point in the takeover scenario.
  • Step 1006: When the physical vehicle travels to the current position and is in the target driving scene, determine the road range data corresponding to the target driving scene and the current position; update the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data.
  • the current location is the positioning position of the physical vehicle, and the vehicle position of the virtual vehicle displayed on the map is displayed based on the positioning position of the physical vehicle. It can be understood that while the physical vehicle is traveling, the current position of the physical vehicle changes all the time as time goes by.
  • the refresh frequency of the current position may be, for example, 10 times/second.
  • the target driving scene is the driving scene where the physical vehicle is at the current location.
  • the target driving scene can be any of the above-mentioned lane changing scenes, avoidance scenes, takeover scenes, maneuvering point scenes, and other driving scenes.
  • the terminal obtains the current location of the physical vehicle in real time and determines, in the manner mentioned above, whether the physical vehicle is in one of the target driving scenarios. If so, the terminal determines the road range data corresponding to that target driving scenario and the current location, and updates the frame and perspective used to display the map to the target frame and target perspective based on the road range data. The map is updated according to the frame and perspective determined by the road range data, so that the road range in the updated map is adapted to the road range that the user needs to pay attention to at the current location when the physical vehicle is in the target driving scene. That is, the road range data is the data required to update the road range displayed on the map, including road lateral range data and road longitudinal range data.
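  • As a sketch, the road range data described here could be carried in a simple structure (the field names are assumptions, not the patent's own):

```python
from dataclasses import dataclass

@dataclass
class RoadRangeData:
    """Road range data for one (driving scene, current location) pair."""
    lateral_m: float       # road lateral distance the user needs to observe
    longitudinal_m: float  # road longitudinal distance the user needs to observe

# e.g. the lane-change example worked through later: ~15 m wide, ~83.4 m ahead
lane_change_range = RoadRangeData(lateral_m=15.0, longitudinal_m=83.4)
```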
  • the map range of the map displayed under different frames and viewing angles is different, and so is the displayed road range. The smaller the frame and the smaller the pitch angle, the wider the displayed road or lane and the less field of view ahead; the larger the frame and the larger the pitch angle, the narrower the displayed road or lane and the more field of view ahead.
  • the road range displayed in the updated map is related to the road attributes of the target road itself, the current location of the physical vehicle on the target road, and the current driving scene of the physical vehicle; that is to say, these factors jointly determine the target frame and target perspective used to update the map.
  • the target driving scenario and the road range data corresponding to the current location are determined in advance based on the road range that the entity vehicle needs to pay attention to when driving in the target driving scenario. That is, different target driving scenarios correspond to different road ranges that require attention. For example, in a lane change scenario, it is necessary to observe the lane to which the lane is changed and the traffic coming from behind the lane. Then, the road range that needs to be paid attention to in the lane change scenario is mainly the range near the current location of the physical vehicle. For another example, in an avoidance scenario, it is necessary to observe obstacles and the lane where the obstacle is located. Then the road range that needs to be paid attention to in the avoidance scenario is mainly the range formed by the current position and the location of the obstacle.
  • In this embodiment, the target frame and target perspective of the updated map are determined based on the actual road conditions of the target road where the physical vehicle is currently located and the driving scene at the physical vehicle's current position, so that the road range displayed in the updated map can adapt to the road range that needs attention in the driving scene where the physical vehicle is located. This improves the perceptibility of changes in the map, greatly improves the quality of navigation maps, speeds up map reading, and improves the navigation experience. In addition, the target perspective of the map can expand the visual range of the map and improve navigation efficiency when the target frame is small.
  • In one embodiment, determining the target driving scene and the road range data corresponding to the current location includes: when the physical vehicle travels to the current location and is in the target driving scene, determining the road lateral range data and road longitudinal range data for the target driving scene at the current location. Updating the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data, then includes: determining the target frame and target perspective required to update the map according to the road lateral range data and road longitudinal range data at the current location when the physical vehicle is in the target driving scene, and displaying the map as a map having the target frame and target perspective.
  • the road lateral range data at the current position when the physical vehicle is in the target driving scene is used to determine the target image frame required to update the map.
  • the horizontal range and the longitudinal range of the road at the current position when the physical vehicle is in the target driving scene are used together to determine the target perspective required to update the map.
  • When the lateral range of the road is determined, the longer the longitudinal range of the road that needs to be displayed, the larger the target viewing angle required.
  • the horizontal range of the road can reflect the traffic conditions on both sides of the vehicle, and the longitudinal range of the road can reflect the traffic conditions in front and behind the physical vehicle.
  • the road lateral range can be quantitatively represented by the road lateral distance that the user needs to observe at the current position when the physical vehicle is in the target driving scene.
  • the road lateral distance can be the lateral width of the entire target road, the lateral width of the lane where the physical vehicle is located, or the lateral width formed by the lane where the physical vehicle is located together with its adjacent lanes; the specific choice depends on the target driving scenario where the physical vehicle is located.
  • the longitudinal range of the road can be quantified by the longitudinal distance of the road that the user needs to observe at the current position when the physical vehicle is in the target driving scene.
  • the road longitudinal distance can be the farthest distance from the vehicle to an obstacle in front, the estimated distance from the physical vehicle to the drop-off point, or the distance from the physical vehicle to the autonomous driving exit point, depending on the target driving scenario where the physical vehicle is located. It can be understood that the road lateral distance and road longitudinal distance defined in different target driving scenarios differ; that is, the road range at the same location may be different when the physical vehicle is in different driving scenarios, and when the physical vehicle is in the same driving scenario, the road range may also vary from location to location.
  • After the terminal determines that the physical vehicle is in a certain target driving scene at the current position, it determines the road lateral range data and the road longitudinal range data at the current position for that target driving scene, and thereby determines the target frame and target perspective required to update the map. Subsequently, the terminal obtains the map data of the current location, and renders and displays it according to the target frame and target perspective to obtain the map that needs to be displayed at the current location when the physical vehicle is in the target driving scene.
  • In one embodiment, determining the target frame and target perspective required to update the map based on the road lateral range data and road longitudinal range data at the current position when the physical vehicle is in the target driving scene includes: determining the map frame required to update the map based on the road lateral range data; determining the pitch angle required to update the map based on the required map frame and the road longitudinal range data; and, when the pitch angle is greater than or equal to the preset threshold, increasing the required map frame and returning to the step of determining the pitch angle based on the required map frame and the road longitudinal range data, continuing until the pitch angle is less than the preset threshold, thereby obtaining the target frame and target perspective required for updating the map.
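  • A runnable sketch of this iteration, using a partial frame ladder in the spirit of Table 1 and, as an assumption, computing the pitch angle as arctan(longitudinal distance / frame), which reproduces the numbers quoted in the lane-change example below (15 m → ~80°, 20 m → 76.5°, 30 m → 70.2°):

```python
import math

# Ascending frame-width ladder in the spirit of Table 1 (a subset; the full table is not reproduced).
FRAME_LADDER_M = [10, 15, 20, 30, 50, 100, 200, 500]

def fit_frame_and_pitch(lateral_m: float, longitudinal_m: float,
                        max_pitch_deg: float = 75.0) -> tuple[float, float]:
    """Start from the smallest frame covering the lateral range, compute the pitch
    angle needed to show the longitudinal range, and widen the frame one level
    at a time until the pitch angle drops below the preset threshold."""
    i = next((k for k, w in enumerate(FRAME_LADDER_M) if w >= lateral_m),
             len(FRAME_LADDER_M) - 1)
    while True:
        frame = FRAME_LADDER_M[i]
        pitch = math.degrees(math.atan(longitudinal_m / frame))  # assumed geometry
        if pitch < max_pitch_deg or i == len(FRAME_LADDER_M) - 1:
            return frame, round(pitch, 1)
        i += 1  # widen the frame by one level and recompute

# Lane-change example from this text: 15 m lateral range, 83.4 m forward range.
print(fit_frame_and_pitch(lateral_m=15.0, longitudinal_m=83.4))  # -> (30, 70.2)
```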
  • Specifically, the terminal can determine the road lateral distance at the current position based on the road attributes of the target road where the physical vehicle is located, the current location of the physical vehicle, and the target driving scene where the physical vehicle is located; based on this lateral distance, it queries the mapping table shown in Table 1 to determine the map frame required to update the map. Subsequently, the terminal determines the road longitudinal distance at the current position based on the same road attributes, current location, and target driving scene.
  • If the pitch angle is less than the preset threshold, the previously determined map frame and pitch angle are used as the target frame and target perspective required to update the map; if the pitch angle is greater than or equal to the preset threshold, the map frame is increased by one level according to the frame list shown in Table 1, and the pitch angle is calculated again based on the increased frame and the road longitudinal distance, iterating until the pitch angle is less than the preset threshold.
  • the preset threshold for pitch angle can be set according to actual application requirements.
  • the strategy for determining the target picture frame and target perspective is:
  • In a lane change scenario, for example, the lateral range of the road may need to cover at least 5 meters of information on each side in the width direction of the road surface, so the lateral range of the road is about 10 meters; from Table 1, the minimum frame required to display the map is about 10 meters, corresponding to scale level 22. With a 10-meter frame, when the pitch angle exceeds 75°, the viewing angle is almost parallel to the road, and the display of 3D buildings and the overall map rendering effect are not conducive to user viewing; therefore the maximum pitch angle is 75°, and the preset threshold can be set to 75°. Of course, the preset threshold can also be 60°, 40°, or even 20°, set according to actual application conditions; there is no restriction on this.
  • the pitch angle is calculated based on the initial frame scale corresponding to the initial scale level i.
  • During actual driving, the cockpit domain obtains the current position of the own vehicle and the current target driving scene of the physical vehicle from the autonomous driving domain through cross-domain communication, calculates the road lateral range data of the current target driving scene to determine the map frame, calculates the road longitudinal range data of the current scene to determine the pitch angle, and dynamically adjusts the frame and pitch angle until the pitch angle meets the visual requirements.
  • Finally, the adjusted frame and pitch angle are applied to high-precision map rendering. It should be noted that usually the own vehicle, or the lane in which the own vehicle is located, is displayed in the center of the vehicle navigation interface and remains fixed.
  • Therefore, the parameters for rendering the high-precision map may also include the offset of the map (also called the center point, that is, which point on the map the center point of the vehicle navigation interface corresponds to).
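  • Putting these pieces together, the rendering inputs could be bundled as follows (an illustrative structure with assumed names):

```python
from dataclasses import dataclass

@dataclass
class MapRenderParams:
    """Parameters handed to the high-precision map renderer."""
    frame_m: float                                    # target map frame (span)
    pitch_deg: float                                  # target pitch angle
    center_offset: tuple[float, float] = (0.0, 0.0)   # which map point the interface center shows

params = MapRenderParams(frame_m=30.0, pitch_deg=70.2)
print(params)
```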
  • the following takes an autonomous driving scenario and a high-precision map as an example to introduce some specific driving scenarios.
  • the driving scenarios include forward-driving scenarios and several target driving scenarios; the target driving scenarios include lane changing scenarios, avoidance scenarios, takeover scenarios, and maneuvering point scenarios.
  • In one embodiment, the target road includes multiple lanes, and the virtual vehicle is displayed in a first lane among the multiple lanes. The method further includes: when the physical vehicle is in a forward-driving scene at the current location it has traveled to, updating the frame and angle used to display the map to the frame and angle set for the forward-driving scene, and centering the first lane where the virtual vehicle is located on the map in the forward-driving scene.
  • the forward-driving scene refers to a scene of going straight ahead without lane changes, U-turns, steering, etc.; in the forward-driving scene, the map frame and angle of view are preset values and do not need to change with the target road where the physical vehicle is currently located.
  • the terminal can determine the driving scenario of the physical vehicle based on the driving characteristics of the forward-driving scene, the current location of the physical vehicle, and road data, and thus analyze whether the current driving scene is a forward-driving scene.
  • part (a) of Figure 13 is a schematic diagram of the forward driving scenario.
  • the outer frame represents the entire vehicle navigation interface
  • the three rectangular frames represent the three lanes
  • the circle represents the position of the vehicle.
  • Part (b) of Figure 13 is a rendered effect of the forward-driving scene. The target road is displayed in the center of the vehicle navigation interface, and the lane where the physical vehicle is located is also displayed in the center of the vehicle navigation interface. In addition, the own vehicle is displayed in the lower area of the visible range of its lane; for example, the own vehicle is displayed at the lower 2/3 position of the lane range, so that more of the road ahead is displayed in the map.
  • In one embodiment, determining the target driving scene and the road range data corresponding to the current location includes: when the physical vehicle travels to the current location and is in the lane changing scene, calculating, based on the road data of the current location, the road lateral distance required for the lane change scenario, and calculating, based on the road data of the current location, the longest longitudinal extension distance of the lane change from the current location.
  • the lane-changing scenario refers to an autonomous vehicle actively changing lanes while driving.
  • the road lateral range data in the lane changing scene can be the road lateral distance at the current position of the physical vehicle when it is in the lane changing scene.
  • the road lateral distance may be the road width of the target road.
  • the road lateral distance can also be the lateral width formed by the lane where the physical vehicle is located together with the left and right adjacent lanes of that lane.
  • the longitudinal range of the road in the lane change scenario can be the road range formed by extending the farthest distance of the lane change longitudinally from the current position. Therefore, the road longitudinal range data can be the farthest distance of the lane change, which can be a set value. It can also be a value calculated based on the road data of the current location. The calculation method will be mentioned later.
  • the longitudinal range of the road in the lane change scenario can also be the road range extending longitudinally from the current position to the predicted lane change drop point. The method of predicting the lane change drop point will be mentioned later.
  • In one embodiment, updating the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data, includes: determining the map frame required to update the map based on the road lateral distance; obtaining the maximum speed limit of the first lane; calculating the longest longitudinal extension distance of the lane change based on the maximum speed limit and the lane change duration; calculating the pitch angle based on the required map frame and the longest longitudinal extension distance of the lane change; and, when the pitch angle is greater than or equal to the preset threshold, increasing the required map frame and returning to the step of calculating the pitch angle based on the required map frame and the longest longitudinal extension distance of the lane change, continuing until the pitch angle is less than the preset threshold, thereby obtaining the target frame and target pitch angle required for updating the map.
  • in an optional embodiment, the lateral range of the road in the lane-change scenario is composed of the lane where the physical vehicle is located (which can be denoted the first lane), the left and right lanes of the first lane on the target road, and the left-left and right-right lanes, ensuring that the information of each lane can be completely rendered to the map.
  • as shown in Figure 14, the road lateral range in the lane-change scenario is the range formed by the individual lane widths: Range = dLL + dL + d + dR + dRR; for positions without a left-left or right-right lane, the frame range can be reduced by dLL or dRR, i.e. Range = dL + d + dR + dRR or Range = dLL + dL + d + dR; for positions without a left or right lane, one lane width can be supplemented to the left or right based on the width d of the first lane, that is:
  • with no left lane, Range = d + d + dR + dRR;
  • with no right lane, Range = dLL + dL + d + d.
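A small helper capturing this width bookkeeping might look as follows (a sketch; the function name and the use of None for an absent lane are assumptions):

```python
from typing import Optional

def lane_change_lateral_range(d: float,
                              dL: Optional[float], dLL: Optional[float],
                              dR: Optional[float], dRR: Optional[float]) -> float:
    """Range = dLL + dL + d + dR + dRR, dropping an absent left-left/right-right
    lane and substituting the first lane's own width d for an absent left or
    right lane (None marks an absent lane)."""
    left = (dL if dL is not None else d) + (dLL if dLL is not None else 0.0)
    right = (dR if dR is not None else d) + (dRR if dRR is not None else 0.0)
    return left + d + right
```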
  • the terminal can determine the initial scale level, that is, the initial frame, by querying Table 1.
  • the pitch angle determines the longitudinal range of the road that can be displayed on the map at the current scale level.
  • the road longitudinal range is related to the maximum speed limit of the current lane.
  • for example, if the maximum speed limit of the first lane is V kilometers per hour, i.e. (V/3.6) meters per second, and the lane-change duration is 3 seconds, then the forward display distance is 3*V/3.6 meters.
  • if the maximum speed limit of the first lane is 100 km/h, the road longitudinal range data extends 3*100/3.6, i.e. about 83.4 meters, longitudinally from the own vehicle's position.
  • from Table 1, the initial frame is 15 meters, corresponding to level 21.5; at the 21.5 scale level the calculated pitch angle is 80°. Assuming the preset threshold is 75°, the frame must be enlarged to 20 meters, for which the calculated pitch angle is 76.5°; the frame is then enlarged again to 30 meters, for which the calculated pitch angle is 70.2°, which meets the requirement. It can be determined that the target frame of the updated map to be displayed at the current location is 30 meters and the target pitch angle is 70.2°.
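A quick arithmetic check of these three iterations (a sketch using the stated formula skewAngle = arctan(verticalDist / scale)):

```python
import math

vertical = 3 * 100 / 3.6  # ~83.4 m forward display distance
for frame in (15, 20, 30):
    print(frame, round(math.degrees(math.atan(vertical / frame)), 1))
# 15 79.8 -> rounded to 80 deg in the text; >= 75 deg threshold, enlarge
# 20 76.5 -> still >= 75 deg, enlarge again
# 30 70.2 -> below the threshold: target frame 30 m, target pitch 70.2 deg
```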
  • in this embodiment, the target frame and target pitch angle required for updating the map are determined from the lane lateral distance and longitudinal distance that need attention at the current position in the lane-change scenario; this helps occupants of the physical vehicle perceive that the vehicle is currently changing lanes, since the displayed map focuses on the lane range around the current position in the lane-change scenario, improving the perceptibility of the scene and the occupants' trust in the autonomous driving system.
  • in one embodiment, the target road includes multiple lanes and the virtual vehicle is displayed in a first lane among the multiple lanes; the method further includes: when the target driving scene of the physical vehicle at the current location is a lane-change scene from the first lane to a second lane, displaying the second lane, and the estimated drop-off point of the physical vehicle in the second lane, in the center of the updated map.
  • for example, when the physical vehicle changes from the first lane to the left, the second lane on the left is displayed in the center of the map; when it changes from the first lane to the right, the second lane on the right is displayed in the center of the map.
  • optionally, when the driving scene switches from the forward-moving scene to the lane-change scene, the position of the virtual vehicle in the map can change from a lower position in the lane to an upper or middle position in the lane.
  • the terminal can determine the offset of the map and display the map according to the offset to achieve the purpose of displaying the rear road conditions of the second lane.
  • parts (a) and (b) of Figure 15 are schematic diagrams of lane changing scenarios for left and right lane changes respectively.
  • the outer frame represents the entire vehicle navigation interface
  • the three rectangular frames represent the three lanes
  • the circle represents the position of the own vehicle
  • the rectangular frame within the lane represents the estimated drop-off point position of the virtual vehicle; the second lane, and the estimated drop-off point on the second lane, can be displayed centered in the vehicle navigation interface.
  • the terminal can obtain the current location and steering information of the physical vehicle from the autonomous driving domain, and determine, based on the steering information and the topology of the target road where the vehicle's current location lies, the second lane to which the physical vehicle is to change.
  • in one embodiment, the method further includes: obtaining the road topology of the target road at the current location; determining the second lane according to the lane-change direction of the lane-change scenario and the road topology; calculating the estimated lane-change distance according to the driving speed of the physical vehicle when the lane change is initiated and the lane-change duration; determining the perpendicular distance from the physical vehicle to the center line of the second lane when the lane change is initiated; and determining the estimated drop-off point of the physical vehicle in the second lane according to the estimated lane-change distance and the perpendicular distance.
  • specifically, the terminal obtains the physical vehicle's current position, determines the first lane where it is located, queries the forward, backward, left and right lanes of the first lane according to the road topology of the target road where the first lane lies, and, combined with the vehicle's steering information (lane change to the left or to the right), determines the second lane to which the physical vehicle will change.
  • the terminal can obtain the steering information of the physical vehicle at its current location from the autonomous driving domain through cross-domain communication.
  • Figure 16 is a schematic diagram of searching for the second lane in a lane-change scenario in one embodiment.
  • when turning right, the terminal receives the rightward lane-change information from the autonomous driving system, obtains the right-hand second-lane information from the rightward topology of the first lane, and then searches forward and backward with the right-hand second lane as the reference to determine the boundary lines and lane center line of the entire second lane.
  • when turning left, the terminal receives the leftward lane-change information from the autonomous driving system, obtains the left-hand second-lane information from the leftward topology of the first lane, and then searches forward and backward with the left-hand second lane as the reference to determine the boundary lines and lane center line of the entire second lane.
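This search can be sketched as follows (hypothetical Lane structure and field names; in practice the topology comes from the high-precision map):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Lane:
    lane_id: str
    left: Optional["Lane"] = None      # leftward topology neighbor
    right: Optional["Lane"] = None     # rightward topology neighbor
    forward: Optional["Lane"] = None   # successor segment along the road
    backward: Optional["Lane"] = None  # predecessor segment along the road

def find_second_lane(first_lane: Lane, direction: str) -> List[Lane]:
    """Take the left or right neighbor of the first lane as the second lane,
    then walk its forward/backward topology to collect the whole lane, whose
    boundary lines and center line can then be determined and rendered."""
    seed = first_lane.left if direction == "left" else first_lane.right
    if seed is None:
        return []  # no lane to change into on that side
    segments = [seed]
    lane = seed.backward
    while lane is not None:  # search backward
        segments.insert(0, lane)
        lane = lane.backward
    lane = seed.forward
    while lane is not None:  # search forward
        segments.append(lane)
        lane = lane.forward
    return segments
```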
  • Figure 17 is a schematic diagram of calculating the estimated drop-off point position of the vehicle in one embodiment.
  • A represents the current position of the own vehicle, and CD is the lane center line of the second lane; a perpendicular is drawn from point A to line CD, with foot point B.
  • point B is not the real drop-off point; calculating the drop-off point requires considering the lane-change duration and the driving speed of the vehicle, as follows: suppose the driving speed of the own vehicle when changing lanes is v meters per second, the lane-change duration is 3 seconds, and the steering angle is ∠B'AB, i.e. θ; then the position of B' on the second lane is the position of point B plus the distance BB' covered during the lane change, where BB' = AB' * sin(∠B'AB) = v * 3 * sin(θ).
  • the position of foot point B can be determined from the own vehicle's coordinates (the current position) and the length of the perpendicular segment AB; the steering angle of the vehicle can be obtained from the vehicle state data monitored by the sensing devices on the vehicle; the estimated lane-change distance is calculated from the vehicle's driving speed v and the lane-change duration; the distance BB' can then be calculated, and from the position of foot point B and the distance BB', the coordinates of the estimated drop-off point are obtained, so that the estimated drop-off point can be displayed in the vehicle navigation interface.
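A minimal sketch of this calculation, assuming the second lane's center line is given by two points C and D in a planar map frame (the helper names are hypothetical):

```python
import math

def foot_of_perpendicular(a, c, d):
    """Project point A onto line CD; returns foot point B. Points are (x, y)."""
    ax, ay = a
    cx, cy = c
    dx, dy = d
    vx, vy = dx - cx, dy - cy
    t = ((ax - cx) * vx + (ay - cy) * vy) / (vx * vx + vy * vy)
    return (cx + t * vx, cy + t * vy)

def estimated_drop_point(a, c, d, v, theta_deg, duration=3.0):
    """B' = B shifted along the lane center line CD by BB' = v * duration * sin(theta)."""
    bx, by = foot_of_perpendicular(a, c, d)
    bb = v * duration * math.sin(math.radians(theta_deg))
    vx, vy = d[0] - c[0], d[1] - c[1]
    norm = math.hypot(vx, vy)
    # shift from B toward D, i.e. along the direction of travel on the second lane
    return (bx + bb * vx / norm, by + bb * vy / norm)
```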
  • the estimated drop-off point can be displayed in the middle of the vehicle navigation interface, or at a position toward the top; the estimated drop-off point is kept fixed while the own vehicle is shown gradually approaching it as it travels.
  • in one embodiment, determining the target driving scenario and the road range data corresponding to the current location includes: when the physical vehicle travels to the current location and is in an avoidance scenario of avoiding an obstacle, determining, according to the current location, the lane lateral widths of the lane where the physical vehicle is located and of the lanes adjacent to it, calculating from these lane lateral widths the road lateral distance required by the avoidance scenario, and calculating the farthest distance between the current position and the obstacle. Updating the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data, includes: determining the target frame required to update the map based on the road lateral distance, determining the target pitch angle required to update the map based on the required target frame and the farthest distance from the current position to the obstacle, and updating the frame and perspective used to display the map to the target frame and target pitch angle.
  • avoidance scenarios refer to situations where the vehicle encounters an obstacle while driving, such as a side vehicle overtaking, the vehicle ahead decelerating, or the vehicle ahead changing lanes, resulting in poor road conditions in the current lane, so that the danger must be avoided through actions such as decelerating or changing lanes; in avoidance scenarios, the obstacle and the lane where the obstacle is located need to be observed closely.
  • the lane where the obstacle is located is usually the adjacent lane to the lane where the own vehicle is located.
  • part (a) of Figure 18 is a schematic diagram of an avoidance scenario in an embodiment.
  • the outer frame represents the entire vehicle navigation interface
  • the three rectangular frames represent the three lanes
  • the circle represents the position of the own vehicle
  • the rectangular frame represents the position of the obstacle.
  • in an avoidance scenario, the own vehicle can be displayed on the map in the lower part of the lane where it is located, to better present obstacles ahead or to both sides.
  • the terminal determines the target frame and target perspective based on the position of the obstacle and the own vehicle so that the displayed map can pay attention to the details of the avoidance scenario.
  • the road lateral range can be the road lateral distance at the current position when the physical vehicle is in the avoidance scenario; this road lateral distance can be the road width of the target road; if the target road includes many lanes, it can be the lane lateral width formed by the lane where the physical vehicle is located and the left and right lanes adjacent to it, or the lateral width of the smallest rectangular area containing the physical vehicle and the obstacle, which is not specifically limited in this application.
  • the longitudinal range of the road in the avoidance scenario can be the longitudinal lane range from the current position to the obstacle.
  • determining the target frame required for updating the map based on the road lateral distance, and determining the target pitch angle required for updating the map based on the required target frame and the farthest distance between the current position and the obstacle, includes: determining the map frame required to update the map based on the lane lateral distance; calculating the pitch angle based on the required frame and the farthest distance between the current position and the obstacle; when the pitch angle is greater than or equal to the preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the farthest distance, continuing until the pitch angle is less than the preset threshold, at which point the target frame and target pitch angle required for updating the map are obtained.
  • specifically, the terminal determines the lanes adjacent to the physical vehicle's lane on the target road, determines the map frame required to update the map from the lane lateral distance formed by the physical vehicle's lane and the adjacent lanes, determines the farthest distance between the physical vehicle and the obstacle, and then iterates the frame and pitch-angle calculation as above until the pitch angle is less than the preset threshold.
  • part (b) of Figure 18 is a schematic diagram of an avoidance scenario in one embodiment; the avoidance scenario focuses on the lane in which the own vehicle is traveling and the traffic-participant information in the left and right adjacent lanes.
  • the rectangular block in the figure represents an obstacle cutting in from the side lane, the arrow represents the traveling direction of the obstacle, and ☆ represents the current position of the own vehicle.
  • at the current position shown in part (b) of Figure 18, the lane lateral distance in the avoidance scenario is Range = dL + d + dR; the terminal can then determine the initial scale level, that is, the initial frame, by querying Table 1 with Range.
  • to clearly present the lane range between the own vehicle and the obstacle, the required pitch angle can be determined as follows: the terminal calculates the farthest distance between the own vehicle and the obstacle, as shown in Figure 19, a schematic diagram of the positions of the own vehicle and obstacles in one embodiment.
  • an O-xy coordinate system is established with the center of the own vehicle as the coordinate origin O, the rightward direction of the own vehicle as the x-axis and its forward direction as the y-axis; likewise, for each obstacle (sensing target) perceived by the vehicle, a coordinate system is established with the obstacle's own center as origin, its rightward direction as the x-axis and its forward direction as the y-axis.
  • O'-x'y' and O''-x''y'' are the coordinate systems established for the two sensing targets themselves, and the coordinates of O' and O'' in the O-xy coordinate system are (Ox', Oy') and (Ox'', Oy'') respectively.
  • taking the O'-x'y' coordinate system as an example, suppose the length and width of the obstacle are h meters and w meters; then the corners a, b, c, d have coordinates (w/2, h/2), (-w/2, h/2), (-w/2, -h/2), (w/2, -h/2) in O'-x'y'.
  • O'-xy is the state of O-xy translated to the obstacle's coordinate origin, and O'-xy coincides with O-xy after rotating clockwise by α°; suppose the farthest distance from the own vehicle to the obstacle is the distance from the own vehicle's position to corner a, whose coordinates in O'-x'y' are (x', y') and in O'-xy are (x, y); then x = x' * cos(α) - y' * sin(α) and y = y' * cos(α) + x' * sin(α).
  • translating a's coordinates in O'-xy into the O-xy coordinate system gives a's position (Ox, Oy), where Ox = Ox' + x' * cos(α) - y' * sin(α) and Oy = Oy' + y' * cos(α) + x' * sin(α); with the coordinates of point a, the distance from the own vehicle's current position to a can be calculated.
  • this distance can be used as the road longitudinal distance at the current position of the physical vehicle in the avoidance scenario.
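The corner transform and farthest-distance computation can be sketched as follows (a sketch under the stated geometry; the names are illustrative):

```python
import math

def farthest_corner_distance(ox, oy, w, h, alpha_deg):
    """(ox, oy): obstacle center O' in the ego O-xy frame; w, h: obstacle width
    and length; alpha: clockwise rotation between the frames. Rotate each corner
    from O'-x'y' into O'-xy, translate into O-xy, and take the largest norm."""
    a = math.radians(alpha_deg)
    corners = [(w / 2, h / 2), (-w / 2, h / 2), (-w / 2, -h / 2), (w / 2, -h / 2)]
    best = 0.0
    for xp, yp in corners:
        x = xp * math.cos(a) - yp * math.sin(a)  # x = x' cos(a) - y' sin(a)
        y = yp * math.cos(a) + xp * math.sin(a)  # y = y' cos(a) + x' sin(a)
        best = max(best, math.hypot(ox + x, oy + y))
    return best  # used as the road longitudinal distance in the avoidance scene
```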
  • for example, suppose the three lanes in Figure 18 are of equal width, each 3.5 meters wide, so the road lateral range data in the avoidance scenario is 3.5 x 4 = 14 meters; the road longitudinal range data is the farthest distance from the own vehicle's position to the obstacle, assumed here to be a calculated value of 10 meters, with a preset pitch-angle threshold of 75°. From Table 1, the initial frame is 15 meters, corresponding to level 21.5; at the 21.5 scale level the calculated pitch angle is 33.8°, which meets the requirement, so it can be determined that the target frame for updating the map at the current location is 15 meters and the target pitch angle is 33.8°.
  • in this embodiment, the target frame and target pitch angle required for updating the map are determined from the lane lateral distance and lane longitudinal distance that need attention at the physical vehicle's current position in the avoidance scenario; this helps occupants of the vehicle perceive that the vehicle is currently avoiding an obstacle, since the displayed map focuses on the own vehicle and the obstacle in the avoidance scenario, improving the perceptibility of the scene.
  • in one embodiment, determining the target driving scene and the road range data corresponding to the current location includes: when the physical vehicle travels to the current location and is in a takeover scenario of driving from the takeover prompt point to the autonomous driving exit point, calculating, based on the road data of the target road where the current position is located and the road data of the road where the autonomous driving exit point is located, the lane lateral distance at the current position in the takeover scenario, and calculating the distance from the current position to the autonomous driving exit point. Updating the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data, includes: determining the target frame required to update the map based on the road lateral distance, determining the target pitch angle required to update the map based on the required target frame and the distance from the current position to the autonomous driving exit point, and updating the frame and perspective used to display the map to the target frame and target pitch angle.
  • the takeover scenario is a scenario in which the autonomous vehicle is about to leave the area supported by the autonomous driving function and is about to switch to manual driving.
  • the road range in the takeover scenario is the road range on the target road from the current location to the autonomous driving exit point.
  • the physical vehicle is considered to be in the takeover scenario when it travels to the takeover prompt point, a point the vehicle passes as it approaches the autonomous driving exit point; this point is a certain distance from the autonomous driving exit point, for example 2.5 kilometers.
  • when the distance between the physical vehicle's current position and the autonomous driving exit point is relatively long, for example 2 kilometers, the target frame of the displayed map needs to be much larger than the lateral width of the target road where the vehicle is located, so that the autonomous driving exit point can be presented in the vehicle navigation interface; when the distance is relatively short, for example 20 meters, the target frame of the displayed map is smaller, in order to present the road conditions between the physical vehicle and the autonomous driving exit point as clearly as possible.
  • thus, in the takeover scenario, as the physical vehicle moves, the frame required to display the map is first enlarged until the autonomous driving exit point can be observed, and then gradually reduced while keeping the own vehicle and the exit point always visible.
  • Figure 20 is a schematic diagram of an autonomous driving takeover scenario in one embodiment. It can be seen that in the autonomous driving takeover scenario, in order to keep the autonomous driving exit point always visible, the map size displayed in the autonomous driving takeover scenario is smaller than that in the forward driving scenario.
  • Figure 21 is a rendering rendering of an autonomous driving takeover scene in an embodiment, where A represents the location of the own vehicle, B represents the location of the autonomous driving exit point, and the AB interval is the area prompting manual takeover.
  • determining the target frame required for updating the map based on the road lateral distance, and determining the target pitch angle required for updating the map based on the required target frame and the distance from the current position to the autonomous driving exit point, includes: determining the frame required to update the map based on the lane lateral distance; calculating the pitch angle based on the required frame and the distance from the current position to the autonomous driving exit point; when the pitch angle is greater than or equal to the preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the distance to the exit point, continuing until the pitch angle is less than the preset threshold, at which point the target frame and target pitch angle required for updating the map are obtained.
  • specifically, the steps for determining the target frame and target pitch angle in the takeover scenario include: determining the lane lateral distance formed by the target road and the road where the autonomous driving exit point is located, and from it the map frame required to update the map; calculating the distance from the current position to the autonomous driving exit point; and then iterating the frame and pitch-angle calculation as above until the threshold condition is met.
  • Figure 22 is a schematic diagram of the road range in a takeover scenario in one embodiment.
  • the lateral range of the road in the takeover scenario is the multi-lane range composed of the lane where point B is located together with its two lanes on the left and right, and the lane where point A is located together with its two lanes on the left and right, that is, the range shown as Range in the figure; where such lanes are absent, the lane width can be supplemented based on the lane where point A is located to form the road lateral range in this case.
  • the longitudinal range of the road in the takeover scenario is the distance between points A and B.
  • the terminal receives the location of the autonomous driving exit point in the current takeover scenario sent by the autonomous driving domain through cross-domain communication, and displays the autonomous driving exit point on the map based on the location.
  • for example, suppose the distance from the current position to the autonomous driving exit point is 1000 meters and the preset pitch-angle threshold is 75°; the initial frame is 15 meters, corresponding to level 21.5; the pitch angle calculated from 15 meters and 1000 meters is far greater than 75°, which does not meet the requirement; the frame is therefore enlarged step by step until, at a frame of 312 meters, the calculated pitch angle is 72.6°, which meets the requirement. It can be determined that the target frame required to update the map at the physical vehicle's current location is 312 meters and the target pitch angle is 72.6°.
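As a sketch of why the iteration lands on 312 meters: the loop stops at the first Table 1 frame for which arctan(1000 / frame) is below 75°, i.e. any frame larger than 1000 / tan(75°) ≈ 268 meters (the exact rung sequence between 50 and 312 meters is not quoted in this document):

```python
import math

min_frame = 1000 / math.tan(math.radians(75))       # ~267.9 m: first frame size below the threshold
angle_at_312 = math.degrees(math.atan(1000 / 312))  # ~72.7 deg (quoted above as 72.6 deg)
print(round(min_frame, 1), round(angle_at_312, 1))
```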
  • in this embodiment, the target frame and target pitch angle required for updating the map are determined from the lane longitudinal distance that needs attention at the physical vehicle's current position in the takeover scenario; this helps occupants of the physical vehicle perceive that the vehicle is currently in a takeover scenario, since the displayed map focuses on the autonomous driving exit point, improving the perceptibility of the scene.
  • in one embodiment, determining the target driving scenario and the road range data corresponding to the current location includes: when the physical vehicle travels to the current location and is in a maneuver-point scenario of driving through the maneuver operation area of a target maneuver point, extending, based on the intersection width of the road where the target maneuver point is located, a preset distance along each extension direction of the intersection at the target maneuver point, to obtain the road lateral distance and road longitudinal distance in the maneuver-point scenario. Updating the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data, includes: determining the target frame required to update the map based on the road lateral distance, and determining the target pitch angle required to update the map based on the required target frame and the road longitudinal distance.
  • the maneuver-point scenario corresponds to positions where the physical vehicle performs maneuver operations such as turning or making a U-turn while driving.
  • when the physical vehicle travels to within a certain threshold distance of a maneuver point, it is determined that the physical vehicle has entered the maneuver area of that maneuver point, that is, the physical vehicle is in the maneuver-point scenario.
  • in this scenario, the frame of the map displayed on the terminal is enlarged and the pitch angle is reduced to present the traffic conditions of the entire maneuver point; that is, the road range at the physical vehicle's current position in the maneuver-point scenario is the range of the maneuver point ahead.
  • Figure 23 is a schematic diagram of the rendering effect of a maneuver-point scenario in an autonomous driving scenario; the maneuver point in Figure 23 is an intersection, and the lateral and longitudinal distances corresponding to the road range are determined by the width of the intersection, as shown by the dotted rectangular box in the figure.
  • determining the target frame required for updating the map based on the road lateral distance, and determining the target pitch angle required for updating the map based on the required target frame and the road longitudinal distance, includes: determining the map frame required to update the map based on the road lateral distance; calculating the pitch angle based on the required frame and the road longitudinal distance; when the pitch angle is greater than or equal to the preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the road longitudinal distance, continuing until the pitch angle is less than the preset threshold, at which point the target frame and target pitch angle required for updating the map are obtained.
  • specifically, the steps for determining the target frame and target pitch angle in the maneuver-point scenario include: determining the road lateral distance and road longitudinal distance of the target maneuver point, determining the map frame required to update the map from the road lateral distance, calculating the pitch angle from the required frame and the road longitudinal distance, and iterating as above until the target frame and target pitch angle required for updating the map are obtained.
  • for example, suppose the width of the intersection is 25 meters and the distance from the current position to the intersection ahead is 50 meters; the road range can be extended by 10 meters along the extension direction of each intersection arm, so the lateral range of the road in this maneuver-point scenario is 35 meters and the longitudinal range of the road is 60 meters. The initial frame is then 39 meters, corresponding to level 20, and the calculated pitch angle is 56.97°, which meets the requirement, so the target frame required to display the map at the current location is 39 meters and the target pitch angle is 56.97°.
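The same arithmetic as a sketch (values taken from this worked example):

```python
import math

lateral = 25 + 10       # intersection width plus the 10 m extension -> 35 m
longitudinal = 50 + 10  # distance to the intersection plus 10 m -> 60 m
frame = 39              # smallest Table 1 frame covering 35 m (level 20)
print(round(math.degrees(math.atan(longitudinal / frame)), 2))
# ~56.98 deg (quoted above as 56.97 deg); below the 75 deg threshold, so done
```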
  • in this embodiment, the target frame and target pitch angle required for displaying the map are determined from the lane lateral distance and longitudinal distance that need attention at the physical vehicle's current position in the maneuver-point scenario; this helps occupants of the physical vehicle perceive the scenario the vehicle is currently in, since the displayed map focuses on the lane range around the maneuver point, improving the perceptibility of the scene and the occupants' trust in the autonomous driving system.
  • in one embodiment, when the physical vehicle enters the automatic driving state, the terminal can first execute the strategy of adjusting the map frame and pitch angle for the forward-moving scene.
  • when the physical vehicle starts an automatic lane change at its current position, the terminal executes the strategy of adjusting the map frame and pitch angle for the lane-change scene, and returns to the forward-moving scene after the lane change is completed or canceled.
  • when the physical vehicle performs automatic avoidance at its current position, the terminal executes the strategy of adjusting the map frame and pitch angle for the automatic avoidance scene and returns to the forward-moving scene after the avoidance is completed or canceled; when the physical vehicle is about to exit autonomous driving and enters the takeover scene at its current position, the terminal executes the strategy of adjusting the map frame and pitch angle for the takeover scenario.
  • after the takeover is completed, the terminal enters the SD navigation scene and starts executing the SD navigation frame adjustment strategy.
  • when scene states conflict, the frame does not change and the frame adjustment strategy of the previous scene is maintained; for example, if an automatic avoidance task is inserted during an automatic lane change, the adjustment strategy of the automatic lane-change scene is maintained. When switching from the lane-change scene to the takeover scene, the states do not conflict, and the terminal can switch directly to the takeover scene's adjustment strategy.
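These switching rules can be sketched as a tiny state table (hypothetical scene names; "conflicting insertion" marks the cases that keep the previous scene's strategy):

```python
# Conflicting insertions keep the previous scene's frame adjustment strategy.
CONFLICTING_INSERTIONS = {
    ("lane_change", "avoidance"),  # avoidance inserted during an automatic lane change
}

def next_strategy(current_scene: str, new_scene: str) -> str:
    """Return the scene whose frame/pitch adjustment strategy should run next."""
    if (current_scene, new_scene) in CONFLICTING_INSERTIONS:
        return current_scene  # state conflict: keep the previous strategy
    return new_scene          # no conflict: switch directly

assert next_strategy("lane_change", "avoidance") == "lane_change"
assert next_strategy("lane_change", "takeover") == "takeover"
```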
  • in the above embodiments, a method is provided for automatically adjusting the map display effect based on high-precision maps and the driving state in autonomous driving scenarios: data such as lane length, road surface width and road topology relationships in the high-precision map serve as inputs to the automatic adjustment strategy, and, combined with the driving scenes output by the automatic driving system, such as going along, changing lanes, yielding, avoiding and taking over, parameters such as the map frame, the pitch angle and the position indicated by the center point of the map are comprehensively adjusted, achieving automatic adjustment of the display effect.
  • This method will greatly improve the quality of navigation maps, speed up map reading, improve navigation experience, further help vehicle occupants understand the decision-making actions of the autonomous driving system, and increase vehicle occupants' trust in the autonomous driving system.
  • in this way, the actual road conditions of the target road where the physical vehicle's current position is located and the driving scene at that position are jointly used to determine the target frame and target perspective required to display the map, so that the road range displayed on the map is adapted to the physical vehicle's location and to the road area requiring attention in the driving scenario; this improves the perceptibility of map changes, greatly improves the quality of navigation maps, speeds up map reading, and improves the navigation experience.
  • in addition, the target perspective of the updated map can expand the visible range of the map when the target frame is small, improving navigation efficiency.
  • embodiments of the present application also provide a vehicle navigation device for implementing the above-mentioned vehicle navigation method.
  • the solution to the problem provided by this device is similar to the solution described in the above method; therefore, for the specific limitations in one or more vehicle navigation device embodiments provided below, reference may be made to the above limitations on the vehicle navigation method, and details are not repeated here.
  • in one embodiment, as shown in Figure 24, a vehicle navigation device 2400 is provided, including an interface display module 2402 and a map display module 2404, where:
  • the interface display module 2402 is used to display a vehicle navigation interface for navigating the physical vehicle, where the vehicle navigation interface includes a map;
  • the map display module 2404 is used to display a virtual vehicle on the target road in the map, the virtual vehicle corresponding to the physical vehicle; to determine, when the physical vehicle travels to the current position and is in a target driving scene, the target driving scene and the road range data corresponding to the current position; and to update the frame and perspective used to display the map to the target frame and target perspective, where the target frame and target perspective are adapted to the road range data.
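A skeletal sketch of these two modules (hypothetical interfaces; the patent does not prescribe a particular language or API):

```python
class InterfaceDisplayModule:
    """Module 2402: displays the vehicle navigation interface containing the map."""
    def show_navigation_interface(self):
        ...

class MapDisplayModule:
    """Module 2404: draws the virtual vehicle on the target road and, when a
    target driving scene is detected at the current position, determines the
    road range data and updates the map's frame and perspective to the adapted
    target frame and target perspective."""
    def on_position_update(self, current_position, driving_scene):
        ...
```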
  • Each module in the above-mentioned vehicle navigation device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device which may be the terminal 102 in FIG. 1 or the vehicle-mounted terminal 604 in FIG. 6 , and its internal structure diagram may be as shown in FIG. 25 .
  • the computer device includes a processor, memory, input/output interface, communication interface, display unit and input device.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface, display unit and input device are connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions.
  • This internal memory provides an environment for the execution of an operating system and computer-readable instructions in a non-volatile storage medium.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used for wired or wireless communication with external terminals.
  • the wireless mode can be implemented through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • the computer readable instructions when executed by the processor implement a vehicle navigation method.
  • the display unit of the computer device is used to form a visually visible picture and can be a display screen, a projection device or a virtual reality imaging device.
  • the display screen can be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device can be a touch layer covering the display screen, or buttons, a trackball or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad or mouse, etc.
  • the input interface of the computer device can receive data sent from the positioning device or sensing device on the vehicle, including vehicle position data, obstacle position data, obstacle position data relative to the own vehicle, and so on.
  • Figure 25 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
  • in one embodiment, a computer device is provided, including a memory and a processor, with computer-readable instructions stored in the memory; when the processor executes the computer-readable instructions, the steps of the vehicle navigation method described in any one or more of the above embodiments are implemented.
  • in one embodiment, a computer-readable storage medium is provided, on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps of the vehicle navigation method described in any one or more of the above embodiments are implemented.
  • a computer program product including computer readable instructions, which when executed by a processor implement the steps of the vehicle navigation method described in any one or more of the above embodiments.
  • the user information involved in this application includes, but is not limited to, user equipment information, user personal information, etc.; the data involved includes, but is not limited to, data used for analysis, stored data, displayed data, etc.
  • the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and the computer-readable instructions, when executed, may include the processes of the above method embodiments.
  • any reference to memory, database or other media used in the various embodiments provided in this application may include at least one of non-volatile and volatile memory.
  • non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory.
  • by way of illustration and not limitation, RAM (random access memory) can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include blockchain-based distributed databases, etc., but are not limited thereto.
  • the processors involved in the various embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, etc., and are not limited to this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Instructional Devices (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle navigation method, comprising: displaying a vehicle navigation interface for navigating a physical vehicle, the vehicle navigation interface including a map; displaying a virtual vehicle on a target road in the map, the virtual vehicle corresponding to the physical vehicle; when the physical vehicle travels to a current position and is in a target driving scene, determining the target driving scene and the road range data corresponding to the current position; and updating the frame and perspective used to display the map to a target frame and target perspective, the target frame and target perspective being adapted to the road range data.

Description

车辆导航方法、装置、设备、存储介质和计算机程序产品
本申请要求于2022年06月30日提交中国专利局,申请号为202210758586.2,申请名称为“车辆导航方法、装置、设备、存储介质和计算机程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及地图导航技术领域,特别是涉及一种车辆导航方法、装置、计算机设备、存储介质和计算机程序产品。
背景技术
随着计算机技术的发展,萌生出的地图导航工具已广泛应用于路线导航中,在人们的日常出行中发挥了很大的作用,尤其是车辆导航。在车辆行驶过程中,导航设备通常会根据车辆的行驶速度、方向、车辆位置,结合为车辆规划的导航路线,显示车辆导航界面,实现车辆导航。通常,车辆导航界面中,导航地图的显示比例固定不变,不能更好地呈现路况,导航效果较差。
发明内容
本申请提供了一种车辆导航方法。所述方法包括:
显示用于对实体车辆进行导航的车辆导航界面,所述车辆导航界面包括地图;
在所述地图中的目标道路上显示虚拟车辆,所述虚拟车辆与所述实体车辆对应;
当所述实体车辆行驶至当前位置、且处于目标行驶场景时,确定所述目标行驶场景以及所述当前位置所对应的道路范围数据;
更新用于显示所述地图的图幅和视角至目标图幅和目标视角,所述目标图幅和目标视角与所述道路范围数据相适配。
本申请还提供了一种车辆导航装置。所述装置包括:
界面显示模块,用于显示用于对实体车辆进行导航的车辆导航界面,所述车辆导航界面包括地图;
地图显示模块,用于在所述地图中的目标道路上显示虚拟车辆,所述虚拟车辆与所述实体车辆对应;当所述实体车辆行驶至当前位置、且处于目标行驶场景时,确定所述目标行驶场景以及所述当前位置所对应的道路范围数据;更新用于显示所述地图的图幅和视角至目标图幅和目标视角,所述目标图幅和目标视角与所述道路范围数据相适配。
本申请还提供了一种计算机设备。所述计算机设备包括存储器和处理器,所述存储器存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现上述车辆导航方法的步骤。
本申请还提供了一种计算机可读存储介质。所述计算机可读存储介质,其上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现上述车辆导航方法的步骤。
本申请还提供了一种计算机程序产品。所述计算机程序产品,包括计算机可读指令,该计算机可读指令被处理器执行时实现上述车辆导航方法的步骤。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例中车辆导航方法的应用环境图;
图2为一个实施例中不同比例尺级别下的地图效果示意图;
图3为一个实施例中不同视角下所查看的地图范围示意图;
图4为又一个实施例中不同视角下所查看的地图范围示意图;
图5为一个实施例中比例尺与俯仰角之间关系的示意图;
图6为一个实施例中车辆导航***的示意图;
图7为一个实施例中自动驾驶***的数据处理流程示意图;
图8为一个实施例中标清地图与高精地图渲染效果对比示意图;
图9为一个实施例中自动驾驶车辆的驾驶状态的跳转逻辑示意图;
图10为一个实施例中车辆导航方法的流程示意图;
图11为一个实施例中不同图幅下计算俯仰角的示意图;
图12为一个实施例中自动驾驶场景下自动调整图面效果的流程示意图;
图13为一个实施例中顺行场景示意图;
图14为一个实施例中变道场景的道路横向观察范围示意图;
图15为一个实施例中变道场景示意图;
图16为一个实施例中变道场景下搜索第二车道的示意图;
图17为一个实施例中计算实体车辆的预估落车点位置的示意图;
图18为一个实施例中避让场景示意图;
图19为一个实施例中自车车辆与障碍物的位置示意图;
图20为一个实施例中自动驾驶接管场景示意图;
图21为一个实施例中自动驾驶接管场景渲染效果图;
图22为一个实施例中接管场景下的道路观察范围的示意图;
图23为一个实施例中自动驾驶场景下的机动点场景渲染效果示意图;
图24为一个实施例中车辆导航装置的结构框图;
图25为一个实施例中计算机设备的内部结构图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
车辆导航技术是指基于卫星定位***提供的定位数据,将车辆与道路之间的实时位置关系映射到可视化的车辆导航界面中,以在车辆行驶过程中,向车辆中的对象(如驾驶员或乘员)提供导航功能的技术。通过可视化的车辆导航界面,以及该车辆导航界面中的地图,该对象可以了解到车辆的当前位置、车辆的行驶路线、车辆的行驶速度、车辆前方路况、道路车道、车辆所在位置附近的其它车辆的行驶状况、道路场景等信息。
下面对车辆导航技术中涉及的一些概念进行说明:
自驾域:车辆中用于控制自动驾驶的软、硬件的集合。
座舱域:车辆中用于座舱内与用户交互的中控屏、仪表屏、操作按钮等软硬件集合。例如座舱内的中控屏幕上显示的导航地图以及与用户交互的界面。
HD Map:HD地图,全称是High Definition Map,高精地图。
SD Map:SD地图,全称是Standard Definition Map,标清地图。
2.5D视角:底图倾斜模式,该模式下能展示3D楼块、4K桥梁效果等类3D的渲染效果。
ACC:Adaptive Cruise Control,自适应巡航,是自动驾驶***提供的根据用户设定的巡航速度与 前车的安全距离,动态调节自车速度。前车加速,自车也会加速到设定的速度。前车减速,自车将会降速来保持自车与前车的安全距离。
LCC:Lane Center Control,车道居中辅助,是自动驾驶提供的辅助驾驶员控制方向盘的功能,它能持续将车辆居中保持在当前车道内。
NOA:Navigate on Autopilot,自动辅助导航驾驶功能,简称NOA。该功能可通过设置目的地,即可引导车辆自动行驶,在驾驶员的监测下可完成变道、超车、自动驶入、驶出匝道等操作。NOA的驾驶行为有巡航、跟车、避让、让行、单一规则规划变道行为(如并入快车道、有预期退出)、多条件决策变道行为(如巡航过程中变道)。
机动点:电子地图中引导驾驶员做出转向、减速、并道、驶出等机动动作的位置。通常是路口转向、路口分流、路口合流等位置。
落车点:自动驾驶***完成自动变道时所预测的实体车辆所处的位置。
本申请实施例提供的车辆导航方法,可以应用于如图1所示的应用环境中。该应用环境包括终端102与服务器104,终端102通过网络与服务器104进行通信。数据存储***可以存储服务器104需要处理的数据,如地图数据,包括高精地图数据、标清地图数据等。数据存储***可以集成在服务器104上,也可以放在云上或其他服务器上。
其中,终端102可以包括但不限于手机、电脑、智能语音交互设备、智能家电、车载终端等。终端还可以是便携式可穿戴设备,例如智能手表、智能手环等。服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。服务器104例如可以是为地图提供功能服务的服务器,包括定位服务、导航服务等,定位服务、导航服务可获取关于实体车辆的定位数据。服务器104可以接收关于实体车辆的定位数据、实体车辆所处环境的感知数据等,依据这些数据生成关于实体车辆的车辆导航界面,通过终端102显示该车辆导航界面。当然,也可以由终端102接收关于实体车辆的定位数据、感知数据等,依据这些数据生成并显示关于实体车辆的车辆导航界面。本申请实施例可应用于各种场景,包括但不限于云技术、人工智能、智慧交通、辅助驾驶、自动驾驶等。
与实际地图类似,车辆导航界面中的电子地图(以下简称为地图)也存在显示比例,地图显示比例也称比例尺,表示显示的地图上的距离与实际地图距离的比。比如,车辆导航界面中地图上的1厘米,表示实际地图上的1千米,图幅与显示比例之间存在对应关系,显示比例越小,图幅越大,即地图中所显示的道路范围越大,地图细节越粗糙;显示比例越大,图幅越小,即地图中所显示的道路范围越小,地图细节越细致逼真。
如下表所示,通过比例尺级别与实际地理区域大小建立对应关系,地球周长是4万公里左右,在一个实施例中,以地球周长长度作为最小比例尺级别0级,随着级别的递增所表示的图幅逐级递减,具体对应关系如下表1所示。可以理解的是,该比例尺级别与图幅的对应关系仅为示意,比例尺级别还可以是小数,例如22.5,对应的图幅是15米。
表1

如图2所示,为一个实施例中不同比例尺级别下的地图效果示意图。参照图2可知,在图2所示的地图中,图幅为20米所显示的地图大,地图范围小,图幅为500米所显示的地图小,地图范围大。
车辆导航界面中地图的视角,是查看地图的视角,视角例如可以是地图的俯仰角。图3为一个实施例中不同视角下所查看的地图范围示意图。参照图3,在相同比例尺级别下,俯仰角依次为40度、50度与65度,可见,在相同比例尺级别下,俯仰角越大,可视范围越多,俯仰角越小,可视范围越少。参照图4,为又一个实施例中不同视角下所查看的地图范围示意图,依次是垂直视角、小俯仰角视角和大俯仰角视角。不同视角下所呈现的图幅和建筑物效果是不一样的。
图5为一个实施例中比例尺与俯仰角之间关系的示意图。参考图5,在相同视角(如垂直视角下)图幅20米的可视范围最小,图幅500米的可视范围最大。在相同比例尺级别下(如20米),俯仰角越大,可视范围越多,俯仰角越小,可视范围越少。可知,在相同图幅下,也就是相同比例尺级别下,调整俯仰角,能够调整不同朝向的可视范围,通过调整俯仰角可以扩大可视范围,甚至在地图中呈现出超视距地理区域。
为了使实体车辆顺利使用车辆导航功能,有些车辆导航方式中,地图的显示比例采用了自适应速度变化的方式,仅考虑了速度这方面的因素,没有考虑到实体车辆在各种不同行驶场景下所需关注的道路范围,地图导航界面中显示的地图范围有限,导致导航效果较差。此外,通常车辆行驶过程中,导航视角也是事先设置的,未能适应实体车辆当前位置所处的行驶场景而进行自适应调整,车辆导航效率较差。
基于此,为了向实体车辆提供更好导航效果与提升导航效率,本申请实施例提供了一种车辆导航方法,该方法不仅关注实体车辆当前位置所处的目标道路的道路状况,还关注实体车辆当前位置所处的行驶场景,将该二者共同作为调整地图的图幅与视角的因素,达到综合调整导航地图界面所呈现的道路范围的效果,能够分实体车辆的当前行驶场景、视实体车辆当前位置所在道路的实际情况,调整导航地图界面中地图的图幅与视角,提升各类行驶场景的可感知程度,聚焦各个行驶场景下所需要关注的道路范围,提升导航效果,还能帮助实体车辆上的驾驶员或乘员理解驾驶***的决策,增加驾驶***的可信任度。
具体来说,在一个实施例中,终端102可以显示用于对实体车辆进行导航的车辆导航界面,车辆导航界面包括地图;在地图中的目标道路上显示虚拟车辆,虚拟车辆与实体车辆对应;当实体车辆行驶至当前位置、且处于目标行驶场景时,确定目标行驶场景以及当前位置所对应的道路范围数据;更新用于显示地图的图幅和视角至目标图幅和目标视角,目标图幅和目标视角与道路范围数据相适配。
也就是说,更新地图所需要的目标图幅和目标视角,是综合实体车辆当前位置所处目标道路的实际路况与实体车辆当前位置所处行驶场景确定的,可以在实体车辆处于目标行驶场景时,实时调整地图的显示方式,使得显示方式与当前在这种目标行驶场景下用户所需要观察到的道路范围相适配,即 使得该更新后的地图中所显示的道路范围,适配车辆所处位置在所处行驶场景下所需重点观察的道路区域,可以提升地图的变化的可感知程度,大大提升导航地图品质,加快阅图速度,提升导航体验。此外,更新后的地图所具有的目标视角,在目标图幅较小的情况下可以扩大地图的可视范围,提升导航效率。
行驶场景包括至少一种预设的目标行驶场景,如变道场景、避让场景、接管场景、机动场景,等等。变道场景,指的是实体车辆行驶过程中主动更换行驶车道,变道场景下,需要重点观察需要变道至的车道与该车道后向来车的情况。避让场景,指的是在实体车辆行驶过程中遇到障碍,如旁车超车、前车减速、前车变道等导致当前车道路况不佳的情况下,需通过减速、变道等动作来避让危险情况的场景,避让场景下,需要重点观察障碍物以及障碍物所在车道的情况。接管场景,指的是自动驾驶车辆即将驶出自动驾驶功能所支持的区域,将要转人工驾驶的场景,自动驾驶的接管场景下,需要重点观察道路中需要退出自动驾驶点的位置。机动点场景,指的是实体车辆行驶过程中转向、掉头等机动操作的位置。驾驶机动点场景下,需要重点观察前方机动点的道路状况。
除了上述行驶场景以外,目标行驶场景还可以包括其它场景,本申请对此不作限制,可以理解的是,不同行驶场景下,需要重点观察的道路范围可以存在差异。此外,除了包括上述多种目标行驶场景外,行驶场景还可以包括顺行场景,顺行场景指的是前方一路直行,无变道、掉头、转向等操作的场景,顺行场景下,所呈现的地图的图幅与视角可以是预设值,无需随车辆当前位置所处目标道路而变化。终端可以通过不同的标识表示实体车辆的行驶场景,即不同标识表示不同类型的行驶场景。
本申请实施例提供的车辆导航方法可以应用于自动驾驶场景的车辆导航过程中,自动驾驶场景即车驾场景,是指车辆由车载的自动驾驶***控制行驶的场景,在自动驾驶场景的车辆导航过程中,通过向车辆中的驾驶员或乘员呈现可视化的车辆导航界面,使其清楚直观地了解到该车辆所处的道路环境。本申请实施例,结合自动驾驶车辆当前位置所处目标道路的道路环境与自动驾驶车辆当前位置所处的行驶场景,综合确定更新车辆导航界面中的地图所需的图幅与视角,从而呈现地图的变化,可以提升自动驾驶场景中车辆所处场景的可感知程度,提升车内成员对自动驾驶***的信任度与自动驾驶***为其提供的驾驶安全感。
本申请实施例提供的车辆导航***也可以应用于主动驾驶***的车辆导航过程中,主动驾驶场景即人驾场景,是指车辆由驾驶员控制行驶的场景,在主动驾驶场景的车辆导航过程中,通过向车辆内驾驶员呈现可视化的车辆导航界面,可以清楚直观地了解到该车辆与车辆所处的道路环境,以及车辆的行驶状态。本申请实施例结合车辆当前所处目标道路的导航环境与车辆当前所处的行驶场景,共同调整车辆导航界面中地图的图幅与视角,呈现地图的变化,可以提升车辆所处场景的可感知程度,驾驶员可以基于呈现的车辆导航界面进行驾驶决策,这样可以提升车辆行驶过程中的交通安全。
本申请实施例提供的车辆导航方法,还可以应用于如图6所示的车辆导航***。该车辆导航***中包括实体车辆601、定位设备602、感知设备603以及车载终端604,定位设备602、感知设备603以及车载终端604搭载于车辆601中。
定位设备602可以用于获取实体车辆601(即自车)在世界坐标系下的位置数据(即实体车辆601的位置数据),其中,世界坐标系是指***的绝对坐标系,比如定位设备可以通过GPS技术获得实体车辆601的位置数据。定位设备601可以将实体车辆601在世界坐标系下的位置数据发送至车载终端604,车载终端604就可以实时获取实体车辆的当前位置。本申请实施例提及的定位设备可以是RTK(Real Time Kinematic,载波相位差分技术)定位设备,RTK定位设备可以实时地提供车辆601高精度(例如厘米级)的定位数据(即实体车辆601的位置数据)。
感知设备603可以用于对实体车辆601所处环境进行感知,得到环境感知数据,感知对象可以是目标道路上的其它车辆或障碍物。例如,环境感知数据可以包括实体车辆601所处目标道路上的其它车辆(如避让场景下的超车车辆、前车车辆、变道场景下的后向来车车辆等等)在实体车辆601的车辆坐标系下的位置数据(即其它车辆相对于实体车辆601的坐标数据),环境感知数据还包括包括实体实体车辆601在不同场景下所需知晓的数据,例如变道场景下预测的落车点在车道上的位置,接管场景下自动驾驶退出点在车道上的位置,等等。车辆坐标系是指以实体车辆601的车辆中心为坐标原点所建立的坐标系。感知设备603可以将环境感知数据发送至车载终端604。感知设备603包括视觉感知设备、雷达感知设备。感知设备603对实体车辆601所处环境进行感知的感知范围是由感知设备集成的传感器所决定的,一般情况下,感知设备可以包括但不限于以下至少一种传感器:视觉传感器(例如相机)、长距雷达以及短距雷达,长距雷达支持探测的距离大于短距雷达支持探测的距离。
车载终端604融合了卫星定位技术、里程定位技术及汽车黑匣技术,能用于对车辆进行行车安全管理、运营管理、服务质量管理、智能集中调度管理、电子站牌控制管理等的终端设备,车载终端604可以包括显示屏,例如中控屏、仪表屏、AR-HUD(AugmentedReality Head Up Display,增强显示抬头显示器)显示屏等。车载终端604在接收到实体车辆601的位置数据和环境感知数据后,可以将感知对象在车辆坐标系下的位置数据,转换为感知对象在世界坐标系下的位置数据,即将感知对象的相对位置数据转换为感知对象的位置数据,然后,车载终端604可以根据感知对象的位置数据在显示屏中显示的导航界面中显示代表该感知对象的标记。
以自动驾驶场景为例,本申请实施例提供的车辆导航方法涉及自动驾驶域与座舱域之间的跨域通信;其中,自动驾驶域是指车辆中用于控制自动驾驶的软硬件集合,例如上述提及的定位设备602以及感知设备603均属于自动驾驶域;座舱域是指实体车辆中用于控制座舱内与车辆关联对象进行交互的中控屏、仪表屏、操作按钮等软硬件集合,例如上述提及的车载终端604属于座舱域。座舱域与自动驾驶域是两个相对独立的处理***,两个***之间基于车载以太网通过TCP(Transmission Control Protocol,传输控制协议)、UDP(UserDatagram Protocol,用户数据报协议)和SOME/IP(ScalableService-OrientedMiddleware overIP,一种数据传输协议)等数据传输协议进行数据跨域传输。其中,车载以太网可实现比较高的数据传输速率(例如,1000Mbit/s等),同时还满足汽车行业要求的高可靠性、低电磁辐射、低功耗、低延迟等方面的要求。
如图7所示,为一个实施例中自动驾驶***的数据处理流程示意图。参照图7,自动驾驶域采集到定位数据与环境感知数据后,打包数据并通过跨域通信的方式将打包数据传输至座舱域。座舱域收到打包数据后,结合高精地图信息对其中的定位数据进行纠偏操作,得到实体车辆纠偏后的定位位置,随后基于定位位置将感知数据所感知的其它感知对象融入到高精地图中,最后将所有融合好的信息以高精地图的形式呈现在座舱域的显示屏上(中控屏、仪表屏、AR-HUD等显示设备)。
显示屏上所显示的车辆导航界面中的地图,可以是标清地图,也可以是高精地图。地图数据从早期的标清数据发展到现在的高精数据,地图数据精度从原来的5米~10米提升至现在的50cm左右。导航底图的图面效果也从原来的道路级(或路径级)渲染进化为现在的车道级渲染。图面效果从早期的平面视角扩展为现在的2.5D视角,大大扩展了相同显示比例下的视野范围,展示了更多的超视距信息。
标清地图通常用于辅助驾驶员进行车辆导航,其坐标精度在10米左右。而在自动驾驶领域,自动驾驶车辆需要精确知晓实体车辆位置、实体车辆与马路牙子、旁边的车道距离通常仅有几十厘米左右,因此高精度地图的精度要求都在1米以内,而且横向的相对精度(比如车道和车道,车道和车道线的相对位置精度)往往还要更高。此外,在一些情况下,高精地图还可以呈现准确的道路形状,并包括 每个车道的坡度、曲率、航向、高程,侧倾的数据;车道线的种类、颜色;每条车道的限速要求、推荐速度;隔离带的宽度、材质;道路上的箭头、文字的内容、所在位置;红绿灯、人行横道等交通参与物的地理坐标,物理尺寸以及他们的特质特性等等。
如图8所示,为一个实施例中标清地图与高精地图渲染效果对比示意图。参照图8,图面效果从标清地图升级到高精地图后发生了巨大的变化,包括:比例尺大小(图幅范围)变化、垂直视角切换为2.5D视角、引导效果精细化(路径级升级为车道级),这些变化要根据实际应用场景调整才能发挥高精地图渲染的最大价值。
如图9所示,为一个实施例中自动驾驶车辆的驾驶状态的跳转逻辑示意图。参照图9,自动驾驶***包含了多种驾驶状态(功能状态)的切换,功能升级指的是从全手动驾驶状态逐步升级到高阶的自动驾驶状态。手动驾驶状态可直接升级到ACC、LCC和NOA,也可以先变更为ACC状态开启、再到LCC状态开启、最后到NOA状态,逐级开启。功能降级与功能升级相反,表示从高阶的自动驾驶逐步降级到全手动驾驶的过程。本申请实施例所提及的行驶场景,在自动驾驶场景下,可以特指自动驾驶***在NOA状态下做的自动变道场景、自动避让场景、提示接管场景、自动跟车场景,等等。
在一个实施例中,如图10所示,提供了一种车辆导航方法,以该方法应用于图1中的终端102或图6中的车载终端604为例进行说明,包括以下步骤1002至步骤1006:
步骤1002,显示用于对实体车辆进行导航的车辆导航界面,车辆导航界面包括地图。
在实体车辆行驶过程中,终端可以显示车辆导航界面。车辆导航界面是在实体车辆行驶过程中,为实体车辆进行车辆导航的界面,车辆导航界面中可以包括地图,地图描述了实体车辆所处实际地理位置处的实际道路环境,包括实体车辆所处目标车道的道路、车道以及车道中的指示标记物等。地图可以是标清地图,也可以高精地图。例如,在自动驾驶场景下,地图是高精地图,地图是对道路环境进行三维建模得到的虚拟道路环境,在普通车辆导航场景下,地图是标清地图,对道路环境进行二维建模得到的虚拟道路环境,可仅包括道路数据,而不包括空间上的高度数据。
步骤1004,在地图中的目标道路上显示虚拟车辆,虚拟车辆与实体车辆对应。
在实体车辆行驶过程中,终端显示的车辆导航界面中还包括显示于目标道路上的虚拟车辆,这里的目标道路与虚拟车辆,均是实体车辆所在的实际目标道路与实体车辆的虚拟映射,该虚拟车辆是根据实体车辆的当前位置数据显示在导航界面中虚拟地图的目标道路上的,虚拟车辆在该虚拟地图中的位置,与对实体车辆进行定位得到的当前位置相对应。该实体车辆可以是通过显示的地图进行车辆导航的任一个车辆。目标道路可以包括至少一个车道,目标道路可以是多车道路。
在道路上行驶的实体车辆,可能处于不同的行驶场景,即可能存在相应的行驶场景。行驶场景是车辆行驶过程中为实现安全驾驶所执行的一系列行驶行为的场景。行驶场景包括至少一种目标行驶场景,如变道场景、避让场景、接管场景、机动点场景,等等。关于这些目标行驶场景的具体说明,可以参考前面的相关说明。除了上述行驶场景以外,目标行驶场景还可以包括其它场景,本申请对此不作限制。可以理解的是,不同行驶场景下,用户需要重点观察的道路范围可以存在差异。此外,除了包括上述多种目标行驶场景外,行驶场景还可以包括顺行场景,顺行场景指的是前方一路直行,无变道、掉头、转向等操作的场景,顺行场景下,所呈现的地图的图幅与视角可以是预设值,无需随实体车辆当前位置所处目标道路而变化。在自动驾驶场景中,目标行驶场景可以包括自动驾驶***在NOA状态下的自动变道场景、自动避让场景、提示接管场景、自动跟车场景,等等。
终端可以通过一些方式确定实体车辆在当前位置是否处于某一目标行驶场景。例如,在普通车辆导航场景下,终端可以根据车辆的位置数据的变化确定车辆的行驶场景,例如,根据实体车辆的位置 数据与当前位置所在的道路数据判定实体车辆是否发生了变道行为,即是否处于变道场景;又例如,根据实体车辆的位置数据与当前位置所在的道路数据判定实体车辆是否在一段时间范围内直行,若是,则处于顺行场景;又例如,根据实体车辆的位置数据与当前位置所在的道路数据判定实体车辆是否行驶至接管点,即处于提示接管场景,等等。也即,普通车辆导航场景下,行驶场景可基于实体车辆的位置数据与根据电子地图所获得的道路数据综合判定。例如,在自动驾驶场景场景下,实体车辆的行驶行为是由自动驾驶域决策的,座舱域的终端可以通过跨域通信,从自动驾驶域获得实体车辆当前的行驶行为信息(或称行驶指令),从而获得实体车辆当前的行驶场景,比如,当自动驾驶域给出“变道指令”时,座舱域的终端就可以接收到该指令并确定实体车辆当前处于“变道场景”,当自动驾驶域给出“紧急避让指令”时,座舱域的终端就可以接收到该指令并确定实体车辆当前处于“避让场景”,等等。在获得实体车辆当前的行驶场景后,终端可以从自动驾驶域获取该行驶场景下为更新地图所需的数据,例如,变道场景下的车辆的转向信息,避让场景下的障碍物相对于车辆的位置信息,接管场景下自动驾驶退出点的位置信息,等等。
步骤1006,当实体车辆行驶至当前位置、且处于目标行驶场景时,确定目标行驶场景以及当前位置所对应的道路范围数据;更新用于显示地图的图幅和视角至目标图幅和目标视角,目标图幅和目标视角与道路范围数据相适配。
其中,当前位置,是实体车辆的定位位置,地图中所显示的虚拟车辆的车辆位置,是根据实体车辆的定位位置显示的。可以理解,实体车辆行驶过程中,随着时间的推移,实体车辆的当前位置是时刻变化的,例如,当前位置的刷新频次例如可以是10次/秒。目标行驶场景是实体车辆在当前位置所处的行驶场景,目标行驶场景可以是上述变道场景、避让场景、接管场景、机动点场景等多个行驶场景中的任意一种。
具体地,在实体车辆的行驶过程中,终端实时获取实体车辆的当前位置,并按前面所提及到的方式,判定实体车辆是否处于上述的某一种目标行驶场景。若是,则终端根据当前位置、当前所处的某一种目标行驶场景,确定目标行驶场景以及当前位置所对应的道路范围数据,根据该道路范围数据更新显示地图所需的图幅和视角至目标图幅和目标视角。根据道路范围数据所确定的图幅与视角更新地图,使得更新后的地图中的道路范围,是与实体车辆处于该目标行驶场景时,用户在该当前位置处所需要关注到的道路范围相适配的。即道路范围数据是用于更新地图所显示的道路范围所需的数据,包括道路横向范围数据与道路纵向范围数据。
关于图幅与视角的具体说明,可以参考前面的相关说明。基于前面的描述,不同图幅与视角所显示的地图的地图范围是不一样的,自然,所显示的道路范围也是不一样的,例如,图幅越小,俯仰角越小,所显示的道路或车道越宽、前方视野越少,图幅越大,俯仰角越大,所显示的道路或车道越窄,前方视野越多。本申请实施例中,更新后的地图中所显示的道路范围,和车道所在目标道路本身的道路属性、实体车辆在目标道路上的当前位置、实体车辆当前所处的行驶场景均相关,也就是说,这些因素共同决策出用于更新地图的目标图幅与目标视角。
目标行驶场景以及当前位置所对应的道路范围数据,是事先根据实体车辆在目标行驶场景下的行驶行为所需关注的道路范围确定的。也即,不同的目标行驶场景对应了不同的所需关注的道路范围。例如,变道场景下,需要观察变道至的车道与该车道后向来车的情况,那么变道场景下所需关注的道路范围主要是实体车辆所处当前位置附近的范围。又例如,避让场景下,需要观察障碍物以及障碍物所在车道的情况,那么避让场景下所需关注的道路范围主要是当前位置以及该障碍物所在位置形成的范围。
本实施例中,之所以称之为“适配”,是因为更新后的地图所具有的目标图幅和目标视角,是综合实体车辆当前位置所处目标道路的实际路况与实体车辆当前位置所处行驶场景确定的,使得,该更新后的地图中所显示的道路范围,可以适配实体车辆所处位置在所处行驶场景下所需关注的道路范围,可以提升地图的变化的可感知程度,大大提升导航地图品质,加快阅图速度,提升导航体验。此外,地图所具有的目标视角,在目标图幅较小的情况下可以扩大地图的可视范围,提升导航效率。
在一个实施例中,当实体车辆行驶至当前位置、且处于目标行驶场景时,确定目标行驶场景以及当前位置所对应的道路范围数据,包括:当实体车辆行驶至当前位置、且处于目标行驶场景时,确定目标行驶场景以及当前位置的道路横向范围数据和道路纵向范围数据。
在一个实施例中,更新用于显示地图的图幅和视角至目标图幅和目标视角,目标图幅和目标视角与道路范围数据相适配,包括:根据实体车辆处于目标行驶场景时在当前位置处的道路横向范围数据,与实体车辆处于目标行驶场景时在当前位置处的道路纵向范围数据,确定更新地图所需的目标图幅与目标视角;将地图显示为具有目标图幅和目标视角的地图。
实体车辆处于目标行驶场景时在当前位置处的道路横向范围数据,用于确定更新地图所需的目标图幅,所需显示的道路横向范围越宽,所需的目标图幅越大;该道路横向范围与实体车辆处于目标行驶场景时在当前位置处的道路纵向范围,共同用于确定更新地图所需的目标视角,在道路横向范围确定的情况下,所需显示的道路纵向范围越长,所需的目标视角越大。实际应用中,道路横向范围能够体现车辆两侧的交通状况,道路纵向范围能够体现实体车辆前后方的交通状况。
道路横向范围可以采用实体车辆处于目标行驶场景时在当前位置处,用户所需观察到的道路横向距离量化表示,道路横向距离可以是整个目标道路的道路横向宽度,可以是实体车辆所在车道的车道横向宽度,还可以是实体车辆所在车道、实体车辆所在车道的相邻车道或邻近车道共同形成的车道横向宽度,具体需要视实体车辆所处的目标行驶场景而定。道路纵向范围可以采用实体车辆处于目标行驶场景时在当前位置处,用户所需观察到的道路纵向距离来量化表示,道路纵向距离可以是车辆到前方障碍物的最远距离,可以是实体车辆到预估落车点的距离,还可以是实体车辆到自动驾驶退出点的距离,具体需要视实体车辆所处的目标行驶场景而定。那么可以理解,不同目标行驶场景所定义的道路横向距离与道路纵向距离存在差异,也即,实体车辆处于不同行驶场景时在同一位置处的道路范围可能存在差异,实体车辆处于相同同行驶场景时在不同一位置处的道路范围也可能存在差异。
具体地,终端确定实体车辆在当前位置处于某一目标行驶场景时,则确定实体车辆处于目标行驶场景时在当前位置处的道路横向范围数据与实体车辆处于目标行驶场景时在当前位置处的道路纵向范围数据,从而,根据该道路横向范围数据与道路纵向范围数据,确定更新地图所需的目标图幅与目标视角,随后,终端获取当前位置的地图数据,将该地图数据按该目标图幅与目标视角进行渲染显示,得到实体车辆处于目标行驶场景时在当前位置所需要显示的地图。
在一个实施例中,根据实体车辆处于目标行驶场景时在当前位置处的道路横向范围数据,与实体车辆处于目标行驶场景时在当前位置处的道路纵向范围数据,确定更新地图所需的目标图幅与目标视角,包括:根据实体车辆处于目标行驶场景时在当前位置处的道路横向范围数据,确定更新地图所需的图幅;根据所需的图幅,与实体车辆处于目标行驶场景时在当前位置处的道路纵向范围数据,确定更新地图所需的俯仰角;当俯仰角大于或等于预设阈值时,增大所需的图幅,返回根据所需的图幅,与实体车辆处于目标行驶场景时在当前位置处的道路纵向范围数据,确定更新地图所需的俯仰角的步骤继续执行,直至俯仰角小于预设阈值时,得到更新地图所需的目标图幅与目标视角。
具体地,终端可以根据实体车辆所处目标道路的道路属性、实体车辆当前位置、实体车辆所处的 目标行驶场景,确定实体车辆处于目标行驶场景时在当前位置处的道路横向距离,根据该道路横向距离,查询如表1所示的映射表,确定更新地图所需的图幅。随后,终端根据实体车辆所处目标道路的道路属性、实体车辆当前位置、实体车辆所处的目标行驶场景,确定实体车辆处于目标行驶场景时在当前位置处的道路纵向距离,根据前面所确定的所需的图幅与该道路纵向距离,计算俯仰角,在俯仰角小于预设阈值时,将前面所确定的所需的图幅与该俯仰角,作为更新地图所需的目标图幅与目标视角,在俯仰角大于或等于预设阈值时,按表1所示的图幅列表,增大一级图幅后,重新根据该增大的图幅与该道路纵向距离,计算俯仰角,如此迭代,直至俯仰角小于预设阈值。关于俯仰角的预设阈值,可以根据实际应用需求进行设置。
也即是,确定目标图幅与目标视角的策略是:
1、根据实体车辆处于目标行驶场景时在当前位置处的道路横向范围数据,确定更新地图所需的图幅;
2、根据实体车辆处于目标行驶场景时在当前位置处的道路纵向范围数据,确定俯仰角;
3、当俯仰角大于或等于预设阈值时则调整增大一级图幅(缩小地图,扩大地图范围)后,再重新按上述的2、3计算俯仰角,直到俯仰角小于预设阈值。
举例来说,在实际应用中,不论是哪种行驶场景,道路横向范围可能需要关注至少路面所在宽度方向左右5米的信息,那么道路横向范围约为10米,通过表1,可以确定,显示地图所需要的最小图幅是10米左右,对应的比例尺级别是22级,在图幅为10米的情况下,当俯仰角超过75°之后,视角近乎于平行路面,且图面上有3D建筑物的显示,地图渲染效果不利于用户查看,因此俯仰角的最大值为75°,那么该预设阈值可以设定为75°,当然预设阈值也可以是60°、40°、甚至是20°,可以根据实际应用情况进行设置,对此不作限制。
假设道路横向距离为horizontalDist,道路纵向距离是verticalDist。表1中查询最接近horizontalDist的图幅scale,即:
scale=Find{Min{Scale(i)-horizontalDist}},0<i<23;
也就是,从比例尺级别i=1开始,依次计算Scale(i)-horizontalDist,取计算结果最小的那个i,作为初始比例尺级别;
基于初始比例尺级别i对应的初始图幅scale计算俯仰角,俯仰角计算公式:
skewAngle=arctan(verticalDist/scale)。
如图11所示,为不同图幅下计算俯仰角的示意图。比如,根据horizontalDist确定当前图幅scale设置为20米,verticalDist是100米,那么skewAngle=arctan(100/20)=78.69°,而当前俯仰角超过了预设阈值,则需要扩大一级图幅(调整为50米图幅),重新计算,那么skewAngle=arctan(100/50)=63.435°,可符合要求。这样,终端便可将获取的当前位置的地图数据,按图幅50米、俯仰角63.435°进行渲染显示,呈现实体车辆处于目标行驶场景时在当前位置处的更新后的地图。
如图12所示,为自动驾驶场景下自动调整图面效果的流程示意图。参照图12,座舱域通过跨域通信从自动驾驶域获取自车当前位置和当前实体车辆所处的目标行驶场景,计算当前所处目标行驶场景的道路横向范围数据以确定图幅,计算当前场景的道路纵向范围数据确定俯仰角,动态调整图幅和倾角直到俯仰角符合视觉要求,最后将调整好的图幅、俯仰角应用于高精地图渲染。需要说明的是,通常自车车辆或自车车辆所在的车道居中显示在车辆导航界面中,保持固定不变,而在一些目标行驶场景下,需要将其它车道或其他车辆居中显示在车辆导航界面中,那么此种情况下,渲染高精地图的参数还可以包括地图的偏移量(或称中心点,即车辆导航界面的中心点是地图上的哪个位置点)。
下面以自动驾驶场景、地图为高精地图为例,介绍一些具体的行驶场景,行驶场景包括顺行场景与若干目标行驶场景,目标行驶场景包括变道场景、避让场景、接管场景、机动点场景。
在一个实施例中,目标道路包括多个车道,虚拟车辆显示于多个车道中的第一车道,方法还包括:当在所行驶至的当前位置,实体车辆处于顺行场景时,更新用于显示地图的图幅和视角至顺行场景下的设定图幅和设定视角;将虚拟车辆所在的第一车道,居中显示在顺行场景下的地图中。
顺行场景,指的是前方一路直行,无变道、掉头、转向等操作的场景,顺行场景下,所呈现的地图的图幅与视角是预先设定的值,无需随实体车辆当前位置所处目标道路而变化。终端可以根据顺行场景的行驶特点,基于当前实体车辆的当前位置以及道路数据判定实体车辆的行驶场景,分析行驶场景是否为顺行场景。如图13所示,图13的(a)部分,是顺行场景示意图,外框表示整个车辆导航界面,三个矩形框表示三条车道,圆圈表示自车位置。图13的(b)部分,是顺行场景渲染效果图。在一个实施例中,顺行场景下,将车道居中显示在车辆导航界面中,将实体车辆所在的车道也居中显示在车辆导航界面中。可选地,在车辆所在车道的车道范围的下方区域,显示自车车辆,例如,在车辆所在车道的车道范围的下方2/3处,显示自车车辆,这样,整个地图中呈现的前方道路范围更多。
在一个实施例中,当实体车辆行驶至当前位置、且处于目标行驶场景时,确定目标行驶场景以及当前位置所对应的道路范围数据,包括:当实体车辆行驶至当前位置、且处于变道场景时,根据当前位置的道路数据计算变道场景所需的道路横向距离,以及根据当前位置的道路数据计算从当前位置进行变道所需的纵向延伸变道最远距离。
变道场景,指的是自动驾驶车辆行驶过程中主动更换行驶车道,变道场景下,重点观察需要变道至的车道与该车道后向来车的情况。变道场景下的道路横向范围数据,可以是实体车辆处于变道场景时在当前位置处的道路横向距离。该道路横向距离可以是目标道路的道路宽度。在目标道路包括车道较多的情况下,该道路横向距离可以是实体车辆所在车道、该车道的左右车道形成的车道横向宽度,该道路横向距离还可以是实体车辆所在车道、该车道的左右车道、左左车道、右右车道形成的车道横向宽度,该道路横向距离还可以是实体车辆所在车道宽度的4倍所形成的车道横向宽度,本申请对此不作特别限制。变道场景下的道路纵向范围,可以是从当前位置纵向延伸变道最远距离所形成的道路范围,故道路纵向范围数据可以是该变道最远距离,其可以是设定的一个数值,也可以是根据当前位置所在道路数据计算出的一个数值,计算方式后文将提到。变道场景下的道路纵向范围,还可以是从当前位置纵向延伸至预测的变道落车点所形成的道路范围,预测变道落车点的方式后文将提到。
在一个实施例中,更新用于显示地图的图幅和视角至目标图幅和目标视角,目标图幅和目标视角与道路范围数据相适配,包括:根据道路横向距离,确定更新地图所需的目标图幅,以及根据所需的目标图幅与纵向延伸变道最远距离,确定更新地图所需的目标俯仰角;更新用于显示地图的图幅和视角至目标图幅和目标俯仰角。
在一个实施例中,根据道路横向距离,确定更新地图所需的目标图幅,以及根据所需的目标图幅与纵向延伸变道最远距离,确定更新地图所需的目标俯仰角,包括:根据道路横向距离,确定更新地图所需的图幅;获取第一车道的最高限速;根据最高限速与变道时长,计算纵向延伸变道最远距离;根据所需的图幅与纵向延伸变道最远距离,计算俯仰角;当俯仰角大于或等于预设阈值时,增大所需的图幅,返回根据所需的图幅与纵向延伸变道最远距离,计算俯仰角的步骤继续执行,直至俯仰角小于预设阈值时,得到更新地图所需的目标图幅与目标俯仰角。
在一个可选的实施例中,变道场景下的道路横向范围,由实体车辆所在车道(可记为第一车道)、目标道路中该第一车道的左右车道以及左左车道、右右车道所构成,确保各个车道的信息能够完整的 呈现到地图中。如图14所示,为变道场景下的道路横向范围,由各个车道宽度构成范围Rang,Range=dLL+dL+d+dR+dRR。
对于没有左左或右右车道的位置,可以将图幅范围减少dLL或dRR,即:Range=dL+d+dR+dRR或者Range=dLL+dL+d+dR。
对于没有左车道或右车道的位置,可以以第一车道的宽度为基准向左或向右计算一个车道宽度,即:
无左车道时,Range=d+d+dR+dRR;
无右车道时,Range=dLL+dL+d+d。
随后,终端可以通过查询表1可以确定初始比例尺级别,也即初始图幅。
俯仰角决定了当前比例尺级别下地图能够呈现的道路纵向范围。在一些可选的实施例中,道路纵向范围与当前车道的最高限速相关。例如,第一车道的最高限速是V公里每小时,即(V/3.6)米每秒,变道时长为3秒,那么前向展示距离就是3*V/3.6,举个例子,当前道路为三车道为例,三条车道的宽度相等,每条车道的宽度是3.5米,那么变道场景下的道路横向范围数据是3.5x4=14米。第一车道的最高限速是100km/h,道路纵向范围数据是从自车位置纵向延伸3*100/3.6,也即83.4米。结合表1可知初始图幅是21.5级对应的15米,在21.5级比例尺的情况下,计算俯仰角为80°,假设预设阈值为75°,则需要扩大图幅为20米,计算俯仰角为76.5°,那么需要再次扩大图幅为30米,计算俯仰角为70.2°,符合要求,可以确定当前位置需要显示的更新后的地图的目标图幅为30米,目标俯仰角为70.2°。
本实施例中,通过变道场景下在当前位置所需关注的车道横向距离与车道纵向距离,确定更新地图所需要的目标图幅与目标俯仰角,能够帮助实体车辆内乘员感受到当前处于变道场景,所显示的地图能聚焦变道场景下的当前位置的车道范围,提升场景可感知程度,提升乘员对自动驾驶***的信任度。
在一个实施例中,目标道路包括多个车道,虚拟车辆显示于多个车道中的第一车道,方法还包括:在实体车辆在当前位置所处的目标行驶场景,为从第一车道变道至第二车道的变道场景时,将第二车道以及实体车辆在第二车道的预估落车点,居中显示在更新后的地图中。
例如,实体车辆从第一车道(也称当前车道)向左变道至第二车道时,地图中左侧的第二车道居中显示,实体车辆从第一车道向右变道至第二车道时,地图中右侧的第二车道居中显示。可选地,在车辆的行驶场景从顺行场景切换到变道场景时,地图中虚拟车辆的位置可由处于地图中车道下方的位置,变化为处于地图中车道上方或中间的位置。终端可确定地图的偏移量,按该偏移量显示地图,以达到展示第二车道后向道路路况的目的。如图15所示,图15的(a)部分与(b)部分,分别是向左变道和向右变道的变道场景示意图。外框表示整个车辆导航界面,三个矩形框表示三条车道,圆圈表示自车位置,车道内的矩形框表示虚拟车辆的预估落车点位置,可以显示第二车道以及第二车道上的预估落车点居中显示在车辆导航界面中。
对于行驶于第一车道上的实体车辆,终端可以从自动驾驶域获得关于实体车辆的当前位置以及车辆转向信息,根据车辆转向信息与车辆当前位置所在目标道路的拓扑结构,确定实体车辆要变道至的第二车道。
在一个实施例中,方法还包括:获取目标道路在当前位置的道路拓扑结构;根据变道场景的变道方向与道路拓扑结构,确定第二车道;根据启动变道时实体车辆的行驶速度与变道时长,计算预估变道距离;确定启动变道时实体车辆到第二车道的中心线的垂直距离;根据预估变道距离与垂直距离, 确定实体车辆在第二车道的预估落车点。
具体地,终端获取实体车辆当前位置以及确定当前位置所在的第一车道,根据第一车道所在目标道路的道路拓扑结构查询第一车道的前向车道、后向车道、左侧车道和右侧车道,结合车辆的转向信息(向左变道或向右变道),从中确定实体车辆将要变道至的第二车道。在自动驾驶场景下,终端可以从自动驾驶域通过跨域通信获得实体车辆在当前位置的转向信息。
如图16所示,为一个实施例中变道场景下搜索第二车道的示意图。参照图16,右转时,终端接收到自动驾驶***的向右变道信息,从第一车道向右拓扑获取右侧的第二车道信息,随后,以右侧第二车道为基准分别向前搜索和向后搜索,确定整个第二车道的边界线和车道中心线。左转时,是左变道场景,终端接收到自动驾驶***的向左变道信息,从第一车道向左拓扑获取左侧的第二车道信息,随后,以左侧第二车道为基准分别向前搜索和向后搜索,确定整条第二车道的边界线和车道中心线。
如图17所示,为一个实施例中计算车辆的预估落车点位置的示意图。参照图17,A表示自车车辆当前位置,CD是第二车道的车道中心线。从A点向CD直线作垂线,垂足为B,B点并不是真正的落车点位置,计算落车点位置需要考虑变道的时间和自车车辆的行驶速度。具体计算方法如下:
假设变道时自车车辆的行驶速度为v米/秒,变道时长是3秒,转向角度是角B’AB,即θ,那么在第二车道上B’的位置是B点位置加上变道时所走过的距离BB’。
BB’=AB’*sin(∠B’AB)=v*3*sin(θ)。
其中,根据自车车辆的坐标(当前位置)以及垂直距离AB的长度,可以确定垂足B点的位置,根据车辆上的感知设备所监测到的车辆状态数据,可以得到车辆的转向角度,根据启动变道时车辆的行驶速度v与变道时长,计算预估变道距离AB,从而根据上述公式可以计算出距离BB’,根据垂足B的位置与该距离BB’,可以得到预估落车点的坐标,从而根据该坐标将预估落车点显示在车辆导航界面中。预估落车点可以显示在车辆导航界面的中间位置,也可以显示在偏上方的位置,保持预估落车点不变,显示自车车辆随着行驶而逐步靠近该预估落车点。
In one embodiment, when the physical vehicle has traveled to the current position and is in the target driving scene, determining the road range data corresponding to the target driving scene and the current position includes: when the physical vehicle has traveled to the current position and is in an avoidance scene of avoiding an obstacle, determining from the current position the lateral widths of the vehicle's lane and of the lanes adjacent to it, calculating from those lateral widths the lateral road distance required by the avoidance scene, and calculating the farthest distance between the current position and the obstacle. Updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle adapted to the road range data includes: determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the farthest distance between the current position and the obstacle, the target pitch angle required to update the map; and updating the frame and viewing angle used for displaying the map to the target frame and the target pitch angle.
The avoidance scene refers to a scene in which an obstacle is encountered while driving, for instance a neighboring vehicle overtaking, the vehicle ahead slowing down, or the vehicle ahead changing lanes, leaving the current lane in poor condition, so that the dangerous situation must be avoided by slowing down, changing lanes or similar actions. In the avoidance scene, the key things to observe are the obstacle and the lane it is in, which is usually a lane adjacent to the ego vehicle's lane.
As shown in FIG. 18, part (a) of FIG. 18 is a schematic diagram of the avoidance scene in one embodiment. Referring to part (a) of FIG. 18, the outer frame represents the whole vehicle navigation interface, the three rectangular boxes represent three lanes, the circle represents the ego-vehicle position, and the rectangular box represents the obstacle's position. In one embodiment, in the avoidance scene, the vehicle may be displayed in the map below its lane, to better present obstacles appearing ahead or to the sides. In the avoidance scene, the terminal determines the target frame and target viewing angle from the positions of the obstacle and the ego vehicle, so that the displayed map can attend to the details of the avoidance scene.
In the avoidance scene, attention focuses on information about the traffic participants in the ego vehicle's lane and its left and right adjacent lanes. The lateral road range may thus be the lateral road distance at the current position when the physical vehicle is in the avoidance scene; this lateral road distance may be the road width of the target road; when the target road contains many lanes, it may be the lateral width formed by the vehicle's lane and its left and right adjacent lanes, or the lateral width of the smallest rectangular region containing the physical vehicle and the obstacle; this application places no particular limit on this. The longitudinal road range in the avoidance scene may be the longitudinal lane range from the current position to the obstacle.
In one embodiment, determining the target frame from the lateral road distance, and determining the target pitch angle from the required target frame and the farthest distance between the current position and the obstacle, includes: determining, from the lateral lane distance, the frame required to update the map; calculating a pitch angle from the required frame and the farthest distance between the current position and the obstacle; and when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the farthest distance, until the pitch angle is smaller than the preset threshold, thereby obtaining the target frame and target pitch angle required to update the map.
Specifically, the terminal determines the lanes of the target road adjacent to the vehicle's lane; determines, from the lateral lane distance formed by the vehicle's lane and the adjacent lanes, the frame required to update the map; determines the farthest distance between the physical vehicle and the obstacle; calculates a pitch angle from the required frame and the farthest distance; and when the pitch angle is greater than or equal to the preset threshold, enlarges the required frame and returns to the step of calculating the pitch angle from the required frame and the farthest distance, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
As shown in part (b) of FIG. 18, a schematic diagram of the avoidance scene in one embodiment, the avoidance scene focuses on information about the traffic participants in the ego lane and its left and right adjacent lanes. In the figure, the rectangular block represents an obstacle cutting in from the neighboring lane, the arrow represents the obstacle's direction of travel, and ☆ represents the ego vehicle's current position. At the current position shown in part (b) of FIG. 18, the lateral lane distance in the avoidance scene is Range = dL + d + dR; the terminal can then determine the initial scale level, i.e. the initial frame, by querying Table 1 with Range.
To present clearly the lane range between the ego vehicle and the obstacle, the required pitch angle may be determined as follows. The terminal may calculate the farthest distance between the ego vehicle and the obstacle. As shown in FIG. 19, a schematic diagram of the positions of the ego vehicle and obstacles in one embodiment: establish an O-xy coordinate system with the ego vehicle's center as origin O, the ego vehicle's rightward direction as the x-axis and its forward direction as the y-axis. For each obstacle (perception target) perceived by the ego vehicle, establish a coordinate system with the obstacle's own center as origin, its own rightward direction as the x-axis and its own forward direction as the y-axis. Referring to FIG. 19, O'-x'y' and O''-x''y'' are the coordinate systems established on the two perception targets. The coordinates of O' and O'' in the O-xy system are (Ox', Oy') and (Ox'', Oy'') respectively.
Take the O'-x'y' system as an example. Assume the obstacle's length and width are h meters and w meters respectively; then in O'-x'y' the coordinates of the corners a, b, c, d are (w/2, h/2), (-w/2, h/2), (-w/2, -h/2), (w/2, -h/2). O'-xy is the O-xy system translated to the obstacle's coordinate origin; the O'-xy axes differ from the O'-x'y' axes by a rotation of α°. Assume the farthest distance from the ego vehicle to this obstacle is the distance from the ego position to corner a; a's coordinates in O'-x'y' are (x', y') and in O'-xy are (x, y). Then: x = x'*cos(α) - y'*sin(α); y = y'*cos(α) + x'*sin(α).
Translating a's coordinates in O'-xy into the O-xy system gives a's position in O-xy as (Ox, Oy), where:
Ox = Ox' + x'*cos(α) - y'*sin(α);
Oy = Oy' + y'*cos(α) + x'*sin(α).
Once a's coordinates have been obtained by the above calculation, the distance from the ego vehicle's current position to point a can be computed.
This distance may serve as the longitudinal road distance at the current position when the physical vehicle is in the avoidance scene. For example, if the three lanes in FIG. 18 are of equal width, each 3.5 meters, then, with Range = dL + d + dR, the lateral road range in the avoidance scene is 3.5 x 3 = 10.5 meters. The longitudinal road range data is the farthest distance from the ego position to the obstacle; assume the computed farthest distance is 10 meters and the preset pitch-angle threshold is 75°. From Table 1 the initial frame is 15 meters at level 21.5; at the 21.5-level scale the computed pitch angle is 33.8°, which meets the requirement, so the target frame of the map to update at the current position can be determined to be 15 meters and the target pitch angle to be 33.8°.
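For illustration, the corner-distance computation above can be sketched as follows; the obstacle is described by its center (Ox', Oy') in the ego O-xy frame, its width w, its length h, and the rotation angle α between the two frames, and the function name is illustrative.

    import math

    def farthest_corner_distance(ox, oy, w, h, alpha_rad):
        # Corners a, b, c, d of the obstacle in its own O'-x'y' frame.
        corners = [(w / 2, h / 2), (-w / 2, h / 2),
                   (-w / 2, -h / 2), (w / 2, -h / 2)]
        best = 0.0
        for x1, y1 in corners:
            # Rotate into ego-parallel axes, then translate by the obstacle
            # center: the Ox and Oy formulas above.
            x = ox + x1 * math.cos(alpha_rad) - y1 * math.sin(alpha_rad)
            y = oy + y1 * math.cos(alpha_rad) + x1 * math.sin(alpha_rad)
            best = max(best, math.hypot(x, y))
        return best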
In this embodiment, the target frame and target pitch angle required to update the map are determined from the lateral and longitudinal lane distances that need attention at the physical vehicle's current position in the avoidance scene. This helps vehicle occupants perceive that the vehicle is currently in an avoidance scene; the displayed map focuses on the ego vehicle and the obstacle in the avoidance scene, improving the perceptibility of the scene.
In one embodiment, when the physical vehicle has traveled to the current position and is in the target driving scene, determining the road range data corresponding to the target driving scene and the current position includes: when the physical vehicle has traveled to the current position and is in a takeover scene of traveling from a takeover prompt point to an autonomous-driving exit point, calculating, from the road data of the target road at the current position and the road data of the road containing the autonomous-driving exit point, the lateral lane distance at the current position in the takeover scene, and calculating the distance from the current position to the autonomous-driving exit point. Updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle adapted to the road range data includes: determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the distance from the current position to the autonomous-driving exit point, the target pitch angle required to update the map; and updating the frame and viewing angle used for displaying the map to the target frame and the target pitch angle.
The takeover scene is a scene in which the autonomous vehicle is about to leave the area supported by the autonomous driving function and is to be handed over to manual driving; in the autonomous-driving takeover scene, the key thing to observe is the position in the road where autonomous driving must be exited. The road range in the takeover scene is the road range of the target road from the current position to the autonomous-driving exit point. The physical vehicle is considered to be in the takeover scene when it reaches the takeover prompt point, which is a point passed on the way to the autonomous-driving exit point and lies at some distance from it, for example 2.5 kilometers. When the distance from the physical vehicle's current position to the autonomous-driving exit point is large, for example 2 kilometers, the target frame of the displayed map must be far larger than the lateral width of the vehicle's target road in order to present the exit point in the vehicle navigation interface; when that distance is small, for example 20 meters, the target frame of the displayed map is small, so as to present the road conditions between the physical vehicle and the exit point as clearly as possible. It can be seen that in the autonomous-driving takeover scene, as the physical vehicle moves, the target frame required for the displayed map first enlarges, until the autonomous-driving exit point can be observed, and then shrinks gradually while keeping the ego vehicle and the exit point always visible.
FIG. 20 is a schematic diagram of the autonomous-driving takeover scene in one embodiment. It can be seen that, to keep the autonomous-driving exit point always visible, the frame of the map displayed in the autonomous-driving takeover scene is generally larger than the frame in the straight-ahead scene. FIG. 21 is a rendering of the autonomous-driving takeover scene in one embodiment, in which A represents the ego vehicle's position, B represents the position of the autonomous-driving exit point, and the interval AB is the region where manual takeover is prompted.
In one embodiment, determining the target frame from the lateral road distance, and determining the target pitch angle from the required target frame and the distance from the current position to the autonomous-driving exit point, includes: determining, from the lateral lane distance, the frame required to update the map; calculating a pitch angle from the required frame and the distance from the current position to the autonomous-driving exit point; and when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the distance from the current position to the autonomous-driving exit point, until the pitch angle is smaller than the preset threshold, thereby obtaining the target frame and target pitch angle required to update the map.
Specifically, determining the target frame and target pitch angle in the takeover scene includes: determining the lateral lane distance formed by the target road and the road containing the autonomous-driving exit point, and determining from that lateral lane distance the frame required to update the map; calculating the distance from the current position to the autonomous-driving exit point; calculating a pitch angle from the required frame and the distance; and when the pitch angle is greater than or equal to the preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the distance, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
FIG. 22 is a schematic diagram of the road range in the takeover scene in one embodiment. Referring to FIG. 22, the lateral road range in the takeover scene is the multi-lane range formed by the lane containing point B with its left and right neighbors and the lane containing point A with its left and right neighbors, i.e. the range marked Range in the figure. Of course, when point A or point B has no left or right lane, a lane width may be added based on the lane containing point A, forming the lateral road range for that case. The longitudinal road range of the takeover scene is the distance between points A and B. The terminal receives, via cross-domain communication from the autonomous driving domain, the position of the autonomous-driving exit point in the current takeover scene, and displays the exit point in the map according to that position.
For example, in FIG. 22, assume the lanes are of equal width, each 3.5 meters wide; then the lateral road range, i.e. the multi-lane range, in this takeover scene is 3.5 x 4 = 14 meters. Assume the distance from the current position to the autonomous-driving exit point is 1000 meters and the preset pitch-angle threshold is 75°. From Table 1 the initial frame is 15 meters at level 21.5; at the 21.5-level scale the pitch angle computed from 15 meters and 1000 meters is far greater than 75° and does not meet the requirement, so the frame is enlarged step by step until, at a frame of 312 meters, the computed pitch angle is 72.6°, which meets the requirement. The target frame required to update the map at the physical vehicle's current position can thus be determined to be 312 meters, and the target pitch angle to be 72.6°.
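The fitting sketch given for the lane-change scene applies unchanged here; with the assumed stand-in table it reproduces the numbers of this example.

    # Takeover example: 14 m lateral range, 1000 m to the exit point.
    level, frame, pitch = fit_frame_and_pitch(14.0, 1000.0, threshold_deg=75.0)
    # With the assumed table: frame 312 m, pitch = atan(1000/312), about 72.6°.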
In this embodiment, the target frame and target pitch angle required to update the map are determined from the longitudinal lane distance that needs attention at the physical vehicle's current position in the takeover scene. This helps occupants of the physical vehicle perceive that the vehicle is currently in a takeover scene; the displayed map focuses on the autonomous-driving exit point in the takeover scene, improving the perceptibility of that scene.
In one embodiment, when the physical vehicle has traveled to the current position and is in the target driving scene, determining the road range data corresponding to the target driving scene and the current position includes: when the physical vehicle has traveled to the current position and is in a maneuver-point scene of driving in the maneuver operation area of a target maneuver point, obtaining the lateral and longitudinal road distances in the maneuver-point scene based on the intersection width of the road containing the target maneuver point, extended by a preset distance along the extension directions of the target maneuver point's intersection. Updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle adapted to the road range data includes: determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the longitudinal road distance, the target pitch angle required to update the map.
A maneuver point is a position where the physical vehicle performs a maneuver such as turning or making a U-turn while driving. In the maneuver-point scene, the key thing to observe is the road situation at the maneuver point ahead. In one embodiment, when the physical vehicle's distance to a maneuver point falls below a threshold, the physical vehicle is determined to have entered that maneuver point's maneuver area, i.e. the physical vehicle is in the maneuver-point scene. In the maneuver-point scene, the frame of the map displayed by the terminal enlarges and the pitch angle decreases, so that the traffic situation of the whole maneuver point can be presented. That is, the road range at the current position when the physical vehicle is in the maneuver-point scene is the range containing the maneuver point ahead. As shown in FIG. 23, a rendering of the maneuver-point scene in the autonomous driving scenario: the maneuver point in FIG. 23 is a crossroads, and the lateral and longitudinal distances corresponding to the road range are the intersection width, as shown by the dashed rectangle in the figure. To include more information, the road range may also extend a certain distance beyond the intersection along the extension direction of each of its approaches, so that the map in this scene presents a road range extended by that distance along each approach.
In one embodiment, determining the target frame from the lateral road distance, and determining the target pitch angle from the required target frame and the longitudinal road distance, includes: determining, from the lateral road distance, the frame required to update the map; calculating a pitch angle from the required frame and the longitudinal road distance; and when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the longitudinal road distance, until the pitch angle is smaller than the preset threshold, thereby obtaining the target frame and target pitch angle required to update the map.
Specifically, determining the target frame and target pitch angle in the maneuver-point scene includes: determining the lateral and longitudinal road distances of the target maneuver point; determining, from the lateral road distance, the frame required to update the map; calculating a pitch angle from the required frame and the longitudinal road distance; and when the pitch angle is greater than or equal to the preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the longitudinal road distance, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
For example, in the maneuver-point scene shown in FIG. 23, the crossroads is 25 meters wide and the distance from the current position to the intersection ahead is 50 meters; the road range may extend 10 meters along the extension direction of each approach. The lateral road range in this maneuver-point scene is then 35 meters and the longitudinal road range is 60 meters. From Table 1 the initial frame is 39 meters at level 20; at the 20-level scale the computed pitch angle is 56.97°, so the target frame of the map to display at the current position can be determined to be 39 meters and the target pitch angle to be 56.97°.
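Reusing the same fitting sketch, the maneuver-point example also reproduces the numbers in the text, since the first frame covering the 35-meter lateral range already satisfies the threshold.

    # Maneuver-point example: 35 m lateral range, 60 m longitudinal range.
    level, frame, pitch = fit_frame_and_pitch(35.0, 60.0, threshold_deg=75.0)
    # Frame 39 m at level 20, pitch = atan(60/39), about 56.97°.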
In this embodiment, the target frame and target pitch angle required to display the map are determined from the lateral and longitudinal lane distances that need attention at the physical vehicle's current position in the maneuver-point scene. This helps occupants of the physical vehicle perceive that the vehicle is currently in a maneuver-point scene; the displayed map focuses on the lane range at the current position in the maneuver-point scene, improving the perceptibility of the scene and the occupants' trust in the autonomous driving system.
In one embodiment, when the physical vehicle enters the autonomous driving state, the terminal first enters the autonomous driving state. While the vehicle is in the straight-ahead scene, the terminal may execute the straight-ahead strategy for adjusting the map's frame and pitch angle; when the physical vehicle is in an automatic lane-change scene at the current position, the terminal executes the lane-change strategy for adjusting the map's frame and pitch angle, returning to the straight-ahead scene after the lane change completes or is canceled; when the physical vehicle is in an automatic avoidance scene at the current position, the terminal executes the automatic-avoidance strategy, returning to the straight-ahead scene after the avoidance completes or is canceled; when the physical vehicle is about to exit autonomous driving and enters the takeover scene at the current position, the terminal executes the takeover strategy, and after the takeover completes it enters the SD navigation scene and begins executing the SD navigation frame-adjustment strategy. When states conflict, the frame does not change and the previous scene's frame-adjustment strategy is kept; for example, if an automatic-avoidance task is inserted during an automatic lane change, the adjustment strategy of the automatic lane-change scene is kept. When switching from the lane-change scene to the takeover scene, the states do not conflict, and the adjustment strategy can switch directly to that of the takeover scene.
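A sketch of this switching rule follows, with an illustrative scene enumeration; the only conflict modeled is an avoidance (or lane-change) task inserted while the other of the two is still in progress, matching the example above.

    from enum import Enum

    class Scene(Enum):
        STRAIGHT = "straight-ahead"
        LANE_CHANGE = "lane change"
        AVOIDANCE = "avoidance"
        TAKEOVER = "takeover"

    def next_scene(current, requested):
        # While a lane change or avoidance is in progress, an inserted task of
        # the other kind conflicts: keep the previous scene's frame strategy.
        busy = (Scene.LANE_CHANGE, Scene.AVOIDANCE)
        if current in busy and requested in busy and requested is not current:
            return current
        # Non-conflicting requests, e.g. lane change to takeover, switch directly.
        return requested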
The embodiments of this application provide a method for automatically adjusting the map display based on a high-definition map and the driving state in the autonomous driving scenario. Data from the high-definition map, such as lane lengths, road widths and road topology, serve as inputs to the automatic adjustment strategy and, combined with the application scenes output by the autonomous driving system (straight-ahead, lane change, yielding, avoidance, takeover and so on), the map's frame, pitch angle, the position indicated by the map center point and other parameters are adjusted jointly, achieving automatic adjustment of the map display. This method greatly improves the quality of the navigation map, speeds up map reading, improves the navigation experience, further helps vehicle occupants understand the decisions and actions of the autonomous driving system, and increases the occupants' trust in the autonomous driving system.
In this embodiment, the target frame and target viewing angle required to display the map are determined jointly from the actual road conditions of the target road at the physical vehicle's current position and the driving scene the vehicle is in, so that the road range shown in the map is adapted to the road area that needs attention at the vehicle's position in its driving scene. This improves the perceptibility of changes in the map, greatly improves the quality of the navigation map, speeds up map reading and improves the navigation experience. In addition, the target viewing angle of the updated map can enlarge the visible range of the map when the target frame is small, improving navigation efficiency.
It should be understood that, although the steps in the flowcharts involved in the above embodiments are shown in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, embodiments of this application further provide a vehicle navigation apparatus for implementing the vehicle navigation method described above. The solution to the problem provided by this apparatus is similar to the solution described in the above method, so for the specific limitations in the one or more vehicle navigation apparatus embodiments below, refer to the limitations on the vehicle navigation method above; they are not repeated here.
In one embodiment, as shown in FIG. 24, a vehicle navigation apparatus 2400 is provided, comprising an interface display module 2402 and a map display module 2404, wherein:
the interface display module 2402 is configured to display a vehicle navigation interface for navigating a physical vehicle, the vehicle navigation interface including a map;
the map display module 2404 is configured to display a virtual vehicle on a target road in the map, the virtual vehicle corresponding to the physical vehicle; when the physical vehicle has traveled to a current position and is in a target driving scene, determine road range data corresponding to the target driving scene and the current position; and update the frame and viewing angle used for displaying the map to a target frame and target viewing angle adapted to the road range data.
Each module in the above vehicle navigation apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be the terminal 102 in FIG. 1 or the in-vehicle terminal 604 in FIG. 6, and whose internal structure may be as shown in FIG. 25. The computer device comprises a processor, a memory, an input/output interface, a communication interface, a display unit and an input apparatus. The processor, memory and input/output interface are connected via a system bus, and the communication interface, display unit and input apparatus are connected to the system bus via the input/output interface. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with external terminals; the wireless mode may be implemented via WIFI, a mobile cellular network, NFC (near-field communication) or other technologies. The computer-readable instructions, when executed by the processor, implement a vehicle navigation method. The display unit of the computer device is used to form a visually visible picture and may be a display screen, a projection device or a virtual-reality imaging device; the display screen may be a liquid-crystal display or an electronic-ink display. The input apparatus of the computer device may be a touch layer covering the display screen, or a key, trackball or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse or the like. The input interface of the computer device may receive data sent by positioning devices or perception devices on the vehicle, including vehicle position data, obstacle position data, data on the obstacle's bearing relative to the ego vehicle, and so on.
Those skilled in the art will understand that the structure shown in FIG. 25 is merely a block diagram of part of the structure related to the solution of this application and does not limit the computer device to which the solution of this application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, implement the steps of the vehicle navigation method described in any one or more of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, storing computer-readable instructions which, when executed by a processor, implement the steps of the vehicle navigation method described in any one or more of the above embodiments.
In one embodiment, a computer program product is provided, comprising computer-readable instructions which, when executed by a processor, implement the steps of the vehicle navigation method described in any one or more of the above embodiments.
It should be noted that the user information (including but not limited to user device information, personal user information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in this application are all information and data authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be accomplished by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and so on. Volatile memory may include random access memory (RAM), external cache memory, and so on. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases involved in the embodiments provided in this application may include at least one of relational and non-relational databases; non-relational databases may include blockchain-based distributed databases and the like, without limitation. The processors involved in the embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum-computing-based data processing logic devices and so on, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope recorded in this specification.
The above embodiments express only several implementations of this application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the patent scope of this application. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the appended claims.

Claims (19)

  1. A vehicle navigation method, performed by a computer device, the method comprising:
    displaying a vehicle navigation interface for navigating a physical vehicle, the vehicle navigation interface comprising a map;
    displaying a virtual vehicle on a target road in the map, the virtual vehicle corresponding to the physical vehicle;
    when the physical vehicle has traveled to a current position and is in a target driving scene, determining road range data corresponding to the target driving scene and the current position; and
    updating a frame and viewing angle used for displaying the map to a target frame and a target viewing angle, the target frame and target viewing angle being adapted to the road range data.
  2. The method according to claim 1, wherein the determining, when the physical vehicle has traveled to the current position and is in the target driving scene, the road range data corresponding to the target driving scene and the current position comprises:
    when the physical vehicle has traveled to the current position and is in the target driving scene, determining lateral road range data and longitudinal road range data of the target driving scene and the current position.
  3. The method according to claim 2, wherein the updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle, the target frame and target viewing angle being adapted to the road range data, comprises:
    determining the target frame and target viewing angle required to update the map from the lateral road range data at the current position when the physical vehicle is in the target driving scene and the longitudinal road range data at the current position when the physical vehicle is in the target driving scene; and
    displaying the map as a map having the target frame and the target viewing angle.
  4. The method according to claim 3, wherein the determining the target frame and target viewing angle required to update the map from the lateral road range data at the current position when the physical vehicle is in the target driving scene and the longitudinal road range data at the current position when the physical vehicle is in the target driving scene comprises:
    determining, from the lateral road range data at the current position when the physical vehicle is in the target driving scene, the frame required to update the map;
    determining, from the required frame and the longitudinal road range data at the current position when the physical vehicle is in the target driving scene, the pitch angle required to update the map; and
    when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of determining, from the required frame and the longitudinal road range data at the current position when the physical vehicle is in the target driving scene, the pitch angle required to update the map, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target viewing angle required to update the map.
  5. The method according to claim 1, wherein the target road comprises multiple lanes and the virtual vehicle is displayed in a first lane of the multiple lanes, and the method further comprises:
    when, at the current position reached, the physical vehicle is in a straight-ahead scene, updating the frame and viewing angle used for displaying the map to a set frame and set viewing angle for the straight-ahead scene; and
    displaying the first lane in which the virtual vehicle is located centered in the map updated for the straight-ahead scene.
  6. The method according to claim 1, wherein the determining, when the physical vehicle has traveled to the current position and is in the target driving scene, the road range data corresponding to the target driving scene and the current position comprises:
    when the physical vehicle has traveled to the current position and is in a lane-change scene, calculating, from road data at the current position, a lateral road distance required by the lane-change scene, and calculating, from the road data at the current position, a farthest lane-change distance extending longitudinally from the current position for the lane change;
    and the updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle, the target frame and target viewing angle being adapted to the road range data, comprises:
    determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the farthest longitudinal lane-change distance, a target pitch angle required to update the map; and
    updating the frame and viewing angle used for displaying the map to the target frame and the target pitch angle.
  7. The method according to claim 6, wherein the target road comprises multiple lanes and the virtual vehicle is displayed in a first lane of the multiple lanes, and the method further comprises:
    when the target driving scene of the physical vehicle at the current position is a lane-change scene of changing from the first lane to a second lane, displaying the second lane, and an estimated landing point of the physical vehicle in the second lane, centered in the updated map.
  8. The method according to claim 7, wherein the method further comprises:
    obtaining a road topology of the target road at the current position;
    determining the second lane from a lane-change direction of the lane-change scene and the road topology;
    calculating an estimated lane-change distance from a driving speed of the physical vehicle when the lane change starts and a lane-change duration;
    determining a perpendicular distance from the physical vehicle to a center line of the second lane when the lane change starts; and
    determining the estimated landing point of the physical vehicle in the second lane from the estimated lane-change distance and the perpendicular distance.
  9. The method according to claim 6, wherein the determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the farthest longitudinal lane-change distance, the target pitch angle required to update the map, comprises:
    determining, from the lateral road distance, the frame required to update the map;
    obtaining a maximum speed limit of the first lane;
    calculating the farthest longitudinal lane-change distance from the maximum speed limit and a lane-change duration;
    calculating a pitch angle from the required frame and the farthest longitudinal lane-change distance; and
    when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the farthest longitudinal lane-change distance, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
  10. The method according to claim 1, wherein the determining, when the physical vehicle has traveled to the current position and is in the target driving scene, the road range data corresponding to the target driving scene and the current position comprises:
    when the physical vehicle has traveled to the current position and is in an avoidance scene of avoiding an obstacle, determining, from the current position, lateral lane widths of the lane the physical vehicle is in and of lanes adjacent to that lane, calculating, from those lateral lane widths, a lateral road distance required by the avoidance scene, and calculating a farthest distance between the current position and the obstacle;
    and the updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle, the target frame and target viewing angle being adapted to the road range data, comprises:
    determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the farthest distance between the current position and the obstacle, a target pitch angle required to update the map; and
    updating the frame and viewing angle used for displaying the map to the target frame and the target pitch angle.
  11. The method according to claim 10, wherein the determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the farthest distance between the current position and the obstacle, the target pitch angle required to update the map, comprises:
    determining, from the lateral lane distance, the frame required to update the map;
    calculating a pitch angle from the required frame and the farthest distance between the current position and the obstacle; and
    when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the farthest distance between the current position and the obstacle, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
  12. The method according to claim 1, wherein the determining, when the physical vehicle has traveled to the current position and is in the target driving scene, the road range data corresponding to the target driving scene and the current position comprises:
    when the physical vehicle has traveled to the current position and is in a takeover scene of traveling from a takeover prompt point to an autonomous-driving exit point, calculating, from road data of the target road containing the current position and road data of the road containing the autonomous-driving exit point, a lateral lane distance at the current position in the takeover scene, and calculating a distance from the current position to the autonomous-driving exit point;
    and the updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle, the target frame and target viewing angle being adapted to the road range data, comprises:
    determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the distance from the current position to the autonomous-driving exit point, a target pitch angle required to update the map; and
    updating the frame and viewing angle used for displaying the map to the target frame and the target pitch angle.
  13. The method according to claim 12, wherein the determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the distance from the current position to the autonomous-driving exit point, the target pitch angle required to update the map, comprises:
    determining, from the lateral lane distance, the frame required to update the map;
    calculating a pitch angle from the required frame and the distance from the current position to the autonomous-driving exit point; and
    when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the distance from the current position to the autonomous-driving exit point, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
  14. The method according to claim 1, wherein the determining, when the physical vehicle has traveled to the current position and is in the target driving scene, the road range data corresponding to the target driving scene and the current position comprises:
    when the physical vehicle has traveled to the current position and is in a maneuver-point scene of driving in a maneuver operation area of a target maneuver point, obtaining a lateral road distance and a longitudinal road distance in the maneuver-point scene based on an intersection width of the road containing the target maneuver point, extended by a preset distance along extension directions of the target maneuver point's intersection;
    and the updating the frame and viewing angle used for displaying the map to the target frame and target viewing angle, the target frame and target viewing angle being adapted to the road range data, comprises:
    determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the longitudinal road distance, a target pitch angle required to update the map.
  15. The method according to claim 14, wherein the determining, from the lateral road distance, the target frame required to update the map, and determining, from the required target frame and the longitudinal road distance, the target pitch angle required to update the map, comprises:
    determining, from the lateral road distance, the frame required to update the map;
    calculating a pitch angle from the required frame and the longitudinal road distance; and
    when the pitch angle is greater than or equal to a preset threshold, enlarging the required frame and returning to the step of calculating the pitch angle from the required frame and the longitudinal road distance, until the pitch angle is smaller than the preset threshold, obtaining the target frame and target pitch angle required to update the map.
  16. A vehicle navigation apparatus, the apparatus comprising:
    an interface display module, configured to display a vehicle navigation interface for navigating a physical vehicle, the vehicle navigation interface comprising a map; and
    a map display module, configured to display a virtual vehicle on a target road in the map, the virtual vehicle corresponding to the physical vehicle; when the physical vehicle has traveled to a current position and is in a target driving scene, determine road range data corresponding to the target driving scene and the current position; and update a frame and viewing angle used for displaying the map to a target frame and a target viewing angle, the target frame and target viewing angle being adapted to the road range data.
  17. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, implement the steps of the method according to any one of claims 1 to 15.
  18. A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 15.
  19. A computer program product, comprising computer-readable instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 15.
PCT/CN2023/093831 2022-06-30 2023-05-12 Vehicle navigation method and apparatus, device, storage medium, and computer program product WO2024001554A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210758586.2 2022-06-30
CN202210758586.2A CN115145671A (zh) 2022-06-30 Vehicle navigation method and apparatus, device, storage medium, and computer program product

Publications (1)

Publication Number Publication Date
WO2024001554A1 true WO2024001554A1 (zh) 2024-01-04

Family

ID=83409908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/093831 WO2024001554A1 (zh) Vehicle navigation method and apparatus, device, storage medium, and computer program product

Country Status (2)

Country Link
CN (1) CN115145671A (zh)
WO (1) WO2024001554A1 (zh)


Also Published As

Publication number Publication date
CN115145671A (zh) 2022-10-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23829740

Country of ref document: EP

Kind code of ref document: A1