CN116394980B - Vehicle control method, automatic driving prompting method and related devices - Google Patents

Vehicle control method, automatic driving prompting method and related devices

Info

Publication number
CN116394980B
CN116394980B
Authority
CN
China
Prior art keywords
vehicle
data
self
map information
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310666660.2A
Other languages
Chinese (zh)
Other versions
CN116394980A (en)
Inventor
范圣印
贾砚波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co Ltd
Original Assignee
Beijing Jidu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jidu Technology Co Ltd
Priority to CN202310666660.2A
Publication of CN116394980A
Application granted
Publication of CN116394980B
Legal status: Active
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/001: Planning or execution of driving tasks
    • B60W 50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08: Interaction between the driver and the control system
    • B60W 50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W 2050/0001: Details of the control system
    • B60W 2050/0043: Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W 2050/146: Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

Embodiments of this specification provide a vehicle control method, an automatic driving prompting method, and related devices. The method may include: acquiring positioning data and environment data while the vehicle travels from a first position to a second position; storing the positioning data and the environment data; acquiring traffic identification information related to the position of the vehicle based on the environment data; acquiring first self-built map information based at least on the traffic identification information; acquiring second self-built map information from the first self-built map information corresponding to a number of trips greater than a first threshold, where the second self-built map information represents a map with higher accuracy than the first self-built map information; storing the second self-built map information; and controlling the vehicle to perform automatic driving on the same road segment based at least on the second self-built map information. In this way, the vehicle can perform automatic driving without using a high-precision navigation map.

Description

Vehicle control method, automatic driving prompting method and related devices
Technical Field
Embodiments in the present disclosure relate to the field of vehicle technologies, and in particular, to a vehicle control method, an automatic driving prompting method, and related devices.
Background
Automated driving of vehicles has become an important direction of research and application in both academia and industry. Currently, the application of autonomous driving technology generally relies on high-precision navigation maps; that is, a high-precision navigation map is needed to achieve high-precision positioning of the vehicle and to support further motion planning and control.
However, high-precision navigation maps are costly to produce and update, and they struggle to keep up with dynamic changes in road conditions within the areas where people routinely travel.
Disclosure of Invention
Various embodiments in the present disclosure provide a vehicle control method, an automatic driving prompting method, and a related apparatus, which can implement automatic driving without using a high-precision navigation map.
One embodiment of the present specification provides a vehicle control method including: acquiring positioning data and environment data while the vehicle travels from a first position to a second position, where the positioning data includes information indicating the position of the vehicle at a given moment of the trip and the environment data includes information indicating the surroundings of the vehicle at a given moment of the trip; storing the positioning data and the environment data; acquiring traffic identification information related to the position of the vehicle based on the environment data; acquiring first self-built map information based at least on the traffic identification information; acquiring second self-built map information from the first self-built map information corresponding to a number of trips greater than a first threshold, where the path of each trip counted toward that number includes at least one identical road segment and the second self-built map information represents a map with higher accuracy than the first self-built map information; storing the second self-built map information; and controlling the vehicle to perform automatic driving on the identical road segment based at least on the second self-built map information.
One embodiment of the present specification provides an automatic driving prompting method applied to a vehicle, the method including: determining the road segments involved in the vehicle traveling from a specified first location to a specified second location; acquiring automatic driving confidence information for at least some sub-segments of those road segments, where the automatic driving confidence information is generated from historical difference data corresponding to the sub-segment, and the historical difference data is obtained based on virtual control data generated by an automatic driving decision algorithm for the sub-segment and actual control data executed by the driver when driving the vehicle through the sub-segment; the virtual control data is generated by the automatic driving decision algorithm processing the self-built map information, the environment data, and the positioning data to obtain a lane-level-localized target fused driving track of the vehicle, and is based on that target fused driving track, where the self-built map information is map information constructed by the vehicle; and prompting the automatic driving confidence information.
One embodiment of the present specification provides a vehicle control device, including: a first acquisition module, configured to acquire positioning data and environment data while the vehicle travels from a first position to a second position, where the positioning data includes information indicating the position of the vehicle at a given moment of the trip and the environment data includes information indicating the surroundings of the vehicle at a given moment of the trip, and to store the positioning data and the environment data; a second acquisition module, configured to acquire traffic identification information related to the position of the vehicle based on the environment data; a third acquisition module, configured to acquire first self-built map information based at least on the traffic identification information; a fourth acquisition module, configured to acquire second self-built map information from the first self-built map information corresponding to a number of trips greater than a first threshold, where the path of each trip counted toward that number includes at least one identical road segment and the second self-built map information represents a map with higher accuracy than the first self-built map information; a storage module, configured to store the second self-built map information; and a control module, configured to control the vehicle to perform automatic driving on the identical road segment based at least on the second self-built map information.
One embodiment of the present specification also provides an automatic driving prompting device, including: a determining module, configured to determine the road segments involved in the vehicle traveling from a specified first location to a specified second location; a confidence information acquisition module, configured to acquire automatic driving confidence information for at least some sub-segments of those road segments, where the automatic driving confidence information is generated from historical difference data corresponding to the sub-segment, and the historical difference data is obtained based on virtual control data generated by an automatic driving decision algorithm for the sub-segment and actual control data executed by the driver when driving the vehicle through the sub-segment; the virtual control data is generated by the automatic driving decision algorithm processing the self-built map information, the environment data, and the positioning data to obtain a lane-level-localized target fused driving track of the vehicle, and is based on that target fused driving track, where the self-built map information is map information constructed by the vehicle; and a prompting module, configured to prompt the automatic driving confidence information.
One embodiment of the present disclosure provides an electronic device including a memory and a processor, where the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor to implement a method for controlling a vehicle as described above, or implement an automatic driving prompting method as described above.
One embodiment of the present specification provides a computer-readable storage medium having stored therein at least one computer program that, when executed by a processor, is capable of implementing a vehicle control method as described above, or an automatic driving prompting method as described above.
According to the embodiments provided in this specification, the positioning data and environment data of the vehicle are collected and stored while the vehicle is driving, and the vehicle then uses the stored positioning data and environment data to build a semantic map of a specified road. When the vehicle travels on the specified road again, automatic driving can be performed based on that semantic map, so automatic driving is achieved without a high-precision navigation map.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a vehicle control method according to an embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of a vehicle control method according to an embodiment of the present disclosure.
Fig. 3 is a schematic flow chart of a vehicle control method according to an embodiment of the present disclosure.
Fig. 4 is a schematic flow chart of a vehicle control method according to an embodiment of the present disclosure.
Fig. 5 is a schematic block diagram of a vehicle control apparatus according to an embodiment of the present disclosure.
Fig. 6 is a schematic flow chart of a vehicle control method according to an embodiment of the present disclosure.
Fig. 7 is a schematic flow chart of an automatic driving prompting method according to an embodiment of the present disclosure.
Fig. 8 is a schematic block diagram of a vehicle control apparatus according to an embodiment of the present disclosure.
Fig. 9 is a schematic block diagram of an automatic driving prompting device according to an embodiment of the present disclosure.
Fig. 10 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions of the embodiments provided in the present specification will be clearly and completely described below with reference to the drawings in the present specification, and it is apparent that the described embodiments are only some embodiments, not all embodiments. All other examples, which can be made by one of ordinary skill in the art without undue burden based on the embodiments provided in this specification, are within the scope of the present invention.
A high-precision navigation map is an electronic map with higher precision and more data dimensions than an ordinary navigation map. The higher precision is reflected in centimeter-level accuracy; the additional data dimensions mean that, besides road information, it also includes static information about the surroundings that is related to traffic.
A high-precision navigation map stores a large amount of driving assistance information as structured data, which can be divided into two categories. The first category is road data, such as the position, type, width, gradient, and curvature of lane lines. The second category is information about fixed objects around the lane, such as traffic signs and traffic lights, road details such as lane height limits, crossings, and obstacles, and infrastructure information including overhead objects, guard rails, number, road edge type, roadside landmarks, and the like. During automatic driving, the vehicle can judge specific driving problems from the multidimensional information provided by the high-precision map and the internet of vehicles, output vehicle control signals, and pass them to the execution layer for execution.
Practical application of a high-precision map may involve a map-building process, a map-using process, and an updating process, which are three tightly coupled processes that ensure high-frequency data flow and updating. For example, the map-building process may include field acquisition and in-house production; the map-using process may include high-precision (self-)localization, environment perception, and path planning; and the updating process may include change detection and cross-validation.
The dynamic and complicated mapping process of a high-precision map means that later updating and maintenance take up a large amount of work. The high-precision map data required for intelligent driving can be divided into four categories by update frequency: long-term static data updated about once a month, short-term static data updated about once an hour, semi-dynamic data updated about once a minute, and dynamic data updated about once a second. Compared with popular ordinary navigation maps, which are updated once every 1-2 months, the update frequency of a high-precision map is high and updating is difficult, so the cost of making and updating high-precision maps is very high.
In the related art, automatic driving of a vehicle depends heavily on a high-precision navigation map. It is therefore necessary to provide a technical solution that realizes automatic driving without fully depending on a high-precision navigation map.
The vehicle described in the embodiments of the present specification may be a vehicle that is driven by a person and has an assist intelligent driving function, or may be a vehicle that is capable of automatically and intelligently traveling. The vehicle type may include, in particular, a car, an off-road vehicle, a van, etc., and the embodiment of the present specification does not particularly limit the vehicle.
One embodiment of the present specification provides a vehicle control method. The vehicle control method may be applied to a vehicle control system of a vehicle and may include four stages: an acquaintance road (i.e., familiar road) discovery stage, a shadow mode verification stage, an automatic driving path recommendation stage, and an automatic driving stage. The acquaintance road discovery stage automatically discovers acquaintance roads by controlling the vehicle and analyzing the driver's driving style and the road conditions. The shadow mode verification stage obtains paths on which automatic driving can be realized by comparing virtual automatic driving of the vehicle with the driver's actual driving. The automatic driving path recommendation stage recommends the paths on which automatic driving is possible to the driver and performs interactive confirmation with the driver. The automatic driving stage performs automatic driving of the vehicle on the paths permitted by the driver.
Referring to fig. 1, the acquaintance road discovery stage may include the following steps.
Step S11: and storing the collected positioning data and environment data of the vehicle in the running process of the vehicle.
In this embodiment, the positioning data may be used to determine the position of the vehicle relative to the ground. Thus, by collecting the positioning data of the vehicle, continuous positioning can be performed for the position of the vehicle. In some implementations, the positioning data may include satellite positioning data. Satellite positioning data may be used to represent absolute positioning of the vehicle relative to the ground. Of course, the positioning data may also include positioning data generated by an inertial navigation system onboard the vehicle. Inertial navigation positioning data is used to represent the relative positioning of the vehicle.
In some embodiments, the vehicle control system may receive satellite positioning data of the vehicle, the satellite positioning data having a first frequency; acquire inertial navigation positioning data generated by an inertial navigation system of the vehicle, the inertial navigation positioning data having a second frequency greater than the first frequency; and correct the satellite positioning data based on the inertial navigation positioning data to obtain positioning data of the vehicle, the positioning data of the vehicle having the second frequency.
In this embodiment, the vehicle receives satellite positioning data via a satellite positioning system and a receiver. The received satellite signals can be resolved to obtain satellite positioning data. In some embodiments, multiple models may be employed to calculate the satellite positioning data separately, and then the optimal solution is selected as the final satellite positioning data.
In this embodiment, the inertial navigation system is an assisted navigation system that uses accelerometers and gyroscopes to measure the acceleration and angular velocity of an object and can continuously estimate the position, attitude, and velocity of a moving object. By detecting acceleration and angular velocity, inertial navigation can detect position changes (for example, eastward or westward movement), speed changes (for example, a change in speed magnitude or direction), and attitude changes (for example, rotation about each axis). Specifically, attitude data representing the attitude can be obtained by solving the gyroscope output signals, and by combining the speed data and the attitude data, position change data representing the relative position change of the vehicle can be obtained.
In this embodiment, the second frequency of the inertial navigation positioning data is higher than the first frequency of the satellite positioning data. To improve the positioning accuracy for the position of the vehicle, the relatively low-frequency satellite positioning data can be corrected using the high-frequency inertial navigation positioning data, thereby raising the effective frequency of the satellite positioning data. Specifically, the satellite positioning data and the inertial navigation positioning data may be aligned by acquisition time; because the data frequency of the satellite positioning data is lower than that of the inertial navigation positioning data, part of the inertial navigation positioning data has no corresponding satellite positioning data. The missing satellite positioning data corresponding to those inertial navigation samples can then be inferred from the position change data represented between consecutive inertial navigation positioning data. The completed satellite positioning data thus has the second frequency.
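As a purely illustrative sketch of the completion step described above (not the patented implementation), the following Python snippet aligns low-frequency satellite fixes with high-frequency inertial position-change data by acquisition time and fills the gaps by dead reckoning; the function name, data layout, and time tolerance are assumptions made for the example.

```python
from bisect import bisect_right

def complete_positioning(sat_fixes, imu_deltas, tol=0.05):
    """Upsample low-frequency satellite fixes to the inertial-navigation rate.

    sat_fixes:  list of (t, x, y) absolute fixes, sorted by time (low rate).
    imu_deltas: list of (t, dx, dy) position changes since the previous
                inertial sample, sorted by time (high rate).
    Returns a list of (t, x, y) at the inertial-navigation rate.
    """
    sat_times = [f[0] for f in sat_fixes]
    fused, x, y = [], None, None
    for t, dx, dy in imu_deltas:
        i = bisect_right(sat_times, t + tol) - 1
        if i >= 0 and abs(sat_times[i] - t) <= tol:
            # An absolute fix is available near this timestamp: re-anchor on it.
            x, y = sat_fixes[i][1], sat_fixes[i][2]
        elif x is not None:
            # No fix here: dead-reckon from the last anchored position.
            x, y = x + dx, y + dy
        else:
            continue  # no anchor yet, skip until the first satellite fix
        fused.append((t, x, y))
    return fused
```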
In this embodiment, the environment data may be used to represent the surroundings of the vehicle during driving and is obtained by collecting sensor data that includes the traffic signs. Specifically, the vehicle may be provided with multiple detection devices and sensors, through which the environment data is acquired. The environment data may include, for example, bird's-eye-view data or plan-view data.
In this embodiment, the positioning data and the environment data may each correspond to an acquisition time, so the positioning data and the environment data can be correlated through the acquisition time. It may be understood that the acquisition time establishes a correspondence between the positioning data and the environment data: for positioning data and environment data that correspond to each other, the environment data represents the environment at the location represented by the positioning data. In this embodiment, the positioning data and the environment data may be stored in the database independently according to their acquisition times. In some embodiments, the acquisition time may first be used to establish the correspondence between the positioning data and the environment data, and the correspondence may then be stored in the database.
Step S12: and identifying traffic identification information corresponding to the positioning data from the stored environment data to form single-pass map building information.
In this embodiment, traffic identification information may be used to represent traffic rules. Further, the traffic identification information indicates that the vehicle needs to follow the traffic rules on the road with the traffic identification information. By identifying the traffic identification information in this way, the movement plan of the vehicle during the execution of the automatic driving can follow the traffic rules represented by the traffic identification information. Specifically, for example, traffic identification information indicating that a left turn is possible is identified, and it may be indicated that a vehicle is possible to turn left at the traffic intersection. Traffic identification information representing a straight-going lane on the ground is identified, indicating that the vehicle is not allowed to turn at the traffic intersection while traveling in that lane.
In this embodiment, roads corresponding to different positioning data may have different traffic identification information, so the traffic identification represented by the traffic identification information itself has a correspondence with a location. For example, a road may pass a school gate and carry a traffic sign indicating deceleration; traffic identification information representing that sign needs to be identified from the environment data corresponding to the positioning data, so that when the vehicle passes the position represented by that positioning data during automatic driving, the vehicle speed can be controlled according to the traffic identification information. Specifically, a trained machine learning model may be provided in the vehicle, and the traffic identification information is identified by inputting the environment data into the machine learning model. The traffic identification information may include ground traffic identification information and air traffic identification information. Ground traffic identifications may include lane lines, stop lines, and the like; air traffic identifications may include traffic lights, traffic signs, and the like. In some embodiments, when identifying traffic lights and traffic signs, each corner point of the traffic light or traffic sign can be projected, based on the 3D detection result and the extrinsic relationship between the traffic light or sign and the vehicle camera, onto the PV (Perspective View) image of the vehicle's front-view camera. Rectangular boxes can then be set for the traffic lights and traffic signs respectively, so that red, yellow, and green lights can be recognized within the traffic light boxes and the sign type can be recognized within the traffic sign boxes.
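For illustration only, the following sketch shows how 3D corner points of a detected traffic light or sign could be projected into the front-view (PV) camera image using the camera extrinsics and intrinsics, and how a rectangular box could then be derived; the frame conventions, matrix names, and helper function are assumptions, not the exact processing used in this embodiment.

```python
import numpy as np

def project_corners(corners_ego, T_cam_from_ego, K):
    """Project 3D corner points (ego/vehicle frame) into the image plane.

    corners_ego:    (N, 3) array of corner points in the ego frame.
    T_cam_from_ego: (4, 4) extrinsic transform from ego frame to camera frame.
    K:              (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts = np.hstack([corners_ego, np.ones((len(corners_ego), 1))])  # homogeneous
    cam = (T_cam_from_ego @ pts.T).T[:, :3]                         # camera frame
    in_front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                     # perspective divide
    return uv, in_front

def bounding_rect(uv):
    """Axis-aligned rectangle around the projected corners (for cropping/classification)."""
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return int(u_min), int(v_min), int(u_max), int(v_max)
```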
In some embodiments, deriving environment data representing obstacles and traffic identifications may be implemented based on BEVFormer as the backbone network, with dedicated detection heads set up for different traffic identifications. The BEVFormer detection heads can be trained on corresponding traffic identification data to improve detection accuracy. Of course, in some embodiments this may also be implemented with HDMapNet or VectorMapNet.
Of course, in some embodiments the environment data may also include obstacle data representing dynamic and static obstacles on the road. When generating the single-pass map information, only the traffic identification data representing traffic identifications in the environment data may be kept, while the obstacle data representing obstacles is filtered out. Dynamic obstacles may include motor vehicles, pedestrians, non-motor vehicles, and the like; static obstacles may include water-filled barriers, traffic cones, and the like. When generating the joint map data, only the dynamic obstacle data may be filtered out while the static obstacle data is retained: dynamic obstacle data is not generally reusable, so it need not be kept in the joint map data, whereas static obstacle data may persist for a certain time and can be updated and stored as the vehicle travels the corresponding road multiple times, consistent with how long the static obstacle actually remains on the road.
In the present embodiment, positional relationships indicating the relationships between traffic identifications in the environment of the vehicle may be recorded. In particular, tracking may be performed on multiple image frames containing traffic identifications from the environment data; for example, the type and spatial position of a traffic identification under the BEV view may be tracked by means of filtering. An image frame of a tracked traffic identification may be recognized to obtain traffic identification data, and it is then further confirmed whether that frame is retained. For example, traffic identification data from a frame may be discarded because the identification was occluded by an obstacle and the recognized data is therefore not accurate enough. Discarding insufficiently accurate traffic identification data reduces the occurrence of errors to some extent. The filtering algorithm may include, but is not limited to, Kalman filtering, extended Kalman filtering, particle filtering, and the like.
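The following is a minimal sketch of filter-based tracking for a single traffic identification in the BEV plane, assuming a static landmark and a constant-position Kalman filter; the noise values and the reliability check used to discard insufficiently accurate identifications are illustrative assumptions.

```python
import numpy as np

class SignTrack:
    """Minimal constant-position Kalman filter for one traffic sign in the BEV plane."""

    def __init__(self, xy, pos_var=1.0, process_var=0.01, meas_var=0.25):
        self.x = np.asarray(xy, dtype=float)   # state: BEV position (x, y)
        self.P = np.eye(2) * pos_var           # state covariance
        self.Q = np.eye(2) * process_var       # process noise
        self.R = np.eye(2) * meas_var          # measurement noise
        self.hits = 1                          # number of associated detections

    def predict(self):
        # Static landmark: the position does not change, only uncertainty grows.
        self.P = self.P + self.Q
        return self.x

    def update(self, z):
        z = np.asarray(z, dtype=float)
        S = self.P + self.R                    # innovation covariance
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        self.hits += 1

    def is_reliable(self, min_hits=3, max_var=0.5):
        # Keep only identifications observed often enough with low position uncertainty,
        # mirroring the idea of discarding insufficiently accurate identifications.
        return self.hits >= min_hits and np.trace(self.P) / 2.0 <= max_var
```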
In this embodiment, local traffic identification data may be further optimized so that the traffic identification data has accurate spatial position data in three-dimensional space. Specifically, the traffic identification data can be optimized based on a BA (bundle adjustment) local optimization algorithm. Further, relative position labels may be added to multiple pieces of traffic identification data according to set rules. For example, for traffic identification data representing a stop line and traffic identification data representing a zebra crossing, a positional relationship label indicating that the zebra crossing is located in front of the stop line is added; lane-line positional relationships may likewise be labeled sequentially in the order in which the traffic identification data representing the lane lines is recognized.
In the present embodiment, the traffic identification information identified from the environment data and the positioning data corresponding to the same acquisition times are combined to form single-pass map building information. Specifically, a single trip of the vehicle has a defined start point and end point: after the vehicle reaches the end point, it usually parks and, after some time, begins the next trip. In this way, the positioning data and the environment data can be divided into multiple single trips according to the driver's driving behavior, and the single-pass map building information of each single trip is generated as described above.
Step S13: dividing the acquaintance road groups aiming at the single-pass map building information; in the single-pass mapping information included in the same acquaintance road group, positions represented by the starting point information of the single-pass mapping information meet a first set distance condition, and positions represented by the end point information meet a second set distance condition.
In this embodiment, the division into acquaintance road groups may be performed according to whether the start point information and end point information of the single-pass map building information satisfy a specified association relationship. Specifically, the specified association relationship may include: the start points represented by the start point information of the single-pass map building information satisfy a first set distance relationship, and the end points represented by the end point information satisfy a second set distance relationship. The first set distance relationship represents a condition the distance between start points must meet, and the second set distance relationship represents a condition the distance between end points must meet; the two may be the same or different and can be set according to actual requirements. For example, the first set distance relationship may be less than 200 meters and the second set distance relationship may be less than 200 meters; alternatively, the first may be less than 100 meters and the second less than 150 meters. In some embodiments, the specified association relationship may also include that the path overlap ratio is greater than a specified overlap threshold, where the start point, route points, and end point of multiple pieces of path information together determine the overlap ratio between paths; the specified overlap threshold may be 70%, 75%, 80%, and so on. In some embodiments, an acquaintance road group may be understood as containing roads, corresponding to the single-pass map building information in the group, that the vehicle has traveled at least once. Of course, in some embodiments the number of times the vehicle has traveled the road corresponding to the single-pass map building information may also be required to be greater than a specified travel count threshold before that information is placed into an acquaintance road group.
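The grouping described above can be illustrated with the following sketch, which greedily places single-pass map building records into acquaintance road groups when their start and end points satisfy the two set distance conditions; the 200-meter values come from the example above, while the greedy strategy is an assumption and the optional path overlap check is omitted for brevity.

```python
from math import hypot

START_DIST_M = 200.0   # first set distance condition (example value)
END_DIST_M = 200.0     # second set distance condition (example value)

def group_trips(trips):
    """Greedily group single-pass map building records into acquaintance road groups.

    trips: list of dicts with 'start' and 'end' as (x, y) positions in meters.
    Returns a list of groups, each holding a representative start/end and member indices.
    """
    groups = []
    for idx, trip in enumerate(trips):
        for g in groups:
            start_ok = hypot(trip['start'][0] - g['start'][0],
                             trip['start'][1] - g['start'][1]) < START_DIST_M
            end_ok = hypot(trip['end'][0] - g['end'][0],
                           trip['end'][1] - g['end'][1]) < END_DIST_M
            if start_ok and end_ok:
                g['members'].append(idx)
                break
        else:
            # No existing group matches: start a new acquaintance road group.
            groups.append({'start': trip['start'], 'end': trip['end'], 'members': [idx]})
    return groups
```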
In some embodiments, different acquaintance road groups may be used to represent different usage scenarios of the driver. For example, the acquaintance road group corresponding to the driver's commute-to-work scenario may be a commuting acquaintance road group, and all single-pass map building information in that group represents the driver's commuting route. There may likewise be an off-work acquaintance road group, a shopping acquaintance road group, and so on.
Step S14: and establishing and storing a semantic map corresponding to the single trip by combining the single trip map establishing information, wherein the semantic map is used for the vehicle to execute automatic driving on a specified road related to the single trip.
In the present embodiment, the specified road may be a road to which the positioning data corresponds. Specifically, the location where the positioning data locates belongs to the specified road. Further, the driving track formed by the continuous change of the positioning data can be used as the designated road. Specifically, for example, a vehicle is driven in from the start point of a road and is driven out from the end point of the road, and the road may be used as the specified road. Of course, the vehicle may also drive from another road into the specified road from the middle of the road, and the semantic map established at this time may cover only the portion of the specified road that is traversed by the vehicle.
The semantic map may be built through semantic mapping and can represent a self-built map of the specified road and its traffic identification information. Establishing a semantic map for a specified road means that when the vehicle travels on that road again, automatic driving can be performed based on the semantic map. For example, while the vehicle is driving, the semantic map may be input as part of the data to the automatic driving decision planning module, and the vehicle is then controlled according to the motion plan produced by that module. In some embodiments, semantic mapping may be performed only for the acquaintance road groups, yielding a semantic map for each corresponding acquaintance road group.
In some implementations, a road object representing the specified road corresponding to the positioning data may be generated, and a traffic identification object representing the traffic identification information corresponding to the positioning data is added to the road object. In this embodiment, a road object may be used as a simulation of the specified road, so that the position of the vehicle relative to the specified road can be controlled inside the vehicle based on the road object. The road object may include a subject object representing the road body of the specified road and traffic identification objects representing the traffic identifications set on that road body. For example, the subject object may simulate the width and length of the specified road at a certain scale. Traffic identifications that traffic identification objects may represent include, but are not limited to, ground traffic identifications (such as lane boundary identifications and lane lines) and air traffic identifications (such as traffic lights and speed limit signs). Specifically, road objects, traffic identification objects, and the like may be created by calling a public map tool (OpenStreetMap); of course, other map-editing software capable of creating road objects and traffic identification objects may also be used.
In this embodiment, the traffic identification object may be used to simulate a traffic identification represented by traffic identification information. The traffic identification information corresponds to positioning data, so that the traffic identification object can also correspond to the positioning data. In this way, the position of the traffic identification object relative to the road object can be indicated by the positioning data. Thus, traffic identification objects are added to the road objects in accordance with the positioning data.
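As an illustration of the objects discussed above, the following sketch shows one possible way to represent a road object with traffic identification objects attached at the positions given by their positioning data; all field names are assumptions for the example rather than the data model of this embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrafficIdentificationObject:
    kind: str                      # e.g. "lane_line", "stop_line", "traffic_light", "speed_limit"
    position: Tuple[float, float]  # the positioning data the identification corresponds to
    attributes: dict = field(default_factory=dict)  # e.g. {"limit_kph": 30}

@dataclass
class RoadObject:
    road_id: str
    length_m: float
    width_m: float
    identifications: List[TrafficIdentificationObject] = field(default_factory=list)

    def add_identification(self, obj: TrafficIdentificationObject) -> None:
        # The identification's own positioning data places it relative to the road body.
        self.identifications.append(obj)

# Example: a deceleration sign near a school gate is attached at its located position.
road = RoadObject(road_id="segment_017", length_m=850.0, width_m=14.0)
road.add_identification(TrafficIdentificationObject(
    "speed_limit", position=(430.2, 3.1), attributes={"limit_kph": 30}))
```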
In some implementations, traffic identification objects representing boundary identifications may be added to the road objects according to the positioning data of the boundary identifications; the same road objects among the specified roads belonging to the single-pass map building information in the same acquaintance road group are aligned according to the boundary identifications; and the same road objects related to the same acquaintance road group are adjusted to have the same traffic identification objects.
In this embodiment, the boundary identification may be used to represent the boundary of the road. Traffic identification information representing a road boundary is first added to a road object. In this way, the positioning data corresponding to the boundary identification of the specified road in each single trip can be determined. The road objects of a single trip in the same acquaintance road group can then be aligned according to the boundary identification. At this time, the same position data can be associated with the road objects of the same specified road of the plurality of single trips.
Further, it is possible to compare whether the traffic identification objects differ between the same road objects related to a target single trip in the same acquaintance road group, and, if there is a discrepancy, to adjust them to be consistent. For example, a traffic identification object may be missing, or the positioning data corresponding to a traffic identification object may differ; during adjustment the missing traffic identification object may be added, or the corresponding positioning data may be modified. This improves the accuracy of the traffic identification objects set on the road objects in the acquaintance road group.
In some implementations, the traffic identification objects include lane type identification objects that represent lane types. In the semantic map, the vehicle control system can determine the number of lanes of the specified road from its lane type identification objects; divide the specified road into sub-road segments according to the number of lanes, where adjacent sub-segments have different numbers of lanes; and connect the lanes of adjacent sub-segments based on the lane types indicated by the lane type identifications.
In the present embodiment, the lane type identification object may be used to represent a lane type of a lane having the lane type identification object. Further, the number of lanes in the specified road may be determined in accordance with the number of lane type identification objects. Specifically, for example, in the case where there are two lane type identification objects representing left turn and straight travel, respectively, it may be determined that a specified road has two lanes.
In some cases, the number of lanes of a given road may change. Specifically, for example, there may be a merge in a specified road, or a case of increasing lanes. At this time, in the case that the number of lanes is changed, a behavior planning needs to be made in a targeted manner. Therefore, the specified road is divided into a plurality of sub-road sections according to the number of the lanes, so that the vehicle can conduct behavior planning according to the plurality of sub-road sections of the specified road in the process of automatically driving the specified road, and the probability of traffic accidents of the vehicle when the number of the lanes changes is reduced.
In this way, lane type identification objects may be added to the corresponding road objects in the semantic map, and the road objects may be divided into multiple lanes.
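The division into sub-road segments can be sketched as follows: the specified road is split wherever the lane count derived from the lane type identification objects changes, so that adjacent sub-segments always have different numbers of lanes. The sampled input format is an assumption for the example.

```python
def split_into_sub_segments(lane_counts):
    """Split a road into sub-segments where the number of lanes changes.

    lane_counts: list of (longitudinal_position_m, lane_count) samples along the
                 road, ordered by position.
    Returns a list of (start_m, end_m, lane_count) sub-segments; adjacent
    sub-segments always have different lane counts.
    """
    segments = []
    i = 0
    while i < len(lane_counts):
        start_pos, count = lane_counts[i]
        j = i
        while j + 1 < len(lane_counts) and lane_counts[j + 1][1] == count:
            j += 1
        end_pos = lane_counts[j][0]
        segments.append((start_pos, end_pos, count))
        i = j + 1
    return segments

# e.g. a merge from 3 lanes to 2 lanes produces two sub-segments:
# split_into_sub_segments([(0, 3), (100, 3), (200, 2), (300, 2)])
# -> [(0, 100, 3), (200, 300, 2)]
```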
In some embodiments, the collected positioning data and environmental data are stored during travel of the vehicle on the designated road; and updating the semantic map of the specified road according to the positioning data and the environment data. Thus, when the vehicle runs on the appointed road, the semantic map can be updated according to the latest collected positioning data and environment data. Thus, the accuracy of the semantic map can be improved.
Step S15: and generating a fusion positioning track corresponding to the single-pass map building information based on the semantic map.
In this embodiment, a corresponding fused driving track may be generated for the single-pass map building information in the acquaintance road group. Specifically, ground traffic identification information derived from the environment data may be obtained, for example ground traffic identification information representing lane lines, road boundary lines, and stop lines.
In the present embodiment, a detection intensity map may be generated from the environment data. In this detection intensity map, the pixels corresponding to lane lines, road boundary lines, and stop lines may each be set to a different specified pixel value. In other words, the pixels of the detection intensity map are divided into several categories: pixels of the same category have the same pixel value, and pixel values of different categories differ. For example, lane lines, road boundary lines, and stop lines are each one category, and pixels belonging to none of these categories form a special category.
The relative pose T of the vehicle in the semantic map can be obtained using the positioning data formed by fusing the satellite positioning data and the inertial navigation positioning data. A road object region with length and width of M meters each, centered on this relative pose, is then taken from the semantic map. For this road object, lane lines, road boundary lines, and stop lines are each treated as one category, and each category is assigned its specified pixel value, yielding a semantic-map intensity map of the road object. In the semantic-map intensity map and the detection intensity map the categories are the same, and pixels belonging to the same category have the same pixel value; pixels that do not belong to a lane line, road boundary line, or stop line are again treated as the special category.
In this embodiment, a pose estimation algorithm may be used to obtain the localization pose of the vehicle from the detection intensity map and the semantic-map intensity map. Then, based on the continuous change of multiple frames of environment data, the relative poses and localization poses corresponding to the environment data are fused with each other to obtain the fused driving track of the vehicle. The pose estimation algorithm may include, but is not limited to, the iterative closest point algorithm (Iterative Closest Point, ICP) or a semantic iterative closest point algorithm.
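For illustration, the following sketch aligns same-category pixel coordinates extracted from the detection intensity map and the semantic-map intensity map with a basic 2D point-to-point ICP; it is a simplified stand-in for the pose estimation step, and the brute-force matching and iteration count are assumptions.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Point-to-point ICP in 2D: estimate rotation R and translation t aligning src to dst.

    src, dst: (N, 2) / (M, 2) arrays of same-category pixel coordinates taken
    from the detection intensity map and the semantic-map intensity map.
    """
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Nearest neighbour in dst for every point in cur (brute force, sketch only).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid transform between matched point sets (Kabsch method).
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:      # keep a proper rotation
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t   # pose correction of the vehicle relative to the map region
```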
Step S16: and analyzing the fused driving track to obtain the longitudinal and/or transverse target speed distribution of the vehicle relative to the road, so as to determine the driving style of the driver.
In the present embodiment, the distributions of the lateral speed data and longitudinal speed data of the vehicle may be analyzed from the fused driving track as the target speed distribution data in those two directions. The target speed distribution data may be matched against preset driving styles to obtain the target driving style of the corresponding driver. When analyzing the target speed distribution data, the fused driving track of intersection regions can be excluded: intersection regions are complex and less representative than other road sections, so excluding them lets the analyzed target speed distribution data more accurately represent the speed behavior on non-intersection road sections and thus lets the target driving style be determined more accurately. Further, each driving style may be associated with reference speed distribution data; the target speed distribution data can be compared with the reference speed distribution data of each driving style, and the driving style whose reference distribution is most similar is taken as the target driving style. In some embodiments, a variance may be computed from the lateral and longitudinal speed data; each driving style corresponds to a value range, the value ranges of different driving styles do not overlap, and the driving style whose range contains the variance is taken as the target driving style. The target driving style is used to guide the speed distribution of the vehicle when it performs automatic driving on the specified road, where at least the speed distributions of different driving styles differ.
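A minimal sketch of the variance-based matching mentioned above is given below; the style names, the non-overlapping value ranges, and the scalar variance summary are assumed values for illustration only.

```python
import numpy as np

# Assumed, non-overlapping variance ranges for each preset driving style.
STYLE_RANGES = {
    "conservative": (0.0, 1.0),
    "moderate":     (1.0, 3.0),
    "aggressive":   (3.0, float("inf")),
}

def classify_driving_style(lon_speeds, lat_speeds, in_intersection):
    """Match target speed distribution data to a preset driving style.

    lon_speeds, lat_speeds: per-sample longitudinal / lateral speeds (m/s) along
    the fused driving track; in_intersection: boolean mask of samples that lie
    inside intersection regions, which are excluded as described above.
    """
    keep = ~np.asarray(in_intersection)
    lon = np.asarray(lon_speeds)[keep]
    lat = np.asarray(lat_speeds)[keep]
    variance = np.var(lon) + np.var(lat)   # single scalar summary (assumption)
    for style, (lo, hi) in STYLE_RANGES.items():
        if lo <= variance < hi:
            return style, variance
    return "moderate", variance
```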
Further, in some embodiments, the driving difficulty of the roads involved in the navigation path information may be determined based on the target speed distribution data. The driving difficulty may be high, ordinary, or low. For example, if the longitudinal target speed distribution data of a road is below a certain speed threshold, the road may be judged to be a frequently congested section and marked as high difficulty; if the longitudinal target speed distribution of a section stays relatively uniform at a relatively high speed, the section may be considered low difficulty. In some embodiments, driving difficulty may be identified separately for intersection regions: for example, the lateral target speed distribution data of an intersection region, obtained from the target fused driving track, reflects the more complex driving conditions a vehicle faces when passing through the intersection, so the driving difficulty of the intersection region is determined to be high. A person skilled in the art can set difficulty rules according to actual needs to specify the driving difficulty of the roads involved in the navigation path information.
Referring to fig. 2, in the shadow mode verification stage, the automatic driving decision planning module of the vehicle may be adjusted based on the fused driving track. Specifically, the vehicle control system may analyze the fused driving track to obtain the longitudinal and/or lateral target speed distribution of the vehicle relative to the road, and adjust the automatic driving decision planning module so that, when the vehicle automatically drives on the roads corresponding to the acquaintance road group, its speed distribution tends toward the target speed distribution. The shadow mode verification stage may include the following steps.
Step S21: and in the running process of the vehicle, combining the satellite positioning data and the inertial navigation positioning data of the vehicle to obtain the positioning data of the vehicle.
In this embodiment, the vehicle control system may continuously combine the satellite positioning data and the inertial navigation positioning data of the vehicle while the vehicle is driving to obtain positioning data formed from the completed satellite positioning data; details can be found in the foregoing embodiments and are not repeated here.
Step S22: and identifying the obstacle and the traffic sign based on the collected environmental data to obtain obstacle data representing the obstacle and traffic sign data representing the traffic sign.
In the present embodiment, obstacle recognition, air traffic identification recognition, ground traffic identification recognition, and the like may be performed based on the environment data; reference may be made to the foregoing embodiments, and details are not repeated here.
Step S23: and reading a local semantic map in a specified position range from the stored semantic map according to the positioning data of the vehicle.
In this embodiment, the vehicle control system may have a semantic map engine that may read a local semantic map near the positioning data of the vehicle from a stored semantic map according to the positioning data, so that the data processing amount may be reduced to some extent.
In this embodiment, the local semantic map may include map semantic elements constituting the semantic map within a specified location range, and locations of the semantic elements. Specifically, for example, the map semantic elements may include road objects, traffic identification objects, and the like, as well as positional relationships. The specified location range may be a specified distance range. Specifically, for example, the specified position range is within 300 meters of the distance from the positioning data. Of course, it is not limited to 300 m, but may be 400 m or 500 m.
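As a toy illustration of the semantic map engine's local read, the following sketch returns only the map semantic elements within a specified distance of the vehicle's positioning data; the element layout and the 300-meter default are assumptions based on the example above.

```python
from math import hypot

class SemanticMapEngine:
    """Toy semantic map engine: returns elements within a specified range of the vehicle."""

    def __init__(self, elements):
        # elements: list of dicts such as
        # {"kind": "road_object" | "traffic_identification", "position": (x, y), ...}
        self.elements = elements

    def local_map(self, ego_xy, radius_m=300.0):
        ex, ey = ego_xy
        return [e for e in self.elements
                if hypot(e["position"][0] - ex, e["position"][1] - ey) <= radius_m]

# Only the nearby part of the stored semantic map is handed to downstream modules,
# which keeps the data processing amount small.
```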
Step S24: and generating a fusion driving track of the vehicle by combining the local semantic map, the environment data and the positioning data.
In this embodiment, the environment data and positioning data collected while the vehicle is driving may be fused with the local semantic map to generate the fused driving track of the vehicle; reference may be made to the foregoing embodiments, and details are not repeated here.
Step S25: and carrying out local semantic mapping based on the ground traffic identification data identified from the environment data to obtain a local map.
In this embodiment, the local map may be used to represent the positional relationships between ground traffic identifications in the environment of the vehicle. In particular, tracking may be performed on multiple image frames containing ground traffic identifications from the environment data; for example, the type and spatial position of a ground traffic identification under the BEV view may be tracked by means of filtering. The tracked ground traffic identifications can be recognized to obtain traffic identification data, and it is then further confirmed whether the data is retained. For example, some traffic identification data may correspond to the same traffic identification as previously recognized data, but because of occlusion the newly recognized data may be inaccurate, and that image frame's traffic identification data may be discarded. Discarding insufficiently accurate traffic identification data reduces the occurrence of errors to some extent. The filtering algorithm may include, but is not limited to, Kalman filtering, extended Kalman filtering, particle filtering, and the like.
In this embodiment, the traffic identification data in the local map may be further optimized so that it has accurate spatial position data in three-dimensional space; specifically, the traffic identification data can be optimized based on a BA (bundle adjustment) local optimization algorithm. Further, relative position labels may be added to multiple pieces of traffic identification data according to set rules. For example, for traffic identification data representing a stop line and traffic identification data representing a zebra crossing, a positional relationship label indicating that the zebra crossing is located in front of the stop line is added; lane-line positional relationships may likewise be labeled sequentially in the order in which the traffic identification data representing the lane lines is recognized.
Step S26: and carrying out track prediction according to the obstacle data representing the obstacle and the local map to obtain a track prediction result.
In this embodiment, the obstacle data and the local map may be input to the trajectory prediction module to obtain the trajectory prediction result it outputs. The trajectory prediction module may be implemented, for example, with TNT or DenseTNT. Specifically, the trajectory prediction module may obtain, based on the local semantic map, lane data representing the lane the vehicle is in, the number of lanes, the distance to the next intersection, the navigation instruction information for the next intersection, the intersection lane allocation relationship, dynamic obstacle data, static obstacle data, and so on, and produce multiple predicted trajectories of the vehicle. The navigation instruction information may include, but is not limited to, going straight, turning left, turning right, and making a U-turn. The intersection lane allocation relationship refers to which driving behaviors are allowed in which lanes; for example, the rightmost lane is a right-turn lane, the middle lane is a straight-through lane, and the leftmost lane is a left-turn lane. The trajectory prediction module can predict multiple trajectories from these inputs, score them, and select the predicted trajectory with the highest score as the trajectory prediction result.
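The final scoring-and-selection step can be sketched as follows; the candidate fields and the linear weighting are assumptions for illustration and do not reflect the internals of TNT or DenseTNT.

```python
def select_predicted_trajectory(candidates):
    """Pick the highest-scoring candidate trajectory as the prediction result.

    candidates: list of dicts like
        {"trajectory": [...points...],
         "goal_prob": 0.7,        # probability of the predicted end point
         "comfort": 0.9,          # smoothness / feasibility score in [0, 1]
         "rule_compliance": 1.0}  # consistency with lane allocation and navigation
    The linear weighting below is only an assumption for illustration.
    """
    def score(c):
        return 0.6 * c["goal_prob"] + 0.2 * c["comfort"] + 0.2 * c["rule_compliance"]

    best = max(candidates, key=score)
    return best["trajectory"], score(best)
```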
Step S27: and carrying out virtual behavior planning and virtual motion planning of the vehicle based on the fused driving track data, the road object of the local semantic map, the navigation path information of the vehicle and the track prediction result.
In some embodiments, the vehicle control system may input the fused driving track data of the vehicle, the road object of the local semantic map, the navigation path information of the vehicle, and the track prediction result to an automatic driving decision planning module of the vehicle control system, which outputs the virtual behavior plan and the virtual motion plan. The virtual behavior plan and virtual motion plan output by the automatic driving decision planning module can be used to realize the track prediction result. Specifically, for example, if the lane in which the vehicle is located does not match the target lane indicated by the track prediction result, the automatic driving decision planning module needs to give a virtual behavior plan that requires changing lanes. If traffic identification information representing a speed limit exists, the automatic driving decision planning module can judge, according to the vehicle speed, whether to output a virtual behavior plan requiring deceleration or acceleration. The automatic driving decision planning module can output a virtual behavior plan deciding on deceleration or braking according to the state of the traffic lights and the distance from the vehicle to the intersection. For intersections, the automatic driving decision planning module can output, based on set rules, a virtual behavior plan for the target lane after switching to the next road. For example, the set rules may include: by default, a right turn enters the first lane on the right; after a left turn, the last lane counted from left to right is entered; when going straight, the lane with the smaller lateral distance from the current lane is entered; and the like. Furthermore, the driving style of the driver can be taken into account; in particular, when making a decision, a plan similar to the driving style of the vehicle owner is selected from a plurality of plans. For some road segments marked as high driving difficulty, the automatic driving decision planning module may output a cautious virtual behavior plan. In particular, a cautious virtual behavior plan may be one that is deemed to have higher safety. For example, in the case of a dynamic obstacle in front of the vehicle, the automatic driving decision planning module may output a virtual behavior plan that controls the vehicle to stop and wait. In the case where the vehicle is traveling through an intersection, the automatic driving decision planning module may output a virtual behavior plan that controls the vehicle to avoid changing lanes.
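A highly simplified, rule-based sketch of the kind of virtual behavior decisions described above is given below in Python. The rule order, thresholds, and behavior labels are illustrative assumptions rather than the actual decision logic of the automatic driving decision planning module.

```python
def plan_virtual_behavior(current_lane, target_lane, speed, speed_limit,
                          light_state, dist_to_stop_m, has_dynamic_obstacle_ahead):
    """Rule-based sketch of the behavior decisions described in the text.
    All thresholds and return labels are illustrative assumptions."""
    if has_dynamic_obstacle_ahead:
        return "stop_and_wait"
    # brake for a red light when the remaining distance is short (heuristic)
    if light_state == "red" and dist_to_stop_m < max(10.0, speed * 2.0):
        return "brake"
    # change lanes if the current lane does not match the predicted target lane
    if current_lane != target_lane:
        return "change_lane_left" if target_lane < current_lane else "change_lane_right"
    # adjust speed relative to a detected speed limit, if any
    if speed_limit is not None:
        if speed > speed_limit:
            return "decelerate"
        if speed < 0.8 * speed_limit:
            return "accelerate"
    return "keep_lane"

print(plan_virtual_behavior(2, 1, 12.0, 16.7, "green", 120.0, False))
```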
Furthermore, the automatic driving decision planning module can perform virtual motion planning based on the result of the virtual behavior planning. Specifically, for example, virtual motion planning may be performed by combining the virtual behavior plan with the obstacle data. Specifically, for example, the automatic driving decision planning module may adopt a split-horizon virtual motion plan or a blended-horizon virtual motion plan. In some implementations, the automatic driving decision planning module may employ MPC (Model Predictive Control) for virtual motion planning.
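For illustration only, the following Python sketch shows a toy receding-horizon longitudinal planner in the spirit of MPC: it enumerates a few constant-acceleration candidates over a short horizon, scores each rollout by tracking error plus control effort, and applies the first control of the best candidate. A production MPC would instead solve a constrained optimization over a fuller vehicle model; the candidate set, weights, and horizon here are assumptions.

```python
def mpc_longitudinal(v0, v_ref, horizon=10, dt=0.1,
                     accel_candidates=(-3.0, -1.5, 0.0, 1.0, 2.0)):
    """Toy receding-horizon longitudinal planner: enumerate a few constant
    accelerations, score each rollout, apply the best one this cycle."""
    best_a, best_cost = 0.0, float("inf")
    for a in accel_candidates:
        v = v0
        cost = 0.0
        for _ in range(horizon):
            v = max(0.0, v + a * dt)               # simple kinematic rollout
            cost += (v - v_ref) ** 2 + 0.1 * a ** 2
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a                                   # acceleration applied this cycle

print(mpc_longitudinal(v0=8.0, v_ref=13.9))         # accelerate toward ~50 km/h
```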
In this embodiment, the automatic driving decision planning module may output a virtual behavior plan and a virtual motion plan once every time period. The time period may be 80 milliseconds, 100 milliseconds, 150 milliseconds, or the like.
Step S28: and comparing the virtual control data based on the virtual behavior planning and the virtual motion planning with actual control data of a driver to obtain difference data so as to modify the automatic driving decision planning module according to the difference data.
In the present embodiment, the difference data may be used to represent the difference between controlling the vehicle based on the virtual control data and the driver controlling the vehicle. In particular, the difference data may comprise behavior planning difference data and motion planning difference data.
In this embodiment, the vehicle control system may execute virtual control for the vehicle based on the virtual behavior plan and the virtual motion plan, resulting in virtual control data. Specifically, for example, the vehicle control system may simulate the lateral control and the longitudinal control of the vehicle based on the virtual behavior plan to obtain virtual lateral control data and virtual longitudinal control data. In the present embodiment, the vehicle control system may simulate the control represented by the virtual lateral control data and the virtual longitudinal control data in combination with the vehicle kinematics. Further, control over the vehicle may be simulated according to the virtual control data, and virtual travel track data may be generated in the same manner in which the target fusion travel track data of the vehicle is formed according to the technical scheme described in the foregoing embodiments. Specifically, for the virtual control data of each time period, virtual driving track data corresponding to that virtual control data may be obtained.
In this embodiment, the vehicle control system may read the actual lateral control data and the actual longitudinal control data generated by the driver actually controlling the vehicle, and generate the difference data. Specifically, the virtual lateral control data can be compared with the actual lateral control data, the virtual longitudinal control data can be compared with the actual longitudinal control data, and the virtual lateral difference data and the virtual longitudinal difference data of the virtual behavior plan can be obtained according to the comparison results. Further, the automatic driving decision planning module is adjusted according to the differences represented by the virtual lateral difference data and the virtual longitudinal difference data, respectively. Specifically, for example, the control direction indicated by the virtual lateral control data may be compared with the control direction of the actual lateral control data for the vehicle, and if the two directions are opposite, difference data indicating a difference in direction may be recorded. For example, the acceleration or deceleration indicated by the virtual longitudinal control data may be compared with the acceleration or deceleration of the actual longitudinal control data for the vehicle, and if the two are not identical, difference data indicating a difference in acceleration or deceleration may be recorded. Of course, in some embodiments, the difference data may represent the number of times such a difference in direction occurs.
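The comparison of virtual and actual control described above can be sketched as follows in Python, where a lateral difference is recorded when the steering directions are opposite and a longitudinal difference when one side accelerates while the other decelerates. The sign convention and the small threshold are assumptions of the example.

```python
def compute_difference_data(virtual_steer, actual_steer,
                            virtual_accel, actual_accel, eps=1e-3):
    """Count lateral/longitudinal differences between virtual and actual
    control for one time period, following the comparison sketched above.
    Sign conventions and the eps dead-band are assumptions."""
    diff = {"lateral": 0, "longitudinal": 0}
    if virtual_steer * actual_steer < -eps:        # opposite steering directions
        diff["lateral"] += 1
    if virtual_accel * actual_accel < -eps:        # accelerate vs. decelerate
        diff["longitudinal"] += 1
    return diff

print(compute_difference_data(virtual_steer=0.1, actual_steer=-0.2,
                              virtual_accel=0.5, actual_accel=0.6))
```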
In some embodiments, the automatic driving decision planning module may be adjusted according to the difference data so that the virtual control data approaches the actual control data of the driver, reducing the deviation between the virtual behavior plan and the actual driving behavior of the driver. Specifically, the virtual control data may form virtual control travel track data of the vehicle, and the virtual control travel track data may have a lateral speed distribution and a longitudinal speed distribution; by adjusting the automatic driving decision planning module, the lateral speed distribution and the longitudinal speed distribution of the virtual control travel track data may be made the same as those formed by the driver actually driving the vehicle.
In this embodiment, after a time period ends, if it is determined that there is a difference between the virtual behavior plan of that time period and the actual driving behavior of the driver, the time period may be taken as a target time period, and the target fusion driving track data and the track prediction result corresponding to the target time period may be recorded. In this way, the automatic driving decision planning module can compare the currently input target fusion driving track data and track prediction result with the previously recorded target fusion driving track data and track prediction result corresponding to the target time period. When the current target fusion driving track data and track prediction result are similar to the recorded ones, the difference data corresponding to the target time period is used as one input of the automatic driving decision planning module, so that the automatic driving decision planning module can take the previous difference data into account when generating the virtual behavior plan, and the resulting virtual behavior plan can be closer to the actual driving behavior of the driver. In some embodiments, because the current target fusion driving track data and track prediction result may only be similar to, rather than identical to, those of the target time period, the adjustment amplitude of the automatic driving decision planning module might otherwise be too large, causing the output virtual behavior plan to deviate further from the actual driving behavior of the driver. To avoid this, the difference data for the target time period may be weighted before being input to the automatic driving decision planning module. Specifically, for example, the weight for weighting the difference data may be 0.2, 0.3, 0.35, or the like.
In the present embodiment, if it is determined that, while the vehicle is traveling on the road indicated by the navigation path information, there is no difference between the virtual behavior plan output by the automatic driving decision planning module and the actual driving behavior of the driver, it may be further determined whether a correction of the virtual motion plan is required. Specifically, according to the virtual behavior plan of each time period, the control of the vehicle is simulated to obtain the virtual control travel track data of the vehicle in each time period, and the actual target fusion travel track data of the vehicle corresponding to each time period is obtained. In this way, the absolute pose error (APE) between the virtual control travel track data and the actual target fusion travel track data of the vehicle is further calculated, and the automatic driving decision planning module can be revised according to the absolute pose error. Of course, in some embodiments, the automatic driving decision planning module may be revised only if the absolute pose error is greater than a set pose error threshold. Of course, in some embodiments, the absolute trajectory error (ATE), the relative pose error (RPE), and the relative trajectory error (RTE) between the virtual control travel track data and the actual target fusion travel track data of the vehicle may also be calculated and the automatic driving decision planning module revised accordingly.
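As an illustration of the pose-error check, the following Python sketch computes a positional absolute pose error (APE) between the virtual control travel track and the actual target fusion travel track, assuming both are sampled at the same timestamps; a full APE/ATE/RPE/RTE evaluation would also consider orientation. The threshold value is an assumption.

```python
import numpy as np

def absolute_pose_error(virtual_xy, actual_xy):
    """Per-point positional APE between the virtual control travel track and
    the actual fused travel track, both given as (N, 2) position arrays
    sampled at matching timestamps (an assumption of this sketch)."""
    virtual_xy = np.asarray(virtual_xy, dtype=float)
    actual_xy = np.asarray(actual_xy, dtype=float)
    errors = np.linalg.norm(virtual_xy - actual_xy, axis=1)
    return errors.mean(), errors.max()

mean_ape, max_ape = absolute_pose_error([[0, 0], [1, 0.1], [2, 0.3]],
                                        [[0, 0], [1, 0.0], [2, 0.1]])
POSE_ERROR_THRESHOLD = 0.15                         # assumed threshold, metres
needs_revision = mean_ape > POSE_ERROR_THRESHOLD    # revise planner only above threshold
print(mean_ape, max_ape, needs_revision)
```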
Step S29: and generating automatic driving confidence information corresponding to the single-pass map information according to the difference data.
In the present embodiment, the automatic driving confidence information corresponding to the one-way map information may be generated, based on the difference data, for the one-way map information divided into the acquaintance road groups. Specifically, for example, the automatic driving confidence information of the one-way map information may be generated according to a confidence index generation rule based on the difference data of the road, represented by the one-way map information, on which the vehicle repeatedly travels. Specifically, for example, the confidence index generation rule may include: when the difference data of the vehicle traveling on the road represented by the one-way map information is equal to a first specified threshold value, the automatic driving confidence information of the road represented by the one-way map information is considered to be excellent; when the difference data is greater than the first specified threshold value and less than or equal to a second specified threshold value, the automatic driving confidence information is considered to be good; when the difference data is greater than the second specified threshold value and less than or equal to a third specified threshold value, the automatic driving confidence information is considered to be normal; and when the difference data is greater than the third specified threshold value, the automatic driving confidence information is considered to be poor. In some embodiments, the number of differences per kilometer between the virtual behavior plan and the actual driving behavior of the driver can be counted, according to the difference data, while the vehicle travels on the road represented by the one-way map information. In this case, the first specified threshold may be 0; if the number of differences is equal to the first specified threshold, it may indicate that the automatic driving confidence information corresponding to the one-way map information is excellent, which indicates that the automatic driving capability is excellent. The second specified threshold may be 1; when the number of differences is greater than 0 and less than or equal to 1, it may indicate that the automatic driving confidence information corresponding to the one-way map information is good, which indicates that the automatic driving capability is good. The third specified threshold may be 2; when the number of differences is greater than 1 and less than or equal to 2, it may indicate that the automatic driving confidence information corresponding to the one-way map information is normal, which indicates that the automatic driving capability is normal. When the number of differences is greater than 2, it may indicate that the automatic driving confidence information corresponding to the one-way map information is poor, which indicates that the automatic driving capability is poor.
In some embodiments, the automatic driving confidence information may be a specific value. Specifically, the maximum value of the automatic driving confidence information may be set to 100. When the number of differences is 0, the automatic driving confidence information corresponding to the one-way map information may be taken as 100, which indicates that the automatic driving capability is excellent. When the number of differences is greater than 0 and less than or equal to 1, it may be mapped to a value less than 100 and greater than or equal to 90, which indicates that the automatic driving capability is good. When the number of differences is greater than 1 and less than or equal to 2, it may be mapped to a value less than 90 and greater than or equal to 80, which indicates that the automatic driving capability is normal. When the number of differences is greater than 2, it may be mapped to a value less than 80, which indicates that the automatic driving capability is poor. In some embodiments, weights may be set for the value of the automatic driving confidence information in combination with the driving difficulty. Specifically, when the automatic driving difficulty of the one-way map information is high, a first weight may be set for the calculated automatic driving confidence information. When the automatic driving difficulty of the one-way map information is normal, a second weight may be set for the automatic driving confidence information. When the automatic driving difficulty of the one-way map information is low, a third weight may be set for the automatic driving confidence information. The first weight is greater than the second weight and greater than the third weight. When the automatic driving confidence information is 100, no weight may be set. In this way, the automatic driving difficulty can be reflected in the automatic driving confidence information, so that the safety of the automatic driving of the vehicle can be represented more accurately.
In this embodiment, a corresponding color may be set for the one-way map information according to the automatic driving confidence information. Specifically, when the terminal of the vehicle displays the road represented by the one-way map information through the semantic map, the automatic driving confidence information of the one-way map information may be represented by the color. Specifically, for example, for one-way map information whose automatic driving confidence information is excellent, the color may be dark green; for one-way map information whose automatic driving confidence information is good, the color may be green; for one-way map information whose automatic driving confidence information is normal, the color may be yellow; and for one-way map information whose automatic driving confidence information is poor, the color may be red.
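The confidence index generation rule, the numeric mapping, the difficulty weighting, and the display color can be combined into one illustrative Python sketch as follows. The thresholds 0/1/2 and the colors follow the text; the numeric values inside each band, the weight values, and the interpretation of the weight as scaling a penalty are assumptions.

```python
def autopilot_confidence(diff_per_km, difficulty="normal"):
    """Map the per-kilometre difference count to a rating, a score and a
    display colour following the 0/1/2 thresholds described above."""
    if diff_per_km <= 0:
        rating, score = "excellent", 100.0
    elif diff_per_km <= 1:
        rating, score = "good", 90.0 + 10.0 * (1 - diff_per_km)
    elif diff_per_km <= 2:
        rating, score = "normal", 80.0 + 10.0 * (2 - diff_per_km)
    else:
        rating, score = "poor", max(0.0, 80.0 - 10.0 * (diff_per_km - 2))
    # assumed interpretation: a larger weight for harder roads makes the same
    # shortfall from 100 cost more; a perfect score is left unweighted
    weights = {"high": 1.5, "normal": 1.0, "low": 0.5}
    if score < 100.0:
        score = max(0.0, 100.0 - weights[difficulty] * (100.0 - score))
    colour = {"excellent": "dark green", "good": "green",
              "normal": "yellow", "poor": "red"}[rating]
    return rating, round(score, 1), colour

print(autopilot_confidence(0.5, difficulty="high"))
```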
In some implementations, the shadow mode may have a verification period. Specifically, the duration of the verification period may be one week. Of course, the duration of the verification period may also be 10 days, 15 days, 3 days, or the like. The vehicle control system may generate the automatic driving confidence information for the navigation path information based on the virtual driving data of the last two days of the verification period. In this way, the automatic driving confidence information can well represent the automatic driving capability of the vehicle. Of course, the vehicle may also provide a setting function through the display interface, and the driver may choose whether or not to let the vehicle control system learn his or her driving style.
Please refer to fig. 3. The automatic driving path recommendation phase may include the following steps.
Step S31: and in the process of driving the vehicle by the driver, matching in the acquaintance road group according to the current positioning data of the vehicle, and obtaining one-way map building information matched with the current positioning data.
In this embodiment, the vehicle control system may determine, according to the current positioning data of the vehicle, whether the vehicle is traveling on a road corresponding to one-way map information in the acquaintance road group. When it is determined that the vehicle is traveling on the road corresponding to the one-way map information, that one-way map information can be used as the target one-way map information. Of course, the road represented by the target one-way map information needs to cover the current position of the vehicle.
In this embodiment, after the vehicle control system obtains the target one-way map information, the driver may be reminded that the vehicle control system has matched the current road to the target one-way map information and therefore has a certain automatic driving capability on this road. Specifically, the driver can be reminded by voice.
Step S32: and displaying the automatic driving confidence information of the target navigation path information on a terminal interface of the vehicle.
In the present embodiment, the vehicle control system may display a general navigation map on a terminal interface of the vehicle and display automatic driving confidence information of the target navigation path information. Of course, the established semantic map may also be displayed, as well as the presentation of autopilot confidence information. Specifically, for example, the value of the autopilot confidence information may be displayed directly on the terminal interface. Of course, a color corresponding to the automatic driving confidence information may be displayed. Specifically, for example, in a common navigation map displayed on a terminal interface, the road color represented by the corresponding target one-way map information is the color corresponding to the automatic driving confidence information of the target one-way map information.
In some cases, when the automatic driving capability represented by the automatic driving confidence information of the target one-way map information is normal or poor, the driver may be alerted by voice to pay close attention if the automatic driving of the vehicle is started.
Step S33: and starting automatic driving according to the received instruction of the driver.
In this embodiment, the vehicle control system may interact with the driver through voice or through buttons on the display interface, and when a confirmed instruction to start automatic driving issued by the driver is obtained, automatic driving of the vehicle is started.
Please refer to fig. 4. The autopilot phase may include the following steps.
Step S41: and in the running process of the vehicle, combining the satellite positioning data and the inertial navigation positioning data of the vehicle to obtain the current positioning data of the vehicle.
In this embodiment, the vehicle control system may continuously combine the satellite positioning data and the inertial navigation positioning data of the vehicle during the running of the vehicle to obtain the current positioning data. Specifically, reference may be made to the foregoing embodiments, and details are not repeated.
Step S42: and identifying the obstacle and the traffic sign based on the acquired current environment data to obtain obstacle data representing the obstacle and traffic sign data representing the traffic sign.
In this embodiment, specifically, recognition of obstacles, aerial traffic identifications, ground traffic identifications, and the like may be performed based on the current environment data. Specifically, reference may be made to the foregoing embodiments for comparison and explanation, and no further description is given.
Step S43: and reading a local semantic map in a specified position range from the stored semantic map according to the current positioning data of the vehicle.
In this embodiment, the semantic map engine of the vehicle control system may read the local semantic map from the semantic map according to the current positioning data. Specifically, reference may be made to the foregoing embodiment for comparison and explanation, and no further description is given.
Step S44: and generating a current fusion running track of the vehicle by combining the local semantic map, the current environment data and the current positioning data.
In this embodiment, the current environment data and the current positioning data collected during the running process of the vehicle may be fused with the local semantic map to generate a current fused running track of the vehicle. Specifically, reference may be made to the foregoing embodiments for comparison and explanation, and no further description is given.
Step S45: and carrying out local semantic mapping based on the ground traffic identification data identified from the current environment data to obtain a local map.
In this embodiment, local semantic mapping may be performed according to ground traffic identification data identified from current environmental data, to obtain a local map. Specifically, reference may be made to the foregoing embodiments for comparison and explanation, and no further description is given.
Step S46: and carrying out track prediction according to the obstacle data representing the obstacle and the local map to obtain a track prediction result.
In this embodiment, the track prediction module may perform track prediction according to the obstacle data and the local map that represent the obstacle, so that the automatic driving decision planning module performs final behavior planning and motion planning. Specifically, reference may be made to the foregoing embodiments for comparison and explanation, and no further description is given.
Step S47: and carrying out behavior planning and motion planning of the vehicle based on the fused driving track data, the road object of the local semantic map, the navigation path information of the vehicle and the track prediction result so as to control the vehicle to drive according to the behavior planning and the motion planning.
In this embodiment, the automatic driving decision planning module of the vehicle control system may control the vehicle to implement automatic driving according to the generated behavior plan and the motion plan.
One embodiment of the present specification also provides a vehicle control apparatus. As shown in fig. 5, the vehicle control apparatus may include the following modules.
The storage module is used for storing the collected positioning data and environment data of the vehicle in the running process of the vehicle.
And the identification module is used for identifying traffic identification information of the corresponding positioning data from the stored environment data.
The building module is used for combining the traffic identification information and the positioning data, building and storing a semantic map of the specified road corresponding to the positioning data, wherein the semantic map is used for the vehicle to execute automatic driving on the specified road.
The specific functions and effects achieved by the vehicle control device may be explained with reference to other embodiments of the present specification, and will not be described herein. The respective modules in the vehicle control apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in hardware or independent of a processor in the computer equipment, and can also be stored in a memory in the computer equipment in a software mode, so that the processor can call and execute the operations corresponding to the modules.
Referring to fig. 6, the embodiment of the present disclosure further provides a control method of a vehicle. The control method of the vehicle may be applied to a vehicle. The control method of the vehicle may include the following steps.
Step S50: acquiring positioning data and environment data in the process that the vehicle runs from a first position to a second position; wherein the positioning data includes information indicating a position of the vehicle at a certain point in the traveling process, and the environment data includes information indicating a surrounding environment of the vehicle at a certain point in the traveling process; and storing the positioning data and the environment data.
In the present embodiment, the first position may be a start position of one trip of the vehicle. The second position may be an end position of a stroke. The vehicle travels from the first location to the second location and generates positioning data and environmental data by an electronic control unit of the vehicle. The vehicle control system of the vehicle may constantly store the positioning data and the environment data correspondingly.
Step S51: based on the environmental data, traffic identification information related to the location of the vehicle is acquired.
In this embodiment, the vehicle control system may recognize traffic identification information from the environment data. Specifically, the traffic identification information is identified from the environmental data, and may be explained in comparison with the foregoing embodiments, which is not described in detail.
Step S52: and acquiring first self-built map information at least based on the traffic identification information.
In the present embodiment, the first self-built map information is formed by integrating the traffic identification information identified from the environmental data with the positioning data corresponding to the acquisition time. In some embodiments, the first self-built map information may be the one-way map information described in the previous embodiments. Of course, those skilled in the art may also form the first self-built map information in other ways after reading the embodiments of the present disclosure, which will not be described in detail herein.
Step S53: acquiring second self-building map information according to the first self-building map information corresponding to the running times more than the first threshold value; wherein the path of each of the number of runs greater than the first threshold includes at least one identical road segment; wherein the second self-built map information represents a map with higher accuracy than the first self-built map information.
In this embodiment, the vehicle may travel on the road section indicated by the first self-built map information multiple times. Specifically, for example, a driver may often drive the vehicle to work and back home. In that case, a piece of first self-built map information can be formed each time the driver drives the vehicle to work, and another piece each time the driver drives home.
In this embodiment, the first threshold value may be set as a critical value for determining whether the road section indicated by the first self-built map information can be treated as an acquaintance road. That is, first self-built map information whose number of times of being traveled is greater than the first threshold value may be recognized as an acquaintance road and divided into an acquaintance road group. The second self-built map information may be generated correspondingly for the first self-built map information identified as an acquaintance road. The second self-built map information may be a self-built map established by the vehicle control system. Specifically, for example, the second self-built map information may be a semantic map that the vehicle control system builds based on SLAM (Simultaneous Localization and Mapping). In particular, it can be explained in comparison with the previous embodiments. Of course, the second self-built map information may not be limited to a semantic map, and a person skilled in the art may implement the technical solution described in this specification according to known technology; such modifications and variations shall fall within the scope of this specification.
In the present embodiment, both the first self-built map information and the second self-built map information may represent a map built in the vehicle control system. The map accuracy of the map represented by the second self-built map information may be higher than that of the first self-built map information. In general, generating a map with higher accuracy requires a larger amount of computation and therefore more computational resources. In this embodiment, the second self-built map information with higher map accuracy may be generated only for the routes identified as acquaintance roads, so that the building of high-accuracy maps is more targeted. That is, the driver travels on these road sections more often, so that the generated second self-built map information also has a higher utilization rate.
In this embodiment, the first self-built map information may be formed by identifying the traffic identification information from the environmental data and storing the traffic identification information integrally with the positioning data corresponding to the acquisition time, so that the first self-built map information is mainly a record of the collected positioning data and environment data. The second self-built map information may be a semantic map constructed based on SLAM. The second self-built map information may contain the data objects further generated, for the acquaintance road, from the first self-built map information. The data objects may include, but are not limited to, road objects, traffic identification objects, and the like. Furthermore, in the construction of the second self-built map information, the data of a plurality of pieces of first self-built map information can be integrated, so that the generated data objects can have more accurate content and positions. As such, the second self-built map information may be a simulation of the actual road segment, so that the accuracy of the second self-built map information is higher than that of the first self-built map information.
Step S54: and storing the second self-built map information.
In the present embodiment, the vehicle control system may store the acquired second self-built map information in a memory of the vehicle. In this way, the vehicle itself builds up a map for the acquaintance road.
Step S55: and controlling the vehicle to execute automatic driving on the same road section at least based on the second self-built map information.
In the present embodiment, when the vehicle travels on the road section of the acquaintance road again, the vehicle may be controlled to perform automatic driving based on the second self-built map information. Specifically, for example, the second self-built map information may be used as part of the input data of the automatic driving decision planning module of the vehicle control system, so that the automatic driving decision planning module outputs a behavior plan and a motion plan using the second self-built map information, thereby realizing automatic driving of the vehicle on the acquaintance road. Specifically, for the process in which the vehicle control system performs automatic driving based on the second self-built map information, reference may be made to the description of automatic driving based on the semantic map in the foregoing embodiments, and details are not repeated.
In some embodiments, the vehicle is provided with a plurality of electronic control units, and in the step of acquiring the positioning data and the environmental data, the positioning data and the environmental data are generated by the plurality of electronic control units of the vehicle.
In this embodiment, the vehicle may be provided with a plurality of electronic control units, each of which may implement a function required by the vehicle. An electronic control unit can also be used in cooperation with a sensor of the vehicle: data are collected through the sensor, and the electronic control unit processes the data. Specifically, for example, the vehicle may be provided with a plurality of environmental sensors, and the sensor signals acquired by the environmental sensors are provided to the respective electronic control units, which process the sensor signals to obtain the environment data. The vehicle may also be provided with an electronic control unit that generates the positioning data. Specifically, this electronic control unit may be a positioning module and may generate positioning data from GPS signals or Beidou signals. The electronic control units may further include an inertial navigation positioning module.
In this embodiment, both the positioning data and the environment data are processed by the electronic control units deployed in the vehicle, so that the vehicle control system needs to interact less with servers in the network, which improves the autonomy of vehicle control. Further, after the vehicle control system acquires the environment data and the positioning data from its own electronic control units, it can generate the semantic map by itself without acquiring a high-precision map from a server.
In some embodiments, the first self-built map information corresponds to a first location as a starting point and a second location as an ending point; dividing acquaintance road groups aiming at the first self-built map information; the first self-built map information contained in the same acquaintance road group accords with a specified association relation; wherein, the appointed association relation comprises: the first positions of the first self-building map information conform to a first set distance condition, and the second positions of the first self-building map information conform to a second set distance condition; or, the coincidence degree of the road sections related between the first self-built map information is higher than a specified coincidence degree threshold value.
In this embodiment, the specified association relation may include: the starting points represented by the starting point information of the pieces of one-way map information conform to a first set distance condition, and the end points represented by the end point information of the pieces of one-way map information conform to a second set distance condition. The first set distance condition may represent a condition that the distance between the starting points needs to satisfy, and the second set distance condition may represent a condition that the distance between the end points needs to satisfy. The first set distance condition and the second set distance condition may be the same or different, and may be set according to actual requirements. Specifically, for example, the first set distance condition may be less than 200 meters, and the second set distance condition may be less than 200 meters. Alternatively, the first set distance condition may be less than 100 meters and the second set distance condition may be less than 150 meters. In some embodiments, the specified association relation may also include that the path overlap ratio is greater than a specified overlap ratio threshold; the overlap ratio between paths is formed jointly by the starting points, route points, and end points of the pieces of path information. The specified overlap ratio threshold may be 70%, 75%, 80%, or the like. In some embodiments, an acquaintance road group may be understood as meaning that the vehicle has traveled at least once on the road corresponding to the one-way map information in the group. Of course, in some embodiments, the number of times the vehicle has traveled on the road corresponding to the one-way map information may be limited, and the one-way map information may be divided into the acquaintance road group only when that number is greater than a specified travel number threshold. A possible check of this association relation is sketched after this paragraph.
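The sketch below, in Python, treats two trips as belonging to the same acquaintance road group if their start and end points satisfy the set distance conditions, or if their path overlap ratio exceeds the specified threshold. The overlap-ratio definition (shared waypoints within a 50 m match radius, divided by the waypoint count of the shorter route) and the use of a local metric frame are assumptions.

```python
import math

def _dist_m(p, q):
    """Planar distance in metres between two (x, y) points already projected
    into a local metric frame (lat/lon handling is omitted here)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def same_acquaintance_group(trip_a, trip_b,
                            start_dist_m=200.0, end_dist_m=200.0,
                            overlap_threshold=0.7):
    """Check the specified association relation between two trips: close
    start/end points, or a route overlap ratio above the threshold."""
    if (_dist_m(trip_a["start"], trip_b["start"]) <= start_dist_m and
            _dist_m(trip_a["end"], trip_b["end"]) <= end_dist_m):
        return True
    short, long_ = sorted((trip_a["points"], trip_b["points"]), key=len)
    matched = sum(1 for p in short if any(_dist_m(p, q) <= 50.0 for q in long_))
    return matched / max(1, len(short)) >= overlap_threshold

trip_home = {"start": (0, 0), "end": (5000, 300),
             "points": [(0, 0), (2500, 100), (5000, 300)]}
trip_work = {"start": (30, 10), "end": (5040, 320),
             "points": [(30, 10), (2520, 120), (5040, 320)]}
print(same_acquaintance_group(trip_home, trip_work))
```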
In some embodiments, the step of controlling the vehicle to perform autonomous driving on the same road segment based at least on the second self-built map information includes: generating a fusion running track of the vehicle running on the road section based on the second self-built map information; analyzing the fused driving track to obtain the longitudinal and/or transverse target speed distribution of the vehicle relative to the road section; and adjusting an automatic driving decision planning module of the vehicle so that the speed distribution of the vehicle tends to the target speed distribution in the process that the vehicle automatically drives on the road section corresponding to the acquaintance road group.
In the present embodiment, the fused travel track of the vehicle traveling on the road section may be generated based on the second self-built map information. Specifically, the second self-built map information may be one-way map information. Multiple pieces of one-way map information in the same acquaintance road group may relate to at least partially identical road segments. At least for the identical road segments, a fused positioning track of the vehicle can be generated. Specifically, the manner of establishing the fused positioning track can be explained in comparison with the foregoing embodiments and will not be repeated.
In this embodiment, after the fused travel track is generated, the target speed distribution of the vehicle in the longitudinal direction and/or the transverse direction with respect to the road section may be obtained by analyzing the fused travel track. Therefore, the automatic driving decision planning module of the vehicle can be adjusted, so that the speed distribution of the vehicle tends to the target speed distribution in the process that the vehicle automatically drives on the road section corresponding to the acquaintance road group. In some embodiments, the driving style of the driver may be determined according to the target speed distribution, and further, when the automatic driving decision planning module is adjusted, the automatic driving decision planning module may learn the driving style of the driver, and the speed distribution of the vehicle in the driving process on the acquaintance road tends to the target speed distribution.
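As one possible way to analyze the fused driving track, the Python sketch below derives longitudinal and transverse speed statistics from positions sampled at a fixed rate in a lane-aligned frame. The choice of frame, sampling interval, and statistics is an assumption of the example.

```python
import numpy as np

def target_speed_profile(track, dt=0.1):
    """Derive longitudinal/transverse speed statistics from a fused travel
    track, given as an (N, 2) array of positions in a lane-aligned frame
    (x along the lane, y across it) sampled every dt seconds."""
    track = np.asarray(track, dtype=float)
    vel = np.diff(track, axis=0) / dt                 # per-step velocity
    v_lon, v_lat = vel[:, 0], vel[:, 1]
    return {
        "lon_mean": float(v_lon.mean()), "lon_std": float(v_lon.std()),
        "lat_mean": float(v_lat.mean()), "lat_std": float(v_lat.std()),
    }

track = np.stack([np.linspace(0, 100, 101),
                  0.02 * np.sin(np.linspace(0, 6, 101))], axis=1)
print(target_speed_profile(track))
```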
In some embodiments, the step of generating the fused driving track corresponding to the first self-building map information based on the second self-building map information may include: reading a local map in a specified position range from the stored second self-built map information according to the positioning data of the vehicle; and generating a fusion running track of the vehicle by combining the local map, the environment data and the positioning data.
In this embodiment, in the shadow mode verification stage, the local map may be obtained by positioning within the stored second self-built map information according to the positioning data of the vehicle. Specifically, a local map within a specified position range centered on the position located in the second self-built map information may be read. In one embodiment, the second self-built map information is a semantic map. A local semantic map within the specified position range is read from the stored semantic map according to the positioning data of the vehicle, and the fused driving track of the vehicle is generated by combining the local semantic map, the environment data, and the positioning data. Specifically, reference may be made to the foregoing embodiments, and details are not repeated.
In some embodiments, the step of adjusting the automatic driving decision planning module of the vehicle to make the speed distribution of the vehicle trend toward the target speed distribution in the process of automatically driving the vehicle on the road segment corresponding to the acquaintance road group may include: local mapping is carried out based on the ground traffic identification data identified from the environment data, and a local map is obtained; track prediction is carried out according to the obstacle data which are identified from the environment data and represent the obstacle and the local map, so as to obtain a track prediction result; based on the fusion driving track data, the road object of the local map, the navigation path information of the vehicle and the track prediction result, virtual behavior planning and virtual motion planning of the vehicle are carried out; and comparing the virtual control data based on the virtual behavior planning and the virtual motion planning with actual control data of a driver to obtain difference data so as to modify the automatic driving decision planning module according to the difference data.
The content of this embodiment may be explained in comparison with the description of the embodiments related to the shadow mode verification stage, and will not be described in detail.
In some embodiments, the control method of the vehicle may further include: determining the driving difficulty corresponding to the road section according to the target speed distribution and the set difficulty rule; wherein the driving difficulty comprises high difficulty or low difficulty; wherein, the setting difficulty rule includes: the longitudinal speed distribution indicates that the speed of the vehicle is smaller than a specified speed threshold value, and the driving difficulty corresponding to the road section is determined to be high difficulty; or the longitudinal speed distribution indicates that the speed distribution of the vehicle is uniform, the average speed is greater than the appointed speed threshold value, and the driving difficulty corresponding to the road section is determined to be low.
In the present embodiment, specifically, for example, if the longitudinal target speed distribution data of a road is smaller than a specified speed threshold, the road can be determined to be a frequently congested road section, and its driving difficulty can be determined to be high. Specifically, the specified speed threshold may be a relatively low speed, such as 30 km/h or 25 km/h. If the longitudinal target speed distribution data of a road section is maintained fairly uniformly at a relatively high speed and the average speed is greater than the specified speed threshold, the driving difficulty of the road section may be considered low.
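The set difficulty rule can be illustrated with the following Python sketch, which classifies a road section from its longitudinal speed samples. The uniformity test (a standard-deviation bound) and the intermediate "normal" label are assumptions; the text itself only defines the high-difficulty and low-difficulty cases.

```python
import statistics

def classify_driving_difficulty(lon_speeds_kmh, speed_threshold_kmh=30.0,
                                uniform_std_kmh=5.0):
    """Apply the set difficulty rule: mostly-slow traffic means high
    difficulty, a uniform profile with a high average means low difficulty."""
    mean_v = statistics.fmean(lon_speeds_kmh)
    std_v = statistics.pstdev(lon_speeds_kmh)
    if mean_v < speed_threshold_kmh:
        return "high"
    if std_v <= uniform_std_kmh and mean_v > speed_threshold_kmh:
        return "low"
    return "normal"                                   # assumed intermediate label

print(classify_driving_difficulty([55, 58, 60, 57, 59]))   # -> "low"
```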
In some embodiments, the control method of the vehicle further includes: generating automatic driving confidence information corresponding to the road segment in the second self-built map information according to the difference data; wherein, in the second self-built map information, the automatic driving confidence information is represented by the color of the road section.
In the present embodiment, the automatic driving confidence information corresponding to the link is generated, and can be explained in comparison with the embodiment of the automatic driving confidence information for the single-pass map information described above.
In this embodiment, a corresponding color may be set for the road section in the second self-built map information according to the automatic driving confidence information. Specifically, when the terminal of the vehicle displays the road section through the second self-built map information, the automatic driving confidence information of the road section may be represented by the color. Specifically, for example, for a road segment whose automatic driving confidence information is excellent, the color may be dark green; for a road segment whose automatic driving confidence information is good, the color may be green; for a road segment whose automatic driving confidence information is normal, the color may be yellow; and for a road segment whose automatic driving confidence information is poor, the color may be red.
Please refer to fig. 7. The embodiment of the specification also provides an automatic driving prompting method. The method is applied to the vehicle. The automatic driving hint method may include the following steps.
Step S60: a road segment is determined that the vehicle is traveling from a specified first location to a specified second location.
Step S61: acquiring automatic driving confidence information of at least part of sub-road sections in the road sections; the automatic driving confidence information is generated according to historical difference data corresponding to the sub-road section, and the historical difference data is obtained based on virtual control data generated by an automatic driving decision algorithm executing the sub-road section and actual control data executed by a driver driving the vehicle to pass through the sub-road section; the virtual control data is generated by processing the self-built map information, the environment data and the positioning data by the automatic driving decision algorithm to obtain a lane-level positioned target fusion running track of the vehicle and based on the target fusion running track; the self-built map information is map information constructed by the vehicle.
Step S62: and prompting the automatic driving confidence information.
In this embodiment, the self-created map information may be a semantic map generated by executing semantic mapping for the vehicle control system of the vehicle. Specifically, the second self-built map information disclosed in the foregoing embodiment may be compared with the second self-built map information disclosed in the foregoing embodiment.
In the present embodiment, the first position may be used to represent a start position of a stroke. The second position may be used to indicate an end position of travel. In some embodiments, the first location may be a location represented by current positioning data of the vehicle collected by the vehicle control system. The second location may be a target location entered by the driver into the vehicle control system. A road segment may be formed from the first location to the target location. The road segment may include a plurality of sub-road segments. Specifically, for example, there may be route points in the road section. Thus, the sub-road sections can be formed from the first position to the nearest passing point, between adjacent passing points, or between the last passing point and the second position. Of course, in some embodiments, each road involved in a road segment may also be considered a sub-road segment.
In the present embodiment, the difference data calculated when the vehicle historically traveled on a sub-road section may be used as the historical difference data for that sub-road section. Furthermore, the automatic driving confidence information of each sub-road section may be generated according to the description of the foregoing embodiments. The control system of the vehicle may run the automatic driving decision algorithm based at least on the environmental information while the driver is driving. In this way, the vehicle control system simulates the control process of autonomous driving and generates virtual control data. Thus, by comparing the virtual control data with the actual control data generated by the driver driving the vehicle, difference data indicating the degree of difference between the vehicle control system's automatic driving of the vehicle and the driver's driving of the vehicle can be obtained. In this manner, the vehicle control system may dynamically modify parameters in the automatic driving decision algorithm in the hope of reducing the degree of difference represented by the difference data, so that the vehicle control system can learn the driving habits of the driver.
In this embodiment, the autopilot decision algorithm may be integrated into the vehicle control system. Specifically, for example, an automatic driving decision planning module may be provided in the vehicle control system, and the automatic driving decision algorithm may be applied to the automatic driving decision planning module.
In some embodiments, only when the number of times the vehicle has traveled on a certain road section is greater than the specified travel number threshold may the vehicle control system generate the target fusion driving track based on the self-built map information using the stored positioning data and environment data, and generate the virtual control data according to the target fusion driving track based on the automatic driving decision algorithm. The virtual control data is compared with the stored actual control data of the driver to obtain the historical difference data, and the automatic driving confidence information is generated for the corresponding sub-road section. After the automatic driving confidence information is generated, it may be stored in a memory provided in the vehicle for later reading. The automatic driving confidence information may be updated as the vehicle continues to travel on the corresponding sub-road section.
The details of the technical solution related to this embodiment may be explained with reference to the related embodiments in the shadow mode verification stage, and will not be described in detail.
In the present embodiment, the vehicle control system may control the vehicle to present the driver with the automatic driving confidence information corresponding to the road section being traveled. Thus, the driver can judge for himself or herself whether to start automatic driving.
The details of the technical solution related to this embodiment may be explained by referring to the related embodiments in the aforementioned automatic driving route recommendation stage, and will not be described in detail.
In some embodiments, the step of prompting the autopilot confidence information may include: and controlling an on-board display of the vehicle, prompting a road section on which the vehicle can execute automatic driving, and prompting automatic driving confidence information of the road section corresponding to the road section.
In the present embodiment, the vehicle control system may control the in-vehicle display of the vehicle to prompt the road sections on which automatic driving can be performed and the automatic driving confidence information of those road sections. In this way, the driver can quickly learn the automatic driving capability of the vehicle for the road section being driven or the following road sections, which makes it convenient for the driver to decide whether to start automatic driving, or to plan to start automatic driving on a subsequent road section. Specifically, for example, the vehicle control system may control the in-vehicle display to display an ordinary navigation map, and the color of a road section in the ordinary navigation map may represent the automatic driving confidence information. Thus, after seeing the relevant interface, the driver can know on which road sections automatic driving can be performed and how well the vehicle can perform it.
The details of the technical solution related to this embodiment may be explained by referring to the related embodiments in the aforementioned automatic driving route recommendation stage, and will not be described in detail.
In some embodiments, the step of controlling the on-board display of the vehicle, prompting the vehicle to be able to perform the automatically driven road segment, and prompting the automatically driven confidence information of the road segment corresponding to the road segment may include: different road sections are distinguished, and the automatic driving confidence information of each road section is respectively prompted.
In the present embodiment, the automatic driving confidence information of different road segments is independent of each other. That is, the automatic driving confidence information of different road segments may or may not be the same. Therefore, prompting the automatic driving confidence information of each road section separately can accurately express the automatic driving capability of the vehicle control system on different road sections. This provides accurate information feedback to the driver, so that the driver can adopt appropriate driving behavior on different road sections. In particular, for example, for a road section whose automatic driving confidence information indicates poor automatic driving capability, the driver still needs to carefully observe the external situation after starting the automatic driving function so as to take over the vehicle immediately in case of an emergency. In this embodiment, for example, in order to distinguish different road sections more clearly, the same or different automatic driving confidence information may be prompted by colors, lines, or patterns on the in-vehicle display.
The details of the technical solution related to this embodiment may be explained by referring to the related embodiments in the aforementioned automatic driving route recommendation stage, and will not be described in detail.
In some embodiments, the step of feeding back to the driver the driving capability of performing automatic driving represented by the automatic driving confidence information may include: controlling the in-vehicle display of the vehicle to display road sections in colors distinguished according to the automatic driving confidence information, wherein the color of a road section is used to represent the driving capability of the vehicle to perform automatic driving; or broadcasting by voice the driving capability of performing automatic driving indicated by the automatic driving confidence information of the road section.
In the present embodiment, the automatic driving capability of different road segments may be represented by different colors, so that the driver can learn the automatic driving capability of a road segment by viewing the in-vehicle display. In some embodiments, the control system of the vehicle may also inform the driver of the automatic driving capability on the road segment where the vehicle is currently located by voice broadcast, so that the driver can obtain this information without looking at the in-vehicle display; a sketch of this option follows.
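A minimal sketch of the voice-broadcast option is given below, assuming a normalized confidence score for the current road segment and a generic speak callback that stands in for whatever text-to-speech interface the vehicle HMI actually provides; the function name, the thresholds, and the wording of the messages are assumptions made for this example.

from typing import Callable

def announce_current_segment(segment_id: str,
                             confidence: float,
                             speak: Callable[[str], None]) -> None:
    # Translate the confidence score into a short spoken message so that the
    # driver does not need to look at the in-vehicle display.
    if confidence >= 0.8:
        message = f"Automatic driving is available on segment {segment_id}."
    elif confidence >= 0.5:
        message = f"Automatic driving is available on segment {segment_id}; please stay attentive."
    else:
        message = f"Automatic driving is not recommended on segment {segment_id}."
    speak(message)

# Example usage with a stand-in speech function that simply prints the text.
announce_current_segment("segment_02", 0.55, speak=print)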
In the present embodiment, the in-vehicle display may be an electronic device provided in the vehicle, for example an LCD or LED display applied to a vehicle. Of course, in some embodiments, the in-vehicle display may also be an automotive head-up display (Head Up Display, HUD), an augmented reality (Augmented Reality, AR) display device, or the like.
Please refer to fig. 8. The present embodiment provides a control device for a vehicle. The control device of the vehicle includes the following modules.
The first acquisition module is used for acquiring positioning data and environment data in the process that the vehicle runs from a first position to a second position, wherein the positioning data includes information indicating the position of the vehicle at a certain point in the traveling process, and the environment data includes information indicating the surrounding environment of the vehicle at a certain point in the traveling process; the first acquisition module is further used for storing the positioning data and the environment data.
The second acquisition module is used for acquiring traffic identification information related to the position of the vehicle based on the environment data.
The third acquisition module is used for acquiring first self-built map information at least based on the traffic identification information.
The fourth acquisition module is used for acquiring second self-built map information according to the first self-built map information corresponding to a number of runs greater than a first threshold, wherein the path of each of the runs greater than the first threshold includes at least one identical road segment.
The storage module is used for storing the second self-built map information.
The control module is used for controlling the vehicle to perform automatic driving on the same road segment at least based on the second self-built map information. A structural sketch of these modules is given below.
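The following Python listing is a minimal structural sketch of the module arrangement described above, written purely for illustration. The method names, the data shapes, and the example threshold value are assumptions; the sketch only shows how the acquisition, storage, and control modules could hand data to one another, not an implementation of this specification.

class VehicleControlDevice:
    # Structural sketch of the control device; each method loosely corresponds
    # to one of the modules listed above.

    def __init__(self, run_count_threshold: int = 3):
        self.run_count_threshold = run_count_threshold  # the "first threshold"
        self.first_maps = []                            # one entry per run over the road segment
        self.second_map = None                          # consolidated self-built map

    def acquire_run_data(self, positioning_data: dict, environment_data: dict) -> dict:
        # First acquisition module: gather positioning and environment data for one run.
        return {"positioning": positioning_data, "environment": environment_data}

    def extract_traffic_identifications(self, run_data: dict) -> list:
        # Second acquisition module: placeholder for extracting lane lines, signs,
        # and other traffic identifications from the environment data.
        return run_data["environment"].get("traffic_identifications", [])

    def build_first_map(self, traffic_identifications: list) -> dict:
        # Third acquisition module: build first self-built map information for one run.
        first_map = {"identifications": traffic_identifications}
        self.first_maps.append(first_map)
        return first_map

    def build_and_store_second_map(self):
        # Fourth acquisition module and storage module: fuse the per-run maps once
        # the number of runs over the same road segment exceeds the threshold.
        if len(self.first_maps) > self.run_count_threshold:
            self.second_map = {"fused_from_runs": len(self.first_maps)}
        return self.second_map

    def driving_mode(self) -> str:
        # Control module: automatic driving is offered only once the consolidated
        # self-built map exists.
        return "automatic" if self.second_map is not None else "manual"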
In this embodiment, the functions and effects achieved by the control device of the vehicle can be understood by reference to the foregoing method embodiments, and are not repeated here.
Please refer to fig. 9. The embodiment of the specification provides an automatic driving prompt device, which comprises: a determining module, used for determining the road segments involved in the vehicle traveling from a specified first location to a specified second location; a confidence information acquisition module, used for acquiring automatic driving confidence information of at least part of the sub-road segments among those road segments, wherein the automatic driving confidence information is generated according to historical difference data corresponding to the sub-road segment, and the historical difference data is obtained based on virtual control data generated by an automatic driving decision algorithm for the sub-road segment and actual control data executed by the driver driving the vehicle through the sub-road segment; the virtual control data is generated by the automatic driving decision algorithm processing the self-built map information, the environment data and the positioning data to obtain a lane-level positioned target fusion running track of the vehicle, and is then generated based on that target fusion running track, wherein the self-built map information is map information constructed by the vehicle; and a prompting module, used for prompting the automatic driving confidence information.
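The following sketch illustrates, under stated assumptions, how automatic driving confidence information for a sub-road segment could be derived from historical difference data between virtual control data and the driver's actual control data, and then handed to a prompting module. The mean-absolute-difference metric and the exponential mapping to a confidence value are assumptions made for this example and are not the computation defined by this specification.

import math
from typing import Callable, Dict, List

def segment_confidence(virtual_controls: List[float], actual_controls: List[float]) -> float:
    # Returns a confidence value in (0, 1]; larger historical differences between
    # the decision algorithm's virtual control data and the driver's actual
    # control data give a lower confidence.
    if not virtual_controls or len(virtual_controls) != len(actual_controls):
        return 0.0
    mean_abs_diff = sum(abs(v - a) for v, a in zip(virtual_controls, actual_controls)) / len(virtual_controls)
    return math.exp(-mean_abs_diff)

def prompt(segments: Dict[str, float], show: Callable[[str, float], None]) -> None:
    # Prompting module: pass each sub-segment's confidence to the HMI, which may
    # render it as a color, a line pattern, or a voice message.
    for segment_id, confidence in segments.items():
        show(segment_id, confidence)

# Example: control samples (e.g. steering or target-speed values) along one sub-segment.
conf = segment_confidence([0.10, 0.20, 0.15], [0.12, 0.18, 0.16])
prompt({"sub_segment_07": conf}, show=lambda s, c: print(s, round(c, 2)))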
In this embodiment, the functions and effects implemented by the automatic driving prompt device can be understood by reference to the foregoing embodiments, and are not repeated here.
Please refer to fig. 10. The embodiment of the present specification also provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor executes the computer program to implement the vehicle control method in any of the above embodiments.
The electronic device may include a processor, a non-volatile storage medium, an internal memory, a communication interface, a display device, and an input device connected by a system bus. The non-volatile storage medium may store an operating system and associated computer programs.
The present specification embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to execute the vehicle control method in any of the above embodiments.
It will be appreciated that the specific examples herein are intended only to assist those skilled in the art in better understanding the embodiments of the present disclosure and are not intended to limit the scope of the present invention.
It should be understood that, in the various embodiments of the present disclosure, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
It will be appreciated that the various embodiments described in this specification may be implemented either alone or in combination, and are not limited in this regard.
Unless defined otherwise, all technical and scientific terms used in the embodiments of this specification have the same meaning as commonly understood by one of ordinary skill in the art to which this specification belongs. The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to limit the scope of the description. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be appreciated that the processor of the embodiments of the present specification may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present specification may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present specification may be embodied directly as being performed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It will be appreciated that the memory in the embodiments of this specification may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), or a flash memory, among others. The volatile memory may be Random Access Memory (RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present specification.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and unit may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this specification, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present specification may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present specification, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present specification. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing is merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. A control method of a vehicle, characterized by being applied to the vehicle, the method comprising:
acquiring positioning data and environment data in the process that the vehicle runs from a first position to a second position; wherein the positioning data includes information indicating a position of the vehicle at a certain point in time during traveling, and the environment data includes information indicating a surrounding environment of the vehicle at a certain point in time during traveling; storing the positioning data and the environment data;
acquiring traffic identification information related to a position of the vehicle based on the environmental data;
acquiring first self-building map information at least based on the traffic identification information;
acquiring second self-built map information according to the first self-built map information corresponding to a number of runs greater than a first threshold; wherein the path of each of the runs greater than the first threshold includes at least one identical road segment; the second self-built map information represents a map with higher accuracy than the first self-built map information;
storing the second self-built map information;
controlling the vehicle to perform automatic driving on the same road segment based at least on the second self-built map information, including: generating a fusion running track of the vehicle running on the road segment based on the second self-built map information; analyzing the fusion running track to obtain a longitudinal and/or transverse target speed distribution of the vehicle relative to the road segment; performing local mapping based on ground traffic identification data identified from the environment data to obtain a local map; performing track prediction according to obstacle data, identified from the environment data and representing obstacles, and the local map to obtain a track prediction result; performing virtual behavior planning and virtual motion planning of the vehicle based on the fusion running track data, road objects of the local map, navigation path information of the vehicle, and the track prediction result; and comparing virtual control data based on the virtual behavior planning and the virtual motion planning with actual control data of the driver to obtain difference data, so as to modify an automatic driving decision planning module according to the difference data.
2. The method according to claim 1, wherein the vehicle is provided with a plurality of electronic control units, and wherein in the step of acquiring positioning data and environment data, the positioning data and the environment data are generated by the plurality of electronic control units of the vehicle.
3. The method of claim 1, wherein the first self-built map information corresponds to the first position as a start point and the second position as an end point; acquaintance road groups are divided for the first self-built map information; the pieces of first self-built map information contained in the same acquaintance road group conform to a specified association relation; wherein the specified association relation comprises: the first positions of the pieces of first self-built map information conform to a first set distance relation and the second positions conform to a second set distance relation; or, the degree of coincidence between the road segments involved in the pieces of first self-built map information is higher than a specified coincidence threshold.
4. The method of claim 1, wherein the step of generating the fusion running track corresponding to the first self-built map information based on the second self-built map information comprises:
reading a local map in a specified position range from the stored second self-built map information according to the positioning data of the vehicle;
And generating a fusion running track of the vehicle by combining the local map, the environment data and the positioning data.
5. The method according to claim 1, wherein the method further comprises:
determining the driving difficulty corresponding to the road segment according to the target speed distribution and a set difficulty rule; wherein the driving difficulty comprises high difficulty or low difficulty; and the set difficulty rule includes: if the longitudinal speed distribution indicates that the speed of the vehicle is smaller than a specified speed threshold, determining that the driving difficulty corresponding to the road segment is high difficulty;
or, if the longitudinal speed distribution indicates that the speed distribution of the vehicle is uniform and the average speed is greater than the specified speed threshold, determining that the driving difficulty corresponding to the road segment is low difficulty.
6. The method according to claim 1, wherein the method further comprises:
generating automatic driving confidence information corresponding to the road segment in the second self-built map information according to the difference data; wherein, in the second self-built map information, the automatic driving confidence information is represented by the color of the road segment.
7. A control device for a vehicle, comprising:
The first acquisition module is used for acquiring positioning data and environment data in the process that the vehicle runs from a first position to a second position; wherein the positioning data includes information indicating a position of the vehicle at a certain point in time during traveling, and the environment data includes information indicating a surrounding environment of the vehicle at a certain point in time during traveling; storing the positioning data and the environment data;
a second acquisition module configured to acquire traffic identification information related to a position of the vehicle based on the environmental data;
the third acquisition module is used for acquiring first self-built map information at least based on the traffic identification information;
a fourth acquisition module, configured to acquire second self-built map information according to the first self-built map information corresponding to a number of runs greater than a first threshold; wherein the path of each of the runs greater than the first threshold includes at least one identical road segment; the second self-built map information represents a map with higher accuracy than the first self-built map information;
a storage module, configured to store the second self-built map information;
a control module, configured to control the vehicle to perform automatic driving on the same road segment based at least on the second self-built map information, including: generating a fusion running track of the vehicle running on the road segment based on the second self-built map information; analyzing the fusion running track to obtain a longitudinal and/or transverse target speed distribution of the vehicle relative to the road segment; performing local mapping based on ground traffic identification data identified from the environment data to obtain a local map; performing track prediction according to obstacle data, identified from the environment data and representing obstacles, and the local map to obtain a track prediction result; performing virtual behavior planning and virtual motion planning of the vehicle based on the fusion running track data, road objects of the local map, navigation path information of the vehicle, and the track prediction result; and comparing virtual control data based on the virtual behavior planning and the virtual motion planning with actual control data of the driver to obtain difference data, so as to modify an automatic driving decision planning module according to the difference data.
8. An electronic device comprising a memory and a processor, wherein the memory stores at least one computer program that is loaded and executed by the processor to implement the method of controlling a vehicle according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein
the computer-readable storage medium has stored therein at least one computer program which, when executed by a processor, implements the control method of a vehicle according to any one of claims 1 to 6.
CN202310666660.2A 2023-06-07 2023-06-07 Vehicle control method, automatic driving prompting method and related devices Active CN116394980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310666660.2A CN116394980B (en) 2023-06-07 2023-06-07 Vehicle control method, automatic driving prompting method and related devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310666660.2A CN116394980B (en) 2023-06-07 2023-06-07 Vehicle control method, automatic driving prompting method and related devices

Publications (2)

Publication Number Publication Date
CN116394980A CN116394980A (en) 2023-07-07
CN116394980B (en) 2023-10-03

Family

ID=87018358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310666660.2A Active CN116394980B (en) 2023-06-07 2023-06-07 Vehicle control method, automatic driving prompting method and related devices

Country Status (1)

Country Link
CN (1) CN116394980B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111123952A (en) * 2019-12-31 2020-05-08 华为技术有限公司 Trajectory planning method and device
CN112046502A (en) * 2019-05-20 2020-12-08 现代摩比斯株式会社 Automatic driving device and method
CN114509065A (en) * 2022-02-16 2022-05-17 北京易航远智科技有限公司 Map construction method, map construction system, vehicle terminal, server side and storage medium
CN114771563A (en) * 2022-04-06 2022-07-22 扬州大学 Method for realizing planning control of track of automatic driving vehicle
CN115905449A (en) * 2022-12-30 2023-04-04 北京易航远智科技有限公司 Semantic map construction method and automatic driving system with familiar road mode

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6663835B2 (en) * 2016-10-12 2020-03-13 本田技研工業株式会社 Vehicle control device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112046502A (en) * 2019-05-20 2020-12-08 现代摩比斯株式会社 Automatic driving device and method
CN111123952A (en) * 2019-12-31 2020-05-08 华为技术有限公司 Trajectory planning method and device
CN114509065A (en) * 2022-02-16 2022-05-17 北京易航远智科技有限公司 Map construction method, map construction system, vehicle terminal, server side and storage medium
CN114771563A (en) * 2022-04-06 2022-07-22 扬州大学 Method for realizing planning control of track of automatic driving vehicle
CN115905449A (en) * 2022-12-30 2023-04-04 北京易航远智科技有限公司 Semantic map construction method and automatic driving system with familiar road mode

Also Published As

Publication number Publication date
CN116394980A (en) 2023-07-07

Similar Documents

Publication Publication Date Title
JP7009716B2 (en) Sparse map for autonomous vehicle navigation
JP7020728B2 (en) System, method and program
JP7125214B2 (en) Programs and computing devices
JP6969962B2 (en) Map information providing system for vehicle driving support and / or driving control
EP2012088B1 (en) Road information generating apparatus, road information generating method and road information generating program
RU2742213C1 (en) Method to control information on lanes, method of traffic control and device for control of information on lanes
KR20180009755A (en) Lane estimation method
KR20200123474A (en) Framework of navigation information for autonomous navigation
CN109643118B (en) Influencing a function of a vehicle based on function-related information about the environment of the vehicle
CN110692094A (en) Vehicle control apparatus and method for control of autonomous vehicle
US11703347B2 (en) Method for producing an autonomous navigation map for a vehicle
CN116394981B (en) Vehicle control method, automatic driving prompting method and related devices
CN112519677A (en) Control device
CN112781600A (en) Vehicle navigation method, device and storage medium
KR102624829B1 (en) Method, apparatus and computer program for providing route guidance service using location information of vehicle
Milanés et al. The tornado project: An automated driving demonstration in peri-urban and rural areas
CN115698633A (en) Method for operating an auxiliary function for guiding a motor vehicle
CN115905449B (en) Semantic map construction method and automatic driving system with acquaintance road mode
CN116394980B (en) Vehicle control method, automatic driving prompting method and related devices
CN116736855A (en) Method and system for assessing autonomous driving planning and control
US20210027071A1 (en) Road curvature generation in real-world images as a method of data augmentation
CN114056337A (en) Vehicle driving behavior prediction method, device and computer program product
CN114509077A (en) Method, device, system and computer program product for generating navigation guide line
KR101428414B1 (en) Apparatus and method for displaying road guide information on the windshield
CN115183791A (en) Navigation method, navigation device and location-based service providing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant