CN117804478A - Road level positioning method, device, equipment and storage medium - Google Patents

Road level positioning method, device, equipment and storage medium

Info

Publication number
CN117804478A
CN117804478A
Authority
CN
China
Prior art keywords
probability
road
vehicle
scene
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311755727.6A
Other languages
Chinese (zh)
Inventor
刘力源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311755727.6A
Publication of CN117804478A
Legal status: Pending

Landscapes

  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a road-level positioning method, device, equipment, and storage medium, relating to the field of computer technology, and in particular to the fields of positioning, maps, navigation, automatic driving, and the like. The implementation scheme is as follows: determining a probability mass function corresponding to a road scene according to the road scene in which the vehicle is located; and performing evidence synthesis according to the probability mass function corresponding to the road scene to obtain a comprehensive probability that the vehicle is on a target road, where the comprehensive probability is used to determine whether the vehicle is on the target road. In the embodiments of the disclosure, performing evidence synthesis on the probability mass functions corresponding to the road scene in which the vehicle is located can improve the accuracy of the vehicle's road-level positioning result.

Description

Road level positioning method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to the fields of road-level positioning, maps, navigation, autonomous driving, and the like.
Background
A standard definition (SD) map is a standard two-dimensional electronic map generally used for vehicle driving navigation, characterized by large coverage, low precision, and the like. An SD map may also be referred to as a standard map, an in-vehicle (head-unit) map, a traditional map, or the like. A high definition (HD) map is a high-precision electronic map, characterized by small coverage, high precision, and the like. An HD map may also be referred to as a high-definition map. Because high-precision maps mainly cover high-grade roads, a related road-level positioning scheme first completes positioning of high-grade roads on the SD map and then links to the HD map.
Disclosure of Invention
The disclosure provides a road level positioning method, a device, equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a road-level positioning method including:
determining a probability mass function corresponding to a road scene according to the road scene in which the vehicle is located;
and performing evidence synthesis according to the probability mass function corresponding to the road scene to obtain a comprehensive probability that the vehicle is on a target road, where the comprehensive probability is used to determine whether the vehicle is on the target road.
According to another aspect of the present disclosure, there is provided a road-level positioning apparatus, comprising:
a function determining module, configured to determine a probability mass function corresponding to a road scene according to the road scene in which the vehicle is located;
and an evidence synthesis module, configured to perform evidence synthesis according to the probability mass function corresponding to the road scene to obtain a comprehensive probability that the vehicle is on a target road, where the comprehensive probability is used to determine whether the vehicle is on the target road.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow chart of a road-level positioning method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a road-level positioning method according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a road-level positioning method according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a road projection Mass function;
FIG. 5 is a schematic diagram of a boundary-aware Mass function;
FIG. 6 is a schematic diagram of a lane line number aware Mass function;
FIG. 7 is a schematic diagram of a lane line type perception Mass function;
FIG. 8 is a schematic diagram of a road width aware Mass function;
FIG. 9 is a schematic diagram of a scene-aware Mass function;
FIG. 10 is a schematic diagram of a navigation route following Mass function;
FIG. 11 is a schematic illustration of evidence synthesis;
FIG. 12 is a schematic flow diagram of a long sequence tracking module based on topology constraints;
FIG. 13 is a schematic block diagram of an overall module;
FIG. 14 is a schematic block diagram of a road-level positioning apparatus according to an embodiment of the present disclosure;
FIG. 15 is a schematic block diagram of a road-level positioning apparatus according to another embodiment of the present disclosure;
FIG. 16 is a block diagram of an electronic device used to implement a positioning method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic flow chart of a road level positioning method according to an embodiment of the present disclosure. The method may include:
s110, determining a probability quality function corresponding to a road scene according to the road scene of the vehicle;
s120, performing evidence synthesis according to a probability quality function corresponding to the road scene to obtain the comprehensive probability that the vehicle is on the target road, wherein the comprehensive probability that the vehicle is on the target road is used for determining whether the vehicle is on the target road.
In the disclosed embodiments, there may be a variety of road scenes, for example, an intersection scene, a lane line number change scene, a lane line type change scene, a special-shaped lane line scene, and the like. A road scene may correspond to one or more probability mass functions (PMFs), also referred to as Mass functions, associated with the scene. A Mass function may represent the probability of a discrete random variable at each particular value. The values of the elements in a Mass function can be obtained, for example, from static and dynamic observations based on visual perception, positioning observations, and the like. The correspondence between road scenes and Mass functions may be preset; for example, scene S1 corresponds to Mass functions m1, m2, and m3, and scene S2 corresponds to Mass functions m2, m4, m5, and m6. The values of the elements in a Mass function may be determined based on certain events in the road scene; some elements may correspond to mutually exclusive events, and others to intersecting events. In the road-level positioning method, a Mass function may include the probabilities corresponding to various relationships between the vehicle and a road.
In some examples, evidence synthesis may be performed on multiple Mass functions in a road scene based on Dempster-Shafer (D-S) evidence theory. D-S evidence theory is an uncertain reasoning theory that provides a method for processing uncertain information. Based on D-S evidence theory, multi-source information such as sensor information, map information, and positioning information can be fused, and evidence synthesis performed according to certain rules (formulas) to obtain the comprehensive probability under each condition.
In some examples, if a road scene corresponds to multiple Mass functions, evidence synthesis may be performed on those Mass functions to obtain the probability that the vehicle is on the target road in the road scene. For example, if a road scene corresponds to two Mass functions such as m2 and m3, the probabilities in m2 and m3 may be combined by evidence synthesis to obtain the comprehensive probability m23. For another example, if a road scene corresponds to three Mass functions such as m1, m2, and m3, the probabilities in m1 and m2 may be combined to obtain a comprehensive probability m12, and m12 may then be combined with m3 to obtain the comprehensive probability m123. Alternatively, the three Mass functions m1, m2, and m3 may be combined directly to obtain m123. In the embodiments of the present disclosure, roads in a map may be referred to as road links, link lines, and the like.
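As a concrete illustration of the pairwise and chained combination described above, the following sketch implements Dempster's rule of combination over the frame {In, Out}, with the Unknown state modeled as the whole frame. The function name `combine` and the numeric mass values are illustrative assumptions, not taken from the patent.

```python
from itertools import product

# Frame of discernment: In and Out are mutually exclusive;
# Unknown is their union (the whole frame).
IN, OUT = frozenset({"In"}), frozenset({"Out"})
UNKNOWN = IN | OUT

def combine(m1, m2):
    """Dempster's rule: sum m1(B)*m2(C) over pairs with B∩C = A, then normalize."""
    fused, conflict = {}, 0.0
    for (b, p1), (c, p2) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + p1 * p2
        else:
            conflict += p1 * p2          # mass assigned to empty intersections
    k = 1.0 - conflict                   # normalization parameter
    return {a: v / k for a, v in fused.items()}

# Illustrative mass values for three evidence sources.
m1 = {IN: 0.6, OUT: 0.1, UNKNOWN: 0.3}
m2 = {IN: 0.5, OUT: 0.2, UNKNOWN: 0.3}
m3 = {IN: 0.7, OUT: 0.1, UNKNOWN: 0.2}

m12 = combine(m1, m2)        # pairwise synthesis
m123 = combine(m12, m3)      # chained synthesis of all three sources
```

Dempster's rule is associative, so synthesizing m1 with m2 and then with m3 yields the same result as combining the three sources directly, matching the two alternatives described above.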
In the embodiments of the disclosure, performing evidence synthesis on the probability mass functions corresponding to the road scene in which the vehicle is located improves the accuracy of the vehicle's road-level positioning result. For example, uncertain information can be fused using probability mass functions and evidence synthesis to obtain a more accurate road-level positioning result.
Fig. 2 is a schematic flow chart of a road level positioning method according to another embodiment of the present disclosure. The method may include one or more features of the method described above, and in one embodiment, the method further includes:
s210, determining a road scene where the vehicle is located according to at least one of map information, geographic position information of the vehicle and perception information of the vehicle.
In the disclosed embodiments, the map information may include data of an SD map and/or data of an HD map, such as data of a fused SD map and HD map. The geographic position information of the vehicle may include vehicle position information obtained using a positioning system such as GPS or the BeiDou system, and may also be referred to as positioning information of the vehicle. The perception information of the vehicle may include information from various sensors on the vehicle such as cameras, radar, and infrared sensors. It may include raw data acquired by the sensors, or data derived from the raw data, for example, lane lines or boundary lines recognized from images acquired by a camera.
In the embodiment of the disclosure, one or more of hidden state preliminary screening, map fusion, scene generation and the like can be performed in the data preprocessing stage.
For example, hidden state prescreening may include: determining the hidden state of a road (link) using a hidden Markov model (HMM). According to positioning information such as the vehicle's current position and heading, roads that differ markedly from the vehicle's positioning information, such as roads in the opposite direction, are filtered out.
For another example, map fusion may include SD-HD fusion. The specific manner of fusing the map information (which may also be referred to as map data) may include: correcting the geometric information of the SD map using the geometric information of the HD map, thereby improving the precision of the related data in the SD map. In addition, prior information such as high-definition lane lines, road edges, and signs can be added to the SD map. The data fusion strategy for the SD map and the HD map may combine attributes in the SD map, such as road level, expressway interchange (IC), and highway junction (JCT), with projection distance relationships to associate them with the corresponding data in the HD map.
For another example, the scene generation may include: the road scene, such as an intersection scene, a lane line type change scene, a lane line number change scene, a special-shaped lane line scene, and the like, can be determined according to the characteristics of the map information. In different scenarios, the vehicle may perceive dynamic features that are consistent with the scenario. For example, in an intersection scene, a vehicle may perceive degradation, disappearance, and the like of a lane line. For another example, in a lane-type change scenario, the vehicle may perceive that the lane line changes from a solid line to a broken line, from a broken line to a solid line, and so on.
In the embodiment of the disclosure, the road scene where the vehicle is located can be accurately determined according to one or more of map information, positioning information of the vehicle and perception information of the vehicle, and a Mass function conforming to the road scene is obtained, so that accuracy of road-level positioning is improved.
In one embodiment, step S120, performing evidence synthesis according to the probability mass function corresponding to the road scene to obtain the comprehensive probability that the vehicle is on the target road, includes: obtaining the comprehensive probability corresponding to the road scene according to the intersections of multiple elements in the different probability mass functions corresponding to the road scene and a normalization parameter, where the normalization parameter is determined according to the values of elements whose intersections are not empty in the different probability mass functions.
In embodiments of the present disclosure, the elements in a probability mass function may represent relationships between the vehicle and a road, and the value of an element may be a probability. For example, the relationship between the vehicle and a road may include the vehicle being on the road, the vehicle not being on the road, and an unknown state (it is not determined whether the vehicle is on the road). There may be various cases for the intersections of elements in different probability mass functions. For example, suppose the elements in the Mass functions include {A1, A2, A3}. The value of A1 in Mass function m1 may differ from the value of A1 in Mass function m2, and similarly for A2 and A3. Suppose A1 and A2 are mutually exclusive, while A1 and A3 intersect and A2 and A3 intersect. In this case, the intersection of A1 in m1 with A1 in m2 is A1, the intersection of A1 in m1 with A3 in m2 is A1, and the intersection of A3 in m1 with A1 in m2 is A1. A fusion calculation is performed from A1 and A3 in m1 and A1 and A3 in m2: for example, m1(A1)×m2(A1) + m1(A1)×m2(A3) + m1(A3)×m2(A1) can be calculated to obtain a fusion value. The fusion value and the normalization parameter are then used to obtain the comprehensive probability corresponding to the road scene; for example, the comprehensive probability may be equal to the fusion value divided by the normalization parameter. The comprehensive probability may also be referred to as a fusion probability, combined probability, or composite probability.
In the above example, the intersection of A1 in m1 with A1 and A3 in m2 is not empty, the intersection of A2 in m1 with A2 and A3 in m2 is not empty, and the intersection of A3 in m1 with A1, A2, and A3 in m2 is not empty, so the normalization parameter can be obtained from A1, A2, and A3 in m1 and A1, A2, and A3 in m2. For example, the normalization parameter is equal to m1(A1)×m2(A1) + m1(A1)×m2(A3) + m1(A2)×m2(A2) + m1(A2)×m2(A3) + m1(A3)×m2(A1) + m1(A3)×m2(A2) + m1(A3)×m2(A3).
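The worked A1/A2/A3 example can be reproduced numerically by modeling A1 and A2 as disjoint sets and A3 as their union; the mass values below are illustrative assumptions, not from the patent.

```python
# A1 and A2 are mutually exclusive; A3 intersects both, so model
# A1 = {x}, A2 = {y}, A3 = {x, y}.
A1, A2 = frozenset({"x"}), frozenset({"y"})
A3 = A1 | A2

m1 = {A1: 0.5, A2: 0.2, A3: 0.3}   # illustrative mass values
m2 = {A1: 0.4, A2: 0.3, A3: 0.3}

# Fusion value for A1: all pairs whose intersection is exactly A1.
fusion_a1 = m1[A1]*m2[A1] + m1[A1]*m2[A3] + m1[A3]*m2[A1]

# Normalization parameter: sum over all pairs with a non-empty intersection,
# which equals 1 minus the conflicting mass m1(A1)m2(A2) + m1(A2)m2(A1).
norm = 1.0 - (m1[A1]*m2[A2] + m1[A2]*m2[A1])

p_a1 = fusion_a1 / norm   # comprehensive probability for A1
```

With these values the fusion value is 0.47, the normalization parameter is 0.77, and the comprehensive probability for A1 is their quotient.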
In the embodiments of the disclosure, the normalization parameter is calculated based on the different probability mass functions corresponding to the road scene, and the comprehensive probability corresponding to the road scene is then calculated, so that road-level positioning better suited to the road scene can be obtained and the accuracy of the road binding result improved.
In one embodiment, the elements in the probability mass function include at least one of the following vehicle to road relationships: the vehicle being on the target road, the vehicle not being on the target road, and an unknown state;
the values of the elements in the probability mass function include at least one of: a first probability that the vehicle is on the target road, a second probability that the vehicle is not on the target road, and an unknown probability.
In the embodiments of the present disclosure, a Mass function corresponding to a road scene may be expressed as {In, Out, Unknown}. Here, In represents a first probability that the vehicle is on the target road (the In event); Out represents a second probability that the vehicle is not on the target road (the Out event); and Unknown represents an unknown probability, i.e., the probability that it is not determined whether the vehicle is on the target road (the Unknown event). The Unknown event may be understood as the union of the In event and the Out event, where the In event and the Out event are mutually exclusive, the intersection of the In event with the Unknown event is the In event, and the intersection of the Out event with the Unknown event is the Out event. The In, Out, and Unknown events can be understood as examples of vehicle-to-road relationships. By adding an unknown probability representing uncertain factors to the probability mass function, reasoning can be performed based on those uncertain factors, so that road-level positioning better suited to certain road scenes is obtained and the accuracy of the road binding result improved.
In one embodiment, obtaining the comprehensive probability corresponding to the road scene according to the intersections of multiple elements in the different probability mass functions corresponding to the road scene and the normalization parameter includes at least one of the following:
obtaining the comprehensive probability that the vehicle is on the target road according to the normalization parameter and the multiple elements whose intersection is the vehicle being on the target road in the different probability mass functions corresponding to the road scene;
obtaining the comprehensive probability that the vehicle is not on the target road according to the normalization parameter and the multiple elements whose intersection is the vehicle not being on the target road in the different probability mass functions corresponding to the road scene;
and obtaining the comprehensive probability corresponding to the unknown state according to the normalization parameter and the multiple elements whose intersection is the unknown state in the different probability mass functions corresponding to the road scene.
For example, assume a road scene corresponds to two Mass functions m1 and m2, whose elements include {In, Out, Unknown}. In m1 and m2, the intersection of In and Out is empty, the intersection of In and Unknown is In, and the intersection of Out and Unknown is Out.
In one case, the elements whose intersection is In include In and Unknown in m1 and In and Unknown in m2. The fusion value corresponding to In can be calculated as m1(In)×m2(In) + m1(In)×m2(Unknown) + m1(Unknown)×m2(In). The normalization parameter (also referred to as the K value) is obtained by calculating m1(In)×m2(In) + m1(In)×m2(Unknown) + m1(Out)×m2(Out) + m1(Out)×m2(Unknown) + m1(Unknown)×m2(In) + m1(Unknown)×m2(Out) + m1(Unknown)×m2(Unknown). Dividing the fusion value corresponding to In by the normalization parameter gives the comprehensive probability that the vehicle is on the target road.
In another case, the elements whose intersection is Out include Out and Unknown in m1 and Out and Unknown in m2. The fusion value corresponding to Out is obtained by calculating m1(Out)×m2(Out) + m1(Out)×m2(Unknown) + m1(Unknown)×m2(Out). Dividing the fusion value corresponding to Out by the normalization parameter (the K value, calculated as in the example above) gives the comprehensive probability that the vehicle is not on the target road.
In another case, the elements whose intersection is the unknown state include Unknown in m1 and Unknown in m2. The fusion value corresponding to Unknown is obtained by calculating m1(Unknown)×m2(Unknown). Dividing this fusion value by the normalization parameter (the K value, calculated as in the example above) gives the comprehensive probability corresponding to the unknown state.
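The three In/Out/Unknown cases can be checked end to end with a short numerical sketch (the mass values are illustrative assumptions); after dividing by the normalization parameter, the three comprehensive probabilities sum to one.

```python
m1 = {"In": 0.7, "Out": 0.1, "Unknown": 0.2}   # illustrative mass values
m2 = {"In": 0.6, "Out": 0.2, "Unknown": 0.2}

# Fusion values: pairs whose intersection is In, Out, or Unknown.
fused_in = m1["In"]*m2["In"] + m1["In"]*m2["Unknown"] + m1["Unknown"]*m2["In"]
fused_out = m1["Out"]*m2["Out"] + m1["Out"]*m2["Unknown"] + m1["Unknown"]*m2["Out"]
fused_unknown = m1["Unknown"]*m2["Unknown"]

# Normalization parameter (K value): 1 minus the conflicting mass,
# i.e. the pairs where one source says In and the other says Out.
k = 1.0 - (m1["In"]*m2["Out"] + m1["Out"]*m2["In"])

p_in, p_out, p_unknown = fused_in / k, fused_out / k, fused_unknown / k
```

With these values, p_in = 0.85, p_out = 0.10, and p_unknown = 0.05, so the synthesized evidence strongly favors the vehicle being on the target road.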
In the embodiments of the disclosure, the comprehensive probabilities corresponding to elements representing different hypotheses can be calculated based on the probabilities of the different elements in the probability mass functions, so that road-level positioning better suited to specific road scenes can be obtained and the accuracy of the road binding result improved.
In one embodiment, the values of the elements in the probability mass function include probabilities derived using a basic probability assignment. In the embodiments of the disclosure, the basic probability assignment may be given according to certain characteristics of the road on which the vehicle is currently located; it may be an empirical value, a preset value, and the like, so that an accurate result can be obtained in a simple manner. For example, if the lane line ahead on road L1 where the vehicle is located is recognized as degraded, the probability that the vehicle is on L1 may be assigned a larger value in the intersection-related Mass function. For another example, if the left side of road L2 where the vehicle is located is recognized as having a dashed lane line while the right side has only a solid boundary line, the probability that the vehicle is on road L2 may be assigned a larger value in the Mass function related to lane line type.
In one embodiment, a road scene corresponds to one or more probability mass functions, the type of which includes at least one of: a road surface projection probability mass function; a boundary-aware probability mass function; a lane line type aware probability mass function; a lane line number aware probability mass function; a road width aware probability mass function; a navigation route following probability mass function; and a scene-aware probability mass function. In the embodiments of the disclosure, various Mass functions can be set from various angles, which helps the road positioning method adapt to more road scenes and improves the accuracy of road-level positioning results in more road scenes.
In one embodiment, the probability in the road projection probability mass function is determined based on at least one of: map-given boundaries, map boundaries estimated from errors, location error areas where the vehicle is located. Specific examples may be found in fig. 4 below and the associated description.
In one embodiment, the probability in the boundary-aware probability mass function is determined based on at least one of: the perceived boundary intercept, the located boundary intercept, the boundary given by the map, the map boundary estimated from error, the vehicle's true position, and the vehicle position with error. The perceived boundary intercept may include the intercept of the perceived actual vehicle body position relative to the map boundary. The located boundary intercept may include the intercept of the vehicle body position acquired by a positioning method such as GPS relative to the map boundary. The difference between the perceived boundary intercept and the located boundary intercept can be calculated for different roads. For the road with the relatively smaller difference, the probability that the vehicle is on that road is larger, while the probability that the vehicle is not on that road and the unknown probability are smaller. Specific examples may be found in FIG. 5 below and its related description.
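One plausible way to turn the intercept difference into a boundary-aware Mass function is sketched below. The exponential weighting, the fixed Unknown share, and all parameter names (`unknown`, `scale`) are our own assumptions, not the patent's formula.

```python
import math

def boundary_mass(perceived_intercept, located_intercept,
                  unknown=0.2, scale=1.0):
    """Smaller |perceived - located| intercept difference -> larger In mass.

    The exponential form and the parameters are hypothetical choices.
    """
    diff = abs(perceived_intercept - located_intercept)
    p_in = (1.0 - unknown) * math.exp(-diff / scale)
    p_out = (1.0 - unknown) - p_in
    return {"In": p_in, "Out": p_out, "Unknown": unknown}

# For a candidate road whose intercepts nearly agree, In dominates;
# for a large disagreement, Out dominates.
m_near = boundary_mass(1.0, 1.2)
m_far = boundary_mass(0.0, 5.0)
```

Any monotone mapping from intercept difference to mass with the stated ordering (smaller difference, larger In) would serve the same role as evidence input to the synthesis step.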
In one embodiment, the probability in the lane line number aware probability mass function is determined based on at least one of: the perceived lane lines, the boundary given by the map, and the vehicle position with error. If the number of perceived lane lines is the same as that of a certain road, the probability that the vehicle is on that road is larger, while the probability that the vehicle is not on that road and the unknown probability are smaller. If the number of perceived lane lines differs from that of a certain road, the probability that the vehicle is on that road is smaller, the probability that the vehicle is not on that road is larger, and the unknown probability is smaller. Specific examples may be found in FIG. 6 below and its related description.
In one embodiment, the probability in the lane line type aware probability mass function is determined based on at least one of: perceived solid lane lines, perceived dashed lane lines, the boundary given by the map, and the vehicle position with error. If N solid lane lines and M dashed lane lines are perceived, they may be matched against the lane line types of the roads related to the vehicle position on the map. If a certain road has N solid lane lines and M dashed lane lines, the probability that the vehicle is on that road is larger, while the probability that the vehicle is not on that road and the unknown probability are smaller. Specific examples may be found in FIG. 7 below and its related description.
In one embodiment, the probability in the road width aware probability mass function is determined based on at least one of: the perceived boundary, the boundary given by the map, the perceived road width, the road width given by the map, and the vehicle position. The perceived width of the road on which the vehicle is located can be calculated from the perceived boundary and the boundary given by the map, and then compared with the widths of the roads related to the vehicle position on the map. If the road width given by the map for a certain road is the same as the perceived road width, the probability that the vehicle is on that road is larger, while the probability that the vehicle is not on that road and the unknown probability are smaller. Specific examples may be found in FIG. 8 below and its related description.
In one embodiment, the probability in the navigation route following probability mass function is determined based on at least one of: navigation route information and the vehicle position. If the navigation route ahead of the vehicle requires a right turn (or a left turn, or going straight), the map can be searched to determine whether a road related to the vehicle position has an intersection allowing that maneuver. If so, the probability that the vehicle is on that road is larger, while the probability that the vehicle is not on that road and the unknown probability are smaller. Otherwise, the probability that the vehicle is on that road is smaller, the probability that the vehicle is not on that road is larger, and the unknown probability is smaller. Specific examples may be found in FIG. 10 below and its related description.
In the embodiment of the application, various Mass functions can be obtained based on the perception information, the map information, the positioning information and the like related to the road scene, so that more accurate Mass functions can be obtained, and the accuracy of the road-level positioning result is improved.
In one embodiment, the probability in the scene-aware probability mass function is determined based on at least one of: the vehicle position and the scene features around the vehicle position. Specific examples may be found in FIG. 9 below and its related description.
In one embodiment, the scene-aware probability mass function includes at least one of: an intersection scene-aware probability mass function; a lane line number change scene-aware probability mass function; a lane line type change scene-aware probability mass function; and a special-shaped lane line scene-aware probability mass function.
For example, in the crossroad scene perception probability mass function, the scene features around the vehicle may include degradation of the lane lines ahead of the vehicle. For another example, in the lane line number change scene perception probability mass function, the scene features around the vehicle may include the number of perceived lane lines around the vehicle changing from fewer to more, or from more to fewer. As another example, in the lane line type change scene perception probability mass function, the scene features around the vehicle may include the lane line type changing from solid to dashed, or from dashed to solid. For another example, the special-shaped lane line scene perception probability mass function may be based on a special-shaped lane line being perceived around the vehicle. In the embodiment of the application, various scene perception Mass functions can be set based on the characteristics of various road scenes, which helps adapt to more road scenes and improves the accuracy of road-level positioning results in those scenes.
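As an illustrative sketch (the probability values and the helper itself are hypothetical, merely mirroring the style of the exemplary allocation tables later in this document), a scene perception Mass function reduces to checking whether the map says a candidate Link contains the scene feature that perception currently reports:

```python
def scene_bba(link_has_feature, feature_observed):
    """Hypothetical scene perception Mass function over {In, Out, Unknown}.

    link_has_feature: the map says this Link contains the scene feature
                      (e.g. an intersection that degrades lane-line perception).
    feature_observed: perception currently reports that feature.
    """
    if feature_observed and link_has_feature:
        return {'In': 0.8, 'Out': 0.1, 'Unknown': 0.1}
    if feature_observed and not link_has_feature:
        return {'In': 0.1, 'Out': 0.8, 'Unknown': 0.1}
    # Feature not observed: this evidence says nothing about the Link.
    return {'In': 0.0, 'Out': 0.0, 'Unknown': 1.0}
```

When the feature is absent from perception, all mass goes to Unknown, so the evidence does not bias the synthesis either way.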
Fig. 3 is a schematic flow chart of a road level positioning method according to another embodiment of the present disclosure. The method may include one or more features of the method described above, and in one embodiment, the method further includes:
s310, tracking according to the first state of the vehicle at the first moment and the comprehensive probability that the vehicle is on the target road at the first moment, and obtaining the second state of the vehicle at the second moment.
In the disclosed embodiment, the tracking process may include long-sequence tracking (also referred to as long-track tracking); accordingly, S310 may be executed over multiple iterations, with the result of each execution fed into the next iteration. For example, the comprehensive probability obtained by evidence synthesis is used for long-sequence tracking, so that continuous tracking is performed accurately and the precision and recall of road-level positioning in complex scenes are improved. If the second state of the vehicle at the second moment includes a posterior probability of the vehicle being on the target road that is greater than a specified threshold, the vehicle can be bound to the target road section, i.e., the vehicle is considered to be on the target road section and the road binding process is completed, which speeds up the overall algorithm. Further, tracking can also be performed according to the comprehensive probability that the vehicle is not on the target road at the first moment, so that the second state of the vehicle at the second moment includes a posterior probability that the vehicle is not on the target road; this posterior probability is used to cross-check the calculation result and improve algorithm robustness.
In one embodiment, the tracking process includes static binary Bayesian filtering. Static binary Bayesian filtering expresses confidence as the log-odds of the probability, which prevents the truncation problem caused by probabilities approaching 0 or 1 during long-sequence probability accumulation. An example of the iterative formula for static binary Bayesian filtering is:

l_t = l_{t-1} + log( p(x|z_t) / (1 - p(x|z_t)) ) - log( p(x) / (1 - p(x)) )

where l_{t-1} may represent the first state, l_t may represent the second state, p(x|z_t) is the evidence-based comprehensive probability described above (for example, the comprehensive probability of In, of Out, or of Unknown may be substituted), and p(x) may be a set value.
The role of high-precision positioning, such as road-level positioning, may include: determining the road information of the vehicle according to the sensors and map information, and then having a lane positioning module determine the lane information based on the road, or having a lane line matching module perform association of perception data and/or high-precision data and position optimization based on the road. Because the high-precision map mainly covers high-grade roads, in order to avoid wrong road positioning, some road-level positioning schemes first complete positioning of the high-grade road on the road network of the SD map and then hand over to the road network of the HD map. However, this serial SD-HD staged initialization affects initialization efficiency, and since the SD map data may lack road element information, observations from vehicle perception matching cannot be applied to road-level positioning.
In some application scenarios, road-level localization using a hidden Markov model (Hidden Markov Model) algorithm may use road links (Link) as hidden states and the road topology as the transition constraint between hidden states, assume that the errors of position and heading conform to a fixed prior distribution, and fuse different types of observations with fixed weights to obtain the observation state probability matrix (emission probability). However, if the observations of a road Link's position and heading do not follow a fixed prior distribution, the positioning result may be erroneous. Fusing multi-sensor information with fixed weights is inflexible and inconvenient to extend, and perception information such as perceived lane lines, boundaries, and scene perception is not fully utilized. These deficiencies can prevent the road-level positioning algorithm from quickly and accurately completing the initial road binding.
According to the scheme, SD-HD data can be fused, then the matching information of the perception observation and the map is fully utilized in the initialization process, binding is completed in a single stage, and efficiency is improved.
1. Principle of scheme design
In applications such as mobile phone and in-vehicle navigation, road-level positioning can be performed based on SD map data; the sensor input for road-level positioning is relatively simple, and distance and heading observations can be used for road binding. In applications such as automatic driving based on high-precision maps, both the positioning sensors and the map data are enhanced. For example, on the sensor side, perceived observations from camera images are added, and the high-precision map data adds information such as lane attributes, lane lines, road edges, and signs. The scheme of the embodiment of the disclosure can fully utilize the perception observations and the high-precision map information to improve the road-level positioning effect with high-precision maps.
The embodiment of the disclosure can fully use the information of the perception and high-precision map, and the specific scheme is as follows:
(1) In the data preprocessing stage, SD and HD data are fused, and binding is completed in one initialization, improving efficiency.
(2) The sensor observation model is improved, and perception information is fully mined in combination with the scene, improving the road binding speed in complex scenes.
(3) DS evidence theory is introduced as the multi-sensor information fusion algorithm, breaking the limitation of fixed-weight fusion and improving the flexibility of the algorithm.
(4) A binary Bayesian algorithm is introduced for long-sequence tracking, continuously collecting available information and improving the success rate of road binding.
SD-HD fusion
Compared with the staged logic of first localizing on the SD road network (using SD links with the main-road attribute) and then handing over to HD, the HD and SD data are fused together in the data preprocessing stage. Data fusion can be defined as: correcting the geometry information of the SD with the related geometry information of the HD, improving the data precision of the corresponding SD, and at the same time adding prior information such as high-precision lane lines, road edges, and signs to the SD. The fusion strategy completes the association with the corresponding HD according to attributes such as the SD grade/IC/JCT combined with the projection distance relation.
After fusion is completed, the modified SD and the unmodified SD are observed and tracked simultaneously, so that binding can be achieved in a single pass of tracking.
2. DS-evidence theory-based multi-information fusion
SD-based road binding algorithms typically use the hidden Markov algorithm, which is essentially Bayesian probabilistic inference with road-topology transfer constraints added, i.e., a purely probability-based uncertainty inference method. The pure probability approach, while having a strict theoretical basis, generally requires the prior probability and conditional probability of an event to be given, which are often not readily available. For example, GPS typically has a constant lateral offset that does not fit a normal distribution, and perception may not match reality due to traffic congestion or lane line ambiguity. To accommodate these uncertainties, the present scheme uses DS evidence theory as a new uncertainty inference tool.
DS evidence theory originates from the research work of A. P. Dempster at Harvard University in the 1960s on solving multi-valued mapping problems using upper and lower probabilities. Dempster's student G. Shafer further developed the theory of evidence, introducing the concept of the belief function and forming a set of mathematical methods for handling uncertainty reasoning based on 'evidence' and 'combination'. Evidence theory may be used to address uncertainty issues.
Compared with the application of the pure probability method in road positioning, the DS evidence theory has the characteristics that:
(1) Evidence theory can handle both uncertainty caused by randomness (e.g., observation probability caused by position errors) and uncertainty caused by ambiguity (e.g., lane line opening observation).
(2) Evidence theory may not require prior probability and conditional probability density, and the basic probability distribution may be given intuitively or empirically.
(3) Evidence theory can directly express "uncertainty" and "ignorance". Such information can be represented in the Mass function and retained during evidence synthesis. The independent expression of "uncertainty" is important for long-sequence tracking: if the probability of "uncertainty" is high, the state at the previous moment can be maintained, preventing the state from being biased by highly ambiguous information.
(4) Evidence theory can gradually shrink the hypothesis set as evidence accumulates, and is easy to extend with new information.
The basic concept of evidence theory is briefly introduced below:
2.1 identification framework (Frame of discernment)/hypothesis space
Identification framework: the set of things or objects to be examined and judged. For example, the identification framework may be a non-empty set Θ = {a, b, c, ...} containing n pairwise mutually exclusive events. The power set of the framework, P(Θ), contains 2^n elements, where {a, b} represents a ∪ b, i.e., the union of the two events a and b; likewise, {a, c} represents the union of the two events a and c.
For application in road-level positioning, each high-precision road corresponds to an identification framework, and the elements considered in the framework are {In, Out, Unknown}. Here, Unknown = {In, Out}, which can be understood as the union of the two events In and Out.
2.2 basic probability distribution (Basic probability assignment, BPA)
Let the BPA on the space be a function m: P(Θ) → [0, 1], called the Mass function, satisfying:

m(∅) = 0
Σ_{A ⊆ Θ} m(A) = 1

The first formula states that the Mass of the empty set is 0; the second states that the probabilities of all elements of the Mass function sum to 1, where m(A) represents the probability assigned to element A.
In road-level localization, the BPA values are back-derived from the desired target effect. For example, for the evidence "the number of lane lines uniquely matches the high-precision map", consider how many observations should be required to complete the binding (binding the vehicle to the road). If a single frame of this evidence is expected to complete positioning, the probability allocated by the evidence to the event In should be high; if positioning should only be completed after multiple frames of observation, the allocated probability needs to be reduced.
2.3 D-S evidence synthesis
The D-S synthesis rule may include an evidence synthesis formula, exemplified as follows:

For two mass functions m_1, m_2 on Θ, the Dempster synthesis rule is:

(m_1 ⊕ m_2)(A) = (1/K) · Σ_{B ∩ C = A} m_1(B) · m_2(C), for A ≠ ∅

where K is the normalization constant:

K = Σ_{B ∩ C ≠ ∅} m_1(B) · m_2(C)

For a finite number of mass functions m_1, m_2, ..., m_n on Θ, the Dempster synthesis rule is:

(m_1 ⊕ m_2 ⊕ ... ⊕ m_n)(A) = (1/K) · Σ_{A_1 ∩ ... ∩ A_n = A} m_1(A_1) · m_2(A_2) · ... · m_n(A_n)

where

K = Σ_{A_1 ∩ ... ∩ A_n ≠ ∅} m_1(A_1) · m_2(A_2) · ... · m_n(A_n)
2.4 Specific Mass function definitions
For the application of DS evidence theory in road-level localization, the basic probability assignment (BPA, also written BBA) is first set. For example, the following Mass functions may be set:
(1) Road surface projection Mass function
(2) Boundary aware Mass function
(3) Lane line type perception Mass function
(4) Lane line number perception Mass function
(5) Road width perception Mass function
(6) Navigation following Mass function
(7) Crossroad scene perception Mass function
(8) Scene perception Mass function for lane line number change
(9) Lane line type change scene perception Mass function
(10) Special-shaped lane line scene perception Mass function
Compared with fixed-weight fusion of multi-source information, the kinds of Mass functions in the embodiments of the present disclosure are not limited to the examples above and can be dynamically expanded. Therefore, the algorithm effect can be continuously improved as road-test cases are iterated and sensors improve.
The following are examples of Mass functions in several possible scenarios, where the probability distribution table is only exemplary and not limiting, and may be specifically modified or adjusted according to the actual requirements.
2.4.1 road surface projection Mass function
First, the road width and the map error are combined into new road surface information, and it is then judged whether the GPS error circle projection falls on the road surface under the current positioning state. As shown in fig. 4, one road link (Link A) is map data after HD fusion, and the other (Link B) is SD data. The boundary of each Link is expanded by its map error; the expansion of the HD boundary is much smaller because the error of high-precision data is small. For example, the boundary given by the SD map can be extended outward by the maximum error of the SD map (the left extended boundary), and the boundary given by the HD map can be extended outward by the maximum error of the HD map (the right extended boundary). The overlapping part of the two roads lies between the extended boundaries after the maximum errors are incorporated. In the Mass function, the element In represents the probability of being on Link A, Out represents the probability of not being on Link A, and Unknown represents uncertainty. If the vehicle is located in the overlap (the middle vehicle), the probability value of the element Unknown in the Mass function can be set large, e.g., 1, and the probability values of In and Out set small, e.g., 0. If the vehicle is located in Link A and not in the overlap (the left vehicle), the probability value of In can be set large and the probability values of Out and Unknown small. If the vehicle is located in Link B and not in the overlap (the right vehicle), the probability value of In can be set small, the probability value of Out large, and the probability value of Unknown small.
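Under the simplifying assumption of a 1-D lateral cross-section (each Link reduced to an interval expanded outward by its map error), the three cases above can be sketched as follows; the interval endpoints and probability values are hypothetical:

```python
def road_projection_bba(pos, link_a, err_a, link_b, err_b):
    """BBA for 'vehicle is on Link A', from a 1-D lateral position.

    link_a / link_b: (left, right) lateral extents; err_a / err_b: map
    errors used to expand each boundary outward.
    """
    in_a = link_a[0] - err_a <= pos <= link_a[1] + err_a
    in_b = link_b[0] - err_b <= pos <= link_b[1] + err_b
    if in_a and in_b:   # in the overlap: the projection is uninformative
        return {'In': 0.0, 'Out': 0.0, 'Unknown': 1.0}
    if in_a:            # only on (expanded) Link A
        return {'In': 0.8, 'Out': 0.1, 'Unknown': 0.1}
    return {'In': 0.1, 'Out': 0.8, 'Unknown': 0.1}

# Left, middle and right vehicles of fig. 4 / Table 1 (hypothetical geometry):
a, b = (0.0, 10.0), (8.0, 18.0)
left   = road_projection_bba(3.0,  a, 0.2, b, 1.0)
middle = road_projection_bba(9.0,  a, 0.2, b, 1.0)
right  = road_projection_bba(15.0, a, 0.2, b, 1.0)
```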
For special scenes, such as ramp entrances, where two road surfaces cannot actually overlap, the left SD boundary can be expanded by the map error only as far as the HD boundary.
For vehicles at different locations, the BBA allocation for whether they are on the HD road (Link A) is shown in Table 1 below:

TABLE 1
Link A                                  In     Out    Unknown
Mass road projection (left vehicle)     0.8    0.1    0.1
Mass road projection (middle vehicle)   0      0      1
Mass road projection (right vehicle)    0.1    0.8    0.1
2.4.2 boundary aware Mass function
The perception module's perception of the road boundary is in the vehicle body coordinate system, and is therefore unaffected by GPS error. From the perceived road boundary, the intercept of the actual boundary relative to the vehicle body is known. Errors introduced by the high-precision map can be ignored, so if the GPS had no error, the intercept from the GPS position to the high-precision road boundary would equal the perceived intercept; if the two differ, the difference is attributed to GPS error. The difference between the perceived boundary intercept and the GPS-to-HD-boundary intercept is thus used to judge whether the GPS error is plausible. For example, a GPS error exceeding 2 meters may be implausible.
As shown in fig. 5, if the current state is a Real-time kinematic (RTK) fixed solution, it can be assumed that the maximum error of the GPS is 2 meters. The BBA allocation of the red car for each Link is then:

TABLE 2
Link A                               In     Out    Unknown
Mass boundary perception (red car)   0.8    0.1    0.1
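A minimal sketch of this plausibility check (the helper is hypothetical; the 2 m threshold follows the RTK assumption above, and the probability values mirror Table 2):

```python
def boundary_bba(perceived_intercept, map_intercept_at_gps, max_gps_err=2.0):
    """BBA from comparing the perceived boundary intercept (body frame,
    free of GPS error) with the intercept from the GPS position to the
    HD map boundary; their difference is the implied GPS error."""
    implied_err = abs(perceived_intercept - map_intercept_at_gps)
    if implied_err <= max_gps_err:   # GPS error plausible for this Link
        return {'In': 0.8, 'Out': 0.1, 'Unknown': 0.1}
    return {'In': 0.1, 'Out': 0.8, 'Unknown': 0.1}
```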
2.4.3 Lane line quantity perception Mass function
As shown in fig. 6, Link B has only 4 lane lines or boundaries, but the perception module perceives 5 lane lines or boundaries; Link A has more lane lines, so the probability that the vehicle is located on Link A is high. An example BBA allocation (Table 3) is therefore:

TABLE 3
Link A                             In     Out    Unknown
Mass lane line number (red car)    0.8    0.1    0.1
2.4.4 Lane line type perception Mass function
As shown in fig. 7, the perceived lane line types include 2 dashed lines and 2 solid lines, and this pattern matches only Link A, so the probability that the vehicle is located on Link A is high. A corresponding example BBA allocation (Table 4) is therefore:

TABLE 4
Link A                           In     Out    Unknown
Mass lane line type (red car)    0.8    0.1    0.1
2.4.5 road width aware Mass function
As shown in fig. 8, if the perception module senses the boundaries on both sides of the road simultaneously, the width of the road can be obtained. The perceived width is compared with the width from the map. If the perceived width is the same as or similar to that of Link A, the following example BBA allocation (Table 5) can be obtained:

TABLE 5
Link A                       In     Out    Unknown
Mass road width (red car)    0.7    0.1    0.2
2.4.6 scene aware Mass function
Taking an overhead intersection scenario as an example:
as shown in fig. 9, the lane line perception is degraded when the vehicle passes through the area where the 2D projection of the intersection is located. If there is no intersection in LinkA (e.g., high speed) and there is an intersection in LinkB (e.g., accessory road), the probability that the vehicle is located in LinkB is large. The corresponding BBA allocation table 6 can thus be given:
TABLE 6
LINK A In Out unknown
Mass road width (Red car) 0.1 0.8 0.1
2.4.7 navigation route following Mass function
As shown in fig. 10, if the current vehicle has navigation route information, the navigation route can also provide certain evidence.
For example, if the navigation arrow indicates a right turn at an intersection, and Link B has an intersection while Link A does not, the following example BBA allocation (Table 7) can be obtained:

TABLE 7
Link A                                       In    Out    Unknown
Mass navigation route following (red car)    0     0.5    0.5
In this example, the allocation in Table 7 means: according to the navigation route, the probability that the current vehicle is on Link A is 0, the probability that it is not on Link A (i.e., on Link B) is 0.5, and the probability that it is unknown whether it is on Link A or Link B is 0.5. That is, the navigation route provides no evidence that the current position is on Link A. If other evidence cannot determine whether the vehicle is on A or B, the 0.5 probability toward Link B biases the result toward the Link on which the navigation route lies; but if other evidence indicates that the vehicle is on Link A, the 0.5 probability of Unknown ensures that no excessive conflict with the other evidence arises, so that yaw (off-route) information can be reported in time.
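The asymmetric allocation of Table 7 can be sketched as follows (a hypothetical helper; the actual maneuver check would query the map):

```python
def nav_follow_bba(link_has_required_maneuver):
    """BBA for a Link when the navigation route requires a maneuver
    (e.g. a right turn) just ahead. A Link supporting the maneuver gets
    mild positive evidence; one that does not gets mild negative evidence.
    Half the mass always stays on Unknown, so this evidence can never
    override strong contrary perception evidence."""
    if link_has_required_maneuver:
        return {'In': 0.5, 'Out': 0.0, 'Unknown': 0.5}
    return {'In': 0.0, 'Out': 0.5, 'Unknown': 0.5}

link_a = nav_follow_bba(False)  # Link A has no right-turn intersection
# link_a reproduces the Table 7 row for Link A
```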
2.4.8 D-S evidence synthesis
Combining the above observations about whether the red car is located on Link A, an example of the overall basic probability allocation (Table 8) is:

TABLE 8
Link A     Mass road projection   Mass boundary perception   Mass lane line number   Mass lane line type
In         0                      0.8                        0.8                     0.8
Out        0                      0.1                        0.1                     0.1
Unknown    1                      0.1                        0.1                     0.1
The data in Table 8 are evidence-synthesized using the synthesis formula in section 2.3, giving the combined probabilities in Table 9:

TABLE 9
Link A     Mass (all)
In         0.989
Out        0.0096
Unknown    0.0014
An example of the calculation from Table 8 to Table 9 is as follows:

(1) Since the synthesis formula satisfies the associative law, the synthesis of multiple mass functions can be performed pairwise:

m = ((m_1 ⊕ m_2) ⊕ m_3) ⊕ m_4

(2) For P(Θ) = {In, Out, Unknown}, given two Mass functions m_A, m_B, expanding the synthesis formula gives:

m(In) = [m_A(In)·m_B(In) + m_A(In)·m_B(Unknown) + m_A(Unknown)·m_B(In)] / K
m(Out) = [m_A(Out)·m_B(Out) + m_A(Out)·m_B(Unknown) + m_A(Unknown)·m_B(Out)] / K
m(Unknown) = m_A(Unknown)·m_B(Unknown) / K

and the normalization constant expands to:

K = m_A(In)·m_B(In) + m_A(In)·m_B(Unknown) + m_A(Out)·m_B(Out)
  + m_A(Out)·m_B(Unknown) + m_A(Unknown)·m_B(In)
  + m_A(Unknown)·m_B(Out) + m_A(Unknown)·m_B(Unknown)

(3) Substitute the values in Table 8 into the expanded formula, denoting the Mass road projection as m_r.prj, the Mass boundary perception as m_Edge, the Mass lane line number as m_line-num, and the Mass lane line type as m_line-type.

First, let m_A = m_r.prj and m_B = m_Edge. From Table 8, m_A(In) = 0, m_A(Out) = 0, m_A(Unknown) = 1, and m_B(In) = 0.8, m_B(Out) = 0.1, m_B(Unknown) = 0.1. Applying the expanded formula gives m = (0.8, 0.1, 0.1); the fully uncertain road-projection evidence leaves the boundary evidence unchanged.

Then, let m_A = (0.8, 0.1, 0.1) and m_B = m_line-num = (0.8, 0.1, 0.1). Applying the expanded formula with K = 0.84 gives m(In) ≈ 0.952, m(Out) ≈ 0.036, m(Unknown) ≈ 0.012.

Combining in the same way with m_line-type yields the result in Table 9: m(In) ≈ 0.989, m(Out) ≈ 0.0096, m(Unknown) ≈ 0.0014.
as can be seen from the above examples, although the intermediate vehicle (see fig. 4) is in the overlapping area of the two roads, the GPS location information does not give effective information to distinguish the two roads, but the final LinkA still achieves a higher confidence level due to the introduction of other types of observation evidence (e.g., the several Mass functions described above).
It should be noted that if, in the above example, the perception module has no input for some reason, or cannot distinguish the two roads, the confidence of Unknown becomes the highest, as in the following scenario:
In the scenario of fig. 11, neither the road surface projection nor any of the perception information can distinguish whether the vehicle is currently located on Link A or Link B. The corresponding BBA (Table 10) is as follows:
TABLE 10
Link A     Mass road projection   Mass boundary perception   Mass lane line number   Mass lane line type
In         0                      0                          0                       0
Out        0                      0                          0                       0
Unknown    1                      1                          1                       1
The final synthesized BBA (Table 11) is:

TABLE 11
Link A     Mass (all)
In         0
Out        0
Unknown    1
This result indicates that all sensor information received at the current moment is uninformative; in the time sequence, the road tracking state of the previous moment is maintained, and tracking continues once new evidence appears. This is very useful for the long-sequence tracking logic discussed below.
3. Topology constraint-based long sequence tracking module
The DS evidence theory described above fuses multiple observations at the same moment to obtain the comprehensive probability of all sensor observations at the current moment. The probability of the Link where the vehicle is located can then be tracked in time sequence.
Referring to the Viterbi algorithm, as shown in fig. 12, the probability of LinkC can be shifted from the maximum probability of Link a, link b, and Link c at the previous time, so that the Link probability of the vehicle can be continuously tracked in the traveling direction of the vehicle.
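A minimal sketch of this transfer step (Link names and probabilities are hypothetical): each Link at time t inherits the best probability among its topologically connected predecessors at time t-1:

```python
def viterbi_transfer(prev_probs, predecessors):
    """prev_probs: Link -> probability at time t-1.
    predecessors: Link at time t -> Links at t-1 it is topologically
    connected to; each Link is seeded from its best predecessor."""
    return {link: max(prev_probs[p] for p in preds)
            for link, preds in predecessors.items()}

prev = {'Link a': 0.7, 'Link b': 0.2, 'Link c': 0.4}
topo = {'Link C': ['Link a', 'Link b', 'Link c']}
# Link C inherits the maximum of its three predecessors, i.e. 0.7
```

The inherited value is then updated with the evidence-synthesized observation at time t, as in the Bayesian filtering below.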
To avoid the truncation problem caused by probabilities approaching 0 or 1 during long-sequence probability accumulation, the log-odds of the probability can be used to represent confidence. The problem can then be expressed as static binary Bayesian filtering, with the iterative formula:

l_t = l_{t-1} + log( p(x|z_t) / (1 - p(x|z_t)) ) - log( p(x) / (1 - p(x)) )

where p(x|z_t) may be provided by the DS evidence theory module. For example, if the DS evidence theory module gives the greatest confidence to Unknown, then p(x|z_t) = 0.5; with p(x) = 0.5, the state iteration becomes:

l_t = l_{t-1}
therefore, in the case where the confidence of unknown is large, the state of the vehicle can be kept unchanged.
From the perspective of a single HD road, the binding process can be completed as soon as the posterior probability of the In state given by the binary filtering exceeds a specified threshold, which speeds up the overall algorithm. In addition, the "Out" state can be continuously accumulated over a long track, which is equivalent to collecting "negative information" and effectively improves algorithm robustness in scenes such as under an overpass.
Part 2: System architecture and implementation flow
A framework diagram of the overall module is shown in fig. 13. Sensor information and maps such as the SD map and HD map are input first, then pass sequentially through the data preprocessing module, the multi-information fusion module, and the long-track tracking module, and finally the road-level positioning result is output. The functions of each module are introduced as follows:
1. Data preprocessing module
Hidden state pre-screening: roads with particularly large differences, such as roads in the opposite direction, are filtered out based on the current vehicle position and heading information.
HD-SD fusion (which may also be referred to as SD-HD fusion): correct the geometry information of the SD with the related geometry information of the HD, improving the data precision of the corresponding SD, while adding prior information such as high-precision lane lines, road edges, and signs to the SD. The fusion strategy completes the association with the corresponding HD according to attributes such as the SD grade/IC/JCT combined with the projection distance relation.
Scene generation: scene information, such as an intersection scene, a lane line type change scene, a lane line number change scene, or a special-shaped lane line scene, is generated according to the features of the map information. In different scenes, perception exhibits dynamic characteristics that conform to the scene, such as the degradation and disappearance of lane line perception at intersections.
2. Multi-information fusion module: refer to the description of the DS evidence theory-based multi-information fusion process in point 2 of the first part.
3. Long-track tracking module: for its specific functions, refer to the description of the topology-constraint-based long-sequence tracking process in point 3 of the first part.
According to the embodiment of the disclosure, multi-source information is fused based on evidence theory, static and dynamic visual perception observations and GPS observations are fully utilized, and the probability of the current road is continuously tracked with the static binary Bayesian algorithm, thereby greatly improving the precision and recall of road-level positioning in complex scenes (e.g., above/below an overpass, and parallel main and auxiliary roads).
Fig. 14 is a schematic block diagram of a road level locating apparatus according to an embodiment of the present disclosure. The device comprises:
the function determining module 1401 is configured to determine a probability mass function corresponding to a road scene according to the road scene in which the vehicle is located;
the evidence synthesis module 1402 is configured to perform evidence synthesis according to the probability mass function corresponding to the road scene, to obtain a comprehensive probability that the vehicle is on the target road, where the comprehensive probability that the vehicle is on the target road is used to determine whether the vehicle is on the target road.
Fig. 15 is a schematic block diagram of a road level locating apparatus according to another embodiment of the present disclosure. The apparatus may include one or more features of the apparatus described above. In one embodiment, the apparatus further comprises:
the scene determining module 1501 is configured to determine a road scene in which the vehicle is located according to at least one of map information, geographical location information of the vehicle, and perception information of the vehicle.
In one embodiment, the evidence synthesis module 1402 is configured to obtain the comprehensive probability corresponding to the road scene according to the intersection of the plurality of elements in the different probability mass functions corresponding to the road scene and the normalization parameter; wherein the normalization parameter is determined according to the values of elements whose intersections are not null in the different probability mass functions.
In one embodiment, the elements in the probability mass function include at least one of the following vehicle to road relationships: the vehicle being on the target road, the vehicle not being on the target road, and an unknown state;
the values of the elements in the probability mass function include at least one of: a first probability that the vehicle is on the target road, a second probability that the vehicle is not on the target road, and an unknown probability.
In one embodiment, obtaining the comprehensive probability corresponding to the road scene according to the intersections of the plurality of elements in the different probability mass functions corresponding to the road scene and the normalization parameter includes at least one of:

obtaining the comprehensive probability that the vehicle is on the target road according to the plurality of elements whose intersection is "the vehicle is on the target road" in the different probability mass functions corresponding to the road scene and the normalization parameter;

obtaining the comprehensive probability that the vehicle is not on the target road according to the plurality of elements whose intersection is "the vehicle is not on the target road" in the different probability mass functions corresponding to the road scene and the normalization parameter;

and obtaining the comprehensive probability corresponding to the unknown state according to the plurality of elements whose intersection is the unknown state in the different probability mass functions corresponding to the road scene and the normalization parameter.
In one embodiment, the values of the elements in the probability mass function include probabilities derived using a basic probability assignment.
In one embodiment, a road scene corresponds to one or more probability mass functions, the type of the one or more probability mass functions including at least one of: a road surface projection probability mass function; a boundary perception probability mass function; a lane line type perception probability mass function; a lane line number perception probability mass function; a road width perception probability mass function; a navigation route following probability mass function; and a scene perception probability mass function.
In one embodiment, the probability in the road surface projection probability mass function is determined based on at least one of: the boundary given by a map, a map boundary estimated according to errors, and a positioning error region where the vehicle is located;
the probability in the boundary perception probability mass function is determined based on at least one of: a perceived boundary intercept, a positioning boundary intercept, the boundary given by a map, a map boundary estimated according to errors, the real position of the vehicle, and a vehicle position with errors;
the probability in the lane line number perception probability mass function is determined based on at least one of: perceived lane lines, the boundary given by a map, and a vehicle position with errors;
the probability in the lane line type perception probability mass function is determined based on at least one of: perceived solid lane lines, perceived dashed lane lines, the boundary given by a map, and a vehicle position with errors;
the probability in the road width perception probability mass function is determined based on at least one of: a perceived boundary, the boundary given by a map, a perceived road width, a road width given by the map, and the vehicle position;
the probability in the navigation route following probability mass function is determined based on at least one of: navigation route information and the vehicle position;
the probability in the scene perception probability mass function is determined based on at least one of: the vehicle position and scene features around the vehicle position.
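As one illustration of how such a mass function might assign its values, the sketch below builds a hypothetical one-dimensional road-surface projection assignment from a map-given lateral road span, an estimated map error, and the vehicle's positioning error interval. The function name, parameters, and the discounting scheme are all assumptions for illustration; the disclosure does not specify this construction:

```python
def road_projection_bpa(road_left, road_right, x, pos_err, map_err=0.0, discount=0.1):
    """Hypothetical road-surface projection mass function (1-D lateral sketch).

    The road span from the map is inflated by an estimated map error; the
    vehicle's positioning error region is the interval [x - pos_err, x + pos_err].
    The fraction of that region falling inside the road becomes evidence that
    the vehicle is on the road, with a small mass moved to 'unknown' so the
    evidence stays non-dogmatic.
    """
    lo, hi = road_left - map_err, road_right + map_err   # map boundary with error
    a, b = x - pos_err, x + pos_err                      # positioning error region
    overlap = max(0.0, min(b, hi) - max(a, lo))          # part of the region on the road
    frac_on = overlap / (b - a)
    return {
        "on": (1.0 - discount) * frac_on,
        "off": (1.0 - discount) * (1.0 - frac_on),
        "unknown": discount,
    }

# E.g. a 7 m wide road, vehicle 0.5 m from the right edge, 1 m position error
bpa = road_projection_bpa(0.0, 7.0, 6.5, 1.0)
```

The three masses always sum to one, so the result can be fed directly into the evidence-synthesis step.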
In one embodiment, the scene perception probability mass function includes at least one of: a crossroad scene perception probability mass function; a lane line number change scene perception probability mass function; a lane line type change scene perception probability mass function; and a special-shaped lane line scene perception probability mass function.
In one embodiment, the apparatus further comprises:
the tracking processing module 1502 is configured to perform tracking processing according to a first state of the vehicle at a first moment and a comprehensive probability that the vehicle is on the target road at the first moment, so as to obtain a second state of the vehicle at a second moment.
In one embodiment, the tracking processing includes static binary Bayesian filtering.
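A static binary Bayesian filter over the two-valued state "on the target road / not on the target road" can be sketched in log-odds form: because the state is modeled as static between measurements, each comprehensive probability from evidence synthesis simply adds its log-odds to the running belief. The probability sequence below is hypothetical:

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def binary_bayes_update(prior_logodds, p_measurement):
    """One static binary Bayes filter step: add the measurement's log-odds
    (relative to a 0.5 reference prior) to the current belief."""
    return prior_logodds + logit(p_measurement)

# Hypothetical comprehensive probabilities produced at successive moments
belief = logit(0.5)                     # uninformed prior: log-odds 0
for p in [0.7, 0.8, 0.6]:
    belief = binary_bayes_update(belief, p)

# Convert the final log-odds back to a probability
p_on_road = 1.0 / (1.0 + math.exp(-belief))
```

With the three example measurements the odds multiply to (0.7/0.3)·(0.8/0.2)·(0.6/0.4) = 14, so the filtered belief exceeds any single measurement while remaining below certainty.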
Descriptions of specific functions and examples of each module of the apparatus in the embodiments of the present disclosure may refer to related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
In the technical solutions of the present disclosure, the collection, storage, use, and other processing of the personal user information involved comply with relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 16 illustrates a schematic block diagram of an example electronic device 1600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 16, the apparatus 1600 includes a computing unit 1601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1602 or a computer program loaded from a storage unit 1608 into a Random Access Memory (RAM) 1603. In RAM 1603, various programs and data required for operation of device 1600 may also be stored. The computing unit 1601, ROM 1602, and RAM 1603 are connected to each other by a bus 1604. An input/output (I/O) interface 1605 is also connected to the bus 1604.
Various components in device 1600 are connected to I/O interface 1605, including: an input unit 1606 such as a keyboard, a mouse, and the like; an output unit 1607 such as various types of displays, speakers, and the like; a storage unit 1608, such as a magnetic disk, an optical disk, or the like; and a communication unit 1609, such as a network card, modem, wireless communication transceiver, or the like. Communication unit 1609 allows device 1600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1601 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1601 performs the various methods and processes described above, such as the road level positioning method. For example, in some embodiments, the road level positioning method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1608. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 1600 via the ROM 1602 and/or the communication unit 1609. When the computer program is loaded into the RAM 1603 and executed by the computing unit 1601, one or more steps of the road level positioning method described above may be performed. Alternatively, in other embodiments, the computing unit 1601 may be configured to perform the road level positioning method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: being implemented in one or more computer programs, wherein the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (25)

1. A road level positioning method, comprising:
determining a probability mass function corresponding to a road scene according to the road scene where a vehicle is located; and
performing evidence synthesis according to the probability mass function corresponding to the road scene to obtain a comprehensive probability that the vehicle is on a target road, wherein the comprehensive probability that the vehicle is on the target road is used for determining whether the vehicle is on the target road.
2. The method of claim 1, further comprising:
determining a road scene where the vehicle is located according to at least one of map information, geographic position information of the vehicle, and perception information of the vehicle.
3. The method according to claim 1 or 2, wherein the performing evidence synthesis according to the probability mass function corresponding to the road scene to obtain the comprehensive probability that the vehicle is on the target road includes:
obtaining the comprehensive probability corresponding to the road scene according to intersections of a plurality of elements in different probability mass functions corresponding to the road scene and a normalization parameter; wherein the normalization parameter is determined according to the values of elements whose intersections are not empty in the different probability mass functions.
4. A method according to claim 3, wherein the elements in the probability mass function comprise at least one of the following vehicle-to-road relationships: the vehicle being on the target road, the vehicle not being on the target road, and an unknown state;
the values of the elements in the probability mass function comprise at least one of the following: a first probability that the vehicle is on the target road, a second probability that the vehicle is not on the target road, and an unknown probability.
5. The method of claim 4, wherein the obtaining the comprehensive probability corresponding to the road scene according to the intersections of the plurality of elements in the different probability mass functions corresponding to the road scene and the normalization parameter comprises at least one of:
obtaining the comprehensive probability that the vehicle is on the target road according to a plurality of elements whose intersection is that the vehicle is on the target road in different probability mass functions corresponding to the road scene and the normalization parameter;
obtaining the comprehensive probability that the vehicle is not on the target road according to a plurality of elements whose intersection is that the vehicle is not on the target road in different probability mass functions corresponding to the road scene and the normalization parameter;
and obtaining the comprehensive probability corresponding to the unknown state according to a plurality of elements whose intersection is the unknown state in different probability mass functions corresponding to the road scene and the normalization parameter.
6. The method of any of claims 1 to 5, wherein the values of the elements in the probability mass function comprise probabilities derived using a basic probability assignment.
7. The method of any one of claims 1 to 6, wherein one road scene corresponds to one or more probability mass functions, the type of the one or more probability mass functions comprising at least one of: a road surface projection probability mass function; a boundary perception probability mass function; a lane line type perception probability mass function; a lane line number perception probability mass function; a road width perception probability mass function; a navigation route following probability mass function; and a scene perception probability mass function.
8. The method of claim 7, wherein the probability in the road surface projection probability mass function is determined based on at least one of: the boundary given by a map, a map boundary estimated according to errors, and a positioning error region where the vehicle is located;
the probability in the boundary perception probability mass function is determined based on at least one of: a perceived boundary intercept, a positioning boundary intercept, the boundary given by a map, a map boundary estimated according to errors, the real position of the vehicle, and a vehicle position with errors;
the probability in the lane line number perception probability mass function is determined based on at least one of: perceived lane lines, the boundary given by a map, and a vehicle position with errors;
the probability in the lane line type perception probability mass function is determined based on at least one of: perceived solid lane lines, perceived dashed lane lines, the boundary given by a map, and a vehicle position with errors;
the probability in the road width perception probability mass function is determined based on at least one of: a perceived boundary, the boundary given by a map, a perceived road width, a road width given by the map, and the vehicle position;
the probability in the navigation route following probability mass function is determined based on at least one of: navigation route information and the vehicle position;
the probability in the scene perception probability mass function is determined based on at least one of: the vehicle position and scene features around the vehicle position.
9. The method of claim 7 or 8, wherein the scene perception probability mass function comprises at least one of: a crossroad scene perception probability mass function; a lane line number change scene perception probability mass function; a lane line type change scene perception probability mass function; and a special-shaped lane line scene perception probability mass function.
10. The method of any one of claims 1 to 9, further comprising:
performing tracking processing according to a first state of the vehicle at a first moment and the comprehensive probability that the vehicle is on the target road at the first moment, to obtain a second state of the vehicle at a second moment.
11. The method of claim 10, wherein the tracking processing comprises static binary Bayesian filtering.
12. A road level positioning apparatus, comprising:
a function determining module, configured to determine a probability mass function corresponding to a road scene according to the road scene where a vehicle is located; and
an evidence synthesis module, configured to perform evidence synthesis according to the probability mass function corresponding to the road scene to obtain a comprehensive probability that the vehicle is on a target road, wherein the comprehensive probability that the vehicle is on the target road is used for determining whether the vehicle is on the target road.
13. The apparatus of claim 12, wherein the apparatus further comprises:
the scene determining module is used for determining a road scene where the vehicle is located according to at least one of map information, geographic position information of the vehicle and perception information of the vehicle.
14. The apparatus according to claim 12 or 13, wherein the evidence synthesis module is configured to obtain the comprehensive probability corresponding to the road scene according to intersections of a plurality of elements in different probability mass functions corresponding to the road scene and a normalization parameter; wherein the normalization parameter is determined according to the values of elements whose intersections are not empty in the different probability mass functions.
15. The apparatus of claim 14, wherein the elements in the probability mass function comprise at least one of the following vehicle-to-road relationships: the vehicle being on the target road, the vehicle not being on the target road, and an unknown state;
the values of the elements in the probability mass function comprise at least one of the following: a first probability that the vehicle is on the target road, a second probability that the vehicle is not on the target road, and an unknown probability.
16. The apparatus of claim 15, wherein the obtaining the comprehensive probability corresponding to the road scene according to the intersections of the plurality of elements in the different probability mass functions corresponding to the road scene and the normalization parameter comprises at least one of:
obtaining the comprehensive probability that the vehicle is on the target road according to a plurality of elements whose intersection is that the vehicle is on the target road in different probability mass functions corresponding to the road scene and the normalization parameter;
obtaining the comprehensive probability that the vehicle is not on the target road according to a plurality of elements whose intersection is that the vehicle is not on the target road in different probability mass functions corresponding to the road scene and the normalization parameter; and obtaining the comprehensive probability corresponding to the unknown state according to a plurality of elements whose intersection is the unknown state in different probability mass functions corresponding to the road scene and the normalization parameter.
17. The apparatus of any of claims 12 to 16, wherein the values of the elements in the probability mass function comprise probabilities derived using a basic probability assignment.
18. The apparatus of any of claims 12 to 17, wherein one road scene corresponds to one or more probability mass functions, the type of the one or more probability mass functions comprising at least one of: a road surface projection probability mass function; a boundary perception probability mass function; a lane line type perception probability mass function; a lane line number perception probability mass function; a road width perception probability mass function; a navigation route following probability mass function; and a scene perception probability mass function.
19. The apparatus of claim 18, wherein the probability in the road surface projection probability mass function is determined based on at least one of: the boundary given by a map, a map boundary estimated according to errors, and a positioning error region where the vehicle is located;
the probability in the boundary perception probability mass function is determined based on at least one of: a perceived boundary intercept, a positioning boundary intercept, the boundary given by a map, a map boundary estimated according to errors, the real position of the vehicle, and a vehicle position with errors;
the probability in the lane line number perception probability mass function is determined based on at least one of: perceived lane lines, the boundary given by a map, and a vehicle position with errors;
the probability in the lane line type perception probability mass function is determined based on at least one of: perceived solid lane lines, perceived dashed lane lines, the boundary given by a map, and a vehicle position with errors;
the probability in the road width perception probability mass function is determined based on at least one of: a perceived boundary, the boundary given by a map, a perceived road width, a road width given by the map, and the vehicle position;
the probability in the navigation route following probability mass function is determined based on at least one of: navigation route information and the vehicle position;
the probability in the scene perception probability mass function is determined based on at least one of: the vehicle position and scene features around the vehicle position.
20. The apparatus of claim 18 or 19, wherein the scene perception probability mass function comprises at least one of: a crossroad scene perception probability mass function; a lane line number change scene perception probability mass function; a lane line type change scene perception probability mass function; and a special-shaped lane line scene perception probability mass function.
21. The apparatus according to any one of claims 12 to 20, wherein the apparatus further comprises:
a tracking processing module, configured to perform tracking processing according to a first state of the vehicle at a first moment and the comprehensive probability that the vehicle is on the target road at the first moment, to obtain a second state of the vehicle at a second moment.
22. The apparatus of claim 21, wherein the tracking processing comprises static binary Bayesian filtering.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-11.
Priority Applications (1)

CN202311755727.6A, priority date 2023-12-19, filing date 2023-12-19 — Road level positioning method, device, equipment and storage medium
Publications (1)

Publication Number: CN117804478A; Publication Date: 2024-04-02


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination