CN115235493B - Method and device for automatic driving positioning based on vector map - Google Patents

Method and device for automatic driving positioning based on vector map

Info

Publication number
CN115235493B
CN115235493B (application CN202210848452.XA)
Authority
CN
China
Prior art keywords
characteristic
feature
identifier
processing
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210848452.XA
Other languages
Chinese (zh)
Other versions
CN115235493A (en)
Inventor
陶绍源
Current Assignee
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202210848452.XA priority Critical patent/CN115235493B/en
Publication of CN115235493A publication Critical patent/CN115235493A/en
Application granted granted Critical
Publication of CN115235493B publication Critical patent/CN115235493B/en


Classifications

    • G01C21/343 — Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C21/1652 — Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656 — Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/3446 — Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G01S19/47 — Determining position by combining satellite radio beacon positioning measurements with a supplementary inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a method and a device for automatic driving positioning based on a vector map, relates to the technical field of vehicle automatic driving, and aims to obtain a more accurate vehicle pose while balancing processing cost against positioning precision. The main technical scheme of the invention is as follows: acquiring an initial pose corresponding to a vehicle; processing the current image captured by the vehicle based on the initial pose to obtain first feature elements contained in first feature identifiers in the current image; acquiring, from a vector map and based on the initial pose, second feature elements of second feature identifiers corresponding to the vehicle; and processing the first feature elements and the second feature elements that have a matching relationship with a preset cost function, so as to correct the initial pose based on the processing result and obtain a target pose.

Description

Method and device for automatic driving positioning based on vector map
Technical Field
The invention relates to the technical field of automatic driving of vehicles, in particular to a method and a device for automatic driving positioning based on a vector map.
Background
A key technique for high-level automatic driving is high-precision positioning: during automatic driving, the vehicle needs to estimate its pose in the world (or in a map) at any given moment.
Currently, two classical schemes are mainly adopted to realize high-precision positioning for vehicle automatic driving: one scheme performs positioning based on a lidar and a laser point-cloud map; the other performs positioning based on vision and a visual feature-point map. A brief explanation of these two classical schemes follows.
For the former scheme, in the mapping phase a point-cloud map of the scene may be constructed using a lidar together with other devices (e.g., devices applying inertial navigation and Real-Time Kinematic (RTK) carrier-phase differential techniques); in the positioning phase, the point cloud currently scanned by the lidar is matched against the point-cloud map to obtain the current vehicle pose. However, in this scheme the lidar device is expensive, the point-cloud map contains a large number of raw scan points in the scene and is therefore bulky, and the point-cloud matching operation also carries a substantial computation cost.
For the latter scheme, in the mapping phase a visual feature-point map of the scene may be constructed using a visual sensor (camera) together with other devices (e.g., devices applying inertial navigation and Real-Time Kinematic (RTK) carrier-phase differential techniques); in the positioning phase, the feature points detected in the current image frame are matched against the visual feature-point map to obtain the current vehicle pose. However, in large-scale outdoor scenes, factors such as illumination and weather adversely affect the extraction of feature points from an image frame, which degrades the positioning accuracy.
In summary, although the former scheme achieves high-precision vehicle positioning, it requires high hardware and computation costs and is therefore difficult to implement; the latter scheme costs less, but its positioning accuracy is hard to guarantee. A better way to reconcile implementation cost with high-precision positioning is needed.
Disclosure of Invention
In view of the above, the present invention provides a method and apparatus for automatic driving positioning based on a vector map. Its main purpose is to provide an optimized automatic driving positioning method using a vector map that has low implementation cost and can meet the high-precision requirement for positioning.
In order to achieve the above object, the present invention mainly provides the following technical solutions:
The first aspect of the invention provides a method for automatic driving positioning based on a vector map, comprising the following steps:
acquiring an initial pose corresponding to a vehicle;
processing the current image captured by the vehicle based on the initial pose to obtain first feature elements contained in first feature identifiers in the current image;
acquiring, from a vector map and based on the initial pose, second feature elements of second feature identifiers corresponding to the vehicle;
and processing the first feature elements and the second feature elements that have a matching relationship with a preset cost function, so as to correct the initial pose based on a processing result and obtain a target pose.
In some modified embodiments of the first aspect of the present invention, processing the current image captured by the vehicle based on the initial pose to obtain the first feature elements contained in the first feature identifiers in the current image includes:
processing the current image with a preset image semantic-segmentation model and extracting at least one first feature identifier from the current image;
creating first layers corresponding to the current image according to the number of first feature identifiers;
placing each first feature identifier into its uniquely corresponding first layer to obtain the pixels covered by the first feature identifier in that layer;
performing distance-transform processing on the pixels covered by each first feature identifier to obtain target pixels;
and composing, from the target pixels, the first feature element corresponding to the first feature identifier.
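As an illustrative sketch (not the claimed implementation) of the per-identifier layers and the distance-transform step above, the following Python code builds one boolean layer for a feature identifier and computes, for every pixel, the distance to the nearest covered pixel. The brute-force loop-free computation is for clarity only; a production system would use a linear-time distance-transform algorithm.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force distance transform: for every pixel, the Euclidean distance
    to the nearest covered (True) pixel of the feature identifier. Illustrative
    only; assumes the mask contains at least one covered pixel."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    covered = np.stack([ys, xs], axis=1)              # (K, 2) covered pixels
    yy, xx = np.mgrid[0:h, 0:w]
    pix = np.stack([yy.ravel(), xx.ravel()], axis=1)  # (H*W, 2) all pixels
    d = np.linalg.norm(pix[:, None, :] - covered[None, :, :], axis=2)
    return d.min(axis=1).reshape(h, w)

# One layer per feature identifier: here a vertical "lane line" in column 2.
layer = np.zeros((5, 5), dtype=bool)
layer[:, 2] = True
dist = distance_transform(layer)  # 0 on the line, growing away from it
```

The transformed layer makes the later cost evaluation smooth: pixels near the identifier have small values, so a projected map point can be scored by a simple lookup.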
In some modified embodiments of the first aspect of the present invention, acquiring, from the vector map and based on the initial pose, the second feature elements of the second feature identifiers corresponding to the vehicle includes:
extracting, from the vector map and based on the initial pose, second feature identifiers within a preset range of the vehicle;
creating second layers corresponding to the current image;
projecting each second feature identifier into its uniquely corresponding second layer to obtain the pixels covered by the second feature identifier in that layer;
and composing, from the pixels covered by each second feature identifier, the second feature element corresponding to that identifier.
In some modified embodiments of the first aspect of the present invention, before projecting the second feature identifiers into their uniquely corresponding second layers, the method further includes:
if a linear identifier exists among the second feature identifiers, performing equidistant sampling on the linear identifier to obtain a plurality of corresponding discrete points;
and using the plurality of discrete points to characterize the linear identifier in its place.
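The equidistant sampling of a linear identifier can be sketched as follows; `sample_polyline` is a hypothetical helper (the name and interface are assumptions for the example) that replaces a polyline with discrete points spaced a fixed arc-length apart.

```python
import math

def sample_polyline(points, step):
    """Replace a linear identifier (polyline of (x, y) vertices) with discrete
    points spaced `step` units apart along its arc length."""
    # Cumulative arc length at each vertex.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    samples, d, i = [], 0.0, 0
    while d <= total:
        # Advance to the segment containing arc length d.
        while cum[i + 1] < d:
            i += 1
        seg = cum[i + 1] - cum[i]
        t = 0.0 if seg == 0 else (d - cum[i]) / seg
        x0, y0 = points[i]
        x1, y1 = points[i + 1]
        samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        d += step
    return samples
```

Sampling a 2-unit segment with step 1 yields three points, including both endpoints; the discrete points then stand in for the continuous lane line during projection.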
In some modified embodiments of the first aspect of the present invention, processing the first feature elements and the second feature elements that have a matching relationship with the preset cost function, so as to correct the initial pose based on the processing result and obtain the target pose, includes:
searching for the first feature elements matching the second feature elements through the same feature identifier, to obtain, for each shared feature identifier, a feature-element combination containing the first and second feature elements that have a matching relationship;
processing each feature-element combination with the preset cost function to obtain, for each combination, a cost-function value reflecting the degree of matching between its second and first feature elements;
and correcting the initial pose based on the minimum cost-function value to obtain the target pose.
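The correction step can be illustrated with a deliberately simplified sketch: projected map points (second feature elements) are shifted by candidate pose offsets, each offset is scored by summing a distance lookup toward the camera-detected features (first feature elements), and the offset with the minimum cost-function value corrects the initial pose. The brute-force planar search and all names here are assumptions for illustration; the patent does not specify this optimiser.

```python
def pose_cost(offset, map_points, dist_lookup):
    """Sum the distance-to-feature values that the shifted map points land on."""
    dx, dy = offset
    return sum(dist_lookup(x + dx, y + dy) for (x, y) in map_points)

def correct_pose(initial, map_points, dist_lookup, search=1, step=1):
    """Brute-force search over small planar offsets; the offset with the
    minimum cost corrects the initial (x, y) pose."""
    best, best_cost = (0, 0), float("inf")
    for i in range(-search, search + 1):
        for j in range(-search, search + 1):
            c = pose_cost((i * step, j * step), map_points, dist_lookup)
            if c < best_cost:
                best, best_cost = (i * step, j * step), c
    x0, y0 = initial
    return (x0 + best[0], y0 + best[1]), best_cost

# Toy example: camera features lie at (3, 0), (3, 1), (3, 2); the map points
# were projected at x == 2, so the search should correct the pose by +1 in x.
dist_lookup = lambda x, y: min(abs(x - 3) + abs(y - t) for t in (0, 1, 2))
map_points = [(2, 0), (2, 1), (2, 2)]
corrected, cost = correct_pose((10.0, 5.0), map_points, dist_lookup)
```

In practice the cost would be evaluated against a distance-transformed layer and minimised with a continuous optimiser over all six pose degrees of freedom, but the principle — minimum cost-function value picks the pose correction — is the same.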
The second aspect of the present invention provides an apparatus for automatic driving positioning based on a vector map, the apparatus comprising:
a first acquisition unit, configured to acquire an initial pose corresponding to the vehicle;
a first processing unit, configured to process the current image captured by the vehicle based on the initial pose to obtain first feature elements contained in first feature identifiers in the current image;
a second acquisition unit, configured to acquire, from a vector map and based on the initial pose, second feature elements of second feature identifiers corresponding to the vehicle;
and a second processing unit, configured to process the first feature elements and the second feature elements that have a matching relationship with a preset cost function, so as to correct the initial pose based on a processing result and obtain a target pose.
In some variant embodiments of the second aspect of the present invention, the first processing unit includes:
a first extraction module, configured to process the current image with a preset image semantic-segmentation model and extract at least one first feature identifier from the current image;
a first creation module, configured to create first layers corresponding to the current image according to the number of first feature identifiers;
a placement module, configured to place each first feature identifier into its uniquely corresponding first layer to obtain the pixels covered by the first feature identifier in that layer;
a processing module, configured to perform distance-transform processing on the pixels covered by each first feature identifier to obtain target pixels;
and a first composition module, configured to compose, from the target pixels, the first feature element corresponding to the first feature identifier.
In some modified embodiments of the second aspect of the present invention, the second obtaining unit includes:
a second extraction module, configured to extract, from a vector map and based on the initial pose, second feature identifiers within a preset range of the vehicle;
a second creation module, configured to create second layers corresponding to the current image;
a projection module, configured to project each second feature identifier into its uniquely corresponding second layer to obtain the pixels covered by the second feature identifier in that layer;
and a second composition module, configured to compose, from the pixels covered by each second feature identifier, the second feature element corresponding to that identifier.
In some modified embodiments of the second aspect of the present invention, the second obtaining unit further includes:
a sampling module, configured to perform equidistant sampling on any linear identifier among the second feature identifiers, before the second feature identifiers are projected into their corresponding second layers, to obtain a plurality of corresponding discrete points;
and a substitution module, configured to use the plurality of discrete points to characterize the linear identifier in its place.
In some variant embodiments of the second aspect of the present invention, the second processing unit includes:
a search module, configured to search for the first feature elements matching the second feature elements through the same feature identifier, to obtain, for each shared feature identifier, a feature-element combination containing the first and second feature elements that have a matching relationship;
a processing module, configured to process each feature-element combination with a preset cost function to obtain, for each combination, a cost-function value reflecting the degree of matching between its second and first feature elements;
and an implementation module, configured to correct the initial pose based on the minimum cost-function value to obtain the target pose.
A third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for automatic driving positioning based on a vector map described above.
A fourth aspect of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for automatic driving positioning based on a vector map described above when executing the computer program.
By means of the technical scheme, the technical scheme provided by the invention has at least the following advantages:
The invention provides a method and a device for automatic driving positioning based on a vector map. Because the current image is captured and the second feature identifiers are acquired from the vector map under the same initial pose, the first feature elements and the second feature elements can be matched through the same feature identifier. The matched elements are processed with a preset cost function, and the initial pose is corrected based on the processing result to obtain the target pose, thereby realizing high-precision positioning of the automatic driving vehicle.
Compared with the prior art, the method provided by the invention requires no complex algorithm and little computation, and a high-precision vector map is easy to obtain; the implementation cost of the scheme is therefore low while the requirement for high positioning precision is still met, which solves the prior-art problem that implementation cost and high-precision positioning are difficult to reconcile.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be understood more clearly and implemented according to the content of the specification, and that the above and other objects, features and advantages of the invention may become more apparent, embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a method for automatic driving positioning based on a vector map according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for performing automatic driving positioning based on a vector map according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a distance transformation image Dck obtained by performing a distance transformation process on Ick according to an exemplary embodiment of the present invention;
FIG. 4 is a block diagram of an apparatus for automatic driving positioning based on a vector map according to an embodiment of the present invention;
fig. 5 is a block diagram of another apparatus for performing automatic driving positioning based on a vector map according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiment of the invention provides a method for automatic driving positioning based on a vector map, as shown in Fig. 1, which comprises the following specific steps:
101. Acquire the initial pose corresponding to the vehicle.
Vehicle pose refers to the position and attitude of the vehicle in the world or in a map. The position is typically expressed in Euclidean (Cartesian) coordinates (x, y, z), and the attitude is typically expressed in Euler angles (rotation angles about the x/y/z axes) or as a quaternion (x, y, z, w).
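As a non-limiting illustration of this pose representation, the following Python sketch stores a position in Cartesian coordinates and an attitude as a unit quaternion, with a conversion to Euler angles. The class name `VehiclePose` and the Z-Y-X angle convention are assumptions for the example, not part of the claimed scheme.

```python
import numpy as np

class VehiclePose:
    """Hypothetical minimal pose container: position (x, y, z) plus an
    orientation quaternion (x, y, z, w), normalised on construction."""
    def __init__(self, position, quaternion):
        self.position = np.asarray(position, dtype=float)
        q = np.asarray(quaternion, dtype=float)
        self.quaternion = q / np.linalg.norm(q)

    def yaw_pitch_roll(self):
        """Quaternion -> Euler angles (Z-Y-X convention), in radians."""
        x, y, z, w = self.quaternion
        yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
        pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
        roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
        return yaw, pitch, roll

pose = VehiclePose(position=(10.0, 5.0, 0.0), quaternion=(0.0, 0.0, 0.0, 1.0))
```

Quaternions avoid the gimbal-lock ambiguity of Euler angles, which is why both representations are commonly kept side by side.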
In a vehicle automatic driving application, the initial pose of the vehicle is obtained in two different example scenarios: in example scenario 1, the current vehicle pose when the vehicle has just started is taken as the initial pose; in example scenario 2, the vehicle pose at any moment selected during driving is taken as the initial pose. In the embodiment of the invention, the initial pose is calculated from a large amount of data and is, in effect, an estimate of the current pose of the vehicle.
The specific ways of acquiring the initial pose in these two example scenarios are as follows:
Example scenario 1: when the vehicle has just started, the initial pose corresponding to the current moment can be obtained directly from a global pose sensor, for example a Global Navigation Satellite System (GNSS) receiver.
Example scenario 2: during driving, the initial pose at the current moment can be predicted from the target pose calculated at the previous moment according to step 104, using other sensors such as an Inertial Measurement Unit (IMU) or wheel speed meter data.
For example, given the target pose of the vehicle at the previous moment, the acceleration and angular velocity measured by the IMU at the current moment can be integrated over the time difference between the previous and current moments to obtain the position and attitude increments of the current moment relative to the previous one; superimposing these increments on the previous target pose yields the initial pose at the current moment. The pose increment can also be obtained by integrating over time the wheel speed provided by the vehicle's wheel speed meter.
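The dead-reckoning prediction described above can be sketched as follows. This simplified planar Euler integration (the name `propagate_pose` and the 2-D state are illustrative assumptions) only shows the idea of superimposing IMU-derived increments on the previous target pose, not the exact filter used by the invention.

```python
import math

def propagate_pose(x, y, yaw, v, accel, yaw_rate, dt):
    """Dead-reckoning sketch: propagate a planar pose (x, y, yaw) and speed v
    over dt using IMU-style longitudinal acceleration and yaw rate."""
    yaw_new = yaw + yaw_rate * dt
    v_new = v + accel * dt
    # Use the mid-point heading for a slightly better position increment.
    yaw_mid = yaw + 0.5 * yaw_rate * dt
    x_new = x + v * dt * math.cos(yaw_mid)
    y_new = y + v * dt * math.sin(yaw_mid)
    return x_new, y_new, yaw_new, v_new
```

With the wheel speed meter instead of the IMU, `v` would be read directly and only the position increment integrated.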
102. Process the current image captured by the vehicle based on the initial pose to obtain the first feature elements contained in the first feature identifiers in the current image.
The first feature element refers to a feature element of a feature identifier contained in the current image of the vehicle. The feature identifier may be, but is not limited to: traffic markings such as lane lines, stop lines and traffic lights; and roadside features such as road edges, green isolation belts and street lamps.
Illustratively, the first feature element may be the pixel-level representation of the corresponding feature identifier in an image, namely which pixels it covers in the image, so that the feature identifier is visually displayed through those pixels. For example, if a lane line is captured in the current image, the lane line is a first feature identifier, and the pixels it covers in the current image are the first feature elements corresponding to that lane line.
It should be noted that "feature identifiers" and "feature elements" are mentioned many times in the technical solution of the present invention. For ease of explanation, the feature identifiers obtained from the captured current image are referred to as "first feature identifiers", and the feature elements possessed by a first feature identifier are referred to as "first feature elements".
Correspondingly, the feature identifiers obtained from the vector map that exist around the vehicle are referred to as "second feature identifiers", and the feature elements of such an identifier are referred to as "second feature elements".
However, since the pose comprises both the position and the attitude of the vehicle, the imaging of a first feature identifier in the current image may differ slightly from its actual shape on the road surface depending on the pose. For example, if the vehicle is at an angle to a lane line rather than parallel to it, the lane line imaged by the vehicle's left or right camera sensor is trapezoidal rather than rectangular; the imaging also differs with the mounting position of the camera sensor on the vehicle (e.g., front-left or rear-left). Accordingly, the first feature elements of the same first feature identifier in the current image also differ with the pose.
In the embodiment of the present invention, at least one camera sensor is installed on the vehicle in advance; the mounting positions may be, but are not limited to, the front end, the rear end and the left/right ends of the vehicle. Once the initial pose of the vehicle is determined, each camera sensor can capture its corresponding current image.
When capturing the current image, the embodiment of the invention aims to capture as many first feature identifiers as possible, so that more first feature elements can be used in the subsequent steps to finally obtain a more accurate target pose. To this end, the mounting position, field of view and imaging sharpness of the camera sensors may be adjusted, among other measures.
103. And acquiring a second characteristic element of a second characteristic identifier corresponding to the vehicle from the vector map based on the initial pose.
The second feature element refers to a feature element included in a second feature identifier included in the vector map. It should be noted that the second feature identifier may also, but is not limited to: traffic marks such as lane lines, stop lines, traffic lights and the like; road conditions such as road edges, green isolation belts, street lamps and the like. The comparison shows that the first feature identifier and the second feature identifier are identical in actual reference data type and range, but different in source, and the source of the first feature identifier is as follows: based on the initial pose of the vehicle, a feature identifier is acquired from a current image of the vehicle; and the source of the second signature is: based on the initial pose of the vehicle, the feature identifiers existing nearby the vehicle are obtained from the vector map.
The vector map may be, but is not limited to being, downloaded from a third-party application; the high-precision requirement for the vector map can be met through different source channels.
In the embodiment of the invention, on the premise that the initial pose of the vehicle is determined, the second feature identifiers existing in the vicinity of the vehicle are obtained from the vector map, and the second feature element of each second feature identifier is further analyzed. Illustratively, the second feature element may be obtained by projecting the second feature identifier into an image and characterizing it at the pixel level, namely: which pixels it covers in the image, so that the corresponding feature identifier is visually displayed in the image by those pixels. For example, when a lane line near the vehicle is obtained from the vector map, the lane line is projected into an image as a second feature identifier, and all the pixel points covered by the lane line in the image serve as its second feature elements.
However, although the pose comprises the position and the attitude of the vehicle, the pose of the vehicle does not affect the second feature identifier obtained from the vector map. Therefore, the imaging of the second feature identifier obtained from the vector map is substantially the same as the shape of the actual feature identifier on the road surface, and is affected only by the accuracy of the vector map.
Thus, in the embodiment of the invention, even with the vehicle position unchanged, the same feature identifier has different first feature elements and second feature elements under different poses.
104. And processing the first characteristic element and the second characteristic element which have the matching relation by using a preset cost function so as to correct the initial pose based on the processing result to obtain the target pose.
In the embodiment of the invention, the first feature identifier is obtained by capturing the current image of the vehicle, and the second feature identifier corresponding to the vehicle is obtained from the vector map. Although the two are obtained through two different channels, both operations are based on the same initial pose, so the feature identifiers they respectively represent may in fact be the same.
For example, for a running vehicle, the pose of the vehicle at some moment is taken as the initial pose; based on this initial pose, a dashed lane line on the left side is captured by the camera sensor mounted at the left end of the vehicle, serving as a first feature identifier acquired from the captured current image.
Based on the same initial pose, a plurality of feature identifiers existing in the vicinity of the vehicle can be acquired from the vector map, such as the left and right dashed lane line identifiers of the driving lane in which the vehicle is located, and the lane line identifiers of adjacent driving lanes. Accordingly, the left dashed lane line identifier (i.e., the same feature identifier) is obtained through both of the two different channels.
However, affected by the vehicle attitude contained in the initial pose, the first feature element and the second feature element of the same feature identifier may be the same or different; what can be determined is that the two have a matching relationship based on the same feature identifier.
Accordingly, in the embodiment of the invention, taking the second feature identifier and its second feature element obtained from the vector map as the reference, the preset cost function is used to calculate a cost value measuring the matching degree between the second feature element and the first feature element. This matching degree is used as the processing result to correct the initial pose (i.e., the guessed pose of the current vehicle) so as to locate a more accurate current pose (i.e., the target pose).
The embodiment of the invention provides a method for automatic driving positioning based on a vector map. On the premise of the same initial pose, the feature identifier on the actual road surface is captured by photographing and the feature identifier near the vehicle is obtained from the vector map, so that the first feature element and the second feature element are matched based on the same feature identifier. The first feature element and the second feature element having a matching relationship are processed with a preset cost function, and the initial pose is corrected based on the processing result to obtain the target pose, thereby realizing high-precision positioning of the autonomous vehicle.
Compared with the prior art, the method provided by the embodiment of the invention requires no complex algorithm and has low calculation cost, and the high-precision vector map is easy to obtain. The implementation cost of the scheme is therefore low while the requirement for high positioning precision is met, solving the problem that implementation cost and high-precision positioning are difficult to reconcile in the prior art.
To describe the above embodiment in more detail, the embodiment of the present invention further provides another method for performing automatic driving positioning based on a vector map, as shown in fig. 2, with the following specific steps:
Firstly, it should be noted that, for convenience and clarity in explaining the method for performing automatic driving positioning based on the vector map provided by the embodiment of the invention, the feature identifier, the feature element and the created layer obtained by capturing the current image will be marked with the word "first", and the feature identifier, the feature element and the created layer obtained from the vector map will be marked with the word "second", so as to distinguish the related data information obtained from the two different channels.
201. And acquiring the initial pose corresponding to the vehicle.
In the embodiment of the present invention, for the explanation of this step, refer to step 101, which is not repeated here. For example, the initial pose is taken as the guessed pose of the vehicle at the current moment and is denoted Tk, the pose of the vehicle at time k.
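As an illustration of the notation, the pose Tk can be represented as a 4x4 homogeneous transformation combining the vehicle's attitude (a rotation) and position (a translation). The helper below is a minimal sketch with illustrative values, not part of the patent's implementation:

```python
import numpy as np

# Illustrative sketch: a pose T_k combines the vehicle's attitude (rotation R)
# and position (translation t) in one 4x4 homogeneous matrix. The helper name
# and the values are assumptions for illustration only.
def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3] = R      # attitude
    T[:3, 3] = t       # position
    return T

# Identity attitude, vehicle translated 2 m along x.
T_k = make_pose(np.eye(3), np.array([2.0, 0.0, 0.0]))
```

Such a matrix maps points between the map frame and the vehicle (or camera) frame, which is how Tk is used in the projection and cost formulas of the later steps.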
Next, in conjunction with steps 202a-206a, a detailed explanation is given of how, based on the initial pose, the captured current image of the vehicle is processed to obtain the first feature element contained in the first feature identifier in the current image:
202a, processing the current image by using a preset image semantic segmentation model, and extracting at least one first feature identifier from the current image.
The preset image semantic segmentation model is a model trained in advance based on a deep learning network, and the embodiment of the invention mainly uses this model to identify the feature identifiers present in the image. To distinguish feature identifiers from different acquisition channels, a feature identifier acquired from the captured image is referred to as a first feature identifier, and a feature identifier acquired from the vector map is referred to as a second feature identifier.
203A, creating a first layer corresponding to the current image according to the number of the first feature identifiers.
204A, placing each first feature identifier into a first layer corresponding to the first feature identifier, and obtaining pixel points covered by the first feature identifiers in the first layer.
In the embodiment of the invention, the layers are created based on the current image, so that each layer and the current image have the same attribute information, including but not limited to resolution, size and the like.
For ease of reference, a layer created for a first feature identifier is referred to as a first layer in the embodiments of the present invention. If the current image contains three first feature identifiers, namely a lane line, a road edge and a traffic light, three first layers are correspondingly created, and each first feature identifier is placed in its unique corresponding first layer.
For the first feature identifier placed in the first layer, it should be noted that the pixel coordinates of the first feature identifier in the first layer are the same as the pixel coordinates of the first feature identifier in the current image corresponding to the vehicle.
Illustratively, the pixels covered by the first feature identifier in the first layer are determined using the following formula (1):

Ick(P_I) = 1, if the pixel point P_I is covered by the first feature identifier c; Ick(P_I) = 0, otherwise.    Formula (1)

For formula (1): k refers to the frame index of the current image. c describes the class of the first feature identifier; this is not a "classification category" in the usual sense, since in the embodiment of the invention each distinct first feature identifier is judged as its own class (for example, two lane lines at different positions, a stop line and a road edge are judged as four classes), which makes it convenient to attribute a pixel point to a particular feature identifier. Ick refers to a first layer, specifically the first layer of the first feature identifier c in the captured k-th frame of the current image; taking a lane line as an example, Ick is the first layer of the lane line in the captured k-th frame. P_I is a pixel point contained in the first layer Ick. The embodiment of the invention uses "1" and "0" for the judgment of "yes" and "no"; specifically, it is judged whether P_I is a pixel point covered by the first feature identifier c on the first layer Ick.
Specifically, taking a lane line as an example, certain designated pixels in the first layer Ick are covered so that they image as a lane line. For each pixel point P_I contained in the first layer Ick, the judgment operation ("yes" or "no", i.e., "1" or "0") is applied one by one: if a P_I is judged as "1", it is determined to be a designated pixel point; otherwise it is judged as "0". The corresponding visualization on the first layer is: a P_I judged as "1" appears black in the first layer, and a P_I judged as "0" appears white. By judging each P_I within the first layer as "1" or "0", the imaging of the lane line (i.e., the first feature identifier) is presented within the first layer. This converts the first layer into a binary (0/1) image; that is, the first feature identifier contained in the first layer is expressed using the binary (0/1) image.
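The per-class binary layers of formula (1) can be sketched as follows. The label array and class ids here are illustrative assumptions, not the output format of any particular segmentation model:

```python
import numpy as np

# Sketch of formula (1): from a per-pixel semantic segmentation label map,
# build one binary (0/1) first layer I_ck per feature class c.
def build_layers(seg_labels, class_ids):
    """seg_labels: (H, W) integer array, one class id per pixel.
    Returns {c: (H, W) uint8 layer, 1 where the pixel is covered by class c}."""
    return {c: (seg_labels == c).astype(np.uint8) for c in class_ids}

seg = np.zeros((4, 6), dtype=int)
seg[:, 2] = 1                      # a one-pixel-wide vertical "lane line", class 1
layers = build_layers(seg, class_ids=[1])
```

Each resulting layer has the same resolution as the current image, matching the attribute requirement described above.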
205A, performing distance conversion processing on the pixel points covered by the first feature identifiers to obtain target pixel points.
In the embodiment of the invention, after the pixel points covered by the first feature identifier on the corresponding first layer are obtained using formula (1), a distance transformation is further applied to these pixel points, so that in the resulting image the pixel value is smallest at positions on the first feature identifier and grows with the distance from it. The visual effect is: the first feature identifier itself appears very black, and positions farther from the first feature identifier appear increasingly blurred or whiter, producing a gradient effect. The distance conversion process is realized by adopting the following formula (2):

Dck(P_I) = 0, if Ick(P_I) = 1; Dck(P_I) = the distance from P_I to the nearest pixel point whose value in Ick is 1, if Ick(P_I) = 0.    Formula (2)
For equation (2), the specific explanation is as follows:
(1) P_I represents any pixel point in the first layer; its pixel coordinate is generally written (u, v), where u denotes the column and v the row of the pixel point, with the upper left corner of the image as the origin (0, 0);
(2) Ick is the first layer described above, and Ick(P_I) represents the pixel value at the P_I pixel position of the Ick image;
(3) Dck is the image generated by the reconstruction, and the pixel value of each of its pixel points is determined as follows:
i. When Ick(P_I) = 1, i.e., the pixel value at the P_I position of Ick is 1, the pixel value at the P_I position of Dck is set to 0;
ii. When Ick(P_I) = 0, i.e., the pixel value at the P_I position of Ick is 0, the pixel point in Ick whose value is 1 and which is nearest to P_I is found, and the distance in (u, v) between that pixel point and P_I is set as the pixel value at the P_I position of Dck.
Specifically, fig. 3 illustrates a schematic diagram of the distance-transform image Dck obtained by applying the distance transformation to Ick. In the embodiment of the present invention, the distance transformation is applied to the pixel points in each first layer, so that a reconstructed Dck image corresponding to each first layer is obtained.
Through the above transformation, the original binary (0/1) image is converted into an image with continuous brightness; that is, the farther a pixel is from the points whose value in the original Ick is 1, the brighter it is. The result Dck of this transformation can be used for the nonlinear optimization in the subsequent steps.
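The distance transformation of formula (2) can be sketched with a brute-force Euclidean distance for clarity; a production system would instead use a library distance transform (e.g. OpenCV's cv2.distanceTransform), and the exact distance metric is an assumption here:

```python
import numpy as np

# Brute-force sketch of formula (2): D_ck is 0 on pixels where I_ck is 1 and
# equals the distance to the nearest such pixel elsewhere. Euclidean distance
# is assumed; the O(N^2) loop is for illustration only.
def distance_layer(I_ck):
    ys, xs = np.nonzero(I_ck)                  # pixels covered by the feature
    H, W = I_ck.shape
    D = np.zeros((H, W))
    for v in range(H):
        for u in range(W):
            if I_ck[v, u] == 0:
                D[v, u] = np.hypot(ys - v, xs - u).min()
    return D

I = np.zeros((5, 5), dtype=np.uint8)
I[2, 2] = 1                                    # single feature pixel
D = distance_layer(I)                          # grows away from (2, 2)
```

Rendered as an image, D is darkest on the feature and brightens with distance, which is exactly the gradient effect described above.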
206A, composing the first feature element corresponding to the first feature identifier by using the target pixel point.
In the embodiment of the invention, the first feature identifier placed in the first layer covers corresponding pixel points in that layer according to its captured imaging, and the distance transformation applied to those pixel points enhances the imaging effect. Accordingly, the embodiment of the invention in effect marks the pixel points carrying the enhanced imaging effect as the first feature element of the first feature identifier on the first layer.
It should be noted that, since the pose comprises the position and the attitude of the vehicle, the imaging of a feature identifier captured in the current image of the vehicle differs slightly from its actual shape on the road surface depending on the pose, so the imaging of the first feature identifier formed from the first feature elements also differs from the actual shape on the road surface.
In the following, as a process parallel to "acquiring the first feature element of the first feature identifier from the captured current image of the vehicle based on the initial pose", the embodiment of the present invention further acquires the second feature element of the second feature identifier corresponding to the vehicle from the vector map based on the initial pose, which is explained in detail in conjunction with steps 202b-205b.
202B, extracting a second characteristic identifier within a preset range from the vehicle from the vector map based on the initial pose.
203B, creating a second image layer corresponding to the current image based on the number of second feature identifiers.
In the embodiment of the invention, the second characteristic identification within the preset range from the vehicle is extracted from the vector map based on the vehicle positioning contained in the initial pose of the vehicle.
Further, based on the number of the second feature identifiers, the embodiment of the invention creates a corresponding number of second layers, and attribute information of each second layer is the same as a current image obtained by shooting the current running road condition of the vehicle, and the attribute information includes, but is not limited to, resolution, size and the like.
204B, projecting the second feature identifier into the unique corresponding second image layer to obtain the pixel point covered by the second feature identifier in the second image layer.
In the embodiment of the invention, the second feature identifiers obtained from the vector map are projected into the image captured by the camera sensor; specifically, each second feature identifier is projected into its unique corresponding second layer. This is equivalent to converting coordinates between the vector-map world and the camera-sensor world and then converting to pixel coordinates, and is realized by adopting the following formula (3):

Z · P_I = K · Tk · Pm    Formula (3)
Specifically, the projection process is explained using the formula (3) as follows:
(1) Pm is a point in the map, and Tk is the pose of the camera sensor in the map at time k. Since the camera sensor mentioned here is the same one used to capture the current image in step 202a, Tk is also equivalent to the initial pose of the vehicle mentioned in step 201. Pc, calculated by Pc = Tk · Pm, is the coordinate of Pm in the coordinate system of the camera sensor;
(2) K is the internal reference (intrinsic) matrix of the camera sensor (i.e., the camera), whose general form is a 3 x 3 matrix with the focal lengths fx and fy on the diagonal, the principal point (cx, cy) in the last column and 1 in the bottom-right entry. Pp, further obtained by Pp = K · Pc = K · Tk · Pm, is the position of the point Pc in the pixel coordinate system;
(3) Z is the third-dimension coordinate of Pp. Dividing the coordinates of Pp by Z gives the normalized pixel coordinate P_I whose third dimension is 1; this normalized pixel coordinate is the final pixel coordinate of Pm projected onto the image.
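The projection of formula (3) can be sketched as follows; the intrinsic values are illustrative, not a calibrated camera:

```python
import numpy as np

# Sketch of formula (3), Z*P_I = K*Tk*Pm: a homogeneous map point Pm is moved
# into the camera frame by the pose Tk, multiplied by the intrinsic matrix K,
# and divided by the third coordinate Z to obtain pixel coordinates (u, v).
def project(P_m, T_k, K):
    P_c = (T_k @ P_m)[:3]          # Pc = Tk * Pm, camera coordinate system
    P_p = K @ P_c                  # un-normalized pixel coordinates
    return P_p[:2] / P_p[2]        # divide by Z

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_k = np.eye(4)                                            # identity pose
uv = project(np.array([0.0, 0.0, 10.0, 1.0]), T_k, K)      # on the optical axis
uv2 = project(np.array([1.0, 0.0, 10.0, 1.0]), T_k, K)     # 1 m to the side
```

A point on the optical axis lands on the principal point (320, 240), and a laterally offset point lands to its side, as the pinhole model predicts.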
It should be noted that, for some line-type identifiers (as second feature identifiers), the embodiment of the present invention may perform equidistant sampling processing on such second feature identifiers in advance, transforming them into a plurality of discrete points, and then perform the projection based on these discrete points, which helps improve the efficiency of the projection operation on these line-type identifiers.
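The equidistant sampling of a line-type identifier can be sketched as follows; the polyline and the step size are illustrative choices:

```python
import numpy as np

# Sketch: a polyline feature (e.g. a lane line from the vector map) is
# replaced by points a fixed step apart along its arc length before
# projection. Assumes every segment has non-zero length.
def sample_polyline(points, step):
    seg = np.diff(points, axis=0)                      # segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at vertices
    targets = np.arange(0.0, cum[-1] + 1e-9, step)
    out = []
    for t in targets:
        i = min(np.searchsorted(cum, t, side="right") - 1, len(seg) - 1)
        frac = (t - cum[i]) / seg_len[i]
        out.append(points[i] + frac * seg[i])
    return np.array(out)

line = np.array([[0.0, 0.0], [10.0, 0.0]])
pts = sample_polyline(line, step=2.0)   # samples at x = 0, 2, 4, 6, 8, 10
```

Each sampled point then goes through formula (3) individually, instead of projecting a continuous curve.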
205B, forming a second feature element corresponding to the second feature identifier by using the pixel points correspondingly covered by the second feature identifier.
In the embodiment of the invention, the second feature identifiers acquired from the vector map are projected to the unique corresponding second image layer, corresponding pixels are imaged and covered in the image, and then the second feature elements corresponding to the second feature identifiers are obtained by utilizing the pixels.
207. And processing the first characteristic element and the second characteristic element which have the matching relation by using a preset cost function so as to correct the initial pose based on the processing result to obtain the target pose.
In the embodiment of the invention, the following is a detailed explanation of this step:
First, based on the same feature identifier, the first feature elements matching the second feature elements are searched to obtain feature element combinations corresponding to the same feature identifier, where each feature element combination includes a first feature element and a second feature element having a matching relationship.
For example, based on the initial pose of the vehicle, first feature identifiers are obtained by capturing the current image and second feature identifiers are obtained by searching the vicinity of the vehicle in the vector map (i.e., two source channels). A certain first feature identifier and a certain second feature identifier may actually express the same feature identifier in the road condition, for example, the same lane line.
Because the embodiment of the invention processes the feature identifiers by layers, a certain first layer and a certain second layer have a matching relationship based on the same feature identifier. Accordingly, based on such a same feature identifier, the first feature element of that feature identifier from the matching first layer and the second feature element of that feature identifier from the corresponding second layer are combined to form the feature element combination corresponding to that feature identifier, and the first feature element and the second feature element stored in each feature element combination also have a matching relationship.
Secondly, each feature element combination is processed with the preset cost function to obtain, for each combination, a cost function value measuring the matching degree between its second feature element and first feature element, and the initial pose is corrected based on the minimum cost function value to obtain the target pose.
In the embodiment of the invention, processing each feature element combination with the preset cost function in fact processes, based on the same feature identifier, the first layer and the second layer having a matching relationship. The technical scheme of the invention thus processes different feature identifiers through multiple pairs of matched layers, handles each feature identifier one by one, and corrects the initial pose of the vehicle according to the processing result of each feature identifier.
The preset cost function constructed by the embodiment of the invention is as follows:

J(Tk) = Σm [ Dck(P_I) ]²,    Formula (4)

where the sum runs over all map points m, P_I is the pixel coordinate obtained by projecting Pm with Tk according to formula (3), and Dck is the distance image of the class c to which m belongs.
specifically, the principle of the method implemented by using the formula (4) is as follows:
(1) For every point m in the map (i.e., Pm), its pixel coordinate P_I is obtained using Tk according to formula (3) above;
(2) For the class c to which the point m belongs (i.e., the feature identifier it belongs to), the pixel value of the image Dck generated by formula (2) is read at the pixel coordinate P_I, giving a Dck value corresponding to each point m;
For example, for a first layer and a second layer having a matching relationship based on the same feature identifier, the second feature element (i.e., the pixel coordinate P_I) of the second feature identifier is obtained by projection onto the second layer; according to P_I, the Dck image corresponding to the first layer (for example, fig. 3) is looked up to obtain the Dck pixel value corresponding to each P_I. Since each P_I in fact corresponds to one point m on the map, the Dck pixel value corresponding to the point m is thereby obtained;
(3) The pixel values obtained for all points m (i.e., the corresponding Dck values) are squared and summed to obtain J(Tk). J(Tk) expresses, based on the same feature identifier, the cost of the matching degree between the imaging projected into the second layer from the vector map and the imaging captured into the first layer from the current image; the smaller the cost function value, the higher the matching degree.
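The accumulation of formula (4) can be sketched as follows; the nearest-pixel lookup and the toy intrinsics and distance image are simplifying assumptions:

```python
import numpy as np

# Sketch of formula (4): every map point m of class c is projected with the
# candidate pose Tk (formula (3)), the distance image Dck of that class is
# read at the resulting pixel, and the values are squared and summed.
def cost(T_k, K, points_by_class, D_by_class):
    J = 0.0
    for c, pts in points_by_class.items():
        D = D_by_class[c]
        for P_m in pts:
            P_p = K @ (T_k @ P_m)[:3]
            u, v = P_p[:2] / P_p[2]
            vi, ui = int(round(v)), int(round(u))
            if 0 <= vi < D.shape[0] and 0 <= ui < D.shape[1]:
                J += float(D[vi, ui]) ** 2   # zero when the projection hits the feature
    return J

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
D = np.ones((480, 640))
D[240, 320] = 0.0                            # the feature covers pixel (320, 240)
points = {0: [np.array([0.0, 0.0, 10.0, 1.0])]}
J_good = cost(np.eye(4), K, points, {0: D})  # projection lands on the feature
T_bad = np.eye(4)
T_bad[0, 3] = 1.0                            # a wrong pose guess
J_bad = cost(T_bad, K, points, {0: D})
```

A pose that projects the map points onto the captured feature gives a smaller cost than a wrong pose, which is exactly the quantity the optimizer minimizes.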
Further, in the embodiment of the present invention, a nonlinear optimization algorithm is used to find, through the following formula (5), the pose that minimizes the value of J(Tk), and the solved result is taken as the target pose:

T̂k = argmin over Tk of J(Tk)    Formula (5)

Wherein Tk is the current initial pose of the vehicle, and T̂k is the target pose obtained by correcting the initial pose.
It should be further noted that, if the method provided in the embodiment of the present invention is performed using the current image obtained from only one camera sensor of the vehicle, formulas (4) and (5) may be adopted directly. However, if the method is executed using the current images obtained from each of a plurality of camera sensors of the vehicle, then for the first feature identifiers contained in each current image, the corresponding second feature identifiers having a matching relationship are acquired from the vector map and participate in the subsequent cost function computation, so as to obtain a more accurate vehicle pose. Specifically, this is implemented by adopting the following formula (6):

J(Tk) = Σ over the camera sensors v in V of Σm [ Dck,v(P_I,v) ]²    Formula (6)

Wherein V represents the set of all camera sensors; that is, every camera sensor generates its own Dck images, every camera sensor whose field of view contains the point m projects the point m to its own pixel coordinates, and the corresponding Dck values are read, squared and summed.
In the embodiment of the present invention, the pose of the vehicle changes continuously during automatic driving. After the target pose of step 207 is obtained from the initial pose acquired in step 201, the target pose can be used as the guessed pose of the vehicle at the next adjacent unit time, and steps 202a-206a, 202b-205b and 207 are repeated, so as to locate a more accurate pose at that next unit time.
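The frame-to-frame loop described above can be sketched as follows; the 1-D grid search is a toy stand-in for the nonlinear optimizer of formula (5), and the quadratic cost stands in for J(Tk) — both are assumptions for illustration:

```python
# Sketch of the tracking loop: the target pose solved at time k becomes the
# guessed pose at time k+1. refine() returns the candidate around `guess`
# that minimizes `cost`, playing the role of the argmin in formula (5).
def refine(guess, cost, radius=1.0, steps=41):
    candidates = [guess + radius * (i / (steps // 2) - 1.0) for i in range(steps)]
    return min(candidates, key=cost)

true_positions = [0.0, 0.3, 0.7]      # toy ground-truth x-offsets per frame
pose = 0.5                            # initial guessed pose at the first frame
estimates = []
for truth in true_positions:
    pose = refine(pose, lambda x: (x - truth) ** 2)  # corrected target pose
    estimates.append(pose)            # reused as the guess for the next frame
```

Because each corrected pose seeds the next search, the guess stays close to the true pose even as the vehicle moves between frames.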
Further, as an implementation of the methods shown in fig. 1 and fig. 2, the embodiment of the invention provides a device for performing automatic driving positioning based on a vector map. The embodiment of the device corresponds to the embodiment of the method, and for convenience of reading, details of the embodiment of the method are not repeated one by one, but it should be clear that the device in the embodiment can correspondingly realize all the details of the embodiment of the method. The device is applied to obtaining more accurate vehicle pose, and particularly as shown in fig. 4, the device comprises:
A first acquiring unit 31, configured to acquire an initial pose corresponding to a vehicle;
The first processing unit 32 is configured to process, based on the initial pose, a current image corresponding to the captured vehicle, so as to obtain a first feature element included in a first feature identifier in the current image;
A second obtaining unit 33, configured to obtain, from a vector map, a second feature element that is included in a second feature identifier corresponding to the vehicle, based on the initial pose;
and the second processing unit 34 is configured to process the first feature element and the second feature element that have a matching relationship by using a preset cost function, so as to correct the initial pose based on a processing result, and obtain a target pose.
Further, as shown in fig. 5, the first processing unit 32 includes:
the first extraction module 321 is configured to process the current image by using a preset image semantic segmentation model, and extract at least one first feature identifier from the current image;
A first creating module 322, configured to create a first layer corresponding to the current image according to the number of the first feature identifiers;
A placement module 323, configured to place each first feature identifier into a first layer that corresponds to the first feature identifier, so as to obtain a pixel point that is covered by the first feature identifier in the first layer;
the processing module 324 is configured to perform a distance conversion process on the pixel points covered by the first feature identifier to obtain a target pixel point;
A first composition module 325, configured to compose a first feature element corresponding to the first feature identifier by using the target pixel point.
Further, as shown in fig. 5, the second obtaining unit 33 includes:
the second extracting module 331 is configured to extract, from a vector map, a second feature identifier within a preset range from the vehicle based on the initial pose;
a second creating module 332, configured to create a second layer corresponding to the current image;
a projection module 333, configured to project the second feature identifier to a second layer that corresponds to the second feature identifier, so as to obtain a pixel point that is covered by the second feature identifier in the second layer;
And a second composing module 334, configured to compose a second feature element corresponding to the second feature identifier by using the pixel points covered by the second feature identifier.
Further, as shown in fig. 5, the second obtaining unit 33 further includes:
the sampling processing module 335 is configured to, before the second feature identifier is projected into its unique corresponding second layer to obtain the pixel points covered by the second feature identifier in the second layer, perform equidistant sampling processing on a line-type identifier, if one exists among the second feature identifiers, to obtain a plurality of corresponding discrete points;
A substitution module 336 for substituting the plurality of discrete points for characterizing the linear identification.
Further, as shown in fig. 5, the second processing unit 34 includes:
a searching module 341, configured to search, based on the same feature identifier, the first feature element that is matched with the second feature element, and obtain a feature element combination corresponding to the same feature identifier, where the feature element combination includes the first feature element and the second feature element that have a matching relationship;
A processing module 342, configured to process each of the feature element combinations by using a preset cost function, to obtain a cost function value based on a matching degree between the second feature element and the first feature element corresponding to each of the feature element combinations;
An implementation module 343 is configured to implement correction of the initial pose based on the minimum cost function value to obtain a target pose.
In summary, the embodiment of the invention provides a method and a device for automatic driving positioning based on a vector map. On the premise of the same initial pose, the first feature identifier is captured by photographing and the second feature identifier is acquired from the vector map, so that the first feature element and the second feature element are matched based on the same feature identifier. The first feature element and the second feature element having a matching relationship are processed with a preset cost function, and the initial pose is corrected based on the processing result to obtain the target pose, thereby realizing high-precision positioning of the autonomous vehicle. The method provided by the embodiment of the invention requires no complex algorithm and has low calculation cost, and the high-precision vector map is easy to obtain, so the implementation cost of the scheme is low while the requirement for high positioning precision is met.
The device for carrying out automatic driving positioning based on the vector map comprises a processor and a memory, wherein the first acquisition unit, the first processing unit, the second acquisition unit, the second processing unit and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels can be provided. By adjusting kernel parameters, an optimized method of automatic driving positioning using the vector map is provided, so that the implementation cost is low and the high-precision requirement on positioning can be met.
Embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for automatic driving positioning based on a vector map as described above.
Embodiments of the present invention provide an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for vector-map-based automatic driving positioning described above when executing the computer program.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, the device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory, random access memory (RAM), and/or nonvolatile memory such as read-only memory (ROM) or flash memory (flash RAM), among other forms of computer-readable media, and includes at least one memory chip. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present invention and is not intended to limit it. Various modifications and variations will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention are intended to be included within the scope of the claims.

Claims (6)

1. A method for automatic driving positioning based on a vector map, the method comprising:
Acquiring an initial pose corresponding to a vehicle;
processing, based on the initial pose, a current image corresponding to the vehicle to obtain first feature elements contained in first feature identifiers in the current image, wherein the processing comprises: processing the current image with a preset image semantic segmentation model to extract at least one first feature identifier from the current image; creating first image layers corresponding to the current image according to the number of first feature identifiers; placing each first feature identifier into the first image layer uniquely corresponding to it, to obtain the pixel points covered by that first feature identifier in the first image layer; performing distance transform processing on the pixel points covered by the first feature identifiers to obtain target pixel points; and composing, from the target pixel points, the first feature element corresponding to each first feature identifier;
obtaining, based on the initial pose, second feature elements of second feature identifiers corresponding to the vehicle from a vector map, wherein the obtaining comprises: extracting, based on the initial pose, second feature identifiers within a preset range of the vehicle from the vector map; creating second image layers corresponding to the current image; projecting each second feature identifier into the second image layer uniquely corresponding to it, to obtain the pixel points covered by that second feature identifier in the second image layer; and composing, from the pixel points covered by each second feature identifier, the corresponding second feature element; and
processing the first feature elements and the second feature elements having a matching relation with a preset cost function, so as to correct the initial pose based on the processing result to obtain a target pose.
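The layer-and-distance-transform step of claim 1 can be illustrated with a short sketch. This is a minimal numpy illustration, not the patented implementation: the claim does not specify which distance transform is used, so a brute-force Euclidean distance transform (as in Chamfer-style matching) is assumed, and the 5x5 "lane line" layer is an invented toy input.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform: for every pixel in the
    layer, the distance to the nearest pixel covered by the feature
    identifier. (Illustrative; a real system would use an optimized
    transform such as OpenCV's cv2.distanceTransform.)"""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    covered = np.stack([ys, xs], axis=1)          # (N, 2) covered pixels
    grid = np.stack(np.mgrid[0:h, 0:w], axis=-1)  # (h, w, 2) all coordinates
    # Distance from each pixel to every covered pixel; keep the minimum.
    d = np.linalg.norm(grid[:, :, None, :] - covered[None, None, :, :], axis=-1)
    return d.min(axis=2)

# One layer per first feature identifier: here a single lane-line mask
# on a tiny 5x5 image, with the line running down column 2.
layer = np.zeros((5, 5), dtype=bool)
layer[:, 2] = True
dt = distance_transform(layer)
print(dt[0, 2])   # 0.0 on the line itself
print(dt[0, 0])   # 2.0 two pixels away from the line
```

The resulting per-pixel values (the "target pixel points" of the claim) make the later matching step tolerant to small misalignments, since the cost varies smoothly with distance from the detected feature.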
2. The method of claim 1, wherein, before projecting each second feature identifier into the second image layer uniquely corresponding to it to obtain the pixel points covered by that second feature identifier in the second image layer, the method further comprises:
if a linear identifier exists among the second feature identifiers, performing equidistant sampling on the linear identifier to obtain a plurality of corresponding discrete points; and
representing the linear identifier by the discrete points in its place.
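The equidistant sampling of a linear identifier in claim 2 amounts to resampling a polyline at fixed arc-length intervals. A minimal sketch, assuming the linear identifier is given as an ordered list of 2D vertices; the `sample_polyline` helper and the 1 m step are illustrative choices, not taken from the patent:

```python
import numpy as np

def sample_polyline(points, step):
    """Resample a polyline (e.g. a lane-line identifier from the vector
    map) into discrete points spaced `step` apart along its arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.diff(points, axis=0)                      # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)              # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at vertices
    targets = np.arange(0.0, cum[-1] + 1e-9, step)     # equidistant arc lengths
    out = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1  # segment containing t
        i = min(i, len(seg) - 1)
        frac = (t - cum[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
        out.append(points[i] + frac * seg[i])          # interpolate on segment
    return np.array(out)

line = [(0.0, 0.0), (4.0, 0.0)]        # a straight 4 m lane segment
pts = sample_polyline(line, step=1.0)
print(len(pts))                        # 5 points: x = 0, 1, 2, 3, 4
```

Replacing the continuous line with discrete points lets the projection step treat linear and point-like identifiers uniformly when rasterizing them into the second image layer.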
3. The method according to claim 1, wherein processing the first feature elements and the second feature elements having a matching relation with a preset cost function, so as to correct the initial pose based on the processing result to obtain a target pose, comprises:
searching for the first feature element matching each second feature element based on the same feature identifier, to obtain feature element combinations corresponding to the same feature identifiers, wherein each feature element combination contains a first feature element and a second feature element having a matching relation;
processing each feature element combination with the preset cost function to obtain, for each feature element combination, a cost function value based on the degree of matching between the second feature element and the first feature element; and
correcting the initial pose based on the minimum cost function value to obtain the target pose.
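Claim 3 leaves the form of the preset cost function open. A common choice in map-to-image matching is to sample the distance-transform layer at the projected map points and take the mean; the sketch below assumes that form and a one-dimensional grid search over lateral pose offsets. Both choices, and the helper names, are illustrative simplifications rather than the patented method:

```python
import numpy as np

def matching_cost(dt_layer, projected_pts):
    """Cost of one feature-element combination: the mean distance-transform
    value sampled at the map points projected into the image layer."""
    h, w = dt_layer.shape
    rows = np.clip(np.round(projected_pts[:, 1]).astype(int), 0, h - 1)
    cols = np.clip(np.round(projected_pts[:, 0]).astype(int), 0, w - 1)
    return dt_layer[rows, cols].mean()

def correct_pose(initial_x, dt_layer, map_pts, search=(-2, -1, 0, 1, 2)):
    """Grid-search a lateral offset around the initial pose and keep the
    offset whose projected map points give the minimum cost."""
    costs = {dx: matching_cost(dt_layer, map_pts + np.array([dx, 0.0]))
             for dx in search}
    best = min(costs, key=costs.get)
    return initial_x + best, costs[best]

# Camera layer: lane line detected in image column 3 of a 5x5 grid, so
# the distance-transform value in each row is |column - 3|.
dt = np.abs(np.arange(5) - 3)[None, :].repeat(5, axis=0).astype(float)
map_pts = np.array([[2.0, r] for r in range(5)])  # map predicts column 2
x, cost = correct_pose(0.0, dt, map_pts)
print(x, cost)   # 1.0 0.0 -> shift the pose one pixel to align map and image
```

In a real system the search would be a continuous optimization over the full pose (position and heading) with all feature element combinations summed into one objective, but the principle is the same: the pose minimizing the cost becomes the target pose.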
4. An apparatus for automatic driving positioning based on a vector map, the apparatus comprising:
a first acquisition unit configured to acquire an initial pose corresponding to the vehicle;
a first processing unit configured to process, based on the initial pose, a captured current image corresponding to the vehicle to obtain first feature elements contained in the current image;
wherein the first processing unit includes: a first extraction module configured to process the current image with a preset image semantic segmentation model and extract at least one first feature identifier from the current image; a first creation module configured to create first image layers corresponding to the current image according to the number of first feature identifiers; a placement module configured to place each first feature identifier into the first image layer uniquely corresponding to it, to obtain the pixel points covered by that first feature identifier in the first image layer; a processing module configured to perform distance transform processing on the pixel points covered by the first feature identifiers to obtain target pixel points; and a first composition module configured to compose, from the target pixel points, the first feature element corresponding to each first feature identifier;
a second acquisition unit configured to acquire, based on the initial pose, second feature elements of second feature identifiers corresponding to the vehicle from a vector map;
wherein the second acquisition unit includes: a second extraction module configured to extract, based on the initial pose, second feature identifiers within a preset range of the vehicle from the vector map; a second creation module configured to create second image layers corresponding to the current image; a projection module configured to project each second feature identifier into the second image layer uniquely corresponding to it, to obtain the pixel points covered by that second feature identifier in the second image layer; and a second composition module configured to compose, from the pixel points covered by each second feature identifier, the corresponding second feature element; and
a second processing unit configured to process the first feature elements and the second feature elements having a matching relation with a preset cost function, so as to correct the initial pose based on the processing result to obtain a target pose.
5. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the method for automatic driving positioning based on a vector map according to any one of claims 1-3.
6. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of vector map based autopilot positioning of any one of claims 1-3 when the computer program is executed.
CN202210848452.XA 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map Active CN115235493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210848452.XA CN115235493B (en) 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map


Publications (2)

Publication Number Publication Date
CN115235493A CN115235493A (en) 2022-10-25
CN115235493B true CN115235493B (en) 2024-06-18

Family

ID=83674418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210848452.XA Active CN115235493B (en) 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map

Country Status (1)

Country Link
CN (1) CN115235493B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117471513B (en) * 2023-12-26 2024-03-15 合众新能源汽车股份有限公司 Vehicle positioning method, positioning device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108802785A (en) * 2018-08-24 2018-11-13 清华大学 Vehicle method for self-locating based on High-precision Vector map and monocular vision sensor
CN111220154A (en) * 2020-01-22 2020-06-02 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN110954113B (en) * 2019-05-30 2021-10-15 北京初速度科技有限公司 Vehicle pose correction method and device
CN112284400B (en) * 2020-12-24 2021-03-19 腾讯科技(深圳)有限公司 Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN113838129B (en) * 2021-08-12 2024-03-15 高德软件有限公司 Method, device and system for obtaining pose information
CN114037762A (en) * 2021-11-22 2022-02-11 武汉中海庭数据技术有限公司 Real-time high-precision positioning method based on image and high-precision map registration
CN114494435A (en) * 2022-01-25 2022-05-13 清华大学 Rapid optimization method, system and medium for matching and positioning of vision and high-precision map



Similar Documents

Publication Publication Date Title
Choi et al. KAIST multi-spectral day/night data set for autonomous and assisted driving
JP7485749B2 (en) Video-based localization and mapping method and system - Patents.com
CA3028653C (en) Methods and systems for color point cloud generation
CN112912920B (en) Point cloud data conversion method and system for 2D convolutional neural network
CN111830953A (en) Vehicle self-positioning method, device and system
US20190311209A1 (en) Feature Recognition Assisted Super-resolution Method
US20230138487A1 (en) An Environment Model Using Cross-Sensor Feature Point Referencing
CN111976601B (en) Automatic parking method, device, equipment and storage medium
CN113240813B (en) Three-dimensional point cloud information determining method and device
Zhou et al. Developing and testing robust autonomy: The university of sydney campus data set
JP6278790B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
CN115235493B (en) Method and device for automatic driving positioning based on vector map
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
Yun et al. Sthereo: Stereo thermal dataset for research in odometry and mapping
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
Li et al. Lane detection and road surface reconstruction based on multiple vanishing point & symposia
JP2020076714A (en) Position attitude estimation device
KR102540629B1 (en) Method for generate training data for transportation facility and computer program recorded on record-medium for executing method therefor
KR102540636B1 (en) Method for create map included direction information and computer program recorded on record-medium for executing method therefor
KR102540634B1 (en) Method for create a projection-based colormap and computer program recorded on record-medium for executing method therefor
KR102540624B1 (en) Method for create map using aviation lidar and computer program recorded on record-medium for executing method therefor
KR102540632B1 (en) Method for create a colormap with color correction applied and computer program recorded on record-medium for executing method therefor
US11250275B2 (en) Information processing system, program, and information processing method
CN106650724A (en) Method and device for building traffic sign database
CN117671505A (en) Real-time visual SLAM method based on affine information under dynamic environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant after: United New Energy Automobile Co.,Ltd.

Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant before: Hozon New Energy Automobile Co., Ltd.

GR01 Patent grant