CN114894205A - Three-dimensional lane line information generation method, device, equipment and computer readable medium - Google Patents

Three-dimensional lane line information generation method, device, equipment and computer readable medium Download PDF

Info

Publication number
CN114894205A
CN114894205A (application number CN202210539307.3A)
Authority
CN
China
Prior art keywords
lane line, lane line information, target vehicle, three-dimensional lane line
Prior art date
Legal status
Granted
Application number
CN202210539307.3A
Other languages
Chinese (zh)
Other versions
CN114894205B (en)
Inventor
胡禹超
Current Assignee
Heduo Technology Guangzhou Co ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd
Priority to CN202210539307.3A
Publication of CN114894205A
Application granted
Publication of CN114894205B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G01C21/3819 Road shape data, e.g. outline of a route
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003 Maps
    • G09B29/004 Map manufacture or repair; Tear or ink or water resistant maps; Long-life maps
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure disclose a three-dimensional lane line information generation method, device, equipment, and computer readable medium. One embodiment of the method comprises: acquiring a road image and a target vehicle road information set corresponding to a preset communication list; performing feature extraction on the road image to obtain current vehicle lane line information; adding the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool; generating a local static map based on the added data pool, wherein the local static map comprises a three-dimensional lane line information set; and in response to receiving a target vehicle geofence data set, optimizing each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set. This embodiment can improve the accuracy of the generated three-dimensional lane line information.

Description

Three-dimensional lane line information generation method, device, equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, device, equipment, and computer readable medium for generating three-dimensional lane line information.
Background
Generation of three-dimensional lane line information is of great significance to the field of automatic driving. At present, a three-dimensional lane line equation is generally generated in the following manner: three-dimensional lane line information is recognized from the perception data detected by a perception device, so as to obtain the three-dimensional lane line information.
However, when three-dimensional lane line information is generated in the above manner, the following technical problem often exists:
road information (e.g., lane line coordinates) outside the field of view of the perception device or blocked by an obstacle is difficult to perceive, resulting in insufficient accuracy of the generated three-dimensional lane line information and, in turn, reduced vehicle driving safety.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a three-dimensional lane line information generation method, apparatus, electronic device, and computer readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for generating three-dimensional lane line information, the method including: acquiring a road image and a target vehicle road information set corresponding to a preset communication list; performing feature extraction on the road image to obtain current vehicle lane line information; adding the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool; generating a local static map based on the added data pool, wherein the local static map includes a three-dimensional lane line information set; and in response to receiving a target vehicle geofence data set, optimizing each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
In a second aspect, some embodiments of the present disclosure provide a three-dimensional lane line information generating apparatus, including: an acquisition unit configured to acquire a road image and a target vehicle road information set corresponding to a preset communication list; a feature extraction unit configured to perform feature extraction on the road image to obtain current vehicle lane line information; an adding unit configured to add the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool; a generating unit configured to generate a local static map based on the added data pool, wherein the local static map includes a three-dimensional lane line information set; and an optimization processing unit configured to, in response to receiving a target vehicle geofence data set, optimize each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: by the three-dimensional lane line information generation method of some embodiments of the present disclosure, the accuracy of the generated three-dimensional lane line information can be improved. Specifically, the reason why the accuracy of the generated three-dimensional lane line information is insufficient is that road information outside the field of view of the perception device or blocked by obstacles is difficult to perceive. Based on this, the three-dimensional lane line information generation method of some embodiments of the present disclosure first acquires a road image and a target vehicle road information set corresponding to a preset communication list. Acquiring the target vehicle road information set helps compensate for road information that the sensor of the current vehicle cannot perceive because it is blocked or outside the sensing range. Then, feature extraction is performed on the road image to obtain current vehicle lane line information. Next, the target vehicle road information set and the current vehicle lane line information are added to a current vehicle data pool to obtain an added data pool. By obtaining the added data pool, the data of the current vehicle (e.g., the current vehicle lane line information) and the data of other vehicles (e.g., the target vehicle road information set) can be used simultaneously as data support for generating the three-dimensional lane line information, which can improve the accuracy of the generated three-dimensional lane line information. Then, a local static map is generated based on the added data pool, wherein the local static map includes a three-dimensional lane line information set. By generating the local static map, the data of the current vehicle and the data of other vehicles can be further transformed into the same map to facilitate generation of the three-dimensional lane line information. Finally, in response to receiving a target vehicle geofence data set, each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map is optimized based on the geofence data set to obtain a target three-dimensional lane line information set. By introducing the target vehicle geofence data set for optimization processing, data errors caused by road information outside the field of view of the perception device or blocked by obstacles being difficult to perceive can be further eliminated, thereby improving the accuracy of the generated three-dimensional lane line information. Further, vehicle driving safety can be improved.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a flow diagram of some embodiments of a three-dimensional lane line information generation method according to the present disclosure;
FIG. 2 is a schematic structural diagram of some embodiments of a three-dimensional lane line information generating apparatus according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a three-dimensional lane line information generation method according to the present disclosure. The process 100 of the three-dimensional lane line information generation method includes the following steps:
Step 101: acquiring a road image and a target vehicle road information set corresponding to a preset communication list.
In some embodiments, the execution subject of the three-dimensional lane line information generation method may acquire the road image and the target vehicle road information set corresponding to the preset communication list in a wired manner or a wireless manner. Wherein the target vehicle road information may include: lane line sampling point information, lane line color information, lane line type information, target vehicle position coordinates, a camera internal parameter matrix of the target vehicle, a timestamp, historical three-dimensional lane line information, a target vehicle linear velocity value, a target vehicle angular velocity value, a target vehicle attitude matrix, and the like. The target vehicle attitude matrix may characterize the attitude of the target vehicle.
In some optional implementations of some embodiments, the communication list may include a target vehicle identification group. The above execution subject may acquire the road image and the target vehicle road information set corresponding to the preset communication list through the following steps:
First, a road image captured by a vehicle-mounted camera of the current vehicle is acquired.
Second, a road information acquisition request is sent to the target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group in the communication list, so as to acquire the target vehicle road information set. The communication list may be a list for storing the target vehicle identifier group. The target vehicle identifier may be a unique identifier of a target vehicle terminal having a communication connection with the vehicle terminal of the current vehicle. The target vehicle terminal may be the control system of another vehicle. Each piece of target vehicle road information in the target vehicle road information set may be transmitted by the corresponding target vehicle terminal to characterize the road information stored by that terminal.
In practice, a road information acquisition request may be issued at preset time intervals (e.g., 1 second) to a target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group in the communication list, so as to acquire a target vehicle road information set.
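As a purely illustrative sketch (not part of the original disclosure), the periodic polling of target vehicle terminals might be organized as follows; the `request_road_info` callable, the field layout of each reply, and the transport mechanism are assumptions made only for illustration.

```python
import time
from typing import Callable, Dict, List

def poll_target_vehicle_road_info(
    communication_list: List[str],
    request_road_info: Callable[[str], Dict],
    interval_s: float = 1.0,
    rounds: int = 1,
) -> List[Dict]:
    """Send a road information acquisition request to the terminal of each
    target vehicle identifier in the communication list at a fixed interval
    and collect the returned target vehicle road information."""
    road_info_set: List[Dict] = []
    for r in range(rounds):
        for vehicle_id in communication_list:
            # Each reply would carry lane line sampling points, pose, timestamp, etc.
            road_info_set.append(request_road_info(vehicle_id))
        if r < rounds - 1:
            time.sleep(interval_s)  # preset time interval between polling rounds
    return road_info_set
```

In practice the transport would be a V2V or cellular link, and each reply would be validated before being added to the current vehicle data pool described below.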
In some optional implementations of some embodiments, the execution subject may further perform the following steps:
First, in response to receiving vehicle broadcast data, a target vehicle identifier in the vehicle broadcast data is added to the communication list to obtain an added communication list. The added communication list may include an added target vehicle identifier group.
Second, a communication connection is established between the vehicle terminal of the current vehicle and the target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group. Establishing the communication connection may be used to acquire the target vehicle road information of the target vehicle. The vehicle terminal of the current vehicle may be the control system of the current vehicle.
In some optional implementations of some embodiments, the execution subject may further perform the following steps:
First, the time difference between the vehicle terminal of the current vehicle and the target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group is determined, so as to obtain a time difference set. The time difference between the time of the vehicle terminal of the current vehicle and the time of each target vehicle terminal may be determined through a network time synchronization protocol. In this way, a time difference set can be obtained.
Second, the target vehicle identification corresponding to each time difference in the time difference set that does not meet a preset time difference condition is removed from the target vehicle identification group to obtain a first removal target vehicle identification group. The preset time difference condition may be that the time difference is greater than a preset time difference threshold (e.g., 30 seconds). Additionally, if a target vehicle identification is removed, it may not be added to the communication list for a predetermined length of time (e.g., 100 seconds).
Third, the distance value between the current vehicle and the target vehicle corresponding to each first removal target vehicle identification in the first removal target vehicle identification group is determined, so as to obtain a distance value set. For each first removal target vehicle identification, the following steps may be performed: first, the current vehicle coordinates may be acquired; then, the target vehicle position coordinates included in the target vehicle road information corresponding to the first removal target vehicle identification may be converted into the vehicle coordinate system of the current vehicle to obtain converted target vehicle coordinates; finally, the distance value between the current vehicle coordinates and the converted target vehicle coordinates may be determined. Thus, a distance value set may be obtained.
Fourth, the first removal target vehicle identification corresponding to each distance value in the distance value set that does not meet a preset distance value condition is removed from the first removal target vehicle identification group to obtain a second removal target vehicle identification group, so as to complete updating of the communication list. The preset distance value condition may be that the distance value is greater than a preset distance threshold (for example, 1000 meters). In addition, if a certain first removal target vehicle identification is removed, the communication connection between the current vehicle and the corresponding target vehicle may be interrupted, and the identification may not be added to the communication list for a second preset length of time (e.g., 30 seconds). Thus, the consumption of computing resources can be greatly reduced.
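The following minimal sketch (an illustration under stated assumptions, not the claimed implementation) shows the two-stage filtering of the communication list. The translated text is ambiguous about the polarity of the removal conditions; the sketch adopts the practically sensible reading that identifiers whose clock offset or distance exceeds the respective threshold are removed.

```python
import math
from typing import Dict, List, Tuple

def update_communication_list(
    target_ids: List[str],
    clock_offsets_s: Dict[str, float],                    # time difference per identifier
    positions_in_ego_frame: Dict[str, Tuple[float, float]],
    max_clock_offset_s: float = 30.0,
    max_distance_m: float = 1000.0,
) -> List[str]:
    """Two-stage removal: drop identifiers whose clock offset exceeds the time
    difference threshold, then drop identifiers whose distance to the current
    vehicle exceeds the distance threshold."""
    # First removal: keep identifiers whose clock offset is within the threshold.
    first_removal_group = [
        tid for tid in target_ids
        if abs(clock_offsets_s.get(tid, math.inf)) <= max_clock_offset_s
    ]
    # Second removal: keep identifiers whose Euclidean distance to the current
    # vehicle (origin of its own vehicle coordinate system) is within the threshold.
    second_removal_group = [
        tid for tid in first_removal_group
        if math.hypot(*positions_in_ego_frame.get(tid, (math.inf, 0.0))) <= max_distance_m
    ]
    return second_removal_group
```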
Step 102: performing feature extraction on the road image to obtain current vehicle lane line information.
In some embodiments, the executing entity may perform feature extraction on the road image to obtain current vehicle lane line information. The road image can be subjected to feature extraction through a feature extraction algorithm to obtain the current vehicle lane line information. The current vehicle lane line information may be two-dimensional lane line information.
As an example, the feature extraction algorithm may include, but is not limited to, at least one of: an FCN (Fully Convolutional Networks) model, a ResNet (Residual Neural Network) model, a VGG (Visual Geometry Group Network) model, a GoogLeNet (deep neural network) model, and the like. Additionally, the two-dimensional lane line information may include, but is not limited to, at least one of: lane line feature points, lane line type, lane line color, and the like.
For example, the lane line type may be a dotted line or a solid line, etc.
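As one hedged illustration of what such feature extraction could look like, the sketch below runs a generic segmentation backbone (torchvision's FCN-ResNet50, used purely as a stand-in; the disclosure does not prescribe a specific network, weights, or class layout) and keeps the pixels of an assumed lane line class as two-dimensional feature points. A recent torchvision (0.13+) API is assumed.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

def extract_lane_line_features(road_image: torch.Tensor) -> dict:
    """Segment the road image and keep the pixel coordinates of the predicted
    lane line class as two-dimensional lane line feature points.

    `road_image` is a (3, H, W) float tensor; class index 1 meaning "lane line"
    is an assumption for illustration, as is the untrained two-class head.
    """
    model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)  # background / lane line
    model.eval()
    with torch.no_grad():
        logits = model(road_image.unsqueeze(0))["out"]    # (1, 2, H, W)
    mask = logits.argmax(dim=1)[0]                        # (H, W) per-pixel class map
    ys, xs = torch.nonzero(mask == 1, as_tuple=True)      # lane line pixels
    return {
        "feature_points": torch.stack([xs, ys], dim=1),   # (N, 2) image coordinates
        "lane_line_type": "unknown",   # dotted / solid would come from a separate classifier
        "lane_line_color": "unknown",
    }
```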
Step 103: adding the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool.
In some embodiments, the execution subject may add the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool. The current vehicle data pool may be a preset storage space for storing vehicle data within a period of time. For example, the period of time may be the time within ten seconds before and after the time of the current vehicle. Additionally, the data in the current vehicle data pool may be stored in timestamp order. Therefore, the timestamp of each piece of target vehicle road information may be used as a key, and the target vehicle road information may be stored in order as a value; likewise, the timestamp at which the current vehicle lane line information is generated may be used as a key, and the current vehicle lane line information may be stored in order as a value. Specifically, the added data pool may include not only the road information of other vehicles but also the road information of the current vehicle. Because time errors exist between different vehicles during data transmission, target vehicle road information whose timestamp is not within the above period of time may be deleted. In this way, interference items are removed and the timeliness of the added data pool is improved, which in turn can improve the accuracy of generating the three-dimensional lane line information.
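A minimal sketch of such a timestamp-keyed data pool is given below; the class name, the pruning policy, and the ten-second window default are illustrative assumptions based on the example above, not part of the disclosure.

```python
from typing import Any, Dict, List, Tuple

class CurrentVehicleDataPool:
    """Stores road information keyed by timestamp and drops entries whose
    timestamp falls outside a window around the current vehicle time
    (ten seconds before and after, following the example above)."""

    def __init__(self, window_s: float = 10.0):
        self.window_s = window_s
        self._entries: Dict[float, Any] = {}  # timestamp -> road information

    def add(self, timestamp: float, road_info: Any) -> None:
        # Timestamp as key, road information as value, as described above.
        self._entries[timestamp] = road_info

    def prune(self, current_time: float) -> None:
        # Delete target vehicle road information whose timestamp is stale.
        self._entries = {
            ts: info for ts, info in self._entries.items()
            if abs(ts - current_time) <= self.window_s
        }

    def items_in_time_order(self) -> List[Tuple[float, Any]]:
        return sorted(self._entries.items(), key=lambda kv: kv[0])
```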
Step 104: generating a local static map based on the added data pool.
In some embodiments, the execution subject may generate a local static map based on the added data pool. The local static map may include a three-dimensional lane line information set. The local static map may be a set of three-dimensional lane line coordinate points within a certain range (e.g., 300 meters) of the current vehicle. In addition, each piece of three-dimensional lane line information in the three-dimensional lane line information set may include a three-dimensional lane line key point coordinate set.
In some optional implementations of some embodiments, the post-addition data pool may include a road data set. Generating, by the execution subject, the local static map based on the added data pool may include the following step:
and inputting the road data in the road data set to a preset map optimizer to generate a local static map. The road data in the road data set may be a set of key value pairs (for example, the target vehicle road information is used as a key and the target vehicle road information is used as a value) in the post-addition data pool. The map optimizer may be a preset optimizer for generating a three-dimensional set of lane line coordinate points using road data. In addition, each three-dimensional lane line coordinate point in the generated set of three-dimensional lane line coordinate points is in the vehicle coordinate system of the current vehicle.
Step 105: in response to receiving a target vehicle geofence data set, optimizing each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
In some embodiments, in response to receiving the target vehicle geofence data set, the execution subject may perform optimization processing on each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set, so as to obtain a target three-dimensional lane line information set. The execution subject may send a target vehicle geofence data acquisition signal to each communicatively connected target vehicle terminal at a preset time interval (e.g., 3 seconds), so as to acquire the target vehicle geofence data set. In addition, each piece of acquired target vehicle geofence data is stored by the corresponding target vehicle terminal, and the geofence data lies within a certain range (for example, 300 meters) centered on the position of the current vehicle.
In some optional implementations of some embodiments, the geofence data in the geofence data set may include a detection lane line information set. In response to receiving the target vehicle geofence data set, the execution subject performing optimization processing on each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set may include the following steps:
In the first step, calibration processing is performed on each piece of detection lane line information in the detection lane line information set included in each piece of geofence data in the geofence data set to generate a calibration lane line information group, so as to obtain a calibration lane line information group set. Each piece of detection lane line information in the detection lane line information set may include a three-dimensional lane line sampling point coordinate set. The calibration processing may be performed on the three-dimensional lane line sampling point coordinate set included in each piece of detection lane line information in the detection lane line information set included in each piece of geofence data through the following substeps, so as to obtain a calibration lane line information group:
In the first substep, the target vehicle road information corresponding to the geofence data is determined.
In the second substep, the time difference between the timestamp included in the target vehicle road information and the current time is determined as a time synchronization difference.
In the third substep, a time difference displacement value is determined. The time difference displacement value may be the product of the target vehicle linear velocity value included in the target vehicle road information and the time synchronization difference.
In the fourth substep, the coordinates of the position reached by extending by the time difference displacement value along the direction of a target vector are determined to obtain synchronized position coordinates. The target vector may be the vector formed between the target vehicle position coordinates included in the target vehicle road information and the origin.
In the fifth substep, a synchronized attitude matrix is determined as the product of the angular velocity included in the target vehicle road information, the target vehicle attitude matrix, and the time difference displacement value.
In the sixth substep, using the synchronized position coordinates and the synchronized attitude matrix, each three-dimensional lane line sampling point in the three-dimensional lane line sampling point coordinate set included in each piece of detection lane line information in the detection lane line information set included in the geofence data is converted from the target vehicle coordinate system to the current vehicle coordinate system to generate a converted coordinate set as calibration lane line information. In this way, a calibration lane line information group can be obtained.
In the second step, the following optimization processing step is performed on each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map to generate target three-dimensional lane line information:
The three-dimensional lane line information is optimized using the calibration lane line information in the calibration lane line information group set that matches the three-dimensional lane line information, so as to obtain the target three-dimensional lane line information. First, the calibration lane line information in the calibration lane line information group set that matches the three-dimensional lane line key point coordinate set included in the three-dimensional lane line information may be determined through a coordinate matching algorithm. The converted coordinate set included in the matched calibration lane line information and the three-dimensional lane line key point coordinate set included in the three-dimensional lane line information may correspond to the same lane line. Therefore, each converted coordinate in the converted coordinate set included in the matched calibration lane line information and each three-dimensional lane line key point coordinate in the three-dimensional lane line key point coordinate set may be subjected to fitting processing to obtain a three-dimensional lane line equation. Finally, the three-dimensional lane line equation may be determined as the target three-dimensional lane line information.
As an example, the coordinate matching algorithm may be the ICP (Iterative Closest Point) registration algorithm.
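As an illustration only, once a calibration lane line information group has been matched to a map lane line (e.g., via ICP, which is not reproduced here), the merged points could be fitted as shown below; the cubic y = f(x), z = g(x) form is an assumption, since the disclosure does not fix the form of the three-dimensional lane line equation.

```python
import numpy as np

def fit_lane_line_equation(map_keypoints: np.ndarray,
                           matched_calibration_points: np.ndarray,
                           degree: int = 3) -> dict:
    """Merge the matched calibration points with the map key points and fit
    polynomials y = f(x) and z = g(x) in the current vehicle frame as the
    three-dimensional lane line equation."""
    pts = np.vstack([map_keypoints, matched_calibration_points])  # (N, 3) merged samples
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    return {
        "y_of_x": np.polyfit(x, y, degree),  # lateral profile coefficients
        "z_of_x": np.polyfit(x, z, degree),  # height profile coefficients
    }

# Example: evaluate the fitted lane line at longitudinal position x0 = 25.0 m:
#   eq = fit_lane_line_equation(keypoints, calib_points)
#   y0 = np.polyval(eq["y_of_x"], 25.0); z0 = np.polyval(eq["z_of_x"], 25.0)
```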
The above steps and their related contents are regarded as an inventive point of the embodiments of the present disclosure, and the technical problems mentioned in the background art are further solved. The condition that road information outside the field of view of the sensing equipment or blocked by an obstacle is difficult to sense can be avoided by introducing historical data of other vehicles within a certain range in real time and utilizing methods such as calibration processing and optimization processing. In addition, the time error and the detection error can be eliminated better by using methods such as calibration processing, optimization processing and the like. Therefore, the accuracy of the generated target three-dimensional lane line information can be further improved.
Optionally, the execution main body may further send the target three-dimensional lane line information set to a display terminal for display.
The above embodiments of the present disclosure have the following advantages: by the three-dimensional lane line information generation method of some embodiments of the present disclosure, the accuracy of the generated three-dimensional lane line information can be improved. Specifically, the reason why the accuracy of the generated three-dimensional lane line information is insufficient is that road information outside the field of view of the perception device or blocked by obstacles is difficult to perceive. Based on this, the three-dimensional lane line information generation method of some embodiments of the present disclosure first acquires a road image and a target vehicle road information set corresponding to a preset communication list. Acquiring the target vehicle road information set helps compensate for road information that the sensor of the current vehicle cannot perceive because it is blocked or outside the sensing range. Then, feature extraction is performed on the road image to obtain current vehicle lane line information. Next, the target vehicle road information set and the current vehicle lane line information are added to a current vehicle data pool to obtain an added data pool. By obtaining the added data pool, the data of the current vehicle (e.g., the current vehicle lane line information) and the data of other vehicles (e.g., the target vehicle road information set) can be used simultaneously as data support for generating the three-dimensional lane line information, which can improve the accuracy of the generated three-dimensional lane line information. Then, a local static map is generated based on the added data pool, wherein the local static map includes a three-dimensional lane line information set. By generating the local static map, the data of the current vehicle and the data of other vehicles can be further transformed into the same map to facilitate generation of the three-dimensional lane line information. Finally, in response to receiving a target vehicle geofence data set, each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map is optimized based on the geofence data set to obtain a target three-dimensional lane line information set. By introducing the target vehicle geofence data set for optimization processing, data errors caused by road information outside the field of view of the perception device or blocked by obstacles being difficult to perceive can be further eliminated, thereby improving the accuracy of the generated three-dimensional lane line information. Further, vehicle driving safety can be improved.
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a three-dimensional lane line information generating apparatus, which correspond to those shown in fig. 1, and which may be applied in various electronic devices in particular.
As shown in fig. 2, the three-dimensional lane line information generation apparatus 200 of some embodiments includes: an acquisition unit 201, a feature extraction unit 202, an addition unit 203, a generation unit 204, and an optimization processing unit 205. Wherein the acquiring unit 201 is configured to acquire a road image and a target vehicle road information set corresponding to a preset communication list; a feature extraction unit 202 configured to perform feature extraction on the road image to obtain current vehicle lane line information; an adding unit 203 configured to add the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool; a generating unit 204 configured to generate a local static map based on the added data pool, wherein the local static map includes a three-dimensional lane line information set; and the optimization processing unit 205 is configured to, in response to receiving the target vehicle geofence data set, perform optimization processing on each three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
It will be understood that the units described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 200 and the units included therein, and are not described herein again.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a road image and a target vehicle road information set corresponding to a preset communication list; extracting the characteristics of the road image to obtain the current vehicle lane line information; adding the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool; generating a local static map based on the added data pool, wherein the local static map comprises a three-dimensional lane line information set; and responding to the received target vehicle geofence data set, and optimizing each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a feature extraction unit, an addition unit, a generation unit, and an optimization processing unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires a road image and a target vehicle road information set corresponding to a preset communication list".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept as defined above, for example, embodiments in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A three-dimensional lane line information generation method includes:
acquiring a road image and a target vehicle road information set corresponding to a preset communication list;
performing feature extraction on the road image to obtain the current vehicle lane line information;
adding the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool to obtain an added data pool;
generating a local static map based on the added data pool, wherein the local static map comprises a three-dimensional lane line information set;
and in response to receiving a target vehicle geofence data set, optimizing each piece of three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
2. The method of claim 1, wherein the method further comprises:
and sending the target three-dimensional lane line information set to a display terminal for display.
3. The method of claim 1, wherein the communication list includes a target vehicle identification group; and
the acquiring of the road image and the target vehicle road information set corresponding to the preset communication list includes:
acquiring a road image shot by a vehicle-mounted camera of a current vehicle;
and sending a road information acquisition request to a target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group in the communication list so as to acquire a target vehicle road information set.
4. The method of claim 1, wherein the method further comprises:
in response to receiving vehicle broadcast data, adding a target vehicle identifier in the vehicle broadcast data to the communication list to obtain an added communication list, wherein the added communication list comprises an added target vehicle identifier group;
and establishing communication connection between the vehicle terminal of the current vehicle and the target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group, wherein the communication connection is established for acquiring the target vehicle road information of the target vehicle.
5. The method of claim 4, wherein the method further comprises:
determining the time difference between the vehicle terminal of the current vehicle and the target vehicle terminal corresponding to each target vehicle identifier in the target vehicle identifier group as a time difference set;
removing the target vehicle identification corresponding to the time difference which does not meet the preset time difference condition in the time difference set from the target vehicle identification group to obtain a first removed target vehicle identification group;
determining distance values between a current vehicle and target vehicles corresponding to each first removal target vehicle identification in the first removal target vehicle identification group as a distance value set;
and removing the first removal target vehicle identification corresponding to the distance value which does not meet the preset distance value condition in the distance value set from the first removal target vehicle identification group to obtain a second removal target vehicle identification group so as to finish updating the communication list.
6. The method of claim 1, wherein the post-addition data pool comprises a road data set; and
generating a local static map based on the added data pool, including:
and inputting each road data in the road data set to a preset map optimizer to generate a local static map.
7. The method of claim 6, wherein the geo-fence data in the set of geo-fence data comprises a set of detected lane line information; and
the optimizing each three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geo-fence data set to obtain a target three-dimensional lane line information set includes:
calibrating each piece of detection lane line information in a detection lane line information set included in each piece of geo-fence data in the geo-fence data set to generate a calibration lane line information group, so as to obtain a calibration lane line information group set;
for each three-dimensional lane line information in the three-dimensional lane line information set in the local static map, performing the following optimization processing steps to generate target three-dimensional lane line information:
and optimizing the three-dimensional lane line information by utilizing the calibration lane line information matched with the three-dimensional lane line information in the calibration lane line information group set to obtain the target three-dimensional lane line information.
8. A three-dimensional lane line information generating apparatus comprising:
an acquisition unit configured to acquire a road image and a target vehicle road information set corresponding to a preset communication list;
a feature extraction unit configured to perform feature extraction on the road image to obtain the current vehicle lane line information;
an adding unit configured to add the target vehicle road information set and the current vehicle lane line information to a current vehicle data pool, resulting in an added data pool;
a generating unit configured to generate a local static map based on the added data pool, wherein the local static map includes a three-dimensional lane line information set;
and the optimization processing unit is configured to respond to the received target vehicle geofence data set, and optimize each three-dimensional lane line information in the three-dimensional lane line information set in the local static map based on the geofence data set to obtain a target three-dimensional lane line information set.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202210539307.3A 2022-05-18 2022-05-18 Three-dimensional lane line information generation method, device, equipment and computer readable medium Active CN114894205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210539307.3A CN114894205B (en) 2022-05-18 2022-05-18 Three-dimensional lane line information generation method, device, equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210539307.3A CN114894205B (en) 2022-05-18 2022-05-18 Three-dimensional lane line information generation method, device, equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN114894205A (en) 2022-08-12
CN114894205B CN114894205B (en) 2023-05-23

Family

ID=82723845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210539307.3A Active CN114894205B (en) 2022-05-18 2022-05-18 Three-dimensional lane line information generation method, device, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN114894205B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090012709A1 (en) * 2007-07-05 2009-01-08 Aisin Aw Co., Ltd. Road information generating apparatus, road information generating method, and road information generating program
CN104183131A (en) * 2013-05-28 2014-12-03 现代自动车株式会社 Apparatus and method for detecting traffic lane using wireless communication
CN106415692A (en) * 2014-06-24 2017-02-15 哈曼国际工业有限公司 Vehicle communication through dedicated channel
CN105959908A (en) * 2016-04-26 2016-09-21 中国联合网络通信集团有限公司 Vehicle communication system and method
CN112400095A (en) * 2018-07-11 2021-02-23 日产自动车株式会社 Method for generating driving environment information, driving control method, and driving environment information generating device
US20200344820A1 (en) * 2019-04-24 2020-10-29 Here Global B.V. Lane aware clusters for vehicle to vehicle communication
CN112598762A (en) * 2020-09-16 2021-04-02 禾多科技(北京)有限公司 Three-dimensional lane line information generation method, device, electronic device, and medium
CN113723216A (en) * 2021-08-06 2021-11-30 西人马帝言(北京)科技有限公司 Lane line detection method and device, vehicle and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471708A (en) * 2022-09-27 2022-12-13 禾多科技(北京)有限公司 Lane line type information generation method, device, equipment and computer readable medium
CN115493609A (en) * 2022-09-27 2022-12-20 禾多科技(北京)有限公司 Lane-level path information generation method, apparatus, device, medium, and program product
CN115471708B (en) * 2022-09-27 2023-09-12 禾多科技(北京)有限公司 Lane line type information generation method, device, equipment and computer readable medium

Also Published As

Publication number Publication date
CN114894205B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN112598762B (en) Three-dimensional lane line information generation method, device, electronic device, and medium
CN114894205B (en) Three-dimensional lane line information generation method, device, equipment and computer readable medium
CN112328731B (en) Vehicle lane level positioning method and device, electronic equipment and computer readable medium
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN114399588B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114399589B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN113674357B (en) Camera external reference calibration method and device, electronic equipment and computer readable medium
CN116182878B (en) Road curved surface information generation method, device, equipment and computer readable medium
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN113327318B (en) Image display method, image display device, electronic equipment and computer readable medium
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN115272182B (en) Lane line detection method, lane line detection device, electronic equipment and computer readable medium
CN114445597B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN111586295B (en) Image generation method and device and electronic equipment
CN116758498B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN113808134B (en) Oil tank layout information generation method, oil tank layout information generation device, electronic apparatus, and medium
CN113780247B (en) Traffic light detection method and device, electronic equipment and computer readable medium
CN114140538B (en) Vehicle-mounted camera pose adjusting method, device, equipment and computer readable medium
CN114724115B (en) Method, device and equipment for generating obstacle positioning information and computer readable medium
CN115565158A (en) Parking space detection method and device, electronic equipment and computer readable medium
CN112597174B (en) Map updating method and device, electronic equipment and computer readable medium
CN115393826A (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN112232451B (en) Multi-sensor data fusion method and device, electronic equipment and medium
CN115273012A (en) Dotted lane line identification method and device, electronic equipment and computer readable medium
CN112595330B (en) Vehicle positioning method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806
Patentee after: Heduo Technology (Guangzhou) Co.,Ltd.
Address before: 100099 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing
Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.