CN114140770A - Automatic dynamic target identification method - Google Patents


Info

Publication number
CN114140770A
Authority
CN
China
Prior art keywords
dynamic target
point cloud
cloud data
dynamic
target
Prior art date
Legal status
Pending (assumed status; not a legal conclusion)
Application number
CN202111422375.3A
Other languages
Chinese (zh)
Inventor
刘春成
惠念
李汉玢
Current Assignee
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd
Priority to CN202111422375.3A
Publication of CN114140770A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/165 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C 21/1652 Inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C 21/32 Structuring or formatting of map data
    • G01C 21/3841 Creation or updating of map data characterised by the source of data: data obtained from two or more sources, e.g. probe vehicles
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47 Determining position with the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/24 Pattern recognition: classification techniques
    • G06F 18/251 Pattern recognition: fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an automatic dynamic target identification method comprising the following steps: respectively acquiring laser point cloud data and image data; identifying a first dynamic target recognition result in the laser point cloud data and a second dynamic target recognition result in the image data; converting the first dynamic target recognition result into the image coordinate system based on pose information; and matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system to determine the final first dynamic target recognition result. The method takes high-precision three-dimensional point cloud data and image data as the main data sources and extracts dynamic targets from the point cloud data using the inference results of deep-learning dynamic target detection. Deep learning reduces data redundancy and the interference factors in high-precision map making, improving the efficiency of producing high-precision maps from point cloud data; compared with traditional extraction methods, the method also generalizes better to new scenes.

Description

Automatic dynamic target identification method
Technical Field
The invention relates to the field of high-precision map making for automatic driving, and in particular to an automatic dynamic target identification method.
Background
A high-precision map is an indispensable component of an automatic driving vehicle, supporting vehicle positioning, path planning, vehicle energy saving, and more. To keep a high-precision map fresh, making and updating it at low cost and high efficiency while meeting the precision requirement is the key to keeping the map effective and competitive. At present, neither high-precision map making nor updating can be fully automated; manual intervention is still needed. Updating a high-precision map for automatic driving has the following characteristics: 1) unlike a traditional navigation map, the high-precision map contains three-dimensional information and has a higher precision requirement; 2) the high-precision map is a lane-level map; 3) the map information is richer, so making and updating it is more difficult. In summary, the making and updating process of a high-precision map for automatic driving must ensure both high precision and high efficiency.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the present invention provides an automatic dynamic target identification method.
According to a first aspect of the present invention, there is provided an automatic dynamic target identification method, including: respectively acquiring laser point cloud data and image data; identifying a first dynamic target recognition result in the laser point cloud data based on a point cloud dynamic target detection model, and identifying a second dynamic target recognition result in the image data based on an image dynamic target detection model; converting the first dynamic target recognition result into an image coordinate system based on pose information; and matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system to determine a final first dynamic target recognition result.
On the basis of the technical scheme, the invention can be improved as follows.
Optionally, the laser point cloud data is road point cloud data acquired by a single-line lidar, and the image data is forward-looking image data acquired by a wide-angle camera.
Optionally, the respectively acquiring the laser point cloud data and the image data further includes:
acquiring GPS track data, and computing the WGS84 absolute coordinates of the lidar sensor center based on the offset compensation (lever arm) between the GPS center and the lidar sensor center, so that single-frame point cloud data is converted from a relative coordinate system to the WGS84 absolute coordinate system and multi-frame point cloud data is fused;
and, when the GPS signal loses lock, supplementing and correcting the GPS track data using IMU information and odometer information.
Optionally, the point cloud dynamic target detection model is obtained as follows:
cutting the fused laser point cloud data to obtain a plurality of laser point cloud data blocks;
and training the point cloud dynamic target detection model based on a plurality of laser point cloud data blocks to obtain the trained point cloud dynamic target detection model.
Optionally, the cutting the fused laser point cloud data to obtain a plurality of laser point cloud data blocks includes:
and under an absolute coordinate system, performing thinning on the GPS track data according to a fixed distance interval, and uniformly cutting the fused laser point cloud data according to the same fixed distance interval to obtain a plurality of laser point cloud data blocks.
Optionally, the pose information is calculated according to GPS information, IMU information, and odometer information.
Optionally, the determining a final first dynamic target recognition result by matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system includes:
for any first dynamic target area, retaining that area if it can be matched in the second dynamic target recognition result;
and traversing all the first dynamic target areas, and taking all the retained first dynamic target areas as the final first dynamic target recognition result.
Optionally, the matching, in the image coordinate system, the first dynamic target recognition result and the second dynamic target recognition result to determine a final first dynamic target recognition result, further includes:
if a second dynamic target area exists for which no matching first dynamic target area exists, back-projecting that second dynamic target area into the three-dimensional space of the laser point cloud data;
performing neighborhood retrieval and clustering on the laser point cloud data based on the back-projected second dynamic target area, and extracting a target rectangular frame containing a target;
judging, based on the target rectangular frame, whether the corresponding target is a dynamic target;
and, if it is a dynamic target, adding the corresponding dynamic target area in the laser point cloud data to the final first dynamic target recognition result.
Optionally, the performing neighborhood retrieval and clustering on the laser point cloud data based on the back-calculated second dynamic target region, and extracting a target rectangular frame containing a target includes:
taking the center point of the back-projected second dynamic target area as the center, performing iterative neighborhood retrieval and clustering on the laser point cloud data to obtain the laser point cloud area belonging to the same target;
and extracting a target rectangular frame containing the target based on the acquired laser point cloud area belonging to the same target.
Optionally, the determining whether the corresponding target is a dynamic target based on the target rectangular frame includes:
and extracting the length, the width and the height of the target rectangular frame, and judging whether the corresponding target is a dynamic target or not based on the length, the width and the height ratio value.
According to a second aspect of the present invention, there is provided a dynamic object automatic identification system, comprising:
the acquisition module is used for respectively acquiring laser point cloud data and image data;
the identification module is used for identifying a first dynamic target identification result in the laser point cloud data based on a point cloud dynamic target detection model and identifying a second dynamic target identification result in the image data based on an image dynamic target detection model;
the conversion module is used for converting the first dynamic target identification result into an image coordinate system based on the pose information;
and the determining module is used for matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system to determine a final first dynamic target recognition result.
According to a third aspect of the present invention, there is provided an electronic device comprising a memory and a processor, the processor implementing the steps of the automatic dynamic target identification method when executing a computer program stored in the memory.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the automatic dynamic target identification method.
According to the automatic dynamic target identification method provided by the invention, high-precision three-dimensional point cloud and image data serve as the main data sources, and dynamic targets are extracted from the point cloud data using the dynamic target inference results of deep learning. Deep learning reduces data redundancy and the interference factors in high-precision map making and improves the efficiency of making high-precision maps from point cloud data; compared with traditional extraction methods, the method also has better scene generalization capability.
Drawings
FIG. 1 is a flow chart of a method for automatically identifying a dynamic target according to the present invention;
FIG. 2 is an overall flow chart of a dynamic object automated identification method;
FIG. 3 is a schematic structural diagram of a dynamic object automatic identification system according to the present invention;
FIG. 4 is a schematic diagram of a hardware structure of a possible electronic device provided in the present invention;
fig. 5 is a schematic diagram of a hardware structure of a possible computer-readable storage medium according to the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In making and updating a high-precision map, deep learning can improve efficiency in two ways: reducing human intervention and speeding up manual work. Deep learning for image processing is mature, but an image can only express two-dimensional information; in high-precision map making, back-projecting two-dimensional image points into three-dimensional points remains a technical difficulty and cannot meet the precision requirement, so lidar data is an essential data source. To improve manual efficiency, the lidar data can be classified, for example by removing dynamic targets; this reduces redundant data, speeds up data loading in the map-making tool, and removes distracting items from manual production.
Example one
A dynamic target automatic identification method, see FIG. 1, mainly includes the following steps:
and S1, respectively acquiring laser point cloud data and image data.
It can be understood that the road point cloud data used by the invention is acquired by a single-line lidar, which lays down on the road surface a plurality of approximately parallel scan lines formed of dense discrete points; the image is forward-looking image data acquired by a wide-angle camera.
The vehicle is equipped with a Global Positioning System (GPS) receiver, an inertial measurement unit (IMU), and other sensors. While the laser point cloud data of the road is acquired, GPS track data and IMU information are also recorded. The WGS84 absolute coordinates of the lidar sensor center are computed from the offset compensation (lever arm) between the GPS center and the lidar sensor center, so that each single-frame point cloud is converted from its relative coordinate system to the WGS84 absolute coordinate system. The multi-frame point clouds are then fused, leaving the laser point cloud data in the same absolute coordinate system as the GPS track data.
When the GPS signal loses lock, the GPS track data can be supplemented and corrected using the IMU information and odometer information.
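As a rough illustration of the coordinate conversion described above, a single-frame transform into the absolute frame might look like the following sketch. The function names, the lever-arm convention, and the use of a single body-to-absolute rotation matrix are assumptions for the sketch, not details given in the patent.

```python
import numpy as np

def lidar_frame_to_absolute(points_sensor, gps_position, lever_arm, rotation):
    """Transform one lidar frame (N x 3, sensor-relative) into an absolute frame.

    gps_position : (3,) GPS antenna position in the absolute frame
    lever_arm    : (3,) assumed offset from the GPS center to the lidar
                   sensor center, expressed in the vehicle body frame
    rotation     : (3, 3) body-to-absolute rotation (from the IMU attitude)
    """
    # Offset compensation: absolute position of the lidar sensor center
    sensor_center = gps_position + rotation @ lever_arm
    # Rotate each point into the absolute frame, then translate
    return points_sensor @ rotation.T + sensor_center

def fuse_frames(frames):
    """Fuse multiple frames by stacking their absolute-frame points."""
    return np.vstack(frames)
```

With every frame expressed in the same absolute frame, fusion reduces to concatenation, which is the property the later block-cutting step relies on.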
S2, identifying a first dynamic target identification result in the laser point cloud data based on the point cloud dynamic target detection model, and identifying a second dynamic target identification result in the image data based on the image dynamic target detection model.
As an example, the point cloud dynamic target detection model is obtained by: cutting the fused laser point cloud data to obtain a plurality of laser point cloud data blocks; and training the point cloud dynamic target detection model based on a plurality of laser point cloud data blocks to obtain the trained point cloud dynamic target detection model.
As an embodiment, the cutting the fused laser point cloud data to obtain a plurality of laser point cloud data blocks includes: and under an absolute coordinate system, performing thinning on the GPS track data according to a fixed distance interval, and uniformly cutting the fused laser point cloud data according to the same fixed distance interval to obtain a plurality of laser point cloud data blocks.
Specifically, step S1 converts each frame of laser point cloud data into the absolute coordinate system. In that coordinate system, the dense GPS track data is thinned at a fixed distance interval, and the point cloud fused in step S1 is cut at the same interval into a plurality of laser point cloud data blocks. This step removes redundant track data; the overlap between adjacent point cloud blocks is then adjusted to a reasonable range.
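The fixed-interval track thinning and block cutting above can be sketched as follows. The nearest-anchor assignment and the `radius` overlap bound are illustrative choices for the sketch; the patent does not specify how block membership or overlap is computed.

```python
import numpy as np

def thin_track(track, interval):
    """Fixed-distance thinning: keep a track point only when it lies at
    least `interval` metres from the last kept point."""
    kept = [track[0]]
    for p in track[1:]:
        if np.linalg.norm(p - kept[-1]) >= interval:
            kept.append(p)
    return np.asarray(kept)

def cut_blocks(points, anchors, radius):
    """Cut the fused cloud into one block per thinned track anchor by
    nearest-anchor assignment; `radius` bounds how far a block reaches."""
    blocks = [[] for _ in anchors]
    for p in points:
        d = np.linalg.norm(anchors - p, axis=1)
        i = int(np.argmin(d))
        if d[i] <= radius:
            blocks[i].append(p)
    return [np.asarray(b) for b in blocks]
```

Choosing `radius` slightly larger than half the thinning interval would give adjacent blocks a small overlap, which matches the "reasonable range" of overlap mentioned above.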
Specifically, dynamic target samples are produced from the cut point cloud data blocks and used to train the required point cloud dynamic target detection model. New laser point cloud data is then run through the trained model for inference and identification, i.e., dynamic target areas are identified in the laser point cloud data, forming the first dynamic target recognition result. The point cloud data is the core data of the invention, and taking the detection result of the point cloud dynamic target detection model as the reference gives stronger generalization than traditional clustering and expert-system detection.
Similarly, dynamic target samples are made from the image data and used to train the required image dynamic target detection model; new image data is then run through the trained model for inference. This fully exploits the high recall and high precision of image deep learning to identify, as far as possible, all dynamic targets on the road in the RGB images.
S3, converting the first dynamic target recognition result into the image coordinate system based on the pose information.
It can be understood that the dynamic target areas of the first dynamic target recognition result, identified by the point cloud dynamic target detection model, are converted into the image coordinate system using the pose information, which is computed from the GPS information, IMU information, and odometer information.
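A minimal pinhole-projection sketch of this conversion into the image coordinate system is below. The pose is assumed to be supplied as a world-to-camera rotation and translation, and `K` is an assumed intrinsic matrix; the patent does not specify the camera model or calibration.

```python
import numpy as np

def project_to_image(points_world, R_wc, t_wc, K):
    """Project world-frame 3-D points to pixel coordinates.

    R_wc, t_wc : world-to-camera rotation (3x3) and translation (3,)
    K          : assumed 3x3 pinhole intrinsic matrix
    Points behind the camera are mapped to NaN.
    """
    p_cam = points_world @ R_wc.T + t_wc   # world -> camera frame
    uvw = p_cam @ K.T                      # apply intrinsics
    pix = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
    pix[p_cam[:, 2] <= 0] = np.nan         # cull points behind the camera
    return pix
```

Projecting the corners of each point-cloud detection box this way yields a 2-D region that can be compared directly with the image-side detections in the next step.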
S4, matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system, and determining the final first dynamic target recognition result.
It is understood that step S3 converts the dynamic target areas of the first dynamic target recognition result into the image coordinate system, and this step matches the first dynamic target recognition result with the second dynamic target recognition result in that coordinate system. Taking the second dynamic target recognition result as the reference, if a first dynamic target area can be matched in the second result, that area is retained, indicating that it is a real dynamic target area, i.e. a dynamic target really exists there. All first dynamic target areas are traversed, and the retained areas are taken as the final first dynamic target recognition result.
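One conventional way to realize this matching step is 2-D intersection-over-union between the projected point-cloud boxes and the image detection boxes. The IoU criterion and the 0.5 threshold are illustrative assumptions for the sketch, not values disclosed in the patent.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def confirm_point_cloud_detections(projected_boxes, image_boxes, thresh=0.5):
    """Retain a projected point-cloud box only if some image-side box
    overlaps it with IoU >= thresh (the image result is the reference)."""
    return [pb for pb in projected_boxes
            if any(iou(pb, ib) >= thresh for ib in image_boxes)]
```

The retained boxes correspond to the first dynamic target areas that survive the cross-check against the image detections.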
As an embodiment, matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system to determine the final first dynamic target recognition result further includes: if a second dynamic target area exists for which no matching first dynamic target area exists, back-projecting that second dynamic target area into the three-dimensional space of the laser point cloud data; performing neighborhood retrieval and clustering on the laser point cloud data based on the back-projected second dynamic target area, and extracting a target rectangular frame containing a target; judging, based on the target rectangular frame, whether the corresponding target is a dynamic target; and, if so, adding the corresponding dynamic target area in the laser point cloud data to the final first dynamic target recognition result.
It can be understood that if a dynamic target area exists in the image data but not in the point cloud data; in short, when a dynamic target is detected in the image data but missed in the point cloud data, the corresponding area of the point cloud data must be verified to determine whether a dynamic target really exists there.
Specifically, performing neighborhood retrieval and clustering on the laser point cloud data based on the back-projected second dynamic target area and extracting a target rectangular frame containing a target includes: taking the center point of the back-projected second dynamic target area as the center, performing iterative neighborhood retrieval and clustering on the laser point cloud data to obtain the laser point cloud area belonging to the same target; and extracting a target rectangular frame containing the target from that laser point cloud area.
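The iterative neighborhood retrieval and clustering can be sketched as a region-growing search seeded at the back-projected center point; the fixed search radius is an assumed parameter, and the function names are illustrative.

```python
import numpy as np

def grow_cluster(points, seed, radius):
    """Iterative neighborhood retrieval: starting from `seed`, repeatedly
    absorb points lying within `radius` of any point already absorbed."""
    picked = np.zeros(len(points), dtype=bool)
    frontier = [np.asarray(seed, dtype=float)]
    while frontier:
        center = frontier.pop()
        near = (~picked) & (np.linalg.norm(points - center, axis=1) <= radius)
        for i in np.flatnonzero(near):
            picked[i] = True
            frontier.append(points[i])
    return points[picked]

def bounding_box(cluster):
    """Axis-aligned bounding box (min corner, max corner) of a cluster,
    from which length, width, and height can be read off."""
    return cluster.min(axis=0), cluster.max(axis=0)
```

The bounding box of the grown cluster plays the role of the "target rectangular frame" whose dimensions are judged in the next step.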
Finally, the length, width, and height of the target rectangular frame are extracted, and whether the corresponding target is a dynamic target is judged from these values and their ratios, for example by comparing them with empirical length-width-height ratios of real dynamic targets to decide whether the target is a pedestrian, a vehicle, and so on. If a dynamic target does exist in the area, the corresponding block of point cloud data is added to the final first dynamic target recognition result. Back-projecting the image detection into the three-dimensional space of the point cloud data and re-judging whether it is a real dynamic target supplements the misses of the point cloud detection and improves the recall of dynamic target detection.
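A toy version of the dimension-ratio judgment might look like the following; every threshold here is an illustrative guess standing in for the empirical values the patent refers to, not a value disclosed in the patent.

```python
def classify_by_dimensions(length, width, height):
    """Judge whether a bounding box plausibly encloses a dynamic target.
    All thresholds are illustrative guesses, not values from the patent."""
    dims = sorted((length, width, height))
    if dims[0] <= 0.2 or dims[2] >= 25.0:
        return None                    # implausibly small or large
    if height > length and height > width and height < 2.5:
        return "pedestrian"            # tall relative to its footprint
    if 3.0 <= length <= 12.0 and 1.4 <= width <= 3.0 and height <= 4.5:
        return "vehicle"
    return None                        # not recognized as a dynamic target
```

In practice such thresholds would be tuned from the empirical length-width-height statistics of each target class mentioned above.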
Example two
A dynamic target automatic identification method comprises the following steps: respectively acquiring laser point cloud data and image data; identifying a first dynamic target identification result in the laser point cloud data based on a point cloud dynamic target detection model, and identifying a second dynamic target identification result in the image data based on an image dynamic target detection model; converting the first dynamic target recognition result into an image coordinate system based on pose information; and matching the first dynamic target recognition result with the second dynamic target recognition result under the image coordinate system to determine a final first dynamic target recognition result.
It can be understood that, referring to fig. 2, the single-line point cloud data is first acquired and then fused and solved according to the GPS data, IMU data, and odometer information. The fused point cloud data is cut into a plurality of point cloud data blocks, and dynamic targets are identified from the laser point cloud data based on the point cloud dynamic target detection model; the result is called the first dynamic target recognition result.
Before the cutting, thinning is performed to remove redundant track data, and the overlap of the cut point cloud blocks is adjusted to a reasonable range. Dynamic targets are then identified from the image data based on the image dynamic target detection model; the result is referred to as the second dynamic target recognition result.
And converting the first dynamic target recognition result into an image coordinate system based on the pose information of the camera, and confirming the first dynamic target recognition result based on the second dynamic target recognition result in the image coordinate system to obtain a final first dynamic target recognition result.
For dynamic targets missed by the point cloud detection, the corresponding image data is back-projected into the three-dimensional space of the point cloud data, and whether a dynamic target exists is verified against the point cloud data in that space; this supplements the misses of the point cloud detection and improves the recall of dynamic target detection.
In current high-precision map making, redundant dynamic target components in the lidar point cloud data slow the data loading of map-making tools and reduce manual production efficiency. To address this, the method takes high-precision three-dimensional point cloud and image data as the main data sources and extracts dynamic targets from the point cloud data by means of the inference results of deep learning models. This reduces data redundancy and interference factors in high-precision map making, improves the efficiency of producing high-precision maps from point cloud data, and offers better scene generalization than traditional extraction methods.
Example 3
A dynamic target automatic identification system, see fig. 3, comprising an acquisition module 301, an identification module 302, a conversion module 303 and a determination module 304, wherein:
an obtaining module 301, configured to obtain laser point cloud data and image data respectively; an identification module 302 configured to identify a first dynamic target result in the laser point cloud data based on a point cloud dynamic target detection model, and identify a second dynamic target result in the image data based on an image dynamic target detection model; a conversion module 303, configured to convert the first dynamic target result into an image coordinate system based on the pose information; the determining module 304 is configured to match the first dynamic target result with the second dynamic target result in the image coordinate system, and determine a final first dynamic target result.
It can be understood that the dynamic target automatic identification system provided by the present invention corresponds to the dynamic target automatic identification method provided in the foregoing embodiments; for the relevant technical features of the system, reference may be made to the corresponding features of the method, which are not described again here.
Example 4
Referring to fig. 4, fig. 4 is a schematic view of an embodiment of an electronic device according to an embodiment of the invention. As shown in fig. 4, an embodiment of the present invention provides an electronic device 400, which includes a memory 410, a processor 420, and a computer program 411 stored in the memory 410 and executable on the processor 420, and when the processor 420 executes the computer program 411, the following steps are implemented: respectively acquiring laser point cloud data and image data; identifying a first dynamic target result in the laser point cloud data based on a point cloud dynamic target detection model, and identifying a second dynamic target result in the image data based on an image dynamic target detection model; converting the first dynamic target result into an image coordinate system based on pose information; and matching the first dynamic target result with the second dynamic target result under the image coordinate system to determine a final first dynamic target result.
Example 5
Referring to fig. 5, fig. 5 is a schematic diagram of an embodiment of a computer-readable storage medium according to the present invention. As shown in fig. 5, the present embodiment provides a computer-readable storage medium 500 having a computer program 511 stored thereon, the computer program 511 implementing the following steps when executed by a processor: respectively acquiring laser point cloud data and image data; identifying a first dynamic target result in the laser point cloud data based on a point cloud dynamic target detection model, and identifying a second dynamic target result in the image data based on an image dynamic target detection model; converting the first dynamic target result into an image coordinate system based on pose information; and matching the first dynamic target result with the second dynamic target result under the image coordinate system to determine a final first dynamic target result.
According to the automatic dynamic target identification method provided by the embodiment of the invention, high-precision three-dimensional point cloud and image data are used as the main data sources, and dynamic targets are extracted from the point cloud data by means of the inference results of deep learning models. This reduces data redundancy and interference factors in high-precision map making, improves the efficiency of producing high-precision maps from point cloud data, and offers better scene generalization than traditional extraction methods. Compared with the prior art, in which high-precision maps are made without dynamic target segmentation of the point cloud, the method effectively increases the speed at which tools load data, greatly reduces the operation difficulty for map makers, and is more efficient; in addition, by fusing the image deep learning results, the accuracy of target detection can be improved.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A dynamic target automatic identification method is characterized by comprising the following steps:
respectively acquiring laser point cloud data and image data;
identifying a first dynamic target identification result in the laser point cloud data based on a point cloud dynamic target detection model, and identifying a second dynamic target identification result in the image data based on an image dynamic target detection model;
converting the first dynamic target recognition result into an image coordinate system based on pose information;
and matching the first dynamic target recognition result with the second dynamic target recognition result under the image coordinate system to determine a final first dynamic target recognition result.
2. The method for automatically identifying a dynamic target according to claim 1, wherein the laser point cloud data is road point cloud data obtained by a single line laser radar, and the image data is forward-looking image data collected by a wide-angle camera.
3. The method for automatically identifying a dynamic target according to claim 1, wherein the separately acquiring laser point cloud data and image data further comprises:
acquiring GPS track data, and obtaining the WGS84 absolute coordinates of the laser radar sensor center based on the offset compensation values between the GPS center and the laser radar sensor center, so that single-frame point cloud data is converted from a relative coordinate system to the WGS84 absolute coordinate system and multi-frame point cloud data is fused;
and when the GPS signal loses lock, supplementing and correcting the GPS track data by using IMU information and odometer information.
4. The method of claim 3, wherein the point cloud dynamic target detection model is obtained by:
cutting the fused laser point cloud data to obtain a plurality of laser point cloud data blocks;
and training the point cloud dynamic target detection model based on a plurality of laser point cloud data blocks to obtain the trained point cloud dynamic target detection model.
5. The method for automatically identifying a dynamic target according to claim 4, wherein the step of cutting the fused laser point cloud data to obtain a plurality of laser point cloud data blocks comprises:
and under the absolute coordinate system, thinning the GPS track data at a fixed distance interval, and uniformly cutting the fused laser point cloud data at the same fixed distance interval to obtain a plurality of laser point cloud data blocks.
6. The method of claim 1, wherein the pose information is calculated based on GPS information, IMU information, and odometer information.
7. The method according to claim 1, wherein the first dynamic target recognition result includes a plurality of first dynamic target areas, the second dynamic target recognition result includes a plurality of second dynamic target areas, and the determining the final first dynamic target recognition result by matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system includes:
for any first dynamic target area, if the first dynamic target area can be matched in the second dynamic target recognition result, retaining the first dynamic target area;
and traversing all the first dynamic target areas, and taking all the retained first dynamic target areas as the final first dynamic target recognition result.
8. The method according to claim 7, wherein the step of matching the first dynamic target recognition result with the second dynamic target recognition result in the image coordinate system to determine a final first dynamic target recognition result further comprises:
if there is a second dynamic target area that is not matched with any first dynamic target area, back-calculating the second dynamic target area into the three-dimensional space of the laser point cloud data;
based on the second dynamic target area after the back calculation, neighborhood retrieval and clustering are carried out on the laser point cloud data, and a target rectangular frame containing a target is extracted;
judging whether the corresponding target is a dynamic target or not based on the target rectangular frame;
and if the dynamic target exists, adding a corresponding dynamic target area in the laser point cloud data to a final first dynamic target identification result.
9. The method of claim 8, wherein the performing neighborhood search and clustering on the laser point cloud data based on the back-calculated second dynamic target region to extract a target rectangular frame containing a target comprises:
taking the center point of the second dynamic target area after the back calculation as the center, performing iterative neighborhood retrieval and clustering on the laser point cloud data to obtain laser point cloud areas belonging to the same target;
and extracting a target rectangular frame containing the target based on the acquired laser point cloud area belonging to the same target.
10. The method according to claim 8, wherein the determining whether the corresponding target is a dynamic target based on the target rectangular frame comprises:
and extracting the length, width and height of the target rectangular frame, and judging whether the corresponding target is a dynamic target based on the length, width, height and their ratios.
CN202111422375.3A 2021-11-26 2021-11-26 Automatic dynamic target identification method Pending CN114140770A (en)


Publications (1)

Publication Number Publication Date
CN114140770A true CN114140770A (en) 2022-03-04



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination