CN111323027A - Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera


Info

Publication number
CN111323027A
CN111323027A
Authority
CN
China
Prior art keywords
point cloud
data
picture
cloud data
precision map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811545531.3A
Other languages
Chinese (zh)
Inventor
周睿 (Zhou Rui)
孙宇 (Sun Yu)
许易冲 (Xu Yichong)
王金强 (Wang Jinqiang)
周庆国 (Zhou Qingguo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou University
Original Assignee
Lanzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou University filed Critical Lanzhou University
Priority to CN201811545531.3A priority Critical patent/CN111323027A/en
Publication of CN111323027A publication Critical patent/CN111323027A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation specially adapted for a road network, with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method and a device for producing a high-precision map based on the fusion of a laser radar and a panoramic camera, mainly applied in the field of unmanned driving; the map is produced by multi-sensor information fusion. The method comprises the following steps: acquiring three-dimensional point cloud data of the surrounding environment collected by a laser radar; acquiring picture data of the surrounding environment collected by a panoramic camera; marking temporary obstacles in the pictures with a deep learning method; clustering the original point cloud; fusing the point cloud with the picture information, and deleting from the point cloud data the points belonging to the temporary obstacles; and generating a high-precision map from the processed point cloud data. By fusing the laser radar and camera data in a second processing pass, the influence on unmanned driving of objects without a fixed position in the sampled map is effectively reduced, accurate road information is obtained to the greatest extent, and an effective high-precision map is provided for the unmanned vehicle.

Description

Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera
Technical Field
The invention relates to the field of unmanned driving, in particular to a method for manufacturing a high-precision map based on fusion of a laser radar and a panoramic camera.
Background
With the continuing development of science and technology, artificial-intelligence techniques combined with sensor technology have gradually brought intelligent unmanned vehicles, with perception, planning, and control functions, into the automotive industry. An unmanned vehicle perceives its surrounding environment through its sensors and drives on the road according to the actions computed by its planning module, so that no operation by a human driver is required. The high-precision map is obtained by fusing the data of the sensors; it provides detailed information about the vehicle's surroundings, including lane types, traffic-light positions, and curve curvatures, forms an important basis for planning, and occupies an important position in unmanned-driving technology.
In the prior art, whether a high-precision map is generated using only a vision sensor or only a laser radar, it is generated from real-time environmental data collected around the vehicle by the sensor. Data acquired in this way contains a large amount of temporary data that exists only during acquisition, which greatly interferes with later planning work.
Disclosure of Invention
The invention aims to provide a method for manufacturing a high-precision map based on the fusion of a laser radar and a panoramic camera, so as to solve the technical defects mentioned in the background technology. The technical scheme is as follows:
according to a first aspect of the embodiments of the present invention, a method for manufacturing a high-precision map based on fusion of a laser radar and a panoramic camera is provided, the method including:
acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body, which is acquired by a laser radar fixed above the vehicle body;
acquiring picture data of the surrounding environment of the vehicle, which is acquired by a panoramic camera fixed in front of the vehicle body;
marking temporary obstacles on the pictures by adopting a deep learning method according to the collected picture data;
clustering the point cloud according to the collected three-dimensional point cloud data;
fusing three-dimensional point cloud and picture information according to the point cloud and picture data;
deleting data information of the temporary obstacles in the point cloud data according to the fused data;
and processing the three-dimensional point cloud to obtain data and generating a high-precision map.
Marking the pictures with the deep learning method comprises: training a reliable model capable of accurately identifying temporary obstacles, such as vehicles and pedestrians, that may exist on public roads, and processing the pictures with that model.
Fusing the point cloud data with the picture data comprises: jointly calibrating the laser radar and the panoramic camera, so that the coordinate systems are unified and each object marked in the picture information occupies the same position as the corresponding object in the three-dimensional point cloud data.
Based on the clustering of the three-dimensional point cloud data, the point cloud position corresponding to an obstacle is obtained from the picture mark, and the point cloud extent of the object at that position is determined.
The point cloud data belonging to temporary obstacles is deleted, the remaining point cloud data is processed, and the high-precision map is produced.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for manufacturing a high-precision map based on fusion of a laser radar and a panoramic camera, the apparatus including:
the data acquisition module is used for acquiring three-dimensional point cloud data and visual picture data around the vehicle body;
the temporary obstacle marking module is used for processing the picture data by applying a deep learning algorithm and marking temporary obstacles in the picture;
the data fusion module is used for fusing the image data with the three-dimensional point cloud data and deleting the point cloud to which the corresponding temporary obstacle belongs in the point cloud data;
and determining a manufacturing module, and manufacturing a high-precision map by using the processed point cloud data.
The data acquisition module:
collects three-dimensional point cloud data of the surrounding environment of the vehicle body with the laser radar; and
collects visual picture data of the surrounding environment of the vehicle body with the panoramic camera.
The temporary obstacle marking module processes the picture information acquired by the data acquisition module and marks temporary obstacles, such as vehicles and pedestrians, in the pictures.
A reliable deep learning model capable of accurately identifying temporary obstacles in pictures is trained, and the model is applied to process the picture information acquired by the data acquisition module.
The data fusion module fuses the three-dimensional point cloud data with the panoramic camera data, matches the temporary obstacles in the point cloud data with the marked objects, and determines the range of point cloud data to be deleted.
The original point cloud data is clustered, and the coordinate systems of the picture data and the three-dimensional point cloud data are unified.
Based on the three-dimensional point cloud and the panoramic camera data, the high-precision map is produced through the cooperation of the above modules.
The technical scheme provided by the embodiment of the invention at least comprises the following beneficial effects:
Picture information is acquired by the panoramic camera; temporary obstacles are marked in the pictures with a deep learning method; the three-dimensional point cloud is fused with the picture information; the parts of the original three-dimensional point cloud data corresponding to the temporary obstacles are deleted; and the high-precision map is generated from the resulting point cloud data. This process effectively reduces the influence of non-fixed objects, such as vehicles and pedestrians, in the sampled map on the planning module of the unmanned vehicle, obtains accurate road information to the greatest extent, and provides a more effective high-precision map for unmanned vehicles.
Drawings
For a clearer explanation of the exemplary embodiments of the invention, reference is made to the following brief description of the drawings in the description of the exemplary embodiments:
FIG. 1 is a diagram of a smart car architecture upon which embodiments of the present invention are based;
fig. 2 is a flowchart of a method for manufacturing a high-precision map based on the fusion of a laser radar and a panoramic camera according to an embodiment of the present invention;
FIG. 3 is a flowchart of a deep learning model for training and recognizing temporary obstacles according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for manufacturing a high-precision map based on fusion of a laser radar and a panoramic camera according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a data processing module in the high-precision map production according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and embodiments. The specific embodiments described herein merely illustrate the invention and are not intended to limit it. It should further be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.
FIG. 1 shows a vehicle equipped with a laser radar and a panoramic camera, comprising a laser radar 101, a panoramic camera 102, and a vehicle 103. Wherein:
the laser radar 101 is located above the vehicle;
the panoramic camera 102 is located in front of the vehicle.
Fig. 2 shows the flow of the method for producing a high-precision map based on the fusion of a laser radar and a panoramic camera according to the invention. The method may comprise the following steps:
Step 201, acquiring the three-dimensional point cloud data around the vehicle body collected by the laser radar.
Step 202, acquiring the picture data of the vehicle's surrounding environment collected by the panoramic camera.
Step 203, marking temporary obstacles in the pictures with a deep learning method, according to the acquired picture data.
The model can accurately identify, in the picture data, the temporary obstacles designated in advance that may appear in normal road scenes, such as pedestrians and vehicles. The model marks a temporary obstacle by recording its position information in the picture, i.e. a position determined by coordinate values in the picture coordinate system.
Fig. 3 is a training process of the deep learning model, which includes the following specific steps:
step 2031, collecting and arranging road scene picture data;
step 2032, labeling the picture, marking temporary obstacles, and generating a data set;
step 2033, training the data set by applying a deep learning algorithm;
step 2034, a deep learning model capable of identifying temporary obstacles is obtained.
There are many choices of algorithm for training the deep learning model, including convolutional neural networks, recurrent neural networks, feed-forward neural networks, and so on.
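As an illustration of how the marks produced in step 203 might be represented, the sketch below filters hypothetical detector output down to temporary-obstacle bounding boxes in picture coordinates. The class names, the `Mark` type, and the `(label, box)` detection format are assumptions for this sketch only; the patent does not specify the model's interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Classes treated as "temporary obstacles" in this sketch (an assumption;
# the patent names only vehicles and pedestrians as examples).
TEMPORARY_CLASSES = {"car", "pedestrian", "cyclist"}

@dataclass
class Mark:
    """One obstacle mark: a class label plus a bounding box in picture
    coordinates (x_min, y_min, x_max, y_max), as described in step 203."""
    label: str
    box: Tuple[float, float, float, float]

def mark_temporary_obstacles(detections: List[tuple]) -> List[Mark]:
    """Filter raw detector output (label, box) pairs down to marks for
    temporary obstacles. `detections` stands in for the output of the
    trained deep-learning model, which the patent leaves unspecified."""
    return [Mark(label, box) for label, box in detections
            if label in TEMPORARY_CLASSES]
```

The downstream fusion step then only needs the list of boxes, independent of which network produced them.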
Step 204, clustering the point cloud according to the acquired three-dimensional point cloud data.
Taking the three-dimensional point cloud acquired by the laser radar in step 201 as the original point cloud data, the original point cloud is clustered so that the initially scattered points are aggregated, according to their outlines, into distinct, independent individuals. These individuals are merely sets of points; they cannot yet be identified as specific objects.
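The clustering in step 204 could be sketched as follows, under the assumption of a simple Euclidean neighborhood criterion (the patent does not name a particular clustering algorithm; `radius` and `min_size` are illustrative parameters):

```python
import numpy as np

def euclidean_cluster(points, radius=0.5, min_size=3):
    """Group 3-D points into clusters: two points belong to the same
    cluster if they are linked by a chain of neighbors closer than
    `radius`. Returns a list of index arrays, one per cluster, keeping
    only clusters with at least `min_size` points."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, members = [seed], [seed]
        while frontier:
            i = frontier.pop()
            rest = np.fromiter(unvisited, dtype=int)
            if rest.size == 0:
                break
            # neighbors of point i among the still-unvisited points
            d = np.linalg.norm(points[rest] - points[i], axis=1)
            near = rest[d < radius]
            for j in near:
                unvisited.discard(int(j))
            frontier.extend(int(j) for j in near)
            members.extend(int(j) for j in near)
        if len(members) >= min_size:
            clusters.append(np.array(sorted(members)))
    return clusters
```

Each returned index array is one of the "individuals" described above; whether an individual is a temporary obstacle is decided only after fusion with the picture marks.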
Step 205, fusing the three-dimensional point cloud with the picture information according to the point cloud and picture data.
In the embodiment of the invention, the laser radar and the panoramic camera are jointly calibrated to ensure that their coordinate systems are unified and that an object marked in the picture information occupies the same position as the corresponding object in the three-dimensional point cloud data. Fusing the three-dimensional point cloud data with the picture means unifying their position coordinates: the position of each object in the picture can be mapped to the corresponding coordinates in the three-dimensional point cloud data, so that each individual in the two data sources is matched.
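The joint calibration described above is commonly expressed as a rigid lidar-to-camera transform plus a pinhole intrinsic matrix; the sketch below projects lidar points into picture coordinates under that assumption. The matrices `R`, `t`, and `K` stand for calibration results that the patent does not give.

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project 3-D lidar points into pixel coordinates.
    R (3x3) and t (3,) are the lidar-to-camera extrinsics from joint
    calibration; K (3x3) is the pinhole camera intrinsic matrix.
    Returns (pixels, in_front): Nx2 pixel coordinates and a boolean
    mask of points lying in front of the camera (z > 0)."""
    pts_cam = points_lidar @ R.T + t      # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0          # only points ahead of the camera are visible
    proj = pts_cam @ K.T                  # apply intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return pixels, in_front
```

With this mapping, a bounding box marked in the picture coordinate system can be compared directly against the projected positions of the clustered point cloud individuals.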
Step 206, deleting the data belonging to temporary obstacles from the point cloud data, according to the fused data.
Based on the fusion of the three-dimensional point cloud data and the picture data, the point cloud set corresponding to each temporary obstacle marked in the picture data by the deep learning model can be located in the point cloud data. Once the point cloud sets of the temporary obstacles are determined, the original point cloud data is processed and those sets are deleted, yielding the point cloud data used to produce the high-precision map.
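One plausible way to realize step 206, assuming the clusters from step 204 and the projected pixel coordinates from step 205, is to drop every cluster whose projected points fall mostly inside a marked obstacle box. The `frac` threshold is an illustrative parameter, not something the patent specifies.

```python
import numpy as np

def delete_obstacle_clusters(points, clusters, pixels, in_front, boxes, frac=0.5):
    """Drop every cluster in which at least `frac` of the camera-visible
    projected points fall inside some obstacle bounding box
    (x_min, y_min, x_max, y_max); return the remaining points."""
    def in_any_box(px):
        hit = np.zeros(len(px), dtype=bool)
        for x0, y0, x1, y1 in boxes:
            hit |= ((px[:, 0] >= x0) & (px[:, 0] <= x1) &
                    (px[:, 1] >= y0) & (px[:, 1] <= y1))
        return hit

    keep = np.ones(len(points), dtype=bool)
    for idx in clusters:
        vis = idx[in_front[idx]]          # cluster points visible to the camera
        if len(vis) and in_any_box(pixels[vis]).mean() >= frac:
            keep[idx] = False             # delete the whole point cloud set
    return points[keep]
```

Deleting whole clusters, rather than individual points, matches the description above: the unit of removal is the point cloud set that corresponds to a marked temporary obstacle.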
Step 207, processing the three-dimensional point cloud data and generating the high-precision map.
Generating the high-precision map from the processed three-dimensional point cloud comprises the following steps:
1. Down-sampling the point cloud data; on the one hand this reduces interference and noise in the data, and on the other hand it makes the resulting map more compact and reduces its memory footprint.
2. Coloring the points according to the intensity information of the three-dimensional point cloud, enhancing the visualization of the point cloud map.
3. Adding supplementary road information to the point cloud map, marking road conditions and the necessary traffic-sign content, to support the later planning of the unmanned vehicle.
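Steps 1 and 2 of the map-generation process could be sketched as follows; the voxel size and the linear gray mapping are illustrative choices, since the patent fixes neither.

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Keep one representative point (the centroid) per cubic voxel of
    side `voxel`, reducing noise and the map's memory footprint."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv)
    out = np.zeros((inv.max() + 1, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

def intensity_to_gray(intensity):
    """Map raw reflectance intensity to a 0-255 gray value for
    visualization, using a linear normalization."""
    intensity = np.asarray(intensity, dtype=float)
    lo, hi = intensity.min(), intensity.max()
    if hi == lo:
        return np.zeros_like(intensity, dtype=np.uint8)
    return ((intensity - lo) / (hi - lo) * 255).astype(np.uint8)
```

Step 3 (road and traffic-sign annotation) is an editorial step on top of the colored, down-sampled cloud and is not sketched here.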
In summary, the method for producing a high-precision map based on the fusion of a laser radar and a panoramic camera provided by the embodiment of the invention fuses the processed camera data with the three-dimensional point cloud data and optimizes the original point cloud data to obtain the point cloud data from which the high-precision map is generated. The method effectively reduces the influence of objects without a fixed position, such as vehicles and pedestrians, in the sampled map on the planning module of the unmanned vehicle; it obtains accurate road information to the greatest extent and provides an effective high-precision map for the unmanned vehicle.
The above embodiments are only preferred embodiments of the invention and are not intended to limit its scope. The order of the steps for producing the high-precision map may be adjusted as appropriate to the actual situation, and steps may be added or removed accordingly. All equivalent changes made within the claims and the specification of the invention fall within the protection scope of this patent.
The embodiment of the invention provides a device 300 for manufacturing a high-precision map based on the fusion of a laser radar and a panoramic camera, which is shown in detail in figure 4 and comprises:
the data acquisition module 301 is configured to acquire three-dimensional point cloud data and visual image data around the vehicle body.
And the temporary obstacle marking module 302 is used for processing the picture data by applying a deep learning algorithm and marking the temporary obstacles in the picture.
And the data fusion module 303 is used for fusing the image data with the three-dimensional point cloud data and deleting the point cloud to which the corresponding temporary obstacle belongs in the point cloud data.
And the map production module 304 is used for producing a high-precision map from the processed point cloud data.
The data acquisition module includes:
collecting three-dimensional point cloud data 3011 of the surrounding environment of the vehicle body with the laser radar; and
collecting visual picture data 3012 of the surrounding environment of the vehicle body with the panoramic camera.
The temporary obstacle marking module further comprises: training a reliable deep learning model capable of accurately identifying temporary obstacles in pictures, processing the picture information acquired by the data acquisition module with the model, and marking temporary obstacles, such as vehicles and pedestrians, in the pictures.
The data fusion module further comprises:
1. clustering the original point cloud data;
2. after fusing the picture data with the three-dimensional point cloud data, unifying their coordinate systems and matching the objects clustered in the point cloud data with the temporary obstacles marked in the pictures;
3. determining the range of point cloud data to be deleted.
Since the specific working processes of the above devices and modules are described in the foregoing method embodiments, they are not described herein again for the sake of brevity.
The above is only a preferred embodiment of the present invention. Those skilled in the art should understand that the scope of the invention is not limited to the specific combinations of the technical features described above, and also covers other solutions formed by any combination of these technical features or their equivalents without departing from the inventive concept.

Claims (10)

1. A method for manufacturing a high-precision map based on fusion of a laser radar and a camera is characterized by comprising the following steps:
acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body, which is acquired by a laser radar fixed above the vehicle body;
acquiring picture data of the surrounding environment of the vehicle, which is acquired by a panoramic camera fixed in front of the vehicle body;
marking temporary obstacles on the pictures by adopting a deep learning method according to the collected picture data;
clustering the point cloud according to the collected three-dimensional point cloud data;
fusing three-dimensional point cloud and picture information according to the point cloud and picture data;
deleting data information belonging to temporary obstacles in the point cloud data according to the fused data;
and generating a high-precision map based on the data obtained after the three-dimensional point cloud processing.
2. The method of claim 1, wherein marking temporary obstacles on the pictures by deep learning comprises training a reliable model capable of accurately identifying temporary obstacles, such as vehicles and pedestrians, that may exist on public roads, and processing the pictures with the model.
3. The method of claim 1, wherein fusing the three-dimensional point cloud with the picture information comprises jointly calibrating the laser radar and the panoramic camera to ensure that the coordinate systems are unified and that an object marked in the picture information occupies the same position as the corresponding object in the three-dimensional point cloud data.
4. The method of claim 1, wherein deleting the data belonging to temporary obstacles from the point cloud data based on the fused data comprises obtaining the point cloud position corresponding to an obstacle from the picture mark and determining the point cloud extent of the object at that position.
5. The method of claim 1, wherein the high-precision map is generated based on the processed point cloud data.
6. A device for manufacturing a high-precision map based on fusion of a laser radar and a panoramic camera is characterized by comprising:
the data acquisition module is used for acquiring three-dimensional point cloud data and visual picture data around the vehicle body;
the temporary obstacle marking module is used for processing the picture data by applying a deep learning algorithm and marking temporary obstacles in the picture;
the data fusion module is used for fusing the image data with the three-dimensional point cloud data and deleting the point cloud to which the corresponding temporary obstacle belongs in the point cloud data;
and determining a manufacturing module, and manufacturing a high-precision map by using the processed point cloud data.
7. The apparatus of claim 6, wherein the data acquisition module:
collects three-dimensional point cloud data of the surrounding environment of the vehicle body by using a laser radar; and
collects visual picture data of the surrounding environment of the vehicle body by using the panoramic camera.
8. The apparatus of claim 6, wherein a reliable deep learning model capable of accurately recognizing temporary obstacles in pictures is trained, and the model is applied to process the picture information collected by the data acquisition module and to mark temporary obstacles, such as vehicles and pedestrians, in the pictures.
9. The apparatus of claim 6, wherein the data fusion module clusters the original point cloud data, unifies the coordinate systems of the picture data and the three-dimensional point cloud data after fusion, matches the objects clustered in the point cloud data with the temporary obstacles marked in the pictures, and thereby determines the range of point cloud data to be deleted.
10. The apparatus of claim 6, wherein the modules cooperate with each other to implement the method of any one of claims 1 to 5.
CN201811545531.3A 2018-12-17 2018-12-17 Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera Pending CN111323027A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811545531.3A CN111323027A (en) 2018-12-17 2018-12-17 Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera

Publications (1)

Publication Number: CN111323027A
Publication Date: 2020-06-23

Family

ID=71172450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811545531.3A Pending CN111323027A (en) 2018-12-17 2018-12-17 Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera

Country Status (1)

Country Link
CN (1) CN111323027A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104764457A (en) * 2015-04-21 2015-07-08 北京理工大学 Urban environment composition method for unmanned vehicles
CN204881531U (en) * 2015-05-21 2015-12-16 兰州大学 Ice and snow height measurement device based on image processing techniques
CN106291736A (en) * 2016-08-16 2017-01-04 张家港长安大学汽车工程研究院 Pilotless automobile track dynamic disorder object detecting method
CN107515891A (en) * 2017-07-06 2017-12-26 杭州南江机器人股份有限公司 A kind of robot cartography method, apparatus and storage medium
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
CN107845095A (en) * 2017-11-20 2018-03-27 维坤智能科技(上海)有限公司 Mobile object real time detection algorithm based on three-dimensional laser point cloud
CN108152831A (en) * 2017-12-06 2018-06-12 中国农业大学 A kind of laser radar obstacle recognition method and system
KR20180066668A (en) * 2016-12-09 2018-06-19 동의대학교 산학협력단 Apparatus and method constructing driving environment of unmanned vehicle
CN108416257A (en) * 2018-01-19 2018-08-17 北京交通大学 Merge the underground railway track obstacle detection method of vision and laser radar data feature
CN108536154A (en) * 2018-05-14 2018-09-14 重庆师范大学 Low speed automatic Pilot intelligent wheel chair construction method based on bioelectrical signals control
CN108663681A (en) * 2018-05-16 2018-10-16 华南理工大学 Mobile Robotics Navigation method based on binocular camera Yu two-dimensional laser radar
CN108731693A (en) * 2018-06-05 2018-11-02 北京智行者科技有限公司 Block map acquisition method
CN108917761A (en) * 2018-05-07 2018-11-30 西安交通大学 A kind of accurate positioning method of unmanned vehicle in underground garage
CN108920584A (en) * 2018-06-25 2018-11-30 广州视源电子科技股份有限公司 A kind of semanteme grating map generation method and its device
CN108921925A (en) * 2018-06-27 2018-11-30 广州视源电子科技股份有限公司 The semantic point cloud generation method and device merged based on laser radar and vision
CN108958266A (en) * 2018-08-09 2018-12-07 北京智行者科技有限公司 A kind of map datum acquisition methods

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434119A (en) * 2020-11-13 2021-03-02 武汉中海庭数据技术有限公司 High-precision map production device based on heterogeneous data fusion
CN112365582A (en) * 2020-11-17 2021-02-12 电子科技大学 Countermeasure point cloud generation method, storage medium and terminal
CN112577499A (en) * 2020-11-19 2021-03-30 上汽大众汽车有限公司 VSLAM feature map scale recovery method and system
CN112528771A (en) * 2020-11-27 2021-03-19 深兰科技(上海)有限公司 Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN112991455A (en) * 2021-02-01 2021-06-18 武汉光庭信息技术股份有限公司 Method and system for fusing and labeling point cloud and picture
CN112991455B (en) * 2021-02-01 2022-06-17 武汉光庭信息技术股份有限公司 Method and system for fusing and labeling point cloud and picture

Similar Documents

Publication Publication Date Title
CN110363820B (en) Target detection method based on laser radar and pre-image fusion
CN111323027A (en) Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera
CN108196535B (en) Automatic driving system based on reinforcement learning and multi-sensor fusion
CN110009765B (en) Scene format conversion method of automatic driving vehicle scene data system
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
US8751154B2 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
US8699754B2 (en) Clear path detection through road modeling
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
US20220373353A1 (en) Map Updating Method and Apparatus, and Device
CN110766760B (en) Method, device, equipment and storage medium for camera calibration
JPWO2007083494A1 (en) Graphic recognition apparatus, graphic recognition method, and graphic recognition program
CN112949366B (en) Obstacle identification method and device
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
CN113885062A (en) Data acquisition and fusion equipment, method and system based on V2X
CN110728720B (en) Method, apparatus, device and storage medium for camera calibration
US11676403B2 (en) Combining visible light camera and thermal camera information
CN110901638B (en) Driving assistance method and system
CN110751693B (en) Method, apparatus, device and storage medium for camera calibration
WO2020133415A1 (en) Systems and methods for constructing a high-definition map based on landmarks
CN115564865A (en) Construction method and system of crowdsourcing high-precision map, electronic equipment and vehicle
CN113362394A (en) Vehicle real-time positioning method based on visual semantic segmentation technology
CN113724387A (en) Laser and camera fused map construction method
CN114155720B (en) Vehicle detection and track prediction method for roadside laser radar
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB03: Change of inventor or designer information
Inventors after change: Zhou Rui, Wang Jinqiang, Zhou Qingguo, Sun Yu, Xu Yichong
Inventors before change: Zhou Rui, Sun Yu, Xu Yichong, Wang Jinqiang, Zhou Qingguo
RJ01: Rejection of invention patent application after publication (application publication date: 20200623)