CN110008843B - Vehicle target joint cognition method and system based on point cloud and image data - Google Patents

Vehicle target joint cognition method and system based on point cloud and image data


Publication number
CN110008843B
CN110008843B (application CN201910182570.XA)
Authority
CN
China
Prior art keywords
point cloud
data
image
module
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910182570.XA
Other languages
Chinese (zh)
Other versions
CN110008843A (en)
Inventor
李明
曹晶
石强
谢兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Huanyu Zhixing Technology Co ltd
Original Assignee
Wuhan Huanyu Zhixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Huanyu Zhixing Technology Co ltd filed Critical Wuhan Huanyu Zhixing Technology Co ltd
Priority to CN201910182570.XA
Publication of CN110008843A
Application granted
Publication of CN110008843B



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle target joint cognition method and system based on point cloud and image data. The system comprises a data cascade joint module, a deep learning target detection module and a joint cognition module. The data cascade joint module acquires three-dimensional point cloud data and image data and fuses them; the fused data are passed to the deep learning target detection module for feature-level detection and recognition, which outputs a detection result; the joint cognition module then judges the feature-level fusion detection result and the data-level fusion detection result by an evidence theory method, and obtains a credibility distribution as output.

Description

Vehicle target joint cognition method and system based on point cloud and image data
Technical Field
The invention relates to the technical field of unmanned driving, in particular to a vehicle target joint cognition method and system based on point cloud and image data.
Background
An unmanned vehicle is a vehicle with autonomous driving behavior. Artificial intelligence modules such as environment perception, intelligent decision-making, path planning and behavior control are added to a conventional vehicle, turning it into a mobile wheeled robot that can interact with its surroundings and take corresponding decisions and actions.
Thanks to the rapid development of novel sensor technologies and the learning techniques built on them, unmanned driving uses various sensors to perceive the surrounding environment completely, accurately, robustly and in real time. Environment perception mainly comprises sensor calibration, structured road detection, unstructured road detection, pedestrian detection, vehicle detection, traffic light detection, traffic sign detection and the like.
In the prior art, detection can combine an optical image with three-dimensional point cloud data: image features and three-dimensional point cloud features are matched by a convolutional neural network to obtain a bounding rectangle of the target, which is used for estimating the target's position.
Disclosure of Invention
In view of the above, the invention provides a more accurate and reliable vehicle target joint cognition method and system based on point cloud and image data.
The technical scheme of the invention is realized as follows: the invention provides a vehicle target joint cognition method based on point cloud and image data, which comprises the following steps:
step one, acquiring three-dimensional point cloud data from a laser radar and a plane image from an image sensor, performing grid division on the three-dimensional point cloud data to obtain a plurality of voxels of the same size, and calculating the mass point (centroid) of the three-dimensional point cloud in each voxel;
step two, calculating the position of the mass point on the corresponding plane image obtained in the step one and the image information of the position on the plane image according to the geometric mapping relation between the three-dimensional point cloud and the plane image;
step three, calculating the distances between the mass points and all points in the corresponding voxels according to the mass points obtained in the step one and the voxels corresponding to the mass points, and forming distance feature vectors;
step four, mutually fusing the distance characteristic vector obtained in the step three and the image information obtained in the step two to obtain a fusion vector;
step five, inputting the fusion vector obtained in step four into a three-dimensional convolutional neural network, and calculating the corresponding depth features of the fusion vector;
step six, calculating candidate areas around the target by using a region proposal network according to the depth features obtained in step five, and training a classifier that judges whether each candidate area contains a target, the classifier being used for target/non-target judgment and classification;
and step seven, inputting the bounding rectangle information of the candidate targets from the region proposal network into a regression network, and regressing to obtain the bounding rectangle of the target.
On the basis of the above technical solution, preferably, step one further comprises: calculating all three-dimensional point cloud points in each voxel grid from the point cloud data using a three-dimensional deep learning framework to obtain the coordinates of the mass point.
On the basis of the above technical solution, preferably, the second step further includes: and calculating image coordinates corresponding to particles of each voxel by using a geometric position transformation matrix between the laser radar sensor and the image sensor, wherein the image information comprises RGB (red, green and blue) feature vectors formed by pixel values of the image.
On the basis of the above technical solution, preferably, in step two the image information is calculated by a bilinear interpolation algorithm, using the four pixels nearest to the point on the plane image that corresponds to the mass point.
On the basis of the above technical solution, preferably, in step four the fusion vector is formed by vector concatenation of the distance feature vector and the RGB feature vector.
On the basis of the above technical solution, preferably, in step five, the depth features include depth features of the three-dimensional point cloud data after passing through a three-dimensional deep learning model, depth features of the image after passing through a two-dimensional deep learning model, and depth features of the distance features and the RGB grayscale features after passing through three-dimensional deep learning.
The invention also provides a vehicle target joint cognition system based on point cloud and image data, comprising a data cascade joint module, a deep learning target detection module and a joint cognition module. The data cascade joint module receives three-dimensional point cloud data and image data, performs association fusion on them, and transmits the result to the deep learning target detection module. The deep learning target detection module uses deep learning to detect and identify the three-dimensional point cloud data and image data at the feature level, provides a detection result based on feature-level fusion and a detection result based on data-level fusion, and transmits both to the joint cognition module. The joint cognition module judges the feature-level fusion detection result and the data-level fusion detection result by an evidence theory method and obtains a credibility distribution as output.
On the basis of the above technical solution, preferably, the data cascade connection module includes a point cloud and image geometric registration sub-module, a point cloud data processing sub-module, and an image data processing sub-module. The point cloud and image geometric registration submodule is used for realizing the mapping from the point cloud to the image; the point cloud data processing submodule is used for realizing the calculation of geometric particles in each grid and the calculation of three-dimensional grid characteristic vectors on the basis of the division of the point cloud data three-dimensional grid; and the image processing submodule is used for realizing the extraction of the image gray level characteristics corresponding to the geometric particles of the three-dimensional grid.
On the basis of the above technical scheme, preferably, the deep learning target detection module further includes a feature-level fusion detection submodule and a data-level fusion detection submodule. The feature-level fusion detection submodule extracts the depth features of the point cloud data and of the plane image, fuses them, and finally outputs a three-dimensional bounding rectangle frame; the data-level fusion detection submodule concatenates the distance feature vector and the gray-level feature vector in series.
On the basis of the above technical solution, preferably, the joint cognition module calculates, based on probability theory, a basic probability assignment function of the detection result from the feature-level detection result and from the data-level detection result respectively, and uses the Dempster combination rule to calculate a new basic probability assignment function that reflects the fused information produced by the combined action of the feature-level and data-level detection results.
Compared with the prior art, the vehicle target joint cognition method and the vehicle target joint cognition system based on the point cloud and the image data have the following beneficial effects that:
the vehicle detection method and system based on deep learning automatically extract high-level, abstract and high-generalization-capability deep features from data, have high flexibility and generalization capability, convert the problem of vehicle target detection in a scene into a second-class classification problem of targets under the framework of a fast regional convolutional neural network, and train a classifier based on deep learning by utilizing a large amount of data of marked vehicle targets in an actual traffic scene to realize the detection and identification of the vehicle targets.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a block diagram of a flow chart of a vehicle target joint cognition method based on point cloud and image data according to the invention;
FIG. 2 is a block diagram schematically illustrating the structure of a vehicle target joint cognition system based on point cloud and image data according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the embodiments. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art without creative effort on the basis of the embodiments of the present invention fall within the scope of the present invention.
As shown in FIG. 1, the vehicle target joint cognition method based on point cloud and image data of the invention comprises the following steps:
step one, acquiring three-dimensional point cloud data from a laser radar and a plane image from an image sensor, performing grid division on the three-dimensional point cloud data to obtain a plurality of voxels of the same size, and calculating the mass point (centroid) of the three-dimensional point cloud in each voxel;
step two, calculating the position of the mass point on the corresponding plane image obtained in the step one and the image information of the position on the plane image according to the geometric mapping relation between the three-dimensional point cloud and the plane image;
step three, calculating the distances between the mass points and all points in the corresponding voxels according to the mass points obtained in the step one and the voxels corresponding to the mass points, and forming distance feature vectors;
step four, mutually fusing the distance characteristic vector obtained in the step three and the image information obtained in the step two to obtain a fusion vector;
step five, inputting the fusion vector obtained in step four into a three-dimensional convolutional neural network, and calculating the corresponding depth features of the fusion vector;
step six, calculating candidate areas around the target by using a region proposal network according to the depth features obtained in step five, and training a classifier that judges whether each candidate area contains a target, the classifier being used for target/non-target judgment and classification;
and step seven, inputting the bounding rectangle information of the candidate targets from the region proposal network into a regression network, and regressing to obtain the bounding rectangle of the target.
In a specific embodiment, step one further comprises: calculating all three-dimensional point cloud points in each voxel grid from the point cloud data using a three-dimensional deep learning framework to obtain the coordinates of the mass point.
In a specific embodiment, the second step further includes: and calculating image coordinates corresponding to particles of each voxel by using a geometric position transformation matrix between the laser radar sensor and the image sensor, wherein the image information comprises RGB (red, green and blue) feature vectors formed by pixel values of the image.
In a specific embodiment, in step two the image information is calculated by a bilinear interpolation algorithm, using the four pixels nearest to the point on the plane image that corresponds to the mass point.
In a specific embodiment, in step four, the fusion vector is vector-concatenated by the distance feature vector and the RGB feature vector.
In a specific embodiment, in the fifth step, the depth features include depth features of the three-dimensional point cloud data after passing through the three-dimensional deep learning model, depth features of the image after passing through the two-dimensional deep learning model, and depth features of the distance features and the RGB grayscale features after passing through three-dimensional deep learning.
In a specific embodiment, a vehicle target joint cognition system based on point cloud and image data is further provided, comprising a data cascade joint module, a deep learning target detection module and a joint cognition module. The data cascade joint module receives three-dimensional point cloud data and image data, performs association fusion on them, and transmits the result to the deep learning target detection module. The deep learning target detection module uses deep learning to detect and identify the three-dimensional point cloud data and image data at the feature level, provides a detection result based on feature-level fusion and a detection result based on data-level fusion, and transmits both to the joint cognition module. The joint cognition module judges the feature-level fusion detection result and the data-level fusion detection result by an evidence theory method and obtains a credibility distribution as output.
In a specific embodiment, the data cascade module further includes a point cloud and image geometric registration sub-module, a point cloud data processing sub-module, and an image data processing sub-module. The point cloud and image geometric registration submodule is used for realizing the mapping from the point cloud to the image; the point cloud data processing submodule is used for realizing the calculation of geometric particles in each grid and the calculation of three-dimensional grid characteristic vectors on the basis of the division of the point cloud data three-dimensional grid; and the image processing submodule is used for realizing the extraction of the image gray level characteristics corresponding to the geometric particles of the three-dimensional grid.
In a specific embodiment, the deep learning target detection module further includes a feature-level fusion detection submodule and a data-level fusion detection submodule. The feature-level fusion detection submodule extracts the depth features of the point cloud data and of the plane image, fuses them, and finally outputs a three-dimensional bounding rectangle frame. The data-level fusion detection submodule first concatenates the distance feature vector and the RGB gray-level vector calculated from the point cloud data, inputs the concatenated features into a three-dimensional deep learning model to extract high-level, abstract, highly discriminative depth features, and finally outputs a three-dimensional bounding rectangle frame through the region proposal network.
In a specific embodiment, the joint cognition module calculates, based on probability theory, a basic probability assignment function of the detection result from the feature-level detection result and from the data-level detection result respectively, and uses the Dempster combination rule to calculate a new basic probability assignment function that reflects the fused information produced by the combined action of the feature-level and data-level detection results.
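The Dempster combination rule used by the joint cognition module can be sketched as follows. Representing focal elements as frozensets of hypotheses (e.g. 'vehicle' vs. 'other') and the basic probability assignments as dicts is an illustrative assumption; the patent does not specify a data representation.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozensets
    of hypotheses to mass) with the Dempster combination rule: multiply
    masses of intersecting focal elements and renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b      # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}
```

For example, combining a feature-level assignment `{vehicle: 0.8, {vehicle, other}: 0.2}` with a data-level assignment `{vehicle: 0.6, {vehicle, other}: 0.4}` concentrates mass on 'vehicle', yielding the credibility distribution that the module outputs.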
Specifically, firstly, a laser radar sensor and a plane image sensor respectively acquire time-synchronized three-dimensional point cloud data and plane image data, grid division is performed on the point cloud data according to a certain size according to the coverage area of the laser radar sensor, a plurality of voxels with the same size are obtained, each voxel comprises a plurality of spatial point clouds, and the particle coordinates of points in the voxels are calculated.
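The voxel division and mass-point (centroid) computation described above can be sketched as follows. The function name, the voxel size, and the use of NumPy are illustrative assumptions, since the patent does not specify an implementation:

```python
import numpy as np

def voxel_centroids(points, voxel_size=0.5):
    """Divide a point cloud (N x 3 array) into equal-size voxels and
    compute the mass point (centroid) of the points in each occupied voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index of each point
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()                              # guard against NumPy shape quirks
    counts = np.bincount(inverse)
    centroids = np.stack(
        [np.bincount(inverse, weights=points[:, d]) / counts for d in range(3)],
        axis=1)
    return keys, inverse, centroids
```

Here `keys` lists the occupied voxel indices, `inverse` maps every point to its voxel, and `centroids` holds one mass point per occupied voxel.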
According to the geometric mapping relation between the laser radar sensor and the plane image sensor, calculating a coordinate point on the plane image corresponding to the mass point in each voxel and a plane image corresponding to the coordinate point, and calculating the RGB characteristic vector of the coordinate point by adopting a bilinear interpolation method according to four closest pixel points of the plane coordinate point.
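The geometric mapping and bilinear interpolation step can be illustrated with a small sketch. The 4x4 lidar-to-camera transform `T` and the 3x3 intrinsic matrix `K` stand in for the patent's geometric position transformation matrix; all names and the interior-point assumption are hypothetical:

```python
import numpy as np

def project_and_sample(mass_point, T, K, image):
    """Map a voxel mass point into the image plane (T: 4x4 lidar-to-camera
    transform, K: 3x3 camera intrinsics) and bilinearly interpolate the RGB
    value from the four nearest pixels. Assumes the point projects inside
    the image interior."""
    p_cam = T @ np.append(mass_point, 1.0)        # to camera coordinates (homogeneous)
    uvw = K @ p_cam[:3]
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]       # sub-pixel image position
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    dx, dy = u - x0, v - y0
    rgb = ((1 - dx) * (1 - dy) * image[y0, x0]
           + dx * (1 - dy) * image[y0, x0 + 1]
           + (1 - dx) * dy * image[y0 + 1, x0]
           + dx * dy * image[y0 + 1, x0 + 1])     # bilinear weights sum to 1
    return np.array([u, v]), rgb
```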
On the premise of obtaining the mass points in the voxels, the distances between other points in each voxel and the mass points in the voxel are calculated, and a distance feature vector is formed.
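A minimal sketch of forming the distance feature vector follows; padding to a fixed length `max_pts` is an illustrative choice not stated in the patent, added so that every voxel yields a vector of the same size:

```python
import numpy as np

def distance_feature(voxel_points, mass_point, max_pts=16):
    """Distances from every point in a voxel to the voxel's mass point,
    sorted descending and zero-padded to a fixed length."""
    d = np.linalg.norm(voxel_points - mass_point, axis=1)
    d = np.sort(d)[::-1][:max_pts]                 # keep the largest max_pts distances
    return np.pad(d, (0, max_pts - len(d)))        # zero-pad sparse voxels
```

Concatenating this vector with the RGB feature vector sampled for the same mass point gives the fusion vector of step four.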
And fusing the gray value vector and the distance characteristic vector of the plane coordinate point corresponding to the mass point by adopting a vector series connection mode to obtain a fusion vector.
And inputting the fusion vector into a three-dimensional convolution neural network, and calculating the corresponding depth feature of the fusion vector.
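The elementary operation the three-dimensional convolutional neural network applies to the voxelized fusion features is a 3-D convolution followed by a non-linearity. A minimal single-channel version, not the patent's actual network, looks like:

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Valid-mode 3-D convolution of one feature volume with one kernel,
    followed by a ReLU: the basic building block of a 3-D CNN layer."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + d, j:j + h, k:k + w] * kernel)
    return np.maximum(out, 0.0)   # ReLU non-linearity
```

A real network would stack many such kernels per layer and learn their weights by backpropagation; this sketch only shows the sliding-window computation itself.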
On the basis of the depth features, candidate areas around the target are calculated using the region proposal network, and a classifier is trained to judge whether each candidate area contains a target, the classifier being used for target/non-target judgment.
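The binary (target / non-target) classifier trained on depth features can be illustrated with a logistic-regression stand-in. The real system would train the region proposal network's classification head end to end, so this is only a sketch of the underlying idea under assumed inputs:

```python
import numpy as np

def train_binary_classifier(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression target / non-target classifier on
    per-region depth feature vectors X (N x F) with labels y (0 or 1)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted target probability
        g = p - y                                # cross-entropy gradient signal
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```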
The bounding rectangle information of the candidate targets from the region proposal network is input into the regression network, and the bounding rectangle of the target is estimated by regression.
In the above embodiment, the data-level combination of the voxelized point cloud and the image makes use of the information provided by multiple sensors, laying a solid foundation for subsequent detection and recognition.
In the above embodiment, data-level fusion detection uses the information provided by multi-source sensors as fully as possible, so a more comprehensive detection result is obtained.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A vehicle target joint cognition method based on point cloud and image data is characterized by comprising the following steps:
step one, acquiring three-dimensional point cloud data from a laser radar and a plane image from an image sensor, performing grid division on the three-dimensional point cloud data to obtain a plurality of voxels of the same size, and calculating the geometric mass point of the three-dimensional point cloud in each voxel;
step two, calculating the position, on the corresponding plane image, of the mass point obtained in step one according to the geometric mapping relation between the three-dimensional point cloud and the plane image, and obtaining, by a geometric position transformation matrix between the laser radar sensor and the image sensor, the image coordinate corresponding to the mass point of each voxel and the image information at that position on the plane image, wherein the image information comprises RGB (red, green and blue) feature vectors formed by pixel values of the image;
step three, calculating the distances between the mass points and all points in the corresponding voxels according to the mass points obtained in the step one and the voxels corresponding to the mass points, and forming distance feature vectors;
step four, mutually fusing the distance characteristic vector obtained in the step three and the image information obtained in the step two to obtain a fusion vector;
inputting the fusion vector obtained in the step four into a three-dimensional convolution neural network, and calculating the corresponding depth characteristic of the fusion vector;
step six, calculating candidate areas around the target by using a region proposal network according to the depth features obtained in step five, and training a classifier that judges whether each candidate area contains a target, the classifier being used for target/non-target judgment and classification;
and step seven, inputting the bounding rectangle information of the candidate targets from the region proposal network into a regression network, and regressing to obtain the bounding rectangle of the target.
2. The vehicle target joint cognition method based on point cloud and image data as claimed in claim 1, wherein step one further comprises: calculating all three-dimensional point cloud points in each voxel grid from the point cloud data using a three-dimensional deep learning framework to obtain the coordinates of the mass point.
3. The vehicle target joint cognition method based on point cloud and image data as claimed in claim 1, wherein in step two the image information is calculated by a bilinear interpolation algorithm, using the four pixels nearest to the point on the plane image that corresponds to the mass point.
4. The vehicle target joint cognition method based on point cloud and image data as claimed in claim 1, wherein in step four the fusion vector is formed by vector concatenation of the distance feature vector and the RGB feature vector.
5. The vehicle target joint cognition method based on point cloud and image data as claimed in claim 1, wherein in step five the depth features comprise depth features of the three-dimensional point cloud data after passing through a three-dimensional deep learning model, depth features of the image after passing through a two-dimensional deep learning model, and depth features of the distance features and the RGB gray-level features after three-dimensional deep learning.
6. A system using the vehicle target joint cognition method based on point cloud and image data according to claim 1, characterized by comprising: a data cascade joint module, a deep learning target detection module and a joint cognition module; wherein the data cascade joint module is used for receiving three-dimensional point cloud data and image data, performing association fusion on the point cloud data and the image data, and transmitting them to the deep learning target detection module; the deep learning target detection module detects and identifies the three-dimensional point cloud data and image data at the feature level using deep learning, provides a detection result based on feature-level fusion and a detection result based on data-level fusion, and transmits the detection results to the joint cognition module; and the joint cognition module judges the feature-level fusion detection result and the data-level fusion detection result by an evidence theory method and obtains a credibility distribution as output.
7. The system of claim 6, wherein the data cascade joint module comprises a point cloud and image geometric registration submodule, a point cloud data processing submodule and an image data processing submodule; the point cloud and image geometric registration submodule realizes the mapping from the point cloud to the image; the point cloud data processing submodule realizes the calculation of the geometric mass point in each grid and the calculation of the three-dimensional grid feature vector on the basis of the three-dimensional grid division of the point cloud data; and the image data processing submodule realizes the extraction of the image gray-level features corresponding to the geometric mass points of the three-dimensional grid.
8. The system of claim 6, wherein the deep learning target detection module further comprises a feature-level fusion detection submodule and a data-level fusion detection submodule, the feature-level fusion detection submodule being used for extracting and fusing the depth features of the point cloud data and the depth features of the plane image and finally outputting a three-dimensional bounding rectangle frame, and the data-level fusion detection submodule being used for concatenating the distance feature vector and the gray-level feature vector in series.
9. The system of claim 6, wherein the joint cognition module calculates, based on probability theory, a basic probability assignment function of the detection result from the feature-level detection result and from the data-level detection result respectively, and uses the Dempster combination rule to calculate a new basic probability assignment function that reflects the fused information produced by their combined action.
CN201910182570.XA 2019-03-11 2019-03-11 Vehicle target joint cognition method and system based on point cloud and image data Active CN110008843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910182570.XA CN110008843B (en) 2019-03-11 2019-03-11 Vehicle target joint cognition method and system based on point cloud and image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910182570.XA CN110008843B (en) 2019-03-11 2019-03-11 Vehicle target joint cognition method and system based on point cloud and image data

Publications (2)

Publication Number Publication Date
CN110008843A CN110008843A (en) 2019-07-12
CN110008843B true CN110008843B (en) 2021-01-05

Family

ID=67166731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910182570.XA Active CN110008843B (en) 2019-03-11 2019-03-11 Vehicle target joint cognition method and system based on point cloud and image data

Country Status (1)

Country Link
CN (1) CN110008843B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348993A (en) * 2019-08-07 2021-02-09 财团法人车辆研究测试中心 Dynamic graph resource establishing method and system capable of providing environment information
CN110458112B (en) * 2019-08-14 2020-11-20 上海眼控科技股份有限公司 Vehicle detection method and device, computer equipment and readable storage medium
CN110738121A (en) * 2019-09-17 2020-01-31 北京科技大学 front vehicle detection method and detection system
CN110781927B (en) * 2019-10-11 2023-05-23 苏州大学 Target detection and classification method based on deep learning under vehicle-road cooperation
CN110827202A (en) * 2019-11-07 2020-02-21 上海眼控科技股份有限公司 Target detection method, target detection device, computer equipment and storage medium
CN111144315A (en) * 2019-12-27 2020-05-12 北京三快在线科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN111209840B (en) * 2019-12-31 2022-02-18 浙江大学 3D target detection method based on multi-sensor data fusion
CN111507938B (en) * 2020-03-10 2023-04-21 博微太赫兹信息科技有限公司 Human body dangerous goods detection method and system
CN111444839B (en) * 2020-03-26 2023-09-08 北京经纬恒润科技股份有限公司 Target detection method and system based on laser radar
CN111582399B (en) * 2020-05-15 2023-07-18 吉林省森祥科技有限公司 Multi-sensor information fusion method for sterilization robot
CN111489556B (en) * 2020-05-20 2022-06-21 上海评驾科技有限公司 Method for judging attaching behavior of commercial vehicle
CN111723721A (en) * 2020-06-15 2020-09-29 中国传媒大学 Three-dimensional target detection method, system and device based on RGB-D
CN114758333B (en) * 2020-12-29 2024-02-13 北京瓦特曼科技有限公司 Identification method and system for unhooking hook of ladle lifted by travelling crane of casting crane
CN113239726B (en) * 2021-04-06 2022-11-08 北京航空航天大学杭州创新研究院 Target detection method and device based on coloring point cloud and electronic equipment
CN113490178B (en) * 2021-06-18 2022-07-19 天津大学 Intelligent networking vehicle multistage cooperative sensing system
CN113688738B (en) * 2021-08-25 2024-04-09 北京交通大学 Target identification system and method based on laser radar point cloud data
CN114463579A (en) * 2022-01-13 2022-05-10 中铁第四勘察设计院集团有限公司 Point cloud classification method and device, electronic equipment and storage medium
CN115082886B (en) * 2022-07-04 2023-09-29 小米汽车科技有限公司 Target detection method, device, storage medium, chip and vehicle

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103984936A (en) * 2014-05-29 2014-08-13 中国航空无线电电子研究所 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition
CN107633532A (en) * 2017-09-22 2018-01-26 武汉中观自动化科技有限公司 A kind of point cloud fusion method and system based on white light scanning instrument
CN108021891A (en) * 2017-12-05 2018-05-11 广州大学 The vehicle environmental recognition methods combined based on deep learning with traditional algorithm and system
CN109345510A (en) * 2018-09-07 2019-02-15 百度在线网络技术(北京)有限公司 Object detecting method, device, equipment, storage medium and vehicle
CN109344813A (en) * 2018-11-28 2019-02-15 北醒(北京)光子科技有限公司 A kind of target identification and scene modeling method and device based on RGBD

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8179393B2 (en) * 2009-02-13 2012-05-15 Harris Corporation Fusion of a 2D electro-optical image and 3D point cloud data for scene interpretation and registration performance assessment

Non-Patent Citations (4)

Title
Frustum PointNets for 3D Object Detection from RGB-D Data; Charles R. Qi et al.; arXiv:1711.08488v2 [cs.CV]; 2018-04-13; pp. 1-15 *
Efficient 3D Vehicle Detection Based on Deep Learning; Huang Hongsheng; Electronics World; March 2018 (No. 3); pp. 26-27 *
Gesture Recognition with Multiple Spatial Feature Fusion; Gao Zhe; Journal of Chinese Computer Systems; July 2017 (No. 7); pp. 1577-1582 *
Image Object Recognition Fusing Depth and Boundary Information; Yuan Yuxin et al.; Computer Applications and Software; April 2017; Vol. 34, No. 4; pp. 183-187, 220 *

Also Published As

Publication number Publication date
CN110008843A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110008843B (en) Vehicle target joint cognition method and system based on point cloud and image data
CN108229366B (en) Deep learning vehicle-mounted obstacle detection method based on radar and image data fusion
Kukkala et al. Advanced driver-assistance systems: A path toward autonomous vehicles
CN110175576B (en) Driving vehicle visual detection method combining laser point cloud data
CN110147706B (en) Obstacle recognition method and device, storage medium, and electronic device
Ahmad et al. Design & implementation of real time autonomous car by using image processing & IoT
Premebida et al. Pedestrian detection combining RGB and dense LIDAR data
CN108780154B (en) 3D point cloud processing method
CN111563415B (en) Binocular vision-based three-dimensional target detection system and method
Jebamikyous et al. Autonomous vehicles perception (avp) using deep learning: Modeling, assessment, and challenges
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
US20200302237A1 (en) System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
Vaquero et al. Deconvolutional networks for point-cloud vehicle detection and tracking in driving scenarios
CN114639115B (en) Human body key point and laser radar fused 3D pedestrian detection method
CN114821507A (en) Multi-sensor fusion vehicle-road cooperative sensing method for automatic driving
Zimmer et al. Infradet3d: Multi-modal 3d object detection based on roadside infrastructure camera and lidar sensors
CN114332494A (en) Three-dimensional target detection and identification method based on multi-source fusion under vehicle-road cooperation scene
CN117808689A (en) Depth complement method based on fusion of millimeter wave radar and camera
CN115147333A (en) Target detection method and device
CN113988197A (en) Multi-camera and multi-laser radar based combined calibration and target fusion detection method
Shi et al. Cobev: Elevating roadside 3d object detection with depth and height complementarity
Jebamikyous et al. Deep learning-based semantic segmentation in autonomous driving
Zhao et al. DHA: Lidar and vision data fusion-based on road object classifier
Budzan Fusion of visual and range images for object extraction
Bhatlawande et al. LIDAR based Detection of Small Vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant