CN108875844A - Method and system for matching lidar images and camera images - Google Patents

Method and system for matching lidar images and camera images

Info

Publication number
CN108875844A
CN108875844A CN201810801137.5A CN201810801137A CN 108875844 A
Authority
CN
China
Prior art keywords
layer
image
camera image
lidar image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810801137.5A
Other languages
Chinese (zh)
Inventor
冯梓航
孙辉
周坤
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Suzhou Automotive Research Institute of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Automotive Research Institute of Tsinghua University filed Critical Suzhou Automotive Research Institute of Tsinghua University
Priority to CN201810801137.5A priority Critical patent/CN108875844A/en
Publication of CN108875844A publication Critical patent/CN108875844A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for matching a lidar image with a camera image, including: using a convolutional neural network to learn feature descriptors from the lidar image and the camera image; comparing the distances between the feature descriptors extracted by the convolutional neural network to determine the class and pose of the target; and projecting the lidar image onto the camera image using the pose parameters extracted by the convolutional neural network, to obtain a fused image of the lidar image and the camera image. The method removes the dependence on target surface texture, reduces the influence of the environment, and improves the accuracy of matching between lidar images and camera images.

Description

Method and system for matching lidar images and camera images
Technical field
The present invention relates to the technical field of image matching between lidar and cameras, and in particular to a method and system for matching lidar images and camera images.
Background technique
In recent years, with the rapid development of computer technology, artificial intelligence has become a very active topic, and computer vision is at the forefront of artificial intelligence. In the field of computer vision, a single sensor cannot meet application demands; multi-sensor fusion is necessary, and the fusion of lidar and camera is a typical example. Lidar is a range sensor: by measuring the time of flight or phase difference between the emitted and received signals, it obtains the distance between the target and the sensor, but it cannot obtain the color of the target, whereas a camera is just the opposite. Fusion of lidar and camera has very wide applications, for example in intelligent vehicles and intelligent robots.
As fusion applications of lidar and camera become more common, matching between lidar images and camera images becomes particularly important. Traditional matching algorithms for lidar and camera images mainly include sparse-feature methods, dense image methods, and template matching methods. First, traditional pose measurement methods are mostly based on geometric features, and geometry-based methods depend to some extent on the texture of the target surface. Second, in real environments, camera imaging quality degrades under factors such as illumination, and geometry-based methods are easily affected. Finally, under partial occlusion the object appears deformed, and geometry-based methods may not be able to handle pose measurement in such cases. Hence the present invention.
Summary of the invention
In order to solve the above technical problems, the present invention provides a method and system for matching lidar images and camera images. A convolutional neural network algorithm is used to learn feature descriptors from the lidar image and the camera image; the distances between descriptors are compared to achieve target recognition and pose measurement, and image matching is performed after the pose parameters of the target are obtained. The method removes the dependence on target surface texture, reduces the influence of the environment, and improves the accuracy of matching between lidar images and camera images.
The technical solution of the present invention is as follows:
A method for matching a lidar image with a camera image includes the following steps:
S01: using a convolutional neural network to learn feature descriptors from the lidar image and the camera image;
S02: comparing the distances between the feature descriptors extracted by the convolutional neural network to determine the class and pose of the target;
S03: projecting the lidar image onto the camera image using the pose parameters extracted by the convolutional neural network, to obtain a fused image of the lidar image and the camera image.
In a preferred technical solution, before step S01 the method further includes:
calibrating and rectifying the camera image; and correcting each scan line (beam) of the lidar and performing lidar image completion, so that the reflectance value distributions of the scan lines are identical.
In a preferred technical solution, the first and second layers of the convolutional neural network are convolutional layers, and the third, fourth, and fifth layers are fully connected layers, with ReLU, pooling, and dropout layers also attached; the first-layer convolution kernels are of size 11*5*3, 64 in total, and the second-layer kernels are of size 5*3*64, 200 in total; the hidden fully connected layers have 1024 and 2048 neurons.
The invention also discloses a system for matching a lidar image with a camera image, including:
an information acquisition module, which acquires images, including the lidar image and the camera image, using a camera and a lidar; and
an image processing module, which uses a convolutional neural network to learn feature descriptors from the lidar image and the camera image, compares the distances between the feature descriptors extracted by the convolutional neural network to determine the class and pose of the target, and projects the lidar image onto the camera image using the pose parameters extracted by the convolutional neural network, to obtain a fused image of the lidar image and the camera image.
In a preferred technical solution, the system further includes:
an image preprocessing module, which calibrates and rectifies the camera image, and corrects each scan line (beam) of the lidar and performs lidar image completion so that the reflectance value distributions of the scan lines are identical.
In a preferred technical solution, the first and second layers of the convolutional neural network are convolutional layers, and the third, fourth, and fifth layers are fully connected layers, with ReLU, pooling, and dropout layers also attached; the first-layer convolution kernels are of size 11*5*3, 64 in total, and the second-layer kernels are of size 5*3*64, 200 in total; the hidden fully connected layers have 1024 and 2048 neurons.
Compared with the prior art, the advantages of the present invention are:
1) Feature descriptors are learned from the lidar image and the camera image by a convolutional neural network, which removes the dependence of the lidar-camera image matching system on the texture of the target surface and reduces the influence of the environment on the matching system.
2) Target recognition and pose estimation are performed on a whole-image, template-based description, which improves accuracy.
Brief description of the drawings
The invention is further described below with reference to the accompanying drawings and embodiments:
Fig. 1 is a diagram of the software and hardware modules of the system for matching lidar images and camera images according to the present invention;
Fig. 2 is a hardware block diagram of the system for matching lidar images and camera images according to the present invention;
Fig. 3 is a flowchart of the method for matching lidar images and camera images according to the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below in combination with specific embodiments and with reference to the accompanying drawings. It should be understood that these descriptions are merely illustrative and are not intended to limit the scope of the present invention. In addition, in the following description, descriptions of well-known structures and technologies are omitted to avoid unnecessarily obscuring the concepts of the present invention.
Embodiment:
Preferred embodiments of the present invention are described further below with reference to the accompanying drawings.
As shown in Figs. 1, 2, and 3, the system for matching lidar images and camera images according to the present invention is mainly based on a template-based method and a deep-learning algorithm. From the images obtained by the lidar and the camera, a convolutional neural network algorithm is used to learn feature descriptors from the lidar image and the camera image; the distances between descriptors are compared to achieve target pose measurement, and image matching is performed after the pose parameters of the target are obtained.
The hardware system includes an image acquisition device and a system processor; the software system includes an information acquisition and transmission module and an image processing module. The hardware functions are as follows:
Image acquisition device: typically a monocular wide-angle camera and a 32-line mechanical lidar, which capture image information of the area in front of and around the system and transmit it to the system processor. The camera, the lidar, and the associated peripheral circuits are fixed on the camera housing and base.
System processor: typically an ARM processor, which runs the image preprocessing algorithm and the image pose estimation algorithm and provides information reception, storage, and transfer functions.
The software functions are as follows:
Information acquisition and transmission module: performs image acquisition and transmission using the monocular camera, the lidar, and their transmission circuits.
Image processing module: a convolutional neural network algorithm performs pose estimation on the overlapping region of the lidar image and the camera image, and the lidar image and the camera image are fused using the model. The specific pose estimation algorithm is implemented as follows:
1) Image preprocessing. First, the camera image is calibrated and rectified to reduce the influence of distortion; second, each scan line (beam) of the lidar is corrected and lidar image completion is performed, so that the reflectance value distributions of the scan lines are identical.
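The camera-rectification part of step 1) can be sketched as follows. This is a minimal illustration only: the single radial coefficient k1, its value, and the sample point are assumptions, since the patent does not specify the camera's distortion model; a real system would obtain the intrinsics and distortion coefficients from checkerboard calibration.

```python
# Hypothetical sketch: approximate removal of radial distortion from a point
# in normalized image coordinates, using one assumed coefficient k1.
k1 = -0.2  # assumed radial distortion coefficient (not from the patent)

def undistort(xd, yd, k1):
    """One-step approximate inverse of the model x_d = x_u * (1 + k1 * r^2)."""
    r2 = xd * xd + yd * yd        # squared radius of the distorted point
    s = 1.0 + k1 * r2             # distortion scale factor
    return xd / s, yd / s         # divide out the scale to approximate x_u

xu, yu = undistort(0.30, 0.20, k1)
print(xu, yu)
```

A single division is only a first-order approximation; production code typically iterates the inversion or uses a calibrated lookup, but the idea of removing lens distortion before matching is the same.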
2) Constructing the convolutional neural network. The convolutional neural network has five main layers: the first two are convolutional layers and the last three are fully connected layers, with ReLU, pooling, and dropout layers to prevent overfitting of the network. The first-layer convolution kernels are of size 11*5*3, 64 in total, and the second-layer kernels are of size 5*3*64, 200 in total; the hidden fully connected layers have 1024 and 2048 neurons.
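The layer layout above can be written down as a small bookkeeping sketch. The kernel shapes (11*5*3 and 5*3*64) and kernel counts (64 and 200) come from the text; the input patch size, strides, pooling windows, and the final descriptor width are not given in the patent, so they are left out or marked as assumptions.

```python
# Hypothetical bookkeeping for the five-layer network described in the patent.
layers = [
    {"type": "conv", "kernel": (11, 5, 3), "count": 64},   # conv1 + ReLU + pooling
    {"type": "conv", "kernel": (5, 3, 64), "count": 200},  # conv2 + ReLU + pooling
    {"type": "fc", "out": 1024},                           # hidden FC + dropout
    {"type": "fc", "out": 2048},                           # hidden FC + dropout
    {"type": "fc", "out": None},                           # descriptor width: not stated
]

def conv_params(kernel, count):
    """Weight + bias count for one conv layer (kernel = height, width, in-channels)."""
    kh, kw, cin = kernel
    return (kh * kw * cin + 1) * count

p1 = conv_params((11, 5, 3), 64)    # 64 kernels over the 3-channel input
p2 = conv_params((5, 3, 64), 200)   # 200 kernels over the 64 conv1 feature maps
print(p1, p2)
```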
3) Training the convolutional neural network model to obtain descriptors. A certain number of lidar and camera images are randomly input each time, and the convolutional neural network model is trained to obtain descriptors, such that the distance between descriptors of different objects is large and the distance between matching descriptors is small, from which the pose of the target is estimated.
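The distance comparison used in step 3) (and in step S02) can be illustrated with a toy nearest-template lookup. The descriptor length, the template values, and the class/pose labels are all invented for the example; the patent only requires that descriptors of different objects lie far apart while matching descriptors lie close together.

```python
import numpy as np

# Hypothetical template descriptors keyed by (class, pose); values are invented.
templates = {
    ("car", "pose_A"):     np.array([1.0, 0.0, 0.0, 0.0]),
    ("car", "pose_B"):     np.array([0.9, 0.4, 0.0, 0.0]),
    ("cyclist", "pose_A"): np.array([0.0, 0.0, 1.0, 0.0]),
}

def match(descriptor):
    """Return the (class, pose) whose template is closest in Euclidean distance."""
    return min(templates, key=lambda k: np.linalg.norm(templates[k] - descriptor))

query = np.array([0.95, 0.05, 0.0, 0.0])  # descriptor extracted from the lidar image
result = match(query)
print(result)
```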
4) Using the pose obtained from the convolutional model (6 degrees of freedom), the lidar image is projected onto the camera image, yielding the fused image of the lidar and the camera.
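Step 4) amounts to a rigid transform with the estimated 6-DOF pose followed by a pinhole projection. The sketch below assumes an intrinsic matrix K and a pose (R, t); all numeric values are invented, since the patent does not give calibration parameters.

```python
import numpy as np

# Hypothetical intrinsics and pose; every number here is invented for illustration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # identity rotation for the example
t = np.array([0.0, 0.0, 0.1])    # small forward offset (metres)

points_lidar = np.array([[0.0, 0.0,  5.0],   # a point 5 m straight ahead
                         [1.0, 0.5, 10.0]])  # a point off to the side

def project(points, R, t, K):
    """Transform lidar points into the camera frame and project with K."""
    cam = points @ R.T + t           # rigid transform (6-DOF pose)
    uv = cam @ K.T                   # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide -> pixel coordinates

pixels = project(points_lidar, R, t, K)
print(pixels)
```

The resulting pixel positions are where the lidar returns land in the camera image; coloring them with the image values (or overlaying depth on the image) gives the fused image the patent describes.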
It should be understood that the above specific embodiments of the present invention are used only to exemplarily illustrate or explain the principles of the present invention and do not limit the present invention. Therefore, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present invention shall be included in the protection scope of the present invention. In addition, the appended claims of the present invention are intended to cover all variations and modifications that fall within the scope and boundaries of the appended claims, or the equivalents of such scope and boundaries.

Claims (6)

1. A method for matching a lidar image with a camera image, characterized by comprising the following steps:
S01: using a convolutional neural network to learn feature descriptors from the lidar image and the camera image;
S02: comparing the distances between the feature descriptors extracted by the convolutional neural network to determine the class and pose of the target;
S03: projecting the lidar image onto the camera image using the pose parameters extracted by the convolutional neural network, to obtain a fused image of the lidar image and the camera image.
2. The method for matching a lidar image with a camera image according to claim 1, characterized in that before step S01 the method further comprises:
calibrating and rectifying the camera image; and correcting each scan line (beam) of the lidar and performing lidar image completion, so that the reflectance value distributions of the scan lines are identical.
3. The method for matching a lidar image with a camera image according to claim 1, characterized in that the first and second layers of the convolutional neural network are convolutional layers, and the third, fourth, and fifth layers are fully connected layers, with ReLU, pooling, and dropout layers also attached; the first-layer convolution kernels are of size 11*5*3, 64 in total, and the second-layer kernels are of size 5*3*64, 200 in total; the hidden fully connected layers have 1024 and 2048 neurons.
4. A system for matching a lidar image with a camera image, characterized by comprising:
an information acquisition module, which acquires images, including the lidar image and the camera image, using a camera and a lidar; and
an image processing module, which uses a convolutional neural network to learn feature descriptors from the lidar image and the camera image, compares the distances between the feature descriptors extracted by the convolutional neural network to determine the class and pose of the target, and projects the lidar image onto the camera image using the pose parameters extracted by the convolutional neural network, to obtain a fused image of the lidar image and the camera image.
5. The system for matching a lidar image with a camera image according to claim 4, characterized by further comprising:
an image preprocessing module, which calibrates and rectifies the camera image, and corrects each scan line (beam) of the lidar and performs lidar image completion so that the reflectance value distributions of the scan lines are identical.
6. The system for matching a lidar image with a camera image according to claim 4, characterized in that the first and second layers of the convolutional neural network are convolutional layers, and the third, fourth, and fifth layers are fully connected layers, with ReLU, pooling, and dropout layers also attached; the first-layer convolution kernels are of size 11*5*3, 64 in total, and the second-layer kernels are of size 5*3*64, 200 in total; the hidden fully connected layers have 1024 and 2048 neurons.
CN201810801137.5A 2018-07-20 2018-07-20 Method and system for matching lidar images and camera images Pending CN108875844A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810801137.5A CN108875844A (en) 2018-07-20 2018-07-20 Method and system for matching lidar images and camera images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810801137.5A CN108875844A (en) 2018-07-20 2018-07-20 Method and system for matching lidar images and camera images

Publications (1)

Publication Number Publication Date
CN108875844A true CN108875844A (en) 2018-11-23

Family

ID=64303846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810801137.5A Pending CN108875844A (en) 2018-07-20 2018-07-20 Method and system for matching lidar images and camera images

Country Status (1)

Country Link
CN (1) CN108875844A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780484A (en) * 2017-01-11 2017-05-31 山东大学 Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634279A (en) * 2018-12-17 2019-04-16 武汉科技大学 Object positioning method based on laser radar and monocular vision
CN109634279B (en) * 2018-12-17 2022-08-12 瞿卫新 Object positioning method based on laser radar and monocular vision
CN112085801A (en) * 2020-09-08 2020-12-15 清华大学苏州汽车研究院(吴江) Calibration method for three-dimensional point cloud and two-dimensional image fusion based on neural network
CN112085801B (en) * 2020-09-08 2024-03-19 清华大学苏州汽车研究院(吴江) Calibration method for fusion of three-dimensional point cloud and two-dimensional image based on neural network

Similar Documents

Publication Publication Date Title
Jörgensen et al. Monocular 3d object detection and box fitting trained end-to-end using intersection-over-union loss
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
WO2020240284A3 (en) Vehicle environment modeling with cameras
CN110874864A (en) Method, device, electronic equipment and system for obtaining three-dimensional model of object
WO2021071995A1 (en) Systems and methods for surface normals sensing with polarization
CN106934809A Binocular-vision-based rapid docking navigation method for autonomous aerial refueling of unmanned aerial vehicles
WO2021139176A1 (en) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
CN109359514B (en) DeskVR-oriented gesture tracking and recognition combined strategy method
CN109858437B (en) Automatic luggage volume classification method based on generation query network
CN113743391A (en) Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot
CN108764080B (en) Unmanned aerial vehicle visual obstacle avoidance method based on point cloud space binarization
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
JP2021531601A (en) Neural network training, line-of-sight detection methods and devices, and electronic devices
CN115032648B (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN113228103A (en) Target tracking method, device, unmanned aerial vehicle, system and readable storage medium
CN110647782A (en) Three-dimensional face reconstruction and multi-pose face recognition method and device
CN114761997A (en) Target detection method, terminal device and medium
CN111179330A (en) Binocular vision scene depth estimation method based on convolutional neural network
CN108875844A (en) The matching process and system of lidar image and camera review
CN113221953B (en) Target attitude identification system and method based on example segmentation and binocular depth estimation
JP7498404B2 (en) Apparatus, method and program for estimating three-dimensional posture of subject
WO2021217450A1 (en) Target tracking method and device, and storage medium
CN114092564B (en) External parameter calibration method, system, terminal and medium for non-overlapping vision multi-camera system
Ren et al. Application of stereo vision technology in 3D reconstruction of traffic objects
CN112949769B (en) Target detection method and target detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181123
