CN111932635B - Image calibration method adopting combination of two-dimensional and three-dimensional vision processing - Google Patents

Image calibration method adopting combination of two-dimensional and three-dimensional vision processing

Info

Publication number
CN111932635B
CN111932635B CN202010791609.0A
Authority
CN
China
Prior art keywords
dimensional
image
images
foreign matter
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010791609.0A
Other languages
Chinese (zh)
Other versions
CN111932635A (en)
Inventor
范生宏
勾志阳
吴树林
王贺
丁立顺
邵江
陈雨辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Puda Ditai Technology Co ltd
Original Assignee
Jiangsu Puda Ditai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Puda Ditai Technology Co ltd filed Critical Jiangsu Puda Ditai Technology Co ltd
Priority to CN202010791609.0A priority Critical patent/CN111932635B/en
Publication of CN111932635A publication Critical patent/CN111932635A/en
Application granted granted Critical
Publication of CN111932635B publication Critical patent/CN111932635B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of airport pavement foreign matter detection, and in particular to an image calibration method combining two-dimensional and three-dimensional vision processing. A two-dimensional image acquisition mechanism and a three-dimensional image acquisition mechanism independently acquire airport pavement images and feed them back to an image computation core after acquisition; the core processes the two-dimensional and the three-dimensional images separately and then fuses them, finally determining the form of any foreign matter and obtaining its coordinate position. By fusing the two-dimensional and three-dimensional pavement foreign matter detection results with the Dempster-Shafer evidence theory, the invention obtains the depth features and contour of pavement foreign matter, improving the accuracy of foreign matter detection and the working efficiency of pavement cleaning staff.

Description

Image calibration method adopting combination of two-dimensional and three-dimensional vision processing
Technical Field
The invention relates to the technical field of airport pavement foreign matter detection, in particular to an image calibration method combining two-dimensional and three-dimensional vision processing.
Background
Airport runway foreign object debris (FOD) is one of the major hazards to flight safety: it can interfere with the extension and retraction of the landing gear and with the normal operation of equipment such as flaps, and, worse still, once foreign matter is sucked into an engine it can damage the engine, causing serious economic loss and even accidents that cost the lives of pilots and passengers. According to Chinese civil aviation statistics, more than 4000 aircraft tires are damaged by airport pavement foreign matter every year, and the direct loss caused by pavement foreign matter worldwide is at least 3 to 4 billion dollars per year, so the detection of airport pavement foreign matter brooks no delay.
At present, China is also developing independent FOD detection systems, based on radar or on visual scanning mechanisms, but on the whole there is still a considerable gap compared with advanced foreign equipment, mainly in recognition accuracy: misjudgment occurs easily, for example a tire brake mark is misjudged as foreign matter, or a small object cannot be recognized at all. This is chiefly because the two-dimensional plane image acquired by a prior-art visual scanning mechanism cannot clearly restore the airport pavement information, cannot provide the depth features of the foreign matter, and makes its contour hard to determine, so false and missed judgments occur frequently. If a foreign matter recognition technique could fuse the two-dimensional plane image with a three-dimensional contour image, the detection precision of foreign matter could be greatly improved, facilitating foreign matter removal by airport staff.
Disclosure of Invention
The invention aims to provide an image calibration method combining two-dimensional and three-dimensional vision processing, which fuses the two-dimensional plane image and the three-dimensional contour image acquired by the image acquisition mechanisms, obtains the depth features and contour of foreign matter, and improves the detection precision of foreign matter.
The above object of the present invention is achieved by the following technical solutions:
the image calibration method adopting the combination of two-dimensional and three-dimensional vision processing comprises the following steps:
step one: the two-dimensional image acquisition mechanism and the three-dimensional image acquisition mechanism are used for respectively and independently acquiring airport pavement images, and the images are respectively fed back to the image calculation core after the acquisition is completed;
step two: the image computing core sequentially performs calibration and preprocessing operations on the images fed back by the two-dimensional image acquisition mechanism;
step three: the image computing core extracts contour gradients of the images processed in the second step;
step four: the image computing core carries out preprocessing operation on the image fed back by the three-dimensional image acquisition mechanism;
step five: the image calculation core carries out fusion operation on the two-dimensional image processed in the third step and the three-dimensional contour image processed in the fourth step, and finally determines the form of the foreign matter and obtains the coordinate position of the foreign matter;
the preprocessing of the three-dimensional contour image in the fourth step comprises the following steps:
s1: carrying out graying treatment on the three-dimensional contour image;
s2: determining the position of a structured light bar in the image processed in the step S1, and extracting the laser center line of the image;
s3: carrying out three-time mean value noise reduction on the extracted laser center line through the weight value, and outputting a depth map;
s4: performing linear stretching transformation on the depth image, and separating foreign matters from the background by adopting a watershed algorithm;
s5: performing depth feature recognition on the image obtained in the step S4 by using an Adaboost recognition algorithm, and generating a depth histogram;
the fusion processing of the two-dimensional image and the three-dimensional image in the fifth step specifically comprises the following steps:
s1: detecting possible foreign matters on the image processed in the third step;
s2: calculating the number of three-dimensional laser light bars passing through each foreign object on each image, and detecting the intersection point of the three-dimensional laser light bars and the two-dimensional foreign object lines;
s3: according to whether each light bar on the intersection point is deformed or not, fusing all 3D laser light bar foreign matter information by adopting a Dempster-Shafer theory, and obtaining a foreign matter detection result of a 3D structure light detection method;
s4: the two-dimensional and three-dimensional pavement foreign matter detection results are fused by using the Dempster-Shafer evidence theory, and the comprehensive decision judgment is carried out on the foreign matter information by adopting a certain decision rule, so that the pavement foreign matter detection results are obtained.
Further, the preprocessing for the two-dimensional image in the second step includes the following steps:
s1: graying and spatial filtering are carried out on the calibrated image;
s2: performing piecewise linear gray scale enhancement processing on the image processed in the step S1;
s3: dividing the image processed in the step S2, and then carrying out morphological image processing;
s4: and (3) carrying out feature extraction and identification on the processed image and outputting a result.
Further, the contour gradient extraction for the two-dimensional image in the third step includes the following steps:
s1: let the iteration coefficient number be t=0, the precision control parameter res=0.1;
s2: further improving the accuracy t=t+1 by using a Forstner operator and an interpolation method;
s3: the distance between the updated corner point and the original corner point is as follows:
s4: if delta is less than or equal to res or t is more than or equal to 10, ending the iteration; otherwise, repeating the steps S2 and S3, and obtaining the angular point position with the precision of 0.1 pixel level after repeated iteration for a plurality of times.
Furthermore, the image computing core is formed by three I7 processors interconnected through a switch and is used for image processing and computation: one I7 processor performs two-dimensional image acquisition, computation and fusion, and the other two I7 processors perform three-dimensional image acquisition.
Further, the two-dimensional image acquisition mechanism consists of at least ten linear array CCD cameras and the three-dimensional image acquisition mechanism consists of at least six area array CCD cameras, the linear array and area array CCD cameras each being connected with their corresponding I7 processor.
Furthermore, the linear array CCD cameras must be moved in a scanning motion while acquiring images, whereas the area array CCD cameras are kept relatively static during acquisition.
Compared with the prior art, the invention provides an image calibration method adopting the combination of two-dimensional and three-dimensional vision processing, which has the following beneficial effects:
1. according to the invention, the two-dimensional and three-dimensional pavement foreign matter detection results are fused by using the Dempster-Shafer evidence theory, and the depth characteristics and the outline of the pavement foreign matter are obtained, so that the accuracy of foreign matter detection is improved, and the working efficiency of pavement cleaning staff is improved;
2. the invention extracts the contour gradient from the two-dimensional plane image, obtaining corner positions with 0.1-pixel precision, which greatly improves foreign matter detection precision and reduces the probability of false and missed judgments.
Drawings
FIG. 1 is a block diagram of an image computation core of the present invention;
FIG. 2 is a schematic diagram of a two-dimensional image processing process according to the present invention;
FIG. 3 is a flow chart of preprocessing a two-dimensional image according to the present invention;
FIG. 4 is a schematic diagram of a three-dimensional image processing process according to the present invention;
fig. 5 is an algorithm formula used in the present invention to extract the laser centerline.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples: referring to fig. 1-5, an image calibration method combining two-dimensional and three-dimensional vision processing is adopted, which comprises the following steps:
step one: the two-dimensional image acquisition mechanism and the three-dimensional image acquisition mechanism independently acquire airport pavement images, and feed the images back to the image computation core after acquisition. The image computing core is formed by three I7 processors interconnected through a switch and is used for image processing and computation: one I7 processor performs two-dimensional image acquisition, computation and fusion, and the other two perform three-dimensional image acquisition. The two-dimensional image acquisition mechanism consists of ten linear array CCD cameras and the three-dimensional image acquisition mechanism consists of six area array CCD cameras, each connected with its corresponding I7 processor; the linear array CCD cameras are all gigabit-Ethernet cameras, interconnected and attached to the same I7 processor, while the area array CCD cameras are all USB 3.0 cameras, three of them connected to each of the remaining two I7 processors.
The linear array CCD cameras must be moved in a scanning motion while acquiring images, whereas the area array CCD cameras are kept relatively static; the linear array CCD camera used for two-dimensional acquisition offers fast acquisition and simple morphological processing, and can restore pavement characteristics such as texture, appearance and gloss that are lost during three-dimensional acquisition.
Step two: the image computing core sequentially performs calibration and preprocessing operations on the images fed back by the two-dimensional image acquisition mechanism; the preprocessing for the two-dimensional image in the second step comprises the following steps:
s1: graying and spatial filtering are carried out on the calibrated image;
s2: performing piecewise linear gray scale enhancement processing on the image processed in the step S1;
s3: dividing the image processed in the step S2, and then carrying out morphological image processing;
s4: and (3) carrying out feature extraction and identification on the processed image and outputting a result.
The spatial filtering treatment superposes all the noisy images and takes their average, removing noise and optimizing the image. In addition, brightness information is often lost in the graying process, and such distortion is unfavorable for subsequent processing, so the image must be enhanced by gray-level interpolation, that is, a complete information chain is re-established by inference from the incomplete information points.
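The two-dimensional preprocessing chain above (frame averaging as the spatial filtering step, then piecewise linear gray-scale enhancement) can be sketched in Python with NumPy alone; the breakpoints r1, s1, r2, s2 are illustrative values, not taken from the patent:

```python
import numpy as np

def average_frames(frames):
    # Spatial filtering as described: superpose all noisy frames and average
    # them, which suppresses zero-mean noise.
    return np.mean(np.stack(frames, axis=0), axis=0)

def piecewise_linear_stretch(img, r1, s1, r2, s2):
    # Piecewise linear gray-scale enhancement: map [0, r1] -> [0, s1],
    # [r1, r2] -> [s1, s2], and [r2, 255] -> [s2, 255].
    img = img.astype(np.float64)
    out = np.empty_like(img)
    lo = img < r1
    mid = (img >= r1) & (img <= r2)
    hi = img > r2
    out[lo] = img[lo] * (s1 / r1)
    out[mid] = s1 + (img[mid] - r1) * (s2 - s1) / (r2 - r1)
    out[hi] = s2 + (img[hi] - r2) * (255 - s2) / (255 - r2)
    return out.clip(0, 255).astype(np.uint8)
```

Choosing s1 < r1 and s2 > r2 compresses the dark and bright extremes while expanding the mid-range grays, which is the usual goal of this enhancement before segmentation.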
Step three: the image computing core extracts contour gradients of the images processed in the second step; the contour gradient extraction for the two-dimensional image in the third step comprises the following steps:
s1: let the iteration coefficient number be t=0, the precision control parameter res=0.1;
s2: further improving the accuracy t=t+1 by using a Forstner operator and an interpolation method;
s3: the distance between the updated corner point and the original corner point is as follows:
s4: if delta is less than or equal to res or t is more than or equal to 10, ending the iteration; otherwise, repeating the steps S2 and S3, and obtaining the angular point position with the precision of 0.1 pixel level after repeated iteration for a plurality of times.
A corner point is a point that differs markedly from its surrounding neighbors, indicating a position of sharp gray-level change in the two-dimensional image, i.e. a position of large curvature change on a curve; computing the corner positions on the image therefore roughly determines the contour of the object.
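The iteration control of step three can be sketched as follows; `refine_step` is a placeholder for the Forstner-operator-plus-interpolation update, whose details the patent does not give:

```python
import math

def refine_corner(corner, refine_step):
    # Iteration control from step three: repeat the sub-pixel refinement
    # until the corner moves by at most res = 0.1 px or 10 iterations pass.
    # `refine_step(x, y) -> (x', y')` stands in for the Forstner update.
    t, res = 0, 0.1
    x, y = corner
    while t < 10:
        t += 1
        nx, ny = refine_step(x, y)
        delta = math.hypot(nx - x, ny - y)  # distance moved this iteration
        x, y = nx, ny
        if delta <= res:
            break
    return (x, y), t
```

With any update whose step size shrinks geometrically, the loop stops well before the hard cap of 10 iterations, yielding the 0.1-pixel precision stated in the text.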
Step four: the image computing core carries out preprocessing operation on the image fed back by the three-dimensional image acquisition mechanism; the preprocessing of the three-dimensional contour image in the fourth step comprises the following steps:
s1: carrying out graying treatment on the three-dimensional contour image;
s2: determining the position of a structured light bar in the image processed in the step S1, and extracting the laser center line of the image;
s3: carrying out three-time mean value noise reduction on the extracted laser center line through the weight value, and outputting a depth map;
s4: performing linear stretching transformation on the depth image, and separating foreign matters from the background by adopting a watershed algorithm;
s5: and (3) carrying out depth feature recognition on the image obtained in the step (S4) by using an Adaboost recognition algorithm, and generating a depth histogram.
The extraction of the laser center line mainly comprises six steps: light bar region positioning, light bar center extraction, light line center inflection point extraction, feature point topological coordinate determination, light bar point extraction, and sub-pixel computation of the light bar center. The sub-pixel light bar center is computed with the gray-squared weighted centroid method; the formula is shown in fig. 5, where gray(x, y) is the gray value at position (x, y) and the transverse coordinate x also serves as the weight. Once the light bar position is roughly determined, this method gives a clearly superior extraction result when the target and the background differ strongly in gray level.
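A minimal sketch of the gray-squared weighted centroid of fig. 5, applied along a single image column (the function name is ours):

```python
import numpy as np

def stripe_center_subpixel(column):
    # Gray-squared weighted centroid: x_c = sum(x * gray(x)^2) / sum(gray(x)^2),
    # where x runs over pixel indices in one column crossing the light bar.
    # Squaring the gray values sharpens the weighting toward the bright stripe.
    g2 = column.astype(np.float64) ** 2
    x = np.arange(len(column))
    return (x * g2).sum() / g2.sum()
```

For a bright stripe on a dark background the squared weights concentrate on the stripe pixels, so the returned coordinate lands on the stripe center with sub-pixel resolution.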
Step five: the image calculation core carries out fusion operation on the two-dimensional image processed in the third step and the three-dimensional contour image processed in the fourth step, and finally determines the form of the foreign matter and obtains the coordinate position of the foreign matter; the fusion processing of the two-dimensional image and the three-dimensional image in the fifth step specifically comprises the following steps:
s1: detecting possible foreign matters on the image processed in the third step;
s2: calculating the number of three-dimensional laser light bars passing through each foreign object on each image, and detecting the intersection point of the three-dimensional laser light bars and the two-dimensional foreign object lines;
s3: according to whether each light bar on the intersection point is deformed or not, fusing all 3D laser light bar foreign matter information by adopting a Dempster-Shafer theory, and obtaining a foreign matter detection result of a 3D structure light detection method;
s4: the two-dimensional and three-dimensional pavement foreign matter detection results are fused by using the Dempster-Shafer evidence theory, and the comprehensive decision judgment is carried out on the foreign matter information by adopting a certain decision rule, so that the pavement foreign matter detection results are obtained.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. An image calibration method combining two-dimensional and three-dimensional vision processing, characterized by comprising the following steps:
step one: the two-dimensional image acquisition mechanism and the three-dimensional image acquisition mechanism are used for respectively and independently acquiring airport pavement images, and the images are respectively fed back to the image calculation core after the acquisition is completed;
step two: the image computing core sequentially performs calibration and preprocessing operations on the images fed back by the two-dimensional image acquisition mechanism;
step three: the image computing core extracts contour gradients of the images processed in the second step;
step four: the image computing core carries out preprocessing operation on the image fed back by the three-dimensional image acquisition mechanism;
step five: the image calculation core carries out fusion operation on the two-dimensional image processed in the third step and the three-dimensional contour image processed in the fourth step, and finally determines the form of the foreign matter and obtains the coordinate position of the foreign matter;
the preprocessing of the three-dimensional contour image in the fourth step comprises the following steps:
s1: carrying out graying treatment on the three-dimensional contour image;
s2: determining the position of a structured light bar in the image processed in the step S1, and extracting the laser center line of the image;
s3: carrying out three-time mean value noise reduction on the extracted laser center line through the weight value, and outputting a depth map;
s4: performing linear stretching transformation on the depth image, and separating foreign matters from the background by adopting a watershed algorithm;
s5: performing depth feature recognition on the image obtained in the step S4 by using an Adaboost recognition algorithm, and generating a depth histogram;
the fusion processing of the two-dimensional image and the three-dimensional image in the fifth step specifically comprises the following steps:
s1: detecting possible foreign matters on the image processed in the third step;
s2: calculating the number of three-dimensional laser light bars passing through each foreign object on each image, and detecting the intersection point of the three-dimensional laser light bars and the two-dimensional foreign object lines;
s3: according to whether each light bar on the intersection point is deformed or not, fusing all 3D laser light bar foreign matter information by adopting a Dempster-Shafer theory, and obtaining a foreign matter detection result of a 3D structure light detection method;
s4: the two-dimensional and three-dimensional pavement foreign matter detection results are fused by using the Dempster-Shafer evidence theory, and the comprehensive decision judgment is carried out on the foreign matter information by adopting a certain decision rule, so that the pavement foreign matter detection results are obtained.
2. The method for calibrating an image by combining two-dimensional and three-dimensional vision processing according to claim 1, wherein: the preprocessing for the two-dimensional image in the second step comprises the following steps:
s1: graying and spatial filtering are carried out on the calibrated image;
s2: performing piecewise linear gray scale enhancement processing on the image processed in the step S1;
s3: dividing the image processed in the step S2, and then carrying out morphological image processing;
s4: and (3) carrying out feature extraction and identification on the processed image and outputting a result.
3. The method for calibrating an image by combining two-dimensional and three-dimensional vision processing according to claim 1, wherein: the contour gradient extraction for the two-dimensional image in the third step comprises the following steps:
s1: let the iteration coefficient number be t=0, the precision control parameter res=0.1;
s2: further improving the accuracy t=t+1 by using a Forstner operator and an interpolation method;
s3: the distance between the updated corner point and the original corner point is as follows:
s4: if delta is less than or equal to res or t is more than or equal to 10, ending the iteration; otherwise, repeating the steps S2 and S3, and obtaining the angular point position with the precision of 0.1 pixel level after repeated iteration for a plurality of times.
4. The method for calibrating an image by combining two-dimensional and three-dimensional vision processing according to claim 1, wherein: the image computing core is formed by three I7 processors interconnected through a switch and is used for image processing and computation, one I7 processor being used for two-dimensional image acquisition, computation and fusion, and the other two I7 processors for three-dimensional image acquisition.
5. The method for calibrating an image using a combination of two-dimensional and three-dimensional vision processing according to claim 4, wherein: the two-dimensional image acquisition mechanism consists of at least ten linear array CCD cameras, the three-dimensional image acquisition mechanism consists of at least six area array CCD cameras, and the linear array CCD cameras and the area array CCD cameras are connected with corresponding I7 processors.
6. The method for calibrating an image using a combination of two-dimensional and three-dimensional vision processing according to claim 5, wherein: the linear array CCD camera needs to cooperate with scanning motion when acquiring images, and the area array CCD camera is kept relatively static when acquiring images.
CN202010791609.0A 2020-08-07 2020-08-07 Image calibration method adopting combination of two-dimensional and three-dimensional vision processing Active CN111932635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010791609.0A CN111932635B (en) 2020-08-07 2020-08-07 Image calibration method adopting combination of two-dimensional and three-dimensional vision processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010791609.0A CN111932635B (en) 2020-08-07 2020-08-07 Image calibration method adopting combination of two-dimensional and three-dimensional vision processing

Publications (2)

Publication Number Publication Date
CN111932635A CN111932635A (en) 2020-11-13
CN111932635B true CN111932635B (en) 2023-11-17

Family

ID=73307969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010791609.0A Active CN111932635B (en) 2020-08-07 2020-08-07 Image calibration method adopting combination of two-dimensional and three-dimensional vision processing

Country Status (1)

Country Link
CN (1) CN111932635B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113000910B (en) * 2021-03-01 2023-01-20 创新奇智(上海)科技有限公司 Hub machining auxiliary method and device, storage medium, control device and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102706880A (en) * 2012-06-26 2012-10-03 Harbin Institute of Technology Road information extraction device based on two-dimensional images and depth information, and road crack detection method using the same
US8326084B1 (en) * 2003-11-05 2012-12-04 Cognex Technology And Investment Corporation System and method of auto-exposure control for image acquisition hardware using three dimensional information
CN105654732A (en) * 2016-03-03 2016-06-08 Shanghai Tujia Information Technology Co., Ltd. Road monitoring system and method based on depth images
CN107578464A (en) * 2017-06-30 2018-01-12 Changsha Xiangji Haidun Technology Co., Ltd. Three-dimensional profile measurement method for conveyor-belt workpieces based on line-structured laser light
CN109166125A (en) * 2018-07-06 2019-01-08 Chang'an University Three-dimensional depth-image segmentation algorithm based on a multi-edge fusion mechanism
US10205929B1 (en) * 2015-07-08 2019-02-12 Vuu Technologies LLC Methods and systems for creating real-time three-dimensional (3D) objects from two-dimensional (2D) images
CN110456377A (en) * 2019-08-15 2019-11-15 Unit 63921 of the Chinese People's Liberation Army Satellite-attack foreign-object detection method and system based on three-dimensional lidar
CN111476767A (en) * 2020-04-02 2020-07-31 Nanchang Institute of Technology High-speed rail fastener defect identification method based on heterogeneous image fusion


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Automatic Extraction of Railroad Centerlines from Mobile Laser Scanning Data; Sander Oude Elberink et al.; Remote Sensing; 1-19 *
Training convolutional neural network from multi-domain contour images for 3D shape retrieval; Zongxiao Zhu et al.; Pattern Recognition Letters; 1-8 *
Surface characteristics and stochastic models of corroded structural steel in general atmospheric environments; Wang Youde et al.; Acta Metallurgica Sinica; Vol. 56, No. 2; 148-160 *
Research on robot visual recognition and localization based on multimodal information; Wei Yufeng et al.; Opto-Electronic Engineering; Vol. 45, No. 2; 170650-1-12 *
Planar boundary extraction algorithm for building point clouds based on depth images; Zhao Lingna et al.; Journal of Geomatics; Vol. 42, No. 3; 48-52 *
Research on vision-based motion control of driverless vehicles; Liu Hongxing; China Master's Theses Full-text Database (Engineering Science and Technology II); C035-18 *

Also Published As

Publication number Publication date
CN111932635A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN103047943B (en) Door skin geometry detection method based on single-projection coded structured light
CN103714541B (en) Method for identifying and positioning building through mountain body contour area constraint
CN104299260B (en) Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration
CN105260737B (en) Automatic plane extraction method for laser scanning data fusing multi-scale features
CN106846340B (en) Light-stripe boundary extraction method based on non-fixed feature points
CN105574527A (en) Quick object detection method based on local feature learning
CN101398886A (en) Rapid three-dimensional face recognition method based on binocular passive stereo vision
CN112825192B (en) Object identification system and method based on machine learning
CN109559324A (en) Object contour detection method in linear-array images
CN113011388B (en) Vehicle outer contour size detection method based on license plate and lane line
CN110222661B (en) Feature extraction method for moving target identification and tracking
Li et al. Road markings extraction based on threshold segmentation
CN117058646B (en) Complex road target detection method based on multi-modal fusion of bird's-eye views
CN112683228A (en) Monocular camera ranging method and device
CN111932635B (en) Image calibration method adopting combination of two-dimensional and three-dimensional vision processing
CN113902709B (en) Real-time surface flatness analysis method for guiding aircraft composite skin repair
CN105335751B (en) Vision-based nose-wheel localization method for parked aircraft
Jing et al. Island road centerline extraction based on a multiscale united feature
CN117710458A (en) Relative position measurement method and system for carrier-based aircraft landing based on binocular vision
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
CN106355576A (en) SAR image registration method based on MRF image segmentation algorithm
CN116310552A (en) Three-dimensional target detection method based on multi-scale feature fusion
CN112907574B (en) Landing point searching method, device and system of aircraft and storage medium
CN114565629A (en) Large skin edge defect detection method based on multi-scale neighborhood
CN111553876B (en) Pneumatic optical sight error image processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant