CN110310371B - Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Info

Publication number
CN110310371B
CN110310371B CN201910447327.6A
Authority
CN
China
Prior art keywords
image
pixel point
pixel
sequence
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910447327.6A
Other languages
Chinese (zh)
Other versions
CN110310371A (en)
Inventor
董志国
武肖搏
刘建成
张宇超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN201910447327.6A priority Critical patent/CN110310371B/en
Publication of CN110310371A publication Critical patent/CN110310371A/en
Application granted granted Critical
Publication of CN110310371B publication Critical patent/CN110310371B/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/571 Depth or shape recovery from multiple images from focus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical fields of automatic driving and object recognition in vehicle engineering, and addresses the technical problems that existing approaches in the intelligent driving field are time-consuming, labor-intensive, inefficient and hard to bring to high precision. A sequence of images of a target object is acquired by a shooting unit and processed to obtain a preprocessed image sequence. Clear pixel points are extracted from each frame of the preprocessed sequence; a focusing factor is calculated for every pixel point of the preprocessed sequence; for all pixel points in the full-focus image, the image serial number at which each pixel point reaches its maximum focusing factor is obtained; the depth spacing Δz between adjacent sequence images is calculated as the product of the real-time vehicle speed v and Δt and taken as the depth value of the corresponding pixel point; the coordinates of corresponding points in the pixel coordinate system and the world coordinate system are calculated from the result of Zhang Zhengyou's calibration method; and the three-dimensional contour of the target object is reconstructed from the two-dimensional image sequence.

Description

Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image
Technical Field
The invention belongs to the field of automatic driving and the technical field of object recognition in vehicle engineering, and in particular relates to a vehicle-mounted automatic three-dimensional modeling system that constructs the three-dimensional contour of an object based on image processing.
Background
At present, two main schemes, vision and laser radar, are used in the intelligent driving field to realize three-dimensional reconstruction of objects. Vision-based methods acquire the surface information of the object with cameras and, according to the number of cameras, divide into monocular vision and binocular vision. Monocular vision emits a known pattern with structured light; after the camera receives the pattern reflected by the object surface, the difference from the original pattern is calculated through image processing, thereby realizing three-dimensional reconstruction. Binocular vision restores the three-dimensional geometric information of the object based on the parallax principle and reconstructs the three-dimensional contour and position of the object.
The weaknesses of the prior-art vision methods for three-dimensional reconstruction are obvious: both monocular and binocular schemes have poor robustness, and system precision varies with the surrounding environment. When the external light weakens from strong to dim, the precision of binocular vision drops greatly. Monocular (structured-light) vision behaves in just the opposite way: it suits only environments with darker light, and if the surrounding light is very strong the camera can hardly identify the projected bright spots accurately.
Laser radar implementations also fall generally into two categories, one based on triangulation and the other on ToF ranging. In the triangulation-based scheme, a CCD array senses in real time the reflection formed by the laser emitted onto the object surface; knowing the emission angle α, the reception angle β and the distance between the laser head and the CCD, the distance between the radar and the object can be calculated by the law of sines. The principle of ToF (Time of Flight) is to calculate the distance of a target object by measuring the transmission delay time of light pulses.
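For reference, both ranging principles reduce to short formulas (a sketch under the stated geometry; the baseline length L between the laser head and the CCD and the speed of light c are named here for illustration and do not appear in the original text):

$$d=\frac{L\,\sin\beta}{\sin(\alpha+\beta)}\qquad\text{(triangulation, by the law of sines)}$$

$$d=\frac{c\,\Delta t_{\mathrm{pulse}}}{2}\qquad\text{(ToF, from the round-trip pulse delay }\Delta t_{\mathrm{pulse}}\text{)}$$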
The difficulty of laser radar lies in acquiring high-speed data in hardware and processing it in real time in software to obtain high-precision raw point-cloud data, and its manufacturing cost is considerably higher than that of camera vision.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: how to overcome the defects that realizing three-dimensional reconstruction of an object in the intelligent driving field is time-consuming, labor-intensive, inefficient and hard to bring to high precision, and thereby provide a system capable of fast, high-precision reconstruction of the three-dimensional contour of an object.
The invention is realized by adopting the following technical scheme: a method for constructing a three-dimensional contour of an object based on a vehicle-mounted monocular focusing sequence image comprises the following steps:
(a) The moving carrier drives the shooting unit to move along the extension direction of the target object, and an image of the target object is shot every Δt, so that N frames of sequence images of the target object are obtained within a period of time;
(b) All sequence images shot by the shooting unit are processed: the images are cropped to remove part of the non-relevant background area outside the measurement area of the target object, yielding the preprocessed sequence images;
(c) Clear pixel points of each frame of the preprocessed sequence are extracted to construct a full-focus image. A focus evaluation window is determined for any pixel point of the preprocessed sequence, and the focusing factor of each pixel point of the preprocessed sequence is calculated: for any pixel point (i, j) in the preprocessed sequence image, taking that pixel as the starting point, four gray-level co-occurrence matrices are generated in its four neighborhoods (up, down, left and right); the maximum of the correlation feature value of each of the four matrices is calculated, which determines the maximum-correlation pixel point in each of the four directions; these four pixel points mark the edge positions of the pixel's focus evaluation window U(i, j), whose width is W1 = D1 + D2 + 1 and height W2 = D3 + D4 + 1, where D1, D2, D3 and D4 are each the number of pixels between pixel point (i, j) and the maximum-correlation pixel point in the corresponding direction plus 1, so that the window size is W1 × W2. The focusing factor of each pixel point of each frame is then calculated with this window; the focusing factor is the mean of the sum of the evaluation values of the pixel points inside the focus evaluation window. For any pixel point (i, j) of the k-th preprocessed frame it is given by the following formula:

$$F_k(i,j)=\frac{1}{W_1\times W_2}\sum_{(x,y)\in U(i,j)}\left(g_x(x,y)^2+g_y(x,y)^2\right)^2$$
where $g_x(x,y)$ and $g_y(x,y)$ denote the convolution of the k-th preprocessed sequence image $I_k$ with the Sobel operator in the X and Y directions, respectively. The pixel points that reach their maximum focusing factor in a frame are taken as the clear pixel points of that frame; the clear pixel points from all the images form the full-focus image;
(d) For every pixel point in the full-focus image, the image serial number at which it reaches its maximum focusing factor is solved; the order of these serial numbers represents the relative positions of the surface points of the target object; the depth spacing Δz between adjacent sequence images can be calculated as the product of the real-time speed v of the moving carrier and Δt, which yields the depth relation of all points on the surface of the target object: the Z coordinate $Z_k$ of the shooting unit in the world coordinate system at the moment the k-th image is acquired is the sum of all Δz from 1 to k, as follows:
$$Z_k=\sum_{n=1}^{k}\Delta z_n$$
The pixel coordinates $(i_k, j_k)$ of each clear pixel point of the k-th image are recorded in the image coordinate system of that image; from the transformation matrix from the pixel coordinate system to the world coordinate system, the pixel coordinates $(i_k, j_k)$ of all clear pixel points yield the corresponding world coordinates $(X_k, Y_k, Z_k)$. In this way the coordinate set $\{(X_k, Y_k, Z_k)\mid 1\le k\le N\}$ of the clear pixel points of all images of the preprocessed sequence in the world coordinate system is obtained, where N is the total number of images in the sequence;
(e) From the coordinate set $\{(X_k, Y_k, Z_k)\mid 1\le k\le N\}$ of the surface points of the object in the world coordinate system, each point is connected with the points in its surrounding neighborhood to form triangular meshes; the surfaces formed by the triangular meshes are joined into a contour figure of the target object's surface, realizing the reconstruction of the three-dimensional contour of the target object's surface in the world coordinate system.
The working principle of the invention is as follows: according to the lens imaging law, when the focal length and the image distance are fixed, the object distance is uniquely determined. During imaging, only the points on the object that satisfy this object distance form a sharp image on the image plane, called the focused clear image; points that do not satisfy it form not a clear point image but a blur circle, and the image at that moment is called a defocused image. Following this focusing principle, a series of sequence images of the object along the depth-of-field direction is first acquired, so that the whole sequence covers all information of the object in that direction; then every sharply focused point is selected from the sequence images by a fusion rule, so that an image in which every depth-of-field region is sharp is reconstructed, called the full-focus image; the depth information is then restored by focus analysis, so that three-dimensional reconstruction is performed from a two-dimensional image sequence. Compared with other methods, this method needs no tedious light-source calibration, places no harsh demands on illumination during image acquisition, has a large ranging span, and builds the three-dimensional model at considerable speed.
Drawings
Fig. 1 is a schematic diagram of a certain pixel and a focus evaluation window thereof.
FIG. 2 is a schematic view of an experimental apparatus used in the present invention.
Fig. 3 is a schematic diagram of images shot by the camera at positions (1), (2) and (3) and a contour of a target object restored by three-dimensional reconstruction.
Detailed Description
The invention designs a target-object surface contour measuring method based on image processing, which proceeds as follows: an automobile carries a shooting unit along the extension direction of the target object and shoots an image of the target object every Δt, so that a sequence of images of the target object is obtained within a period of time. All sequence images shot by the shooting unit are processed: the images are cropped to remove part of the non-relevant background area outside the measurement area of the target object, yielding the preprocessed sequence images. Clear pixel points are extracted from each frame of the preprocessed sequence; a focusing factor is calculated for every pixel point of the preprocessed sequence; for all pixel points in the full-focus image, the image serial number at which each pixel point reaches its maximum focusing factor is obtained; the depth spacing Δz between adjacent sequence images is calculated as the product of the real-time vehicle speed v and Δt and taken as the depth value of the corresponding pixel point; the coordinates of corresponding points in the pixel coordinate system and the world coordinate system are calculated from the result of Zhang Zhengyou's calibration method; and the three-dimensional contour of the target object is reconstructed from the two-dimensional image sequence.
The technical solution of the invention is further described in more detail below with reference to an embodiment. The technical scheme adopted by the invention is a vehicle-mounted automatic measuring system based on image processing, carried out in the following steps:
Step one, fixing a CCD image acquisition and transmission device on the vehicle body;
and step two, printing a checkerboard, and pasting the checkerboard on a plane as a calibration object. By adjusting the orientation of the calibration object or the camera, some photographs in different directions are taken of the calibration object. The group is subjected to a Zhang-Yongyou scaling method to obtain 4 internal parameters in a camera imaging linear model: u. of 0 、v 0 、f x 、f y Respectively representing the pixel coordinates of the principal point and the effective focal length of the camera; 2 external parameters: and R and t respectively represent the rotation relation and the translation relation between the camera coordinate system and the world coordinate system, and the conversion relation from the pixel coordinate system to the world coordinate system can be calculated according to the six parameters.
Step three, the automobile drives the CCD image acquisition and transmission device to move along the extension direction of the target object, shooting an image of the target object every Δt, so that N frames of sequence images of the target object are acquired within a period of time; the acquired images are transmitted to a computer through a data line.
Step four, performing image processing on all sequence images shot by the shooting unit: the images are cropped to remove part of the non-relevant background area outside the measurement area of the target object, to obtain the preprocessed sequence images.
Step five, extracting the clear pixel points of each frame of the preprocessed sequence to construct a full-focus image; determining a focus evaluation window for any pixel point in the full-focus image and calculating the focusing factor of each pixel point of the preprocessed sequence: for any pixel point (i, j) in the full-focus image (the light-colored center square in fig. 1), taking that pixel as the starting point, four gray-level co-occurrence matrices are generated in its four neighborhoods (up, down, left and right); the maximum of the correlation feature value of each of the four matrices is calculated (one maximum per direction), which determines the maximum-correlation pixel point in each of the four directions (the four dark squares in fig. 1); these four pixel points mark the edge positions of the pixel's focus evaluation window U(i, j), whose width is W1 = D1 + D2 + 1 and height W2 = D3 + D4 + 1, where D1, D2, D3 and D4 denote numbers of pixels, so that the window size is W1 × W2. As shown in fig. 1, D1 is the number of pixels between the pixel (light square) and the leftmost maximum-correlation pixel plus 1 (one pixel of spacing, so D1 = 2), and D2 is the number of pixels between the pixel and the rightmost maximum-correlation pixel plus 1 (zero pixels of spacing, so D2 = 1). The focusing factor of each pixel point of each frame is calculated with this window; the focusing factor is the mean of the sum of the evaluation values of the pixel points inside the focus evaluation window. For any pixel point (i, j) of the k-th preprocessed frame it is given by:

$$F_k(i,j)=\frac{1}{W_1\times W_2}\sum_{(x,y)\in U(i,j)}\left(g_x(x,y)^2+g_y(x,y)^2\right)^2$$
where $g_x(x,y)$ and $g_y(x,y)$ denote the convolution of the k-th preprocessed sequence image $I_k$ with the Sobel operator in the X and Y directions, respectively.
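A minimal sketch of this focusing factor in Python with OpenCV follows; for simplicity it uses one fixed W1 × W2 window for every pixel instead of the adaptive window derived from the gray-level co-occurrence matrices, and the function and parameter names are illustrative assumptions:

    import cv2
    import numpy as np

    def focus_factor(gray: np.ndarray, w1: int = 5, w2: int = 5) -> np.ndarray:
        """Per-pixel F_k(i,j): mean over the window of (g_x^2 + g_y^2)^2,
        with g_x, g_y the Sobel convolutions of the frame."""
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        energy = (gx ** 2 + gy ** 2) ** 2
        # boxFilter with normalize=True averages over the w1 x w2 window,
        # i.e. divides the window sum by w1 * w2 as in the formula above.
        return cv2.boxFilter(energy, ddepth=-1, ksize=(w1, w2), normalize=True)

Stacking focus_factor over the N preprocessed frames gives each pixel a focus curve whose maximum marks the frame in which that pixel is sharp.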
Step six, solving, for every pixel point in the full-focus image, the image serial number at which it reaches its maximum focusing factor; the order of these serial numbers represents the relative positions of the surface points of the target object, and the depth spacing Δz between adjacent sequence images is calculated as the product of the real-time vehicle speed v and Δt, so as to obtain the depth relation of all points on the surface of the target object. For the k-th image (k being the sequence serial number), find the Z-direction coordinate $Z_k$ of the CCD image acquisition and transmission device in the world coordinate system; $Z_k$ is the sum of all Δz from 1 to k, as follows:
$$Z_k=\sum_{n=1}^{k}\Delta z_n$$
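Continuing the sketch in Python, the per-pixel frame index of maximum focus and the accumulated depth $Z_k$ can be computed as follows (F is assumed to be the (N, H, W) stack of focusing factors from the previous sketch; the carrier speed v is assumed constant, and for a varying speed the arange product would become a cumulative sum of the per-interval v * dt):

    import numpy as np

    def depth_map(F: np.ndarray, v: float, dt: float) -> np.ndarray:
        k_star = np.argmax(F, axis=0)          # frame index of peak focus per pixel
        dz = v * dt                            # depth step between adjacent frames
        Z = dz * np.arange(1, F.shape[0] + 1)  # Z_k = sum of dz from 1 to k
        return Z[k_star]                       # (H, W) depth map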
Record the pixel coordinates $(i_k, j_k)$ of each sharply focused pixel point in the image coordinate system of the k-th image, and then apply the transformation between the pixel coordinate system and the world coordinate system obtained by the camera calibration of step two:
$$z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\ 0&f_y&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}R&t\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}$$
where $u_0$, $v_0$, $f_x$ and $f_y$ are the pixel coordinates of the principal point and the effective focal lengths of the camera; R and t are the rotation and translation between the camera coordinate system and the world coordinate system; and $z_c$ is the z coordinate of the principal point in the camera coordinate system. By this transformation, the pixel coordinates $(i_k, j_k)$ of all clear pixel points yield the corresponding world coordinates $(X_k, Y_k, Z_k)$. In this way the coordinate set $\{(X_k, Y_k, Z_k)\mid 1\le k\le N\}$ of the target-object points corresponding to the sharply focused pixels of all images of the sequence is obtained in the world coordinate system, where N is the total number of images in the sequence;
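A back-projection sketch consistent with this relation (K, R and t are taken from the calibration sketch above; here $z_c$ is treated as the known depth of each point along the camera's optical axis, which is an assumption of the example):

    import numpy as np

    def pixel_to_world(u: float, v: float, z_c: float,
                       K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
        uv1 = np.array([u, v, 1.0])
        cam = z_c * np.linalg.inv(K) @ uv1   # point in camera coordinates
        return np.linalg.inv(R) @ (cam - t)  # world coordinates (X, Y, Z)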
Step seven, according to the coordinate set $\{(X_k, Y_k, Z_k)\mid 1\le k\le N\}$ of the surface points of the target object in the world coordinate system, each point is connected with the points in its surrounding neighborhood to form triangular meshes, and the surfaces formed by the triangular meshes are connected into a contour figure of the target object's surface so as to reconstruct the three-dimensional contour figure of the target object's surface in the world coordinate system; the coordinate values between adjacent points are obtained by linear interpolation, and the reconstructed figure of the target object's surface is displayed on the display screen of the computer control device.
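A meshing sketch for step seven, using a Delaunay triangulation over the (X, Y) plane (SciPy and Matplotlib are assumed; pts stands for the (M, 3) array of world coordinates of the clear pixel points):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial import Delaunay

    def show_surface(pts: np.ndarray) -> None:
        tri = Delaunay(pts[:, :2])           # triangles over the (X, Y) plane
        ax = plt.figure().add_subplot(projection="3d")
        ax.plot_trisurf(pts[:, 0], pts[:, 1], pts[:, 2],
                        triangles=tri.simplices)
        plt.show()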
As shown in fig. 2, the vehicle-mounted camera takes one picture of the target object (4) to be recognized at each of the positions (1), (2) and (3); because the distance between the vehicle body and the target object changes, the focus position changes accordingly, and fig. 2 marks the focus position corresponding to each of the shooting positions (1), (2) and (3). The images taken by the camera at positions (1), (2) and (3) and the contour of the target object restored by three-dimensional reconstruction are shown in fig. 3.

Claims (3)

1. A method for constructing a three-dimensional contour of an object based on a vehicle-mounted monocular focusing sequence image is characterized by comprising the following steps:
(a) The moving carrier drives the shooting unit to move along the extension direction of the target object, and an image of the target object is shot every Δt, so that N frames of sequence images of the target object are obtained within a period of time;
(b) All sequence images shot by the shooting unit are processed: the images are cropped to remove part of the non-relevant background area outside the measurement area of the target object, yielding the preprocessed sequence images;
(c) Extracting clear pixel points of each frame of the preprocessed sequence to construct a full-focus image; determining a focus evaluation window for any pixel point of the preprocessed sequence, and calculating the focusing factor of each pixel point of the preprocessed sequence: for any pixel point (i, j) in the preprocessed sequence image, taking that pixel as the starting point, four gray-level co-occurrence matrices are generated in its four neighborhoods (up, down, left and right); the maximum of the correlation feature value of each of the four matrices is calculated, which determines the maximum-correlation pixel point in each of the four directions; these four pixel points mark the edge positions of the pixel's focus evaluation window U(i, j), whose width is W1 = D1 + D2 + 1 and height W2 = D3 + D4 + 1, where D1, D2, D3 and D4 are each the number of pixels between pixel point (i, j) and the maximum-correlation pixel point in the corresponding direction plus 1, so that the size of the focus evaluation window is W1 × W2; the focusing factor of each pixel point of each frame is calculated with this window; the focusing factor is the mean of the sum of the evaluation values of the pixel points inside the focus evaluation window; for any pixel point (i, j) of the k-th preprocessed frame it is given by the following formula:

$$F_k(i,j)=\frac{1}{W_1\times W_2}\sum_{(x,y)\in U(i,j)}\left(g_x(x,y)^2+g_y(x,y)^2\right)^2$$
where $g_x(x,y)$ and $g_y(x,y)$ denote the convolution of the k-th preprocessed sequence image $I_k$ with the Sobel operator in the X and Y directions, respectively; the pixel points that reach their maximum focusing factor in a frame are taken as the clear pixel points of that frame; the clear pixel points from all the images form the full-focus image;
(d) For every pixel point in the full-focus image, the image serial number at which it reaches its maximum focusing factor is solved; the order of these serial numbers represents the relative positions of the surface points of the target object; the depth spacing Δz between adjacent sequence images can be calculated as the product of the real-time speed v of the moving carrier and Δt, which yields the depth relation of all points on the surface of the target object: the Z coordinate $Z_k$ of the shooting unit in the world coordinate system at the moment the k-th image is acquired is the sum of all Δz from 1 to k, as follows:
$$Z_k=\sum_{n=1}^{k}\Delta z_n$$
recording the pixel coordinates $(i_k, j_k)$ of each clear pixel point of the k-th image in the image coordinate system of that image; from the transformation matrix from the pixel coordinate system to the world coordinate system, the pixel coordinates $(i_k, j_k)$ of all clear pixel points yield the corresponding world coordinates $(X_k, Y_k, Z_k)$; following step (d), the coordinate set $\{(X_k, Y_k, Z_k)\mid 1\le k\le N\}$ of the clear pixel points of all images of the preprocessed sequence in the world coordinate system is obtained, where N is the total number of images in the sequence;
(e) From the coordinate set $\{(X_k, Y_k, Z_k)\mid 1\le k\le N\}$ of the surface points of the target object in the world coordinate system, each point is connected with the points in its surrounding neighborhood to form triangular meshes; the surfaces formed by the triangular meshes are joined into a contour figure of the target object's surface, realizing the reconstruction of the three-dimensional contour of the target object's surface in the world coordinate system.
2. The method for constructing a three-dimensional contour of an object based on vehicle-mounted monocular focusing sequence images according to claim 1, characterized in that the transformation from the pixel coordinate system to the world coordinate system is obtained as follows:
step one, fixing the shooting unit on the moving carrier;
step two, fixing the calibration object on a plane; by adjusting the orientation of the calibration object or the shooting unit, several photographs of the calibration object are taken from different directions; from this group of photographs, Zhang Zhengyou's calibration method yields the 4 intrinsic parameters of the camera's linear imaging model, $u_0$, $v_0$, $f_x$ and $f_y$, which are the pixel coordinates of the principal point and the effective focal lengths of the camera, and 2 extrinsic parameters, R and t, which are the rotation and translation between the camera coordinate system and the world coordinate system; with $z_c$ denoting the z coordinate of the principal point in the camera coordinate system, the transformation from the pixel coordinate system to the world coordinate system can be calculated from these seven parameters:
$$z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\ 0&f_y&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}R&t\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}$$
3. The method for constructing a three-dimensional contour of an object based on vehicle-mounted monocular focusing sequence images according to claim 1 or 2, characterized in that the shooting unit is a CCD image acquisition and transmission device and the moving carrier is an automobile.
CN201910447327.6A 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image Active CN110310371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910447327.6A CN110310371B (en) 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910447327.6A CN110310371B (en) 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Publications (2)

Publication Number Publication Date
CN110310371A CN110310371A (en) 2019-10-08
CN110310371B true CN110310371B (en) 2023-04-04

Family

ID=68075133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910447327.6A Active CN110310371B (en) 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Country Status (1)

Country Link
CN (1) CN110310371B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564014A (en) * 2022-02-23 2022-05-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device
CN115641368B (en) * 2022-10-31 2024-06-04 安徽农业大学 Out-of-focus checkerboard image feature extraction method for calibration

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103801989A (en) * 2014-03-10 2014-05-21 太原理工大学 Airborne automatic measurement system for determining origin of coordinates of workpiece according to image processing
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN107680152A (en) * 2017-08-31 2018-02-09 太原理工大学 Target surface topography measurement method and apparatus based on image procossing
US10205929B1 (en) * 2015-07-08 2019-02-12 Vuu Technologies LLC Methods and systems for creating real-time three-dimensional (3D) objects from two-dimensional (2D) images
CN109671115A (en) * 2017-10-16 2019-04-23 三星电子株式会社 The image processing method and device estimated using depth value

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10304203B2 (en) * 2015-05-14 2019-05-28 Qualcomm Incorporated Three-dimensional model generation
JP6989276B2 (en) * 2017-04-05 2022-01-05 株式会社Soken Position measuring device
KR102275310B1 (en) * 2017-04-20 2021-07-12 현대자동차주식회사 Mtehod of detecting obstacle around vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"3D reconstruction from a monocular vision system for unmanned ground vehicles";Tompkins R.Cortland等;《ELECTRO-OPTICAL REMOTE SENSING, PHOTONIC TECHNOLOGIES, AND APPLICATIONS V》;20110101;第8186卷;第818608页 *
"三维自由曲线的立体匹配及重构方法";刘双印;《中国优秀硕士学位论文全文数据库信息科技辑》;20180815(第8期);第I138-613页 *

Also Published As

Publication number Publication date
CN110310371A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN107578464B (en) Conveyor belt workpiece three-dimensional contour measuring method based on line laser scanning
CN110230998B (en) Rapid and precise three-dimensional measurement method and device based on line laser and binocular camera
Strecha et al. On benchmarking camera calibration and multi-view stereo for high resolution imagery
EP2568253B1 (en) Structured-light measuring method and system
GB2593960A (en) 3-D imaging apparatus and method for dynamically and finely detecting small underwater objects
CN110657785B (en) Efficient scene depth information acquisition method and system
CN103900494B (en) For the homologous points fast matching method of binocular vision 3 D measurement
CN110766669B (en) Pipeline measuring method based on multi-view vision
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN115330958A (en) Real-time three-dimensional reconstruction method and device based on laser radar
CN110310371B (en) Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image
CN110738731A (en) 3D reconstruction method and system for binocular vision
CN111640156A (en) Three-dimensional reconstruction method, equipment and storage equipment for outdoor weak texture target
CN114782632A (en) Image reconstruction method, device and equipment
CN111156921A (en) Contour data processing method based on sliding window mean filtering
CN113160416B (en) Speckle imaging device and method for coal flow detection
CN112525106B (en) Three-phase machine cooperative laser-based 3D detection method and device
CN113808019A (en) Non-contact measurement system and method
CN117710588A (en) Three-dimensional target detection method based on visual ranging priori information
CN108961378B (en) Multi-eye point cloud three-dimensional reconstruction method, device and equipment
Hongsheng et al. Three-dimensional reconstruction of complex spatial surface based on line structured light
CN113129348A (en) Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene
WO2008044096A1 (en) Method for three-dimensionally structured light scanning of shiny or specular objects
CN114252024B (en) Single-measurement-module multi-working-mode workpiece three-dimensional measurement device and method
CN113902791B (en) Three-dimensional reconstruction method and device based on liquid lens depth focusing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant