CN110728745A - Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model - Google Patents

Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model Download PDF

Info

Publication number
CN110728745A
CN110728745A (application CN201910874161.6A; granted as CN110728745B)
Authority
CN
China
Prior art keywords
coordinate system
new
image
stereoscopic vision
light
Prior art date
Legal status
Granted
Application number
CN201910874161.6A
Other languages
Chinese (zh)
Other versions
CN110728745B (en)
Inventor
屠大维
金攀
庄苏锋
张旭
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201910874161.6A priority Critical patent/CN110728745B/en
Publication of CN110728745A publication Critical patent/CN110728745A/en
Application granted granted Critical
Publication of CN110728745B publication Critical patent/CN110728745B/en
Status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 — 3D [Three Dimensional] image rendering
    • G06T15/50 — Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multilayer refraction image model. It belongs to the field of underwater computer vision and is typically applied to the three-dimensional reconstruction of underwater objects. The method uses light-field-based multilayer refraction theory to compute direction-information images and position images for the left and right cameras; a disparity map can then be obtained directly from the direction-information images with an in-air stereo matching method. Finally, corresponding points on the left and right direction-information images are determined from the disparity map, and the three-dimensional coordinates of each matched point are computed from the position-image data at the corresponding coordinates. Traversing the whole image in this way yields a point cloud of the entire matching region. The invention not only achieves three-dimensional reconstruction of underwater objects, but also markedly improves computational efficiency while maintaining high accuracy.

Description

Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model
Technical Field
The invention belongs to the field of underwater computer vision and relates to an underwater three-dimensional reconstruction method based on a multilayer refraction image model, in particular to a binocular stereoscopic vision three-dimensional reconstruction method for a multilayer refraction system described by an underwater light-field model.
Background
With the progress of science and technology, humanity has gained a new understanding of the exploitation and utilization of ocean resources, and many countries are sparing no effort to develop underwater detection technology. Underwater three-dimensional reconstruction is an important technical means for exploring deep lakes and oceans, and can be used for underwater topography scanning, seabed archaeology, and body-shape measurement of slow-moving underwater organisms. At present, sonar is the main underwater detection technology, but its precision is low and cannot meet the requirements of accurate underwater detection. Visual detection, by contrast, allows the underwater environment to be observed intuitively and yields more accurate three-dimensional information.
Although in-air three-dimensional reconstruction technology is mature, applying precise optical instruments underwater faces many difficulties due to the special imaging environment. A waterproof housing must usually be added to the camera; imaging quality is already degraded by the absorption and scattering of light in water, and light is additionally refracted at the water, housing-glass, and air interfaces. The in-air epipolar constraint model therefore no longer holds, and the original in-air three-dimensional reconstruction methods fail.
In the article "Laser line scanning technique based on binocular stereo vision" [J]. Mechanical Design and Manufacture, 2011(8): 200-, a conventional camera model is used to describe the underwater refraction environment, and the deviation caused by refraction is corrected with distortion parameters. However, this method is not accurate: it requires calibration within the field of view in different waters, and the camera and the calibration plate must remain relatively static during calibration, so its practical operability is poor.
Chinese patent CN201410195345.7 proposes a three-dimensional reconstruction method for underwater targets based on line-structured light. The method performs three-dimensional reconstruction with a single camera combined with line-structured light, and uses calibration data obtained at different underwater depths to correct the position of the laser-stripe center in the captured image, eliminating the influence of refraction. Although this method is more accurate than the traditional pinhole imaging model, it requires calibration with a calibration plate in water at different depths, and calibration pictures must be taken at different depths across the field of view, which is difficult to achieve underwater; its practical operability is therefore poor.
Disclosure of Invention
To solve the above problems, the invention provides an underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multilayer refraction image model. The method is highly operable in practice and improves the computational efficiency of the algorithm while maintaining high precision.
In order to achieve the purpose, the invention specifically adopts the following technical scheme:
an underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multilayer refraction image model comprises the following steps:
Step 1: calculate the direction-information images of the left and right cameras, dir_L and dir_R;
Step 2: calculate the position images of the corresponding pixels, pos_L and pos_R;
Step 3: calculate a disparity map disp from the direction-information images dir_L and dir_R of the left and right cameras using an in-air matching algorithm, and determine corresponding coordinates on the left and right direction-information images from the disparity map;
Step 4: calculate the three-dimensional coordinates of the matched points from the computed direction-information images and position images.
Thanks to the solution described above, the invention has the following obvious advantages:
1. High precision. Light rays are described in the light field's multilayer refraction coordinate system, which is far more accurate than traditional distortion correction and free of systematic error.
2. The computed left and right images can directly use in-air matching algorithms, giving good portability.
3. The amount of computation is reduced and the computation is faster, effectively improving the execution efficiency of the system.
Drawings
FIG. 1 is a detailed flow chart of the algorithm of the present invention.
Fig. 2 is a direction information image calculated by the present invention, wherein a is a right camera direction information image and b is a left camera direction information image.
FIG. 3 is a calculated position image of the present invention, wherein a is the right camera position image and b is the left camera position image.
Fig. 4 is a point cloud image calculated by the algorithm of the present invention.
Detailed Description
The following detailed description of preferred embodiments of the invention refers to the accompanying drawings.
As shown in fig. 1, an underwater binocular stereo vision three-dimensional reconstruction method based on a multilayer refraction image model includes the following steps:
step 1: and calculating the left and right direction information images.
First, this embodiment adopts the multilayer refraction model of Chinese patent CN201710702222, "Underwater stereoscopic vision system calibration method based on a multilayer refraction model". After the camera is sealed in its watertight housing, the z-axis of the camera coordinate system, i.e. the optical axis of the camera, is generally not perpendicular to the "air-water" interface. A multilayer refraction coordinate system whose z-axis is perpendicular to the air-water interface is therefore established; the normal-vector parameters (n_L, n_R) are obtained with the calibration method of CN201710702222, and the transformation between the camera coordinate system and the multilayer refraction coordinate system is computed from them. The relationship between the multilayer refraction imaging coordinate system and the camera coordinate system can be expressed as:

P_c = cR_r · P_r + ct_r
cR_r = [ n_c × z_c,  n_c × (n_c × z_c),  n_c ]
ct_r = [0 0 0]^T
z_c = [0 0 1]^T
A stereoscopic vision coordinate system is then established from the camera's multilayer refraction model. The optical center of the left camera is taken as the origin; the line from the left optical center to the right optical center gives the x-axis; the cross product of the z-axis of the left camera's multilayer refraction coordinate system (i.e. the interface normal) with the x-axis gives the y-axis; and the cross product of the x-axis with the y-axis gives the z-axis. This yields the stereoscopic vision coordinate system:

P_r = rR_new · P_new + rt_new
rR_new = [ n_x,  z_r × n_x,  n_x × (z_r × n_x) ]
rt_new = [0 0 0]^T
z_r = [0 0 1]^T

where n_x is the unit vector along the line of the left and right optical centers, expressed in the multilayer refraction coordinate system. The relationship of the multilayer refraction coordinate system with respect to the stereoscopic vision coordinate system can be expressed as P_new = newR_r · P_r + newt_r, where newR_r = rR_new^(-1). A direction-image intrinsic matrix is then defined, and the left and right direction-image matrices are established in the stereoscopic vision coordinate system.
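The two frame constructions above can be sketched in a few lines of NumPy. This is an illustrative reading of the patent's bracketed column matrices, with explicit normalization added (an assumption; the patent text writes the cross products without normalizing), and the function names `refraction_frame`/`stereo_frame` are hypothetical:

```python
import numpy as np

def refraction_frame(n_c):
    """Rotation cR_r from the multilayer-refraction frame to the camera frame.

    n_c: unit interface normal expressed in the camera frame.
    Columns follow cR_r = [n_c x z_c, n_c x (n_c x z_c), n_c], normalized
    here so the result is a proper rotation matrix."""
    z_c = np.array([0.0, 0.0, 1.0])
    x = np.cross(n_c, z_c)            # degenerate if the optical axis is
    x /= np.linalg.norm(x)            # exactly perpendicular to the interface
    y = np.cross(n_c, x)
    y /= np.linalg.norm(y)
    return np.column_stack([x, y, n_c / np.linalg.norm(n_c)])

def stereo_frame(n_x, z_r):
    """Stereo-vision frame: x along the baseline n_x, y = z_r x n_x,
    z = x x y, with n_x and the interface normal z_r expressed in the
    same (left multilayer-refraction) frame."""
    x = n_x / np.linalg.norm(n_x)
    y = np.cross(z_r, x)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return np.column_stack([x, y, z])
```

Both helpers return orthonormal right-handed matrices, so the inverse needed for newR_r = rR_new^(-1) is simply the transpose.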
The direction vectors of the rays in the multilayer refraction coordinate system are then calculated. First, for any pixel of the left and right direction-information images, the corresponding ray direction vectors I_L_stereo and I_R_stereo are computed in the stereoscopic vision coordinate system. Then, from the coordinate transformation P_r = rR_new · P_new between the stereoscopic vision coordinate system and the multilayer refraction coordinate system, the ray direction vectors I_L_reflect and I_R_reflect in the multilayer refraction coordinate system are obtained.
According to the light-field representation, the rays from the left and right direction-information image points that propagate and refract through the multilayer interface into the air are calculated and converted into ray vectors. Following the light-field model of Chinese patent CN109490251A, "Underwater refractive index self-calibration method based on a light-field multilayer refraction model", the ray vectors of the left and right direction-information images are expressed as light fields L_r = [u_r v_r s_r t_r]^T. A ray 0L_r propagates a distance d_0 and then refracts from water into air; the incident and refracted rays satisfy:

1L_r = R(s_0, t_0, 1.333, 1) × T(d_0) × 0L_r

where T(d) denotes propagation over distance d and R(s, t, μ, μ') refraction from refractive index μ into μ'. The rays reaching the air after the left and right direction-information image points are propagated and refracted through the water are obtained from this formula and converted into ray vectors I'_L and I'_R. These ray vectors are transformed into the left and right camera coordinate systems; the pixel position on the original image corresponding to any image point of the direction-information image is computed from the intrinsic parameters of the left and right cameras, and mapping tables in the x and y directions are established.
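The operators T(d) and R(s, t, μ, μ') can be sketched under the common two-plane light-field parametrization L = [u, v, s, t]^T, where (u, v) is the position on the reference plane and (s, t) are the ray slopes. This is a hypothetical reading of the operators named in CN107507242/CN109490251A, whose exact matrix forms are not reproduced in this text: propagation is linear, while refraction at a flat z-normal interface is applied through vector Snell's law:

```python
import numpy as np

def T(d):
    """Light-field propagation operator: moving a distance d along the
    interface normal shifts the position coordinates by d times the slopes."""
    M = np.eye(4)
    M[0, 2] = d
    M[1, 3] = d
    return M

def refract(L, mu, mu_p):
    """Refraction of a light-field ray L = [u, v, s, t] at a flat interface
    with normal along z, from index mu into mu_p (sketch of R(s, t, mu, mu'))."""
    u, v, s, t = L
    d = np.array([s, t, 1.0])
    d /= np.linalg.norm(d)            # unit incident direction
    sin_i = np.linalg.norm(d[:2])     # sine of the incidence angle w.r.t. z
    sin_t = mu / mu_p * sin_i         # Snell's law
    if sin_t >= 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin_t**2)
    if sin_i > 0:
        tang = d[:2] / sin_i          # tangential unit direction in the plane
        d_new = np.array([sin_t * tang[0], sin_t * tang[1], cos_t])
    else:
        d_new = np.array([0.0, 0.0, 1.0])
    return np.array([u, v, d_new[0] / d_new[2], d_new[1] / d_new[2]])
```

A water-to-air step as in the formula above would then read `L1 = refract(T(d0) @ L0, 1.333, 1.0)`: the ray is propagated to the interface, then its slopes are bent away from the normal.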
An underwater binocular vision measurement system is then used to capture images of the underwater target, with a green scattered-spot laser projected to add texture to the underwater scene. After correcting the distortion of the captured left and right images, the left and right direction-information images dir_L and dir_R can be computed quickly using the remap function in OpenCV together with the x- and y-direction mapping tables.
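The remapping step above is one `cv2.remap` call per camera. As a dependency-free illustration of how the precomputed x/y mapping tables are consumed, here is a minimal nearest-neighbour stand-in (an assumption for clarity; the real pipeline would call `cv2.remap` with bilinear interpolation):

```python
import numpy as np

def remap_nearest(src, map_x, map_y, fill=0):
    """Minimal stand-in for OpenCV's cv2.remap:
    dst(y, x) = src(map_y[y, x], map_x[y, x]), nearest-neighbour,
    with out-of-range table entries filled with `fill`."""
    h, w = map_x.shape
    xs = np.rint(map_x).astype(int)
    ys = np.rint(map_y).astype(int)
    valid = (xs >= 0) & (xs < src.shape[1]) & (ys >= 0) & (ys < src.shape[0])
    dst = np.full((h, w), fill, dtype=src.dtype)
    dst[valid] = src[ys[valid], xs[valid]]
    return dst
```

The mapping tables built in step 1 play exactly the role of `map_x`/`map_y`: each direction-image pixel looks up where it came from in the original distorted image.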
Step 2: and calculating left and right position images.
First, according to the light vector
Figure BDA0002203796110000045
And calculating the intersection point of the left image ray and the interface.
L,L=R(sL,tLL,μ’L)×LL=R(sL,tLL,μ’L)×T(dL)×Lr L=(u’L,v’L,s’L,t,L)T
The intersection point of the left image ray and the interface is obtained as follows: cL=(u’L,v’L,dL)。
Light direction:
Figure BDA0002203796110000046
the above intersection point is then converted to the left stereovision coordinate system. Converting matrix according to position and posture of left stereoscopic vision coordinate system and left multilayer refraction coordinate systemnewRr LTransforming the intersection point and direction of the calculated light ray and the interface to obtain an intersection point C of the light ray and the interface in a new coordinate systemnew LAnd the direction of light Inew L
Cnew LnewRr LCL=(unew L,vnew L,dnew L)
Figure BDA0002203796110000047
Next, a new light-field coordinate system is established from the stereoscopic vision coordinate system, and the position information of the rays in that system is obtained. The new light-field coordinate system is defined as follows: the u-v plane is parallel to the x-y plane of the stereoscopic vision coordinate system, with coincident origins; the parallel plane at unit distance from the u-v plane is the s-t plane, with the s-t axes parallel to the u-v axes. The ray is then represented in the new light-field coordinate system as:

L_new^L = [u_new^L  v_new^L  s_new^L  t_new^L]^T

Its intersection with the x-y plane of the new coordinate system is:

L'_new^L = T(-d_new^L) × L_new^L = [u'_new^L, v'_new^L, s_new^L, t_new^L]^T

From P_new^L = (u'_new^L, v'_new^L, 0), the position information of each ray of the left image is obtained and stored in the left position image. In the same way, the position information of each ray of the right image is obtained: P_new^R = (u'_new^R, v'_new^R, 0). As shown in fig. 3, the position image is a two-channel image.
Step 3: based on the left and right direction-information images obtained above (as shown in fig. 2), a disparity map disp is calculated with the in-air SGBM matching algorithm. Let a pixel of the left direction-information image have coordinates (x_l, y_l), corresponding to the value (x_disp, y_disp) in the disparity map disp; then

x_r = x_l + x_disp
y_r = y_l

so that pixel (x_l, y_l) of the left direction image corresponds to pixel (x_r, y_r) of the right direction image.
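The correspondence rule above can be vectorized over the whole disparity map. The sign convention x_r = x_l + x_disp follows the patent text; note that many stereo matchers (including OpenCV's SGBM) define disparity with the opposite sign, so this is an assumption worth checking against the matcher in use:

```python
import numpy as np

def correspondences(disp, min_disp=1):
    """Enumerate left/right pixel correspondences from a disparity map,
    using the convention x_r = x_l + disp.  Returns an array of rows
    (x_l, y_l, x_r, y_r) for pixels with a valid disparity that lands
    inside the right image."""
    h, w = disp.shape
    ys, xs = np.nonzero(disp >= min_disp)   # pixels with a usable match
    xr = xs + disp[ys, xs]
    keep = xr < w                            # discard matches outside the image
    return np.stack([xs[keep], ys[keep], xr[keep], ys[keep]], axis=1)
```

Each returned row pairs one left direction-image pixel with its right counterpart, ready for the ray lookup of step 4.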
Step 4: from the corresponding coordinates (x_l, y_l) and (x_r, y_r) on the left and right direction-information images and their pixel data, the left and right direction vectors la and ra are obtained. The left position image (see fig. 3(a)) stores the point-position data lq of the left camera in the left multilayer refraction coordinate system, and the right position image (see fig. 3(b)) stores the point-position data rq of the right camera in the right multilayer refraction coordinate system. The direction vector ra and point position rq of the right camera in its multilayer refraction coordinate system are then converted into the left camera's multilayer refraction coordinate system. Since the point P lies on both straight lines in the left camera's multilayer refraction coordinate system, it satisfies the constraints:

la × (P − lq) = 0
ra × (P − rq) = 0

Replacing the cross products with skew-symmetric matrices, these constraints become:

[la]× · P = [la]× · lq
[ra]× · P = [ra]× · rq

Finally, singular value decomposition of this system yields the three-dimensional coordinates of the matched point P. Traversing the whole image in this way produces a point cloud of the matching region (as shown in fig. 4).
This completes the underwater three-dimensional reconstruction based on the multilayer refraction model.

Claims (6)

1. An underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multilayer refraction image model is characterized by comprising the following steps:
step 1: calculating the direction-information images of the left and right cameras, dir_L and dir_R;
step 2: calculating the position images of the corresponding pixels, pos_L and pos_R;
step 3: calculating a disparity map disp from the direction-information images dir_L and dir_R of the left and right cameras using an in-air matching algorithm, and determining corresponding coordinates on the left and right direction-information images from the disparity map;
step 4: calculating the three-dimensional coordinates of the matched points from the computed direction-information images and position images.
2. The underwater binocular stereoscopic vision three-dimensional reconstruction method based on the multilayer refraction image model as claimed in claim 1, wherein the step 1 of calculating the direction information images of the left and right cameras specifically comprises the following steps:
step 1-1: establishing a stereoscopic vision coordinate system by adopting a multilayer refraction stereoscopic model of a camera; defining a direction image internal reference matrix, and establishing a left direction image and a right direction image under a stereoscopic vision coordinate system;
step 1-2: calculating, for any pixel of the left and right direction-information images, the corresponding ray direction vectors I_L_stereo and I_R_stereo in the stereoscopic vision coordinate system, and, from the coordinate transformation P_r = rR_new · P_new between the stereoscopic vision coordinate system and the multilayer refraction coordinate system, obtaining the ray direction vectors I_L_reflect and I_R_reflect in the multilayer refraction coordinate system;
step 1-3: according to the light-field model, expressing the ray vectors of the left and right direction-information images as light fields L_r = [u_r v_r s_r t_r]^T; for a medium of refractive index μ_n, a ray nL_r propagates a distance d_n and then enters a medium of refractive index μ_(n+1), where refraction occurs; the incident and refracted rays satisfy:

(n+1)L_r = R(s_n, t_n, μ_n, μ_(n+1)) × T(d_n) × nL_r

from which the rays reaching the air after the left and right direction-information image points are propagated and refracted through the multilayer interface are calculated and converted into ray vectors I'_L and I'_R;
step 1-4: converting the ray vectors I'_L and I'_R into the left and right camera coordinate systems, calculating from the intrinsic parameters of the left and right cameras the pixel position on the original image corresponding to any image point of the direction image, and establishing a position mapping table;
step 1-5: and (3) rapidly calculating left and right direction information images by using a Remap function in OpenCV according to the underwater target image obtained by the underwater binocular vision measuring system and the position mapping table obtained by calculation in the step 1-4.
3. The underwater binocular stereoscopic vision three-dimensional reconstruction method based on the multilayer refraction image model as claimed in claim 2, wherein the establishing of the stereoscopic vision coordinate system in the step 1-1 comprises the following steps:
step 1-1-1: taking the optical center of the left camera as an origin, and taking the connecting line direction of the optical center of the left camera and the optical center of the right camera as the x axis of the coordinate system of the left stereoscopic vision;
step 1-1-2: taking the cross product of the z axis of the left multilayer refraction coordinate system, namely the normal of the interface, and the x axis of the left stereoscopic vision coordinate system as the y axis of the left stereoscopic vision coordinate system;
step 1-1-3: the x axis and the y axis are cross-multiplied to form a z axis;
step 1-1-4: and translating the left stereoscopic vision coordinate system to the optical center of the right camera to obtain the right stereoscopic vision coordinate system.
4. The underwater binocular stereoscopic vision three-dimensional reconstruction method based on the multilayer refraction image model as claimed in claim 1, wherein the step 2 of calculating the left and right position images specifically comprises the following steps:
step 2-1: using the light-field representation, calculating from the ray vectors of step 1-4 the intersection of each ray with the interface:

L' = R(s, t, μ, μ') × L = R(s, t, μ, μ') × T(d) × L_r = (u', v', s', t')^T

which gives the intersection of the ray with the interface, C = (u', v', d), and the ray direction I = (s', t', 1)^T;
step 2-2: converting the intersection points into corresponding stereoscopic vision coordinate systems;
using the pose transformation matrix newR_r between the stereoscopic vision coordinate system and the multilayer refraction coordinate system, transforming the computed ray-interface intersection and direction to obtain the intersection C_new and ray direction I_new in the new coordinate system:

C_new = newR_r · C = (c_x, c_y, d_new)
I_new = newR_r · I
Step 2-3: establishing a new light field coordinate system according to the stereoscopic vision coordinate system, and solving the position information of the light rays under the stereoscopic vision coordinate system;
the new light field coordinate system is defined as follows: the u-v coordinate system is parallel to the x-y plane of the stereoscopic vision coordinate system, and the origin is coincident with the origin of the stereoscopic vision coordinate system; a parallel plane one unit length away from the u-v plane is defined as the s-t plane, the s-t coordinate system is parallel to the u-v coordinate system, and then the light ray is represented in the new light field coordinate system as:
Figure FDA0002203796100000032
intersection of the light field and the xy plane of the new coordinate system:
Lnew’=T(-dnew)×Lnew=[unew’,vnew’,snew,tnew]T
according to Pnew=(unew’,vnew', 0) obtaining the position data of each light ray of the left and right images and storing the position data in the position images.
5. The underwater binocular stereoscopic vision three-dimensional reconstruction method based on the multilayer refraction image model according to claim 1, wherein the corresponding points of the left and right direction-information images are determined in step 3 as follows: a disparity map is calculated with an in-air matching algorithm from the left and right direction-information images obtained in step 1; let a pixel of the left direction-information image have coordinates (x_l, y_l), corresponding to the value (x_disp, y_disp) in the disparity map disp; then

x_r = x_l + x_disp
y_r = y_l

so that pixel (x_l, y_l) of the left direction image corresponds to pixel (x_r, y_r) of the right direction image.
6. The underwater binocular stereoscopic vision three-dimensional reconstruction method based on the multilayer refraction image model according to claim 1, wherein the three-dimensional coordinates of the matched points are calculated in step 4 as follows: from the coordinates (x_l, y_l) and (x_r, y_r) of the left and right direction-information images and their pixel data, the left and right direction vectors la and ra are obtained; the left position image stores the position data lq of the left camera in the left multilayer refraction coordinate system, and the right position image stores the position data rq of the right camera in the right multilayer refraction coordinate system; the direction vector ra and point position rq of the right camera in its multilayer refraction coordinate system are then converted into the left camera's multilayer refraction coordinate system; since the point P lies on both straight lines in the left camera's multilayer refraction coordinate system, it satisfies the constraints:

la × (P − lq) = 0
ra × (P − rq) = 0

which, replacing the cross products with skew-symmetric matrices, become:

[la]× · P = [la]× · lq
[ra]× · P = [ra]× · rq

and finally, singular value decomposition of this system yields the three-dimensional coordinates of the matched point P.
CN201910874161.6A 2019-09-17 2019-09-17 Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model Active CN110728745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874161.6A CN110728745B (en) 2019-09-17 2019-09-17 Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model


Publications (2)

Publication Number Publication Date
CN110728745A true CN110728745A (en) 2020-01-24
CN110728745B CN110728745B (en) 2023-09-15

Family

ID=69218997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910874161.6A Active CN110728745B (en) 2019-09-17 2019-09-17 Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model

Country Status (1)

Country Link
CN (1) CN110728745B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563921A (en) * 2020-04-17 2020-08-21 西北工业大学 Underwater point cloud acquisition method based on binocular camera
CN111784753A (en) * 2020-07-03 2020-10-16 江苏科技大学 Three-dimensional reconstruction stereo matching method for autonomous underwater robot recovery butt joint foreground view field
CN114967763A (en) * 2022-08-01 2022-08-30 电子科技大学 Plant protection unmanned aerial vehicle sowing control method based on image positioning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130058581A1 (en) * 2010-06-23 2013-03-07 Beihang University Microscopic Vision Measurement Method Based On Adaptive Positioning Of Camera Coordinate Frame
US20170085864A1 (en) * 2015-09-22 2017-03-23 The Governors Of The University Of Alberta Underwater 3d image reconstruction utilizing triple wavelength dispersion and camera system thereof
CN107507242A (en) * 2017-08-16 2017-12-22 华中科技大学无锡研究院 A kind of multilayer dioptric system imaging model construction method based on ligh field model
CN108921936A (en) * 2018-06-08 2018-11-30 上海大学 A kind of underwater laser grating matching and stereo reconstruction method based on ligh field model
CN109490251A (en) * 2018-10-26 2019-03-19 上海大学 Underwater refractive index self-calibrating method based on light field multilayer refraction model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Xu et al.: "Monocular vision calibration method for a stereo target used in robot pose measurement" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563921A (en) * 2020-04-17 2020-08-21 西北工业大学 Underwater point cloud acquisition method based on binocular camera
CN111563921B (en) * 2020-04-17 2022-03-15 西北工业大学 Underwater point cloud acquisition method based on binocular camera
CN111784753A (en) * 2020-07-03 2020-10-16 江苏科技大学 Three-dimensional reconstruction stereo matching method for autonomous underwater robot recovery butt joint foreground view field
CN111784753B (en) * 2020-07-03 2023-12-05 Jiangsu University of Science and Technology Stereo matching method for three-dimensional reconstruction of the foreground field of view before recovery and docking of an autonomous underwater robot
CN114967763A (en) * 2022-08-01 2022-08-30 电子科技大学 Plant protection unmanned aerial vehicle sowing control method based on image positioning

Also Published As

Publication number Publication date
CN110728745B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN110044300B (en) Amphibious three-dimensional vision detection device and detection method based on laser
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN109544628B (en) Accurate reading identification system and method for pointer instrument
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN109727290B (en) Zoom camera dynamic calibration method based on monocular vision triangulation distance measurement method
CN106204731A (en) A kind of multi-view angle three-dimensional method for reconstructing based on Binocular Stereo Vision System
CN103115613B (en) Three-dimensional space positioning method
CN103337069B (en) High-quality three-dimensional color image acquisition methods and device based on multiple camera
CN105115560B (en) A kind of non-contact measurement method of cabin volume of compartment
Kunz et al. Hemispherical refraction and camera calibration in underwater vision
CN110728745B (en) Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model
CN110337674B (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN111127540B (en) Automatic distance measurement method and system for three-dimensional virtual space
CN102831601A (en) Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
CN105004324A (en) Monocular vision sensor with triangulation ranging function
CN110363838A (en) Big field-of-view image three-dimensionalreconstruction optimization method based on more spherical surface camera models
Fernandez et al. Planar-based camera-projector calibration
CN112634379B (en) Three-dimensional positioning measurement method based on mixed vision field light field
Mahdy et al. Projector calibration using passive stereo and triangulation
CN109490251A (en) Underwater refractive index self-calibrating method based on light field multilayer refraction model
Fan et al. Underwater optical 3-D reconstruction of photometric stereo considering light refraction and attenuation
CN111429571B (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
CN115714855A (en) Three-dimensional visual perception method and system based on stereoscopic vision and TOF fusion
CN115359127A (en) Polarization camera array calibration method suitable for multilayer medium environment
Zhuang et al. A standard expression of underwater binocular vision for stereo matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant