CN112330794B - Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method - Google Patents


Info

Publication number
CN112330794B
Authority
CN
China
Prior art keywords
prism
camera
bipartite
image
point cloud
Prior art date
Legal status
Active
Application number
CN202011072913.6A
Other languages
Chinese (zh)
Other versions
CN112330794A (en)
Inventor
李安虎 (Li Anhu)
刘兴盛 (Liu Xingsheng)
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University
Priority to CN202011072913.6A
Publication of CN112330794A
Application granted
Publication of CN112330794B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/10 Beam splitting or combining systems
    • G02B 27/12 Beam splitting or combining systems operating by refraction only
    • G02B 27/126 The splitting element being a prism or prismatic array, including systems based on total internal reflection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a single-camera image acquisition system based on a rotating bipartite prism and a three-dimensional reconstruction method. The system comprises a camera device and a rotating bipartite prism device; the camera device comprises a camera and a camera bracket supporting the camera, and the rotating bipartite prism device comprises a bipartite prism, a prism supporting structure, a rotating mechanism and an outer shell supporting the device. The three-dimensional reconstruction method comprises the steps of system construction and parameter calibration, multi-view image sequence acquisition, stereo matching and cross optimization, and three-dimensional reconstruction with point cloud filtering. Compared with the prior art, the invention changes the imaging view angle of a single camera through the rotary motion of the bipartite prism, so that the single camera emulates a dynamic binocular vision system capturing multi-view target information, which effectively improves the precision, efficiency, implementation flexibility and dynamic adaptability of single-camera multi-view stereo matching and three-dimensional reconstruction.

Description

Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method
Technical Field
The invention relates to the field of multi-view three-dimensional reconstruction, in particular to a single-camera image acquisition system and a three-dimensional reconstruction method based on a rotating bipartite prism.
Background
Multi-view three-dimensional reconstruction is a technique that recovers the three-dimensional shape of a spatial target from image sequences captured under different views, based on the principles of stereo geometry and vision, and has important applications in fields such as autonomous navigation, geographic mapping and space remote sensing. Traditional dual-camera or multi-camera vision systems acquire multi-view image information by increasing the number of sensors, but at the cost of higher system complexity, larger physical size and a reduced common field of view. In comparison, a single-camera vision system has a simple structure and high integration, and can provide a more economical, flexible and effective solution for three-dimensional target reconstruction or scene restoration by acquiring a multi-view target image sequence through additional optical elements or camera motion constraints.
The following prior studies propose several typical single-camera multi-view three-dimensional reconstruction systems and methods:
the prior art comprises the following steps: a "single-camera multi-angle space point coordinate measuring method" (zhao 31066;, xi, et al, publication No. CN 109141226a, publication date: 2019, 1 month and 4 days) discloses a method of pasting a plurality of mark points with known coordinates on a target surface and acquiring a multi-view target image by changing the shooting angle of a single camera. The prior art comprises the following steps: "a measuring system and method of arc guide rail type single camera" (great-day, publication number: CN 110645962a) discloses a method for shooting a target image sequence containing code points from multiple directions by using a single camera moving along an arc guide rail, and then calculating three-dimensional information of target measuring points by using a photogrammetric principle. The above method requires that the target surface has a cooperative mark satisfying a certain constraint condition, and requires that the camera position and the shooting angle are changed many times, so that the flexibility of the specific implementation and the universality of the actual application occasion are limited to a certain extent.
Prior art 3: "Single-camera binocular vision apparatus" (Zhang Qingchuan et al., publication No. CN109856895A, published June 7, 2019) discloses a method of capturing image information of a region of interest at bilaterally symmetric, adjustable viewing angles, using a single camera combined with two groups of symmetrically distributed mirrors. Prior art 4: "A novel single-camera three-dimensional digital image correlation system using a light-combining prism" (Pan et al., publication No. CN110530286A, published December 3, 2019) discloses a method that combines a single camera, an X-cube light-combining prism and a group of symmetrically distributed reflectors, fuses and records target image information of different color channels onto the camera target surface, and then achieves high-precision three-dimensional measurement with a digital image correlation algorithm. These methods change the imaging view angle by relying on the beam deflection produced by at least one group of reflecting mirrors; they must provide enough arrangement space and adjustment angle to ensure the field of view for three-dimensional reconstruction, and they sacrifice the compactness and integration of the system as well as its suppression of and adaptability to error disturbances.
In summary, the prior art has the following disadvantages:
1. For methods that acquire multi-view target images by changing the camera position and shooting angle, the flexibility of the implementation and the universality of the practical application are limited to a certain extent;
2. Methods that collect multi-view target images with several groups of symmetrically distributed reflectors must provide enough arrangement space and adjustment angle to ensure the field of view for three-dimensional reconstruction, sacrificing the compactness and integration of the system as well as its suppression of and adaptability to error disturbances.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art by providing a single-camera multi-view image acquisition system and a three-dimensional reconstruction method based on a rotating bipartite prism.
The purpose of the invention can be realized by the following technical scheme:
a single-camera image acquisition system based on a rotary bipartite prism comprises a camera device and a rotary bipartite prism device, wherein the camera device comprises a camera and a camera support for supporting the camera;
the rotating bipartite prism device comprises a bipartite prism, a prism supporting structure, a rotating mechanism and an outer shell for supporting the rotating bipartite prism device, wherein the bipartite prism is fixedly arranged in the central area of the prism supporting structure, and the output end of the rotating mechanism is connected with the prism supporting structure and is used for driving the prism supporting structure to rotate on a vertical plane;
the detection end of the camera is aligned with the bipartite prism.
Further, the target surface of the camera and the back surface of the bipartite prism satisfy a parallel relationship, and the optical axis of the camera intersects and is perpendicular to the top ridge line opposite to the back surface of the bipartite prism.
Further, the rotating mechanism is a torque motor comprising a torque motor rotor, torque motor brushes and a torque motor stator; the prism supporting structure is connected to the torque motor rotor, and the torque motor stator is mounted on the outer shell.
The invention also provides a three-dimensional reconstruction method of the single-camera image acquisition system based on the rotating bipartite prism, which comprises the following steps:
system construction and parameter calibration: adjusting the position and the posture of the camera device and the rotary bipartite prism device to construct a single-camera imaging system and a working coordinate system thereof; acquiring internal parameters of the camera and an axial distance between the camera and the bipartite prism by using a visual calibration method;
acquiring a multi-view image sequence: the rotating mechanism is controlled to drive the bipartite prism to rotate, and the camera is used for collecting double-view images containing target information at the corner position of each bipartite prism to form a multi-view target image sequence for three-dimensional reconstruction;
stereo matching and cross optimization: deriving a dynamic virtual binocular system model equivalent to the single-camera imaging system from the orientation of the camera visual axis after deflection by the bipartite prism and from the prism rotation angle, establishing the epipolar constraint relation of the dual-view image corresponding to each prism rotation angle position, searching for the homonymous image points in the dual-view image with a window matching algorithm, and cross checking and optimizing the homonymous image points across the dual-view images at different prism rotation angle positions to realize stereo matching;
three-dimensional reconstruction and point cloud filtering: acquiring initial estimation of three-dimensional point cloud of a target according to the homonymous image point of the double-view-angle image corresponding to the corner position of the bipartite prism; and supplementing point cloud information missing from the initially estimated three-dimensional point cloud according to the homonymous image points of the double-view-angle images corresponding to the corner positions of the other bipartite prisms, so as to update the three-dimensional point cloud, and then carrying out noise filtering to obtain a final three-dimensional point cloud reconstruction result.
Further, in the step of system construction and parameter calibration, the step of constructing the single-camera imaging system is specifically to adjust the positions and postures of the camera device and the rotary bipartite prism device so as to ensure the parallel relationship between the camera target surface and the back surface of the bipartite prism, the perpendicular relationship between the camera optical axis and the crest line at the top of the bipartite prism, and the axial distance relationship between the camera and the bipartite prism;
specifically, the working coordinate system of the single-camera imaging system is established, an origin O is fixed at the optical center position of the camera, a Z axis coincides with the optical axis direction of the camera, an X axis and a Y axis are both orthogonal to the Z axis, the X axis corresponds to the line scanning direction of the camera image sensor, and the Y axis corresponds to the column scanning direction of the camera image sensor.
Further, in the stereo matching and cross optimization step, the derivation process of the dynamic virtual binocular system model is specifically as follows:
calculating, by a ray tracing method, the two directions d_L and d_R toward which the camera visual axis points after deflection by the bipartite prism, which are symmetric about the optical axis of the single-camera imaging system, thereby determining the two imaging view angles corresponding to an arbitrary prism rotation angle; and deriving the dynamic virtual binocular system model from the variation of the camera visual axis orientation with the prism rotation angle;
the calculation expression of the direction of the camera visual axis after deflection of the bipartite prism is as follows:
d_o = [0, 0, 1]^T;
the directions d_L and d_R are obtained from d_o by applying the vector refraction law successively at the two refracting surfaces of the bipartite prism traversed by the visual axis, namely the back face (normal vector n_B) and the left or right inclined side face (normal vector n_L or n_R), with the refracted direction inside the prism as an intermediate variable;
in the formulas, d_o is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face, n_B is the normal vector of the back face, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the prism material, and ω is the rotation angle of the bipartite prism;
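For illustration, a minimal numerical sketch of this two-step refraction is given below. The vector form of Snell's law and the face-normal sign conventions (flat back face assumed to face the camera, side-face normals parameterized by α and ω) are assumptions chosen for the sketch, not the patent's exact expressions.

```python
import numpy as np

def refract(d, n_hat, eta):
    """Vector refraction law: unit ray direction d crossing a surface with unit
    normal n_hat pointing toward the incident side, relative refractive index
    eta = n_incident / n_transmitted. Returns the refracted unit direction."""
    cos_i = -np.dot(d, n_hat)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        raise ValueError("total internal reflection")
    return eta * d + (eta * cos_i - np.sqrt(k)) * n_hat

def visual_axis_directions(alpha, n, omega):
    """Directions d_L, d_R of the camera visual axis after deflection by the
    bipartite prism at rotation angle omega (angles in radians). The face-normal
    conventions (back face toward the camera, side-face normals tilted by alpha
    and rotated by omega about the optical axis) are assumptions."""
    d_o = np.array([0.0, 0.0, 1.0])                     # camera optical axis
    n_B = np.array([0.0, 0.0, -1.0])                    # back-face normal (assumed, toward camera)
    n_L = np.array([ np.sin(alpha) * np.cos(omega),
                     np.sin(alpha) * np.sin(omega), -np.cos(alpha)])   # left side face (assumed)
    n_R = np.array([-np.sin(alpha) * np.cos(omega),
                    -np.sin(alpha) * np.sin(omega), -np.cos(alpha)])   # right side face (assumed)
    d_in = refract(d_o, n_B, 1.0 / n)                   # air -> glass at the back face
    d_L = refract(d_in, n_L, n)                         # glass -> air at the left side face
    d_R = refract(d_in, n_R, n)                         # glass -> air at the right side face
    return d_L, d_R
```

With α = 5° and n = 1.52 as in the embodiment, this sketch gives a deflection of roughly (n − 1)·α ≈ 2.6° on each side of the optical axis, as expected for a thin prism.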
the dynamic virtual binocular system model comprises a left virtual camera and a right virtual camera, and the rotation matrices and translation vectors of the left and right virtual cameras relative to the actual camera under an arbitrary prism rotation angle ω are determined as follows:
R_L(ω) = Rot(d_o × d_L, arccos(d_o · d_L)),  R_R(ω) = Rot(d_o × d_R, arccos(d_o · d_R));
the translation vectors t_L(ω) and t_R(ω) are determined by the same refraction geometry and by the distance g;
in the formulas, R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera under an arbitrary prism rotation angle ω, t_L(ω) is the corresponding translation vector of the left virtual camera, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera, t_R(ω) is the corresponding translation vector of the right virtual camera, Rot(·, ·) denotes a rotation about the axis defined by the outer (cross) product of the two vectors through the angle between them determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
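The operator Rot(·, ·) can be realized with the Rodrigues rotation formula. The sketch below shows one such realization, building the rotation that turns d_o onto d_L (or d_R); treating this single rotation as R_L(ω) is an assumed interpretation consistent with the description of Rot above, not the patent's verbatim expression, and the translation vectors are not modeled here.

```python
import numpy as np

def rot_between(a, b):
    """Rotation matrix turning unit vector a onto unit vector b, built with the
    Rodrigues formula: axis = a x b (normalized), angle = arccos(a . b)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)            # sin(angle)
    c = np.dot(a, b)                    # cos(angle)
    if s < 1e-12:                       # parallel vectors: identity (180 deg case not handled)
        return np.eye(3)
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix of the unit axis
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# Example (assumed interpretation): R_L(omega) = Rot(d_o x d_L, angle between d_o and d_L)
d_o = np.array([0.0, 0.0, 1.0])
d_L = np.array([0.0456, 0.0, 0.9990])   # e.g. output of the ray-tracing sketch above (alpha = 5 deg, n = 1.52, omega = 0)
R_L = rot_between(d_o, d_L)
```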
Further, in the stereo matching and cross optimization step, the dynamic virtual binocular system model includes a basic matrix between the left virtual camera and the right virtual camera under an arbitrary prism rotation angle ω, and the basic matrix is calculated as follows:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L − R_L(ω) R_R(ω)^(-1) t_R
in the formulas, A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix between the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR;
multiplying the basic matrix F(ω) of the left and right virtual cameras by the homogeneous coordinates of an image point contained in one half of the dual-view image yields the position of the corresponding epipolar line in the other half of the image, thereby giving the epipolar constraint relation.
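A minimal sketch of evaluating the basic matrix F(ω) and the resulting epipolar line from the relations quoted above; it assumes the virtual camera poses (R_L, t_L), (R_R, t_R) and the internal parameter matrix A_int are already available from the previous steps.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def basic_matrix(A_int, R_L, t_L, R_R, t_R):
    """F(omega) between the left and right virtual cameras, following the
    relations quoted above: R_LR = R_L R_R^(-1), t_LR = t_L - R_LR t_R,
    F = A_int^(-T) R_LR^(-1) [t_LR]_x A_int^(-1)."""
    R_LR = R_L @ np.linalg.inv(R_R)
    t_LR = t_L - R_LR @ t_R
    A_inv = np.linalg.inv(A_int)
    return A_inv.T @ np.linalg.inv(R_LR) @ skew(t_LR) @ A_inv

def epipolar_line(F, p):
    """Homogeneous epipolar line l = F @ p~ (coefficients a, b, c of
    a*u + b*v + c = 0) in the other half of the dual-view image for an
    image point p = (u, v) of one half."""
    return F @ np.array([p[0], p[1], 1.0])
```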
Further, in the stereo matching and cross optimization step, the cross checking and optimization specifically comprises filtering out homonymous image points that deviate too far from the epipolar line intersection, according to the principle that a homonymous image point theoretically lies at the intersection of the multiple epipolar lines.
Further, in the three-dimensional reconstruction and point cloud filtering step, the initial estimate of the three-dimensional point cloud is obtained as follows: each point P_i of the three-dimensional point cloud set {P_i | i ∈ ℕ+} (ℕ+ being the set of positive integers) is recovered by triangulating the projection rays defined by the homogeneous pixel coordinates of the homonymous image points contained in the left and right halves of the dual-view image corresponding to the prism rotation angle position, where λ_L and λ_R, the scale factors of the corresponding projection ray vectors, are eliminated by solving the projection relations as a simultaneous system of equations.
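A minimal linear-triangulation sketch for this initial estimate is given below. It assumes that each homonymous image point obeys a pinhole projection of the form λ · A_int^(-1) · p~ = R(ω) P + t(ω) in its virtual camera (an assumed form consistent with the scale factors λ_L, λ_R described above), and solves the two relations jointly in a least-squares sense instead of eliminating the scale factors symbolically.

```python
import numpy as np

def triangulate(A_int, p_L, p_R, R_L, t_L, R_R, t_R):
    """Recover a 3-D point P_i from one pair of homonymous image points
    p_L, p_R (pixel coordinates in the left/right half of the dual-view image),
    assuming the projection model lambda * A_int^(-1) * p~ = R P + t for each
    virtual camera. The unknown scale factors are removed with a cross-product
    trick and P_i is obtained by linear least squares."""
    A_inv = np.linalg.inv(A_int)
    rows, rhs = [], []
    for p, R, t in ((p_L, R_L, t_L), (p_R, R_R, t_R)):
        ray = A_inv @ np.array([p[0], p[1], 1.0])   # back-projected ray direction
        C = np.array([[0.0, -ray[2], ray[1]],
                      [ray[2], 0.0, -ray[0]],
                      [-ray[1], ray[0], 0.0]])      # [ray]_x
        rows.append(C @ R)                          # [ray]_x (R P + t) = 0
        rhs.append(-C @ t)
    P, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
    return P
```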
Further, in the three-dimensional reconstruction and point cloud filtering step, the noise filtering specifically includes performing point cloud filtering according to the deviation of the three-dimensional point cloud before and after updating, and the calculation process of the point cloud filtering at each time is represented as:
{P_i^filter} = { P_i^update : ‖P_i^update − P_i^estimate‖ ≤ ε }
in the formula, {P_i^filter} is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is the corresponding element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate.
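A minimal sketch of this deviation-threshold filter, assuming the initial and updated point clouds are stored as index-aligned arrays (one row per homonymous image point):

```python
import numpy as np

def filter_point_cloud(P_estimate, P_update, eps):
    """Deviation-threshold filter: keep an updated point only if it deviates
    from its initial estimate by at most eps; larger deviations are treated as
    noise. P_estimate, P_update: (N, 3) arrays aligned by homonymous image
    point index; returns the filtered points and the boolean keep mask."""
    dev = np.linalg.norm(P_update - P_estimate, axis=1)
    keep = dev <= eps
    return P_update[keep], keep

# Usage sketch (eps = 1 mm as in the embodiment, assuming coordinates in mm):
# P_filtered, mask = filter_point_cloud(P_estimate, P_update, eps=1.0)
```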
Compared with the prior art, the invention has the following advantages:
(1) The invention introduces a rotating bipartite prism device in front of a single camera; through the beam splitting effect of the bipartite prism, the single camera synchronously acquires image information from two symmetric view angles while the overall structure remains compact. Driving the bipartite prism to rotate with the rotating mechanism effectively enlarges the visual axis pointing range and field of view of the imaging system, which alleviates to some extent the information loss caused by motion, occlusion and similar factors; capturing multi-view target information with this dynamic binocular vision system effectively improves the precision, efficiency, implementation flexibility and dynamic adaptability of single-camera multi-view stereo matching and three-dimensional reconstruction.
(2) The method combines the traditional stereoscopic vision calculation theory and the dynamic virtual binocular system model, realizes simplified description of the single-camera multi-view imaging process and efficient processing of redundant image information, and can effectively improve the precision, flexibility and adaptability of single-camera three-dimensional reconstruction.
(3) The invention uses the multi-epipolar-line constraints and cross checking of the multi-view image sequence, which not only screens out wrongly matched homonymous image points but also supplements homonymous image points missing at a particular view angle; this improves the accuracy and speed of multi-view stereo matching at low computational cost, and in particular offers an effective solution to stereo matching in weakly textured regions.
(4) The invention requires no camera motion, depends on no cooperative markers of any form, and introduces no optical elements with complex structures; multi-view image capture and three-dimensional reconstruction are realized solely by the rotary motion of the refractive bipartite prism, which preserves the structural compactness and disturbance resistance of the imaging system and provides a potential technical approach for application fields such as pattern recognition and product inspection.
Drawings
FIG. 1 is an isometric view of the appearance of a single camera image acquisition system;
FIG. 2 is an assembly view of the structure of the rotary bipartite prism device, in which (a) is a front view of the rotary bipartite prism device and (b) is a sectional view taken along line A-A in (a);
fig. 3 is a schematic structural diagram of a bipartite prism, in which: (a) (b), (c) and (d) are respectively a front view, a left view, a top view and an axonometric view;
FIG. 4 is a schematic view of a prism support structure, wherein: (a) is a front view, and (B) is a sectional view B-B in (a);
fig. 5 is a schematic structural diagram of the outer shell, wherein: (a) is a front view, and (b) is a cross-sectional view of C-C in (a);
FIG. 6 is a basic flow diagram of a three-dimensional reconstruction method;
FIG. 7 is a schematic diagram of a dynamic virtual binocular model;
FIG. 8 is a schematic diagram of a multi-view image sequence stereo matching process;
In the figures: 1, camera device; 11, camera; 12, camera support; 2, rotary bipartite prism device; 21, bipartite prism; 22, prism support structure; 23, torque motor rotor; 24, torque motor brush; 25, torque motor stator; 26, outer shell.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
This embodiment provides a single-camera image acquisition system based on a rotating bipartite prism, comprising a camera device and a rotating bipartite prism device. The rotating bipartite prism device changes the propagation direction of the imaging rays within the camera field of view so as to generate two symmetric imaging view angles, and the camera device synchronously acquires and records the target image information under these two imaging view angles. The camera device comprises a camera and a camera support, the camera support being used to adjust the pose and angle of the camera. The rotating bipartite prism device comprises a bipartite prism assembly, a rotating mechanism and an outer shell; the rotating mechanism drives the bipartite prism assembly to rotate, and the outer shell supports the rotating mechanism and protects the bipartite prism assembly. The axial distance between the camera device and the rotating bipartite prism device may be adjusted within a certain range, providing additional degrees of freedom for multi-view image capture and three-dimensional reconstruction.
Furthermore, the bipartite prism assembly comprises a bipartite prism and a prism support structure, the bipartite prism is arranged in the central area of the prism support structure in a glue bonding mode or a spring plate fixing mode, and the prism support structure is used for fixing and supporting the bipartite prism.
Furthermore, the rotating mechanism adopts a torque motor direct-drive scheme, or alternatively a gear transmission, synchronous belt transmission or worm-and-gear transmission scheme; the torque motor comprises a rotor and a stator. The bipartite prism assembly is connected to the torque motor rotor through the prism supporting structure by a threaded connection, and the torque motor stator is mounted on the outer shell by a threaded connection; the torque motor drives the bipartite prism assembly to rotate inside the outer shell.
Furthermore, the target surface of the camera is parallel to the back surface of the bipartite prism, the optical axis of the camera intersects, and is perpendicular to, the top ridge line opposite the back surface of the bipartite prism, and the camera field of view is not occluded by the rotating bipartite prism device.
The embodiment further provides a three-dimensional reconstruction method adopting the single-camera image acquisition system based on the rotating bipartite prism, which comprises the following steps:
s1, system construction and parameter calibration: constructing a single-camera imaging system and a working coordinate system thereof according to the relative position relationship between the camera and the bipartite prism, and acquiring internal parameters of the camera and the distance between the camera and the bipartite prism in the optical axis direction by using a visual calibration method;
s2, multi-view image sequence acquisition: the rotation angle change of the bipartite prism is realized by controlling the rotating mechanism, and a camera is utilized to collect double-view images containing target information at the rotation angle position of each bipartite prism to generate a multi-view target image sequence for three-dimensional reconstruction;
s3, stereo matching and cross optimization: establishing a polar constraint relation of the dual-view images corresponding to the corner positions of each bipartite prism by combining a dynamic virtual binocular system model, searching homonymous image points contained in the dual-view images through a window matching algorithm, and simultaneously performing cross check and optimizing a stereo matching result of the multi-view image sequence by using multi-polar constraint provided by image sequences corresponding to different bipartite prism corners;
s4, three-dimensional reconstruction and point cloud filtering: utilizing the homonymous image points contained in the collected image at the corner position of the specific bipartite prism, and calculating and recovering the position coordinates of the corresponding target point by combining the triangulation principle to obtain the initial estimation of the three-dimensional point cloud; and then, redundant stereo matching provided by images acquired at the corner positions of other bipartite prisms is utilized to supplement point cloud information missing in initial estimation, and noise possibly existing in the three-dimensional point cloud is gradually filtered.
Further, the step S1 specifically includes:
s11, constructing an imaging system consisting of a single camera and a rotary bipartite prism device, and sequentially adjusting the postures of the camera and the bipartite prism device to ensure the parallel relation between the target surface of the camera and the back surface of the bipartite prism, the vertical relation between the optical axis of the camera and the crest line at the top of the bipartite prism and the axial distance relation between the camera and the bipartite prism;
s12, establishing a working coordinate system O-XYZ of the imaging system, fixing an origin O at the optical center position of the camera, enabling a Z axis to coincide with the optical axis direction of the camera, enabling an X axis and a Y axis to be orthogonal to the Z axis, and enabling the X axis and the Y axis to respectively correspond to the row scanning direction and the column scanning direction of the camera image sensor;
s13, acquiring internal parameters of the camera and distortion coefficients of the lens by adopting a traditional vision calibration method such as a Zhangyingyou calibration method, a direct linear transformation method or a two-step calibration method, and adjusting the axial distance between the camera and the bipartite prism by the aid of measuring tools such as a vernier caliper and a laser interferometer.
Further, in step S2, the bipartite prism assembly is driven by the rotating mechanism to successively reach m rotation angle positions, and the camera is triggered to acquire the corresponding dual-view image immediately after the bipartite prism reaches each specified rotation angle position, wherein the motion control of the rotating mechanism and the image acquisition triggering of the camera are both realized by software.
Further, the step S3 specifically includes:
s31, calculating the direction of the camera visual axis after deflection of the bipartite prism by using a ray tracing method, and determining two imaging visual angles corresponding to the rotation angle of any bipartite prism;
s32, deducing a dynamic virtual binocular system model equivalent to the imaging system according to the change relation of the camera visual axis orientation along with the rotation angle of the bipartite prism, and determining the position posture and the motion rule of the virtual binocular system;
s33, calculating a basic matrix and a change rule of the dynamic virtual binocular system according to internal parameters and external parameters of the virtual binocular system under any bipartite prism corner by referring to a traditional binocular vision theory;
s34, deriving epipolar constraint relations among the dual-view images collected by the system under any bipartite prism corner according to the basic matrix of the dynamic virtual binocular system, and thus constructing multi-epipolar constraint relations of multi-view image sequences corresponding to different bipartite prism corners;
s35, polar line constraint between the left virtual camera and the right virtual camera at a specific prism corner position is utilized, meanwhile, a proper window matching algorithm is combined to search for the homonymous image points contained in the dual-view image, polar line constraint of the homonymous image points in the dual-view image corresponding to other prism corner positions is determined on the basis, and the homonymous image points with overlarge polar line intersection deviation are filtered according to the principle that the homonymous image points are theoretically located at the intersection point positions of a plurality of polar lines.
Further, in step S31, the camera visual axis after deflection by the bipartite prism points in two directions d_L and d_R that are symmetric about the system optical axis, and these directions are obtained by the ray tracing method:
the directions d_L and d_R are obtained from d_o = [0, 0, 1]^T by applying the vector refraction law successively at the two refracting surfaces of the bipartite prism traversed by the visual axis, namely the back face (normal vector n_B) and the left or right inclined side face (normal vector n_L or n_R), with the refracted direction inside the prism as an intermediate variable; here d_o is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face, n_B is the normal vector of the back face, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the prism material, and ω is the rotation angle of the bipartite prism. The normal vectors of the side faces are determined by the included angle α and the rotation angle ω, and the normal vector of the back face lies along the optical axis direction.
further, in step S32, the dynamic virtual binocular system is composed of two symmetrically distributed virtual cameras, and is used to simplify and describe the process of acquiring the dual-view image by the cameras under the action of the rotating bipartite prism device; the internal parameters of the two virtual cameras are completely the same as those of the actually used cameras, the external parameters of the two virtual cameras mainly depend on the structural parameters and the motion parameters of the rotating bipartite prism, and the external parameters are expressed as follows under the rotating angle omega of any bipartite prism:
R_L(ω) = Rot(d_o × d_L, arccos(d_o · d_L)),  R_R(ω) = Rot(d_o × d_R, arccos(d_o · d_R));
the translation vectors t_L(ω) and t_R(ω) are determined by the same refraction geometry and by the distance g;
in the formulas, R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera under an arbitrary prism rotation angle ω, t_L(ω) is the corresponding translation vector of the left virtual camera, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera, t_R(ω) is the corresponding translation vector of the right virtual camera, Rot(·, ·) denotes a rotation about the axis defined by the outer (cross) product of the two vectors through the angle between them determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
Further, in step S33, a basic matrix exists between the left and right virtual cameras included in the dynamic virtual binocular system at any bipartite prism rotation angle ω:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L − R_L(ω) R_R(ω)^(-1) t_R
in the formulas, A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix between the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR.
Further, in step S34, multiplying the basic matrix F(ω) of the left and right virtual cameras by the homogeneous coordinates of an image point contained in one half of the dual-view image yields the position of the corresponding epipolar line in the other half of the image; similarly, from the relation between the left and right virtual camera poses and the prism rotation angle, the basic matrix and the corresponding epipolar line positions between any two virtual camera poses under the m prism rotation angles can be obtained, thereby generating a series of redundant stereo matching constraints.
Further, in step S35, the window matching algorithm may be chosen from the sum of absolute differences (SAD) algorithm, the sum of squared differences (SSD) algorithm, the normalized cross-correlation (NCC) algorithm, and the like.
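For illustration, a simplified SAD window-matching sketch is given below; it assumes grayscale half-images and restricts the search to the same image row (i.e. a rectified or horizontal epipolar geometry), whereas the method above searches along the epipolar lines given by F(ω).

```python
import numpy as np

def sad_match(left, right, row, col, half_win=5, search=60):
    """Find the column of the homonymous image point of left[row, col] in the
    right half-image by minimizing the sum of absolute differences (SAD) over
    a square window. Simplifying assumption: the search runs along the same
    image row; border handling is omitted for brevity."""
    w = right.shape[1]
    r0, r1 = row - half_win, row + half_win + 1
    ref = left[r0:r1, col - half_win:col + half_win + 1].astype(np.float32)
    best_col, best_cost = None, np.inf
    for c in range(max(half_win, col - search), min(w - half_win, col + search)):
        cand = right[r0:r1, c - half_win:c + half_win + 1].astype(np.float32)
        cost = np.abs(ref - cand).sum()          # SAD cost of the candidate window
        if cost < best_cost:
            best_cost, best_col = cost, c
    return best_col, best_cost
```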
Further, the step S4 specifically includes:
s41, calculating initial three-dimensional point cloud distribution of the target by utilizing a triangulation principle according to a result of stereo matching of the dual-view image acquired at the first prism corner position;
s42, collecting each corresponding double-view-angle image at the corner positions of other prisms, and updating the three-dimensional point cloud information of the target by using a triangulation principle after completing stereo matching;
and S43, comparing the initial three-dimensional point cloud with the updated three-dimensional point cloud, supplementing data which are not contained in the initial estimation, continuously correcting and optimizing the three-dimensional point cloud corresponding to the image point with the same name by utilizing the gradually introduced redundant information, and filtering the data with larger deviation before and after updating as noise.
Further, in step S41, the corresponding three-dimensional point cloud is calculated from the stereo matching result of the dual-view image: each point P_i of the three-dimensional point cloud set {P_i | i ∈ ℕ+} (ℕ+ being the set of positive integers) is recovered by triangulating the projection rays defined by the homogeneous pixel coordinates of the homonymous image points contained in the left and right halves of the dual-view image; λ_L and λ_R, the scale factors of the corresponding projection ray vectors, are eliminated by solving the projection relations as a simultaneous system of equations.
Further, in step S43, point cloud filtering is performed according to the deviation of the three-dimensional point cloud before and after updating, and each filtering calculation process is represented as:
{P_i^filter} = { P_i^update : ‖P_i^update − P_i^estimate‖ ≤ ε }
in the formula, {P_i^filter} is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is the corresponding element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate.
The embodiment also provides a specific implementation process of the single-camera image acquisition system and the three-dimensional reconstruction method based on the rotating bipartite prism, which are respectively described in detail below.
Single-camera image acquisition system based on rotary bipartite prism
As shown in fig. 1 to 5, the present embodiment provides a single-camera image capturing system based on a rotating bipartite prism, which includes a camera device and a rotating bipartite prism device. The camera device comprises a camera and a camera support, and the rotary bipartite prism device comprises a bipartite prism, a prism support structure, a rotary mechanism and an outer shell.
The camera device 1 specifically includes a camera 11 and a camera mount 12. The camera 11 is adjusted in position and attitude by the camera mount 12 with the camera target surface parallel to the back surface of the half prism 21 and the viewing axis directed perpendicular to the top ridge of the half prism 21. Parameters such as the focal length, the field angle and the depth of field of the camera 11 must be reasonably matched with parameters such as the included angle between the side surface and the back surface of the bipartite prism 21 and the refractive index so as to avoid the problem of field shielding.
The rotary bipartite prism device 2 comprises a bipartite prism assembly, a rotary mechanism and an outer shell. The bipartite prism assembly comprises a bipartite prism 21 and a prism support structure 22, wherein the bipartite prism 21 is installed on a rectangular installation surface of the central area of the prism support structure 22 in a spring fixing or glue bonding mode, and the prism support structure 22 is provided with an arc-shaped slot hole in the circumferential direction to reduce the moment of inertia.
The rotating mechanism adopts a torque motor direct drive mode or a gear drive mode, a synchronous belt drive mode, a worm and gear drive mode and the like, and the torque motor direct drive mode is selected in the embodiment. The torque motor mainly comprises a rotor 23, a brush 24 and a stator 25, specifically, the bipartite prism assembly is fixedly connected with the rotor 23 of the torque motor in a threaded connection mode, and the stator 25 of the torque motor is fixed on the end face of the outer shell 26 in a threaded connection mode.
The outer housing 26 provides fixing and protecting functions for the bipartite prism assembly and the torque motor, and the torque motor drives the bipartite prism assembly inside to rotate.
The axial distance between the camera device 1 and the rotary bipartite prism device 2 can be dynamically adjusted according to specific application occasions and requirements, longitudinal change freedom degree is provided for a multi-view image capturing process, and richer image information is provided for a three-dimensional calculation reconstruction process.
This embodiment introduces a rotating bipartite prism device in front of the camera; through the beam splitting effect and the full-circle rotary motion of the bipartite prism, the visual axis pointing and the imaging view angle of the camera can be adjusted arbitrarily, so that a multi-view target image sequence containing rich information is acquired, which effectively improves the precision and efficiency of multi-view stereo matching and three-dimensional reconstruction. Compared with existing single-camera three-dimensional reconstruction systems that use cooperative markers or mirror groups, the system of this embodiment needs no cooperative markers as prior information and introduces no reflective elements sensitive to error disturbance, and therefore achieves better structural compactness, imaging flexibility and environmental adaptability.
Single-camera multi-view three-dimensional reconstruction method based on rotating bipartite prism
As shown in fig. 6 to 8, the present embodiment provides a three-dimensional reconstruction method using the above single-camera image acquisition system based on a rotating bipartite prism, which specifically includes the following steps:
s1, system construction and parameter calibration
S11, constructing an imaging system consisting of the camera device 1 and the rotary bipartite prism device 2, and sequentially adjusting the postures of the camera 11 and the bipartite prism 21 to ensure the parallel relation between the target surface of the camera and the back surface of the bipartite prism, the vertical relation between the optical axis of the camera and the crest line of the top of the bipartite prism and the axial distance relation between the camera and the bipartite prism;
s12, establishing a working coordinate system O-XYZ of the imaging system, fixing an origin O at the optical center position of the camera, enabling a Z axis to coincide with the optical axis direction of the camera, enabling an X axis and a Y axis to be orthogonal to the Z axis, and enabling the X axis and the Y axis to respectively correspond to the row scanning direction and the column scanning direction of the camera image sensor;
s13, obtaining internal parameters of the camera and distortion coefficients of the lens by using a traditional visual calibration method such as a zhangzhengyou calibration method, a direct linear transformation method, or a two-step calibration method, which is adopted in this embodiment; determining the axial distance between the camera and the bipartite prism through measuring tools such as a vernier caliper and a laser interferometer, wherein the vernier caliper is adopted in the embodiment; the calibration method and the measurement method are mature methods in the prior art, and are not developed any more.
S2, Multi-view image sequence acquisition
The motion of the rotating mechanism is controlled by software so that the bipartite prism is rotated in sequence to m = 3 rotation angle positions, recorded as ω_1 = 0°, ω_2 = 45° and ω_3 = 90°; when the bipartite prism reaches each specified rotation angle position, the camera is triggered by software to acquire the corresponding dual-view image containing the target information, finally yielding the multi-view target image sequence used for three-dimensional reconstruction.
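For illustration, a sketch of this acquisition control flow is given below; motor.rotate_to() and camera.grab() are hypothetical interfaces standing in for the actual torque motor control and software camera trigger, and the settle time is an assumed value.

```python
import time

PRISM_ANGLES_DEG = [0.0, 45.0, 90.0]     # m = 3 rotation angle positions omega_1, omega_2, omega_3

def acquire_sequence(motor, camera, settle_s=0.2):
    """Drive the bipartite prism to each prescribed rotation angle and trigger
    the camera to record the corresponding dual-view image. motor.rotate_to()
    and camera.grab() are hypothetical interfaces; settle_s is an assumed
    settling delay before triggering."""
    images = {}
    for omega in PRISM_ANGLES_DEG:
        motor.rotate_to(omega)           # command the torque motor to the angle position
        time.sleep(settle_s)             # wait for the prism to come to rest
        images[omega] = camera.grab()    # software-triggered image acquisition
    return images
```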
S3, stereo matching and cross optimization
Establishing a polar constraint relation of the dual-view images corresponding to the corner positions of each bipartite prism by combining a dynamic virtual binocular system model, searching homonymous image points contained in the dual-view images through a window matching algorithm, and simultaneously performing cross check and optimizing a stereo matching result of the multi-view image sequence by using multi-polar constraint provided by image sequences corresponding to different bipartite prism corners;
s31, calculating the direction of the camera visual axis after deflection of the bipartite prism by using a ray tracing method, and determining two imaging visual angles corresponding to the rotation angle of any bipartite prism; the camera visual axis deflected by the bipartite prism is directed in two directions d symmetrical with respect to the system optical axis directionLAnd dRDerived from the vector refraction law:
Figure BDA0002715734980000131
wherein
Figure BDA0002715734980000132
And
Figure BDA0002715734980000133
are all intermediate variables, do=[0,0,1]TIs the optical axis direction of a single-camera imaging system, nLIs the normal vector of the left side of the bipartite prism, nRIs the normal vector of the right side of the bipartite prism, nBThe normal vector of the back of the bipartite prism is the included angle between the side surface and the back of the bipartite prism, n is the refractive index of the material used by the bipartite prism, and omega is the rotation angle of the bipartite prism; the normal vectors of the side surface and the back surface of the bipartite prism are respectively as follows:
Figure BDA0002715734980000134
in this embodiment, the angle between the side surface and the back surface of the bipartite prism is α 5 °, and the refractive index n is 1.52.
S32, deducing a dynamic virtual binocular system model equivalent to the imaging system according to the change relation of the camera visual axis orientation along with the rotation angle of the bipartite prism; the dynamic virtual binocular system consists of two virtual cameras which are symmetrically distributed and is used for simplifying and describing the process of acquiring the double-view-angle images by the cameras under the action of the rotating bipartite prism device; the internal parameters of the two virtual cameras are completely the same as those of the actually used cameras, the external parameters of the two virtual cameras mainly depend on the structural parameters and the motion parameters of the rotating bipartite prism, and the external parameters are expressed as follows under the rotating angle omega of any bipartite prism:
R_L(ω) = Rot(d_o × d_L, arccos(d_o · d_L)),  R_R(ω) = Rot(d_o × d_R, arccos(d_o · d_R));
the translation vectors t_L(ω) and t_R(ω) are determined by the same refraction geometry and by the distance g;
in the formulas, R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera under an arbitrary prism rotation angle ω, t_L(ω) is the corresponding translation vector of the left virtual camera, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera, t_R(ω) is the corresponding translation vector of the right virtual camera, Rot(·, ·) denotes a rotation about the axis defined by the outer (cross) product of the two vectors through the angle between them determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
S33, referring to the traditional binocular vision theory, according to the internal parameters and the external parameters of the virtual binocular system under the corner omega of any bipartite prism, the left virtual camera and the right virtual camera contained in the dynamic virtual binocular system meet the basic matrix:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L − R_L(ω) R_R(ω)^(-1) t_R
in the formulas, A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix between the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR.
S34, multiplying the basic matrix F(ω) of the left and right virtual cameras in the dynamic virtual binocular system, at the prism rotation angle ω_1 = 0°, by the homogeneous coordinates of the image points contained in the left and right halves of the acquired image yields the positions of the corresponding epipolar lines in the right and left halves of the image, respectively; then, from the relation between the left and right virtual camera poses and the prism rotation angle, the basic matrix and the corresponding epipolar line positions between any two virtual camera poses under different prism rotation angles can be obtained, thereby generating a series of redundant stereo matching constraints.
S35, using the epipolar constraint between the left and right virtual cameras at the prism rotation angle ω_1 = 0°, and combining a window matching algorithm such as the sum of absolute differences (SAD) algorithm, the sum of squared differences (SSD) algorithm or the normalized cross-correlation (NCC) algorithm (the SAD algorithm is adopted in this embodiment), the homonymous image points p_i^L1 and p_i^R1 contained in the dual-view image are found; on this basis, the epipolar constraints of p_i^L1 and p_i^R1 within the images acquired at ω_2 = 45° and ω_3 = 90° are determined, so that each group of homonymous image points determines 5 corresponding epipolar line positions; according to the principle that the homonymous image point of each view angle theoretically lies at the intersection of its 5 corresponding epipolar lines, homonymous image points whose deviation from the epipolar line intersection exceeds the threshold δ = 0.5 pixel are filtered out. A small sketch of this cross check follows.
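For illustration, a minimal sketch of the cross check is given below; it approximates "deviation from the epipolar line intersection" by the distance from the candidate point to each predicted epipolar line, which is a simplifying assumption.

```python
import numpy as np

def point_line_distance(p, line):
    """Distance (in pixels) from image point p = (u, v) to the homogeneous
    epipolar line line = (a, b, c), where a*u + b*v + c = 0."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def cross_check(p, epipolar_lines, delta=0.5):
    """Keep the candidate homonymous image point p only if it lies within
    delta pixels of every epipolar line predicted from the other prism
    rotation angles (the point should sit at their common intersection)."""
    return all(point_line_distance(p, line) <= delta for line in epipolar_lines)
```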
S4, three-dimensional reconstruction and point cloud filtering
S41, using the homonymous image points p_i^L1 and p_i^R1 of the image acquired at the prism rotation angle ω_1 = 0° and combining the triangulation principle, the position coordinates of the corresponding target points are calculated and recovered to obtain the initial estimate of the three-dimensional point cloud; each target point P_i of the three-dimensional point cloud set {P_i | i ∈ ℕ+} (ℕ+ being the set of positive integers) is recovered by triangulating the projection rays defined by the homogeneous pixel coordinates of p_i^L1 and p_i^R1 in the left and right halves of the dual-view image, and the scale factors λ_L and λ_R of the corresponding projection ray vectors are eliminated by solving the projection relations as a simultaneous system of equations;
S42, recalculating the three-dimensional point cloud in turn from the stereo matching results of the images acquired at the prism rotation angles ω_2 = 45° and ω_3 = 90°, using the same calculation method as in step S41;
s43, comparing the previous three-dimensional point cloud with the updated three-dimensional point cloud after each calculation is completed, and supplementing the originally contained data; gradually correcting and optimizing the distribution condition of the three-dimensional target point cloud by utilizing the corresponding relation of the image points with the same name, and simultaneously, taking the data with larger deviation before and after updating as noise for filtering, wherein the point cloud filtering process is expressed as follows:
{P_i^filter} = { P_i^update : ‖P_i^update − P_i^estimate‖ ≤ ε }
in the formula, {P_i^filter} is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is the corresponding element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate; in this embodiment the deviation threshold is ε = 1 mm.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions that can be obtained by a person skilled in the art through logical analysis, reasoning or limited experiments based on the prior art according to the concepts of the present invention should be within the scope of protection determined by the claims.

Claims (9)

1. A three-dimensional reconstruction method of a single-camera image acquisition system based on a rotary bipartite prism is characterized in that the single-camera image acquisition system based on the rotary bipartite prism comprises a camera device (1) and a rotary bipartite prism device (2), wherein the camera device (1) comprises a camera (11) and a camera support (12) for supporting the camera;
the rotary bipartite prism device (2) comprises a bipartite prism (21), a prism supporting structure (22), a rotating mechanism and an outer shell (26) for supporting the rotary bipartite prism device, wherein the bipartite prism (21) is fixedly arranged in the central area of the prism supporting structure (22), and the output end of the rotating mechanism is connected with the prism supporting structure (22) and is used for driving the prism supporting structure (22) to rotate on a vertical plane;
the detection end of the camera (11) is aligned with the bipartite prism (21);
the three-dimensional reconstruction method comprises the following steps:
system construction and parameter calibration: adjusting the position and the posture of the camera device and the rotary bipartite prism device to construct a single-camera imaging system and a working coordinate system thereof; acquiring internal parameters of the camera and an axial distance between the camera and the bipartite prism by using a visual calibration method;
acquiring a multi-view image sequence: the rotating mechanism is controlled to drive the bipartite prism to rotate, and the camera is used for collecting double-view images containing target information at the corner position of each bipartite prism to form a multi-view target image sequence for three-dimensional reconstruction;
stereo matching and cross optimization: deriving a dynamic virtual binocular system model equivalent to the single-camera imaging system from the orientation of the camera visual axis after deflection by the bipartite prism and from the prism rotation angle, establishing the epipolar constraint relation of the dual-view image corresponding to each prism rotation angle position, searching for the homonymous image points in the dual-view image with a window matching algorithm, and cross checking and optimizing the homonymous image points across the dual-view images at different prism rotation angle positions to realize stereo matching;
three-dimensional reconstruction and point cloud filtering: acquiring initial estimation of three-dimensional point cloud of a target according to the homonymous image point of the double-view-angle image corresponding to the corner position of the bipartite prism; and supplementing point cloud information missing from the initially estimated three-dimensional point cloud according to the homonymous image points of the double-view-angle images corresponding to the corner positions of the other bipartite prisms, so as to update the three-dimensional point cloud, and then carrying out noise filtering to obtain a final three-dimensional point cloud reconstruction result.
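As an aside for the reader (not part of the claims), each dual-view image acquired above can be handled as a single camera frame whose left and right halves correspond to the two virtual views. A minimal NumPy sketch, assuming the prism ridge projects onto the vertical centre line of the sensor:

import numpy as np

def split_dual_view(frame):
    # Split one frame captured through the bipartite prism into its left and
    # right half images (the two virtual views). The assumption that the ridge
    # maps to the centre column of the sensor is illustrative only.
    h, w = frame.shape[:2]
    return frame[:, : w // 2], frame[:, w // 2 :]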
2. The method according to claim 1, characterized in that the target surface of the camera (11) is parallel to the back surface of the bipartite prism (21), and the optical axis of the camera (11) intersects, and is perpendicular to, the top ridge line opposite to the back surface of the bipartite prism (21).
3. The method of claim 1, wherein the rotating mechanism is a torque motor comprising a torque motor rotor (23), a torque motor brush (24), and a torque motor stator (25), the prism support structure (22) being connected to the torque motor rotor (23), the torque motor stator (25) being mounted on the outer housing (26).
4. The method according to claim 1, wherein in the system construction and parameter calibration step, the position and posture of the camera device and the rotating bipartite prism device are adjusted to ensure the parallel relationship between the camera target surface and the back surface of the bipartite prism, the perpendicular relationship between the camera optical axis and the top ridge line of the bipartite prism, and the axial distance relationship between the camera and the bipartite prism;
specifically, the working coordinate system of the single-camera imaging system is established with its origin O fixed at the optical center of the camera; the Z axis coincides with the optical axis direction of the camera, the X axis and the Y axis are both orthogonal to the Z axis, the X axis corresponds to the row scanning direction of the camera image sensor, and the Y axis corresponds to the column scanning direction of the camera image sensor.
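For illustration only, the camera intrinsic parameters mentioned in the calibration step can be obtained with a standard planar-target calibration. The sketch below uses OpenCV and a checkerboard, which are assumptions (the claim only requires a visual calibration method, and the axial camera-prism distance is determined separately and not shown here).

import glob
import cv2
import numpy as np

def calibrate_intrinsics(image_glob, pattern_size=(9, 6), square_mm=10.0):
    # Zhang-style calibration from checkerboard images taken without the prism in place.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    # rms: reprojection error; K: intrinsic matrix A_int; dist: lens distortion coefficients
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    return K, dist, rms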
5. The method according to claim 1, wherein in the stereo matching and cross optimization step, the dynamic virtual binocular system model is derived as follows:
calculating, by the ray tracing method, the two directions d_L and d_R toward which the camera visual axis points after deflection by the bipartite prism, which are symmetrical about the optical axis of the single-camera imaging system, thereby determining the two imaging visual angles corresponding to an arbitrary rotation angle of the bipartite prism; and deriving the dynamic virtual binocular system model according to the variation of the camera visual axis orientation with the rotation angle of the bipartite prism;
the calculation expression of the direction of the camera visual axis after deflection of the bipartite prism is as follows:
d_o = [0, 0, 1]^T
[Equation images not reproduced: closed-form expressions for the deflected visual axis directions d_L and d_R and for the face normal vectors n_L, n_R and n_B, given in terms of d_o, α, n and ω.]
in the formulas, d_o is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the prism material, and ω is the rotation angle of the bipartite prism;
the dynamic virtual binocular system model comprises a left virtual camera and a right virtual camera, and the calculation expressions for the rotation matrices and translation vectors of the left virtual camera and the right virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω are as follows:
[Equation images not reproduced: closed-form expressions for R_L(ω), t_L(ω), R_R(ω) and t_R(ω) in terms of Rot, d_o, d_L, d_R and g.]
in the formulas, R_L(ω) and t_L(ω) are the rotation matrix and the translation vector of the left virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, R_R(ω) and t_R(ω) are the rotation matrix and the translation vector of the right virtual camera relative to the actual camera at the same rotation angle, Rot denotes a rotation about the axis defined by the outer product of two vectors through the angle determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
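The closed-form expressions of claim 5 are embedded as equation images and are not reproduced above. As a non-authoritative sketch of the same ray-tracing idea, the following Python code traces the visual axis through the prism with the standard vector form of Snell's law and builds the Rot operation with Rodrigues' formula; the face-normal orientations and sign conventions are assumptions, so this illustrates the approach rather than the patented expressions.

import numpy as np

def refract(d, n_hat, eta):
    # Vector Snell's law: d is the unit ray direction, n_hat is the unit surface
    # normal pointing towards the incident side, eta = n_incident / n_transmitted.
    cos_i = -np.dot(n_hat, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    cos_t = np.sqrt(max(0.0, 1.0 - sin2_t))   # no total internal reflection expected here
    return eta * d + (eta * cos_i - cos_t) * n_hat

def deflected_axes(alpha, n_glass, omega):
    # Trace d_o = [0, 0, 1] through the back face and then through each inclined
    # face of the bipartite prism; returns the two deflected directions d_L and d_R.
    d_o = np.array([0.0, 0.0, 1.0])
    n_B = np.array([0.0, 0.0, -1.0])                 # back face normal (towards the camera), assumed
    n_L = np.array([-np.sin(alpha) * np.cos(omega),  # left inclined face, outward normal, assumed
                    -np.sin(alpha) * np.sin(omega),
                     np.cos(alpha)])
    n_R = np.array([ np.sin(alpha) * np.cos(omega),  # right inclined face, outward normal, assumed
                     np.sin(alpha) * np.sin(omega),
                     np.cos(alpha)])
    d_in = refract(d_o, n_B, 1.0 / n_glass)          # air -> glass at the back face
    d_L = refract(d_in, -n_L, n_glass)               # glass -> air through the left face
    d_R = refract(d_in, -n_R, n_glass)               # glass -> air through the right face
    return d_L / np.linalg.norm(d_L), d_R / np.linalg.norm(d_R)

def rot_between(a, b):
    # Rodrigues form of Rot: rotation about a x b taking unit vector a onto unit
    # vector b (valid when a and b are not antiparallel).
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Example: 10-degree wedge angle, refractive index 1.517, prism rotated to 30 degrees.
d_L, d_R = deflected_axes(np.deg2rad(10.0), 1.517, np.deg2rad(30.0))
R_L = rot_between(np.array([0.0, 0.0, 1.0]), d_L)    # cf. R_L(ω) in claim 5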
6. The method according to claim 5, wherein in the system construction and parameter calibration step and the stereo matching and cross optimization step, the dynamic virtual binocular system model comprises a fundamental matrix between the left virtual camera and the right virtual camera at an arbitrary bipartite prism rotation angle ω, and the calculation expression of the fundamental matrix is as follows:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L(ω) - R_L(ω) R_R(ω)^(-1) t_R(ω)
in the formulas, A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix between the left virtual camera and the right virtual camera, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR;
and multiplying the fundamental matrix F(ω) of the left virtual camera and the right virtual camera by the homogeneous coordinates of an image point contained in one half of the dual-view image yields the epipolar line on which the corresponding image point lies in the other half of the image, thereby obtaining the epipolar constraint relation.
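By way of illustration, the expressions of claim 6 can be assembled directly in code. The sketch below follows the structure of those formulas; whether F(ω) maps points of the left half image to epipolar lines in the right half (or the reverse) depends on conventions not fixed here, so that orientation is an assumption.

import numpy as np

def skew(t):
    # Skew-symmetric matrix T such that T @ v == np.cross(t, v).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(A_int, R_L, t_L, R_R, t_R):
    # F(omega) = A_int^-T (R_LR)^-1 T_LR A_int^-1, with R_LR and t_LR as in claim 6.
    R_LR = R_L @ np.linalg.inv(R_R)
    t_LR = t_L - R_LR @ t_R
    A_inv = np.linalg.inv(A_int)
    return A_inv.T @ np.linalg.inv(R_LR) @ skew(t_LR) @ A_inv

def epipolar_line(F, uv):
    # Homogeneous line coefficients (a, b, c) with a*u + b*v + c = 0 in the other half image.
    return F @ np.array([uv[0], uv[1], 1.0])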
7. The method as claimed in claim 1, wherein in the stereo matching and cross optimization step, the cross inspection and optimization is implemented by filtering out homonymous image points whose deviation from the epipolar lines is too large, according to the principle that homonymous image points should theoretically lie at the intersection points of the epipolar lines.
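A minimal sketch of such a rejection test is given below, assuming a tolerance of about one pixel and a per-angle check; the claim's cross inspection additionally compares results across different prism rotation angles, which is omitted here.

import numpy as np

def epipolar_distance(F, uv_left, uv_right):
    # Perpendicular distance, in pixels, of the right-image point from the
    # epipolar line generated by the left-image point.
    xl = np.array([uv_left[0], uv_left[1], 1.0])
    xr = np.array([uv_right[0], uv_right[1], 1.0])
    line = F @ xl                                    # (a, b, c)
    return abs(float(xr @ line)) / np.hypot(line[0], line[1])

def reject_outliers(F, pairs, tol_px=1.0):
    # Keep only candidate homonymous point pairs lying close to their epipolar line.
    return [(xl, xr) for (xl, xr) in pairs if epipolar_distance(F, xl, xr) <= tol_px]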
8. The method of claim 1, wherein in the three-dimensional reconstruction and point cloud filtering step, the computational expression of the initial estimation of the three-dimensional point cloud is as follows:
[Equation image not reproduced: closed-form expression for the initial estimate of the three-dimensional point cloud.]
in the formula, P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, with i ranging over the set of positive integers; the expression involves the homogeneous pixel coordinates of the homonymous image point contained in the left half of the dual-view image and of the homonymous image point contained in the right half of the dual-view image; λ_L is the scale factor of the projection ray vector corresponding to the left homonymous image point, and λ_R is the scale factor of the projection ray vector corresponding to the right homonymous image point.
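The claimed expression for the initial estimate is embedded as an equation image and is not reproduced above. One common way to realize the "scale factors of the projection ray vectors" is midpoint triangulation, sketched below under that assumption; the ray origins and directions would come from the left and right virtual cameras of the dynamic virtual binocular model.

import numpy as np

def triangulate_midpoint(o_L, d_L, o_R, d_R):
    # Find the scale factors lambda_L, lambda_R that bring the two projection
    # rays o + lambda * d closest together, and return the midpoint as P_i.
    w0 = o_L - o_R
    a, b, c = d_L @ d_L, d_L @ d_R, d_R @ d_R
    d, e = d_L @ w0, d_R @ w0
    denom = a * c - b * b                  # near zero only for (almost) parallel rays
    lam_L = (b * e - c * d) / denom
    lam_R = (a * e - b * d) / denom
    midpoint = 0.5 * ((o_L + lam_L * d_L) + (o_R + lam_R * d_R))
    return midpoint, lam_L, lam_R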
9. The method according to claim 1, wherein in the three-dimensional reconstruction and point cloud filtering step, the noise filtering is specifically point cloud filtering performed according to the deviation of the three-dimensional point cloud before and after updating, and each point cloud filtering operation is expressed as:
{P_i^filter} = { P_i^update | ||P_i^update - P_i^estimate|| ≤ ε }
in the formula, {P_i^filter} is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is the corresponding element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate.
CN202011072913.6A 2020-10-09 2020-10-09 Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method Active CN112330794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011072913.6A CN112330794B (en) 2020-10-09 2020-10-09 Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method

Publications (2)

Publication Number Publication Date
CN112330794A CN112330794A (en) 2021-02-05
CN112330794B true CN112330794B (en) 2022-06-14

Family

ID=74314749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011072913.6A Active CN112330794B (en) 2020-10-09 2020-10-09 Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method

Country Status (1)

Country Link
CN (1) CN112330794B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113156641B (en) * 2021-02-24 2022-09-16 同济大学 Image space scanning imaging method based on achromatic cascade prism
CN113885195B (en) * 2021-08-17 2023-10-03 成都九天画芯科技有限公司 Color field correction method for eliminating image deviation of light combining prism
CN113971719B (en) * 2021-10-26 2024-04-12 上海脉衍人工智能科技有限公司 System, method and equipment for sampling and reconstructing nerve radiation field
CN114157852B (en) * 2021-11-30 2022-12-13 北京理工大学 Virtual camera array three-dimensional imaging method and system based on rotating double prisms
KR20230144231A (en) * 2022-04-07 2023-10-16 (주) 인텍플러스 Shape profile measurement device using line beams
CN115063567B (en) * 2022-08-19 2022-10-28 中国石油大学(华东) Three-dimensional light path analysis method of double-prism monocular stereoscopic vision system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130101276A1 (en) * 2011-10-21 2013-04-25 Raytheon Company Single axis gimbal optical stabilization system
CN103142202B (en) * 2013-01-21 2014-12-31 东北大学 Prism-based medical endoscope system with measurement function and method
CN105700320B (en) * 2016-04-13 2018-06-26 苏州大学 A kind of hologram three-dimensional display methods and device based on spatial light modulator
CN105938318B (en) * 2016-05-30 2018-06-12 苏州大学 Color holographic three-dimensional display method and system based on time division multiplexing
CN107014307A (en) * 2017-04-17 2017-08-04 深圳广田机器人有限公司 The acquisition methods of three-dimensional laser scanner and three-dimensional information
CN108253939B (en) * 2017-12-19 2020-04-10 同济大学 Variable visual axis monocular stereo vision measuring method
CN109668509A (en) * 2019-01-18 2019-04-23 南京理工大学 Based on biprism single camera three-dimensional measurement industrial endoscope system and measurement method
CN110111262B (en) * 2019-03-29 2021-06-04 北京小鸟听听科技有限公司 Projector projection distortion correction method and device and projector
CN110336987B (en) * 2019-04-03 2021-10-08 北京小鸟听听科技有限公司 Projector distortion correction method and device and projector
CN110243283B (en) * 2019-05-30 2021-03-26 同济大学 Visual measurement system and method with variable visual axis
CN110570463B (en) * 2019-09-11 2023-04-07 深圳市道通智能航空技术股份有限公司 Target state estimation method and device and unmanned aerial vehicle
CN111416972B (en) * 2020-01-21 2021-03-26 同济大学 Three-dimensional imaging system and method based on axially adjustable cascade rotating mirror

Also Published As

Publication number Publication date
CN112330794A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN112330794B (en) Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method
Akbarzadeh et al. Towards urban 3d reconstruction from video
CN109211107B (en) Measuring device, rotating body and method for generating image data
US6304284B1 (en) Method of and apparatus for creating panoramic or surround images using a motion sensor equipped camera
US7126630B1 (en) Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
US6643396B1 (en) Acquisition of 3-D scenes with a single hand held camera
US7176960B1 (en) System and methods for generating spherical mosaic images
CN111442721B (en) Calibration equipment and method based on multi-laser ranging and angle measurement
CN114998499B (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
US20020113867A1 (en) Stereoscopic image display apparatus for detecting viewpoint position and forming stereoscopic image while following up viewpoint position
US11015932B2 (en) Surveying instrument for scanning an object and image acquisition of the object
CN111416972B (en) Three-dimensional imaging system and method based on axially adjustable cascade rotating mirror
WO1999035855A1 (en) Method of determining relative camera orientation position to create 3-d visual images
CN111854636B (en) Multi-camera array three-dimensional detection system and method
WO2002065786A1 (en) Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
US6839081B1 (en) Virtual image sensing and generating method and apparatus
CN102679959A (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN111445529B (en) Calibration equipment and method based on multi-laser ranging
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
CN113781576A (en) Binocular vision detection system, method and device for multi-degree-of-freedom pose real-time adjustment
Strelow et al. Extending shape-from-motion to noncentral onmidirectional cameras
CN107454375A (en) 3D panoramic imaging devices and method
CN111273439A (en) Full scene three-dimensional optical scanning system and optimization method
Castanheiro et al. Modeling hyperhemispherical points and calibrating a dual-fish-eye system for close-range applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant