CN107767339B - Binocular stereo image splicing method - Google Patents
Binocular stereo image splicing method
- Publication number: CN107767339B (application CN201710948182.9A)
- Authority: CN (China)
- Legal status: Active (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The invention discloses a binocular stereo image splicing method comprising the following steps. S1: acquire multiple groups of images with a binocular camera, then extract and screen features to obtain feature point sets, where each group of images comprises a first eye diagram captured by a camera in a first direction and a second eye diagram captured by a camera in a second direction. S2: transform and splice the first eye diagrams according to the feature point sets screened in step S1 to obtain a transformed and spliced first eye diagram. S3: calculate a target disparity map from the collected groups of images. S4: transform and splice the second eye diagrams according to the transformed and spliced first eye diagram and the target disparity map to obtain a transformed and spliced second eye diagram. S5: synthesize the first eye diagram from step S2 and the second eye diagram from step S4 to obtain the final perspective view. The method achieves seamless splicing, reduces ghosting, and is not limited by the placement position or angle of the camera.
Description
Technical Field
The invention relates to the field of computer vision technology and image processing, in particular to a binocular stereo image splicing method.
Background
With the rise of VR and AR, requirements on image resolution, viewing angle and quality keep increasing. Because the viewing angle of a single camera is limited, wide-angle or even 360-degree panoramic images must be obtained through image stitching. Image stitching is therefore one of the key technologies in VR and AR, and its applications keep widening, covering medical treatment, education, sports, aerospace and other fields.
After several pictures are taken by one or more cameras, they are stitched into a wide-angle or panoramic picture. However, when monocular pictures are stitched, the depth from the camera to the scene is unknown, so the stitching result suffers from blurring, ghosting and misregistration. With the rise of stereo images and videos, more and more researchers have turned to stereo image stitching, and existing techniques generally face two difficulties. First, parallax handling: because of parallax, ghosting appears in the stitched result and degrades the final visual effect. Second, stitching of freely acquired stereo images: current high-quality stitching algorithms generally require the cameras to be placed or rotated according to a fixed rule, for example fixed on a circle for shooting and stitching.
The above background disclosure is only for the purpose of assisting understanding of the concept and technical solution of the present invention and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.
Disclosure of Invention
In order to solve the technical problems, the invention provides a binocular stereo image splicing method which can realize seamless splicing and reduce double images and is not limited by the arrangement position and the angle of a camera.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a binocular stereo image splicing method, which comprises the following steps:
s1: acquiring a plurality of groups of images by using a binocular camera, extracting features and screening to obtain a feature point set, wherein each group of images comprises a first eye pattern acquired by a camera in a first direction and a second eye pattern acquired by a camera in a second direction;
s2: transforming and splicing the first eye pattern according to the feature point set screened in the step S1 to obtain a transformed and spliced first eye pattern;
s3: calculating to obtain a target disparity map according to the collected multiple groups of images;
s4: transforming and splicing the second eye pattern according to the transformed and spliced first eye pattern and the target disparity map to obtain a transformed and spliced second eye pattern;
s5: and synthesizing the first eye diagram after the transform splicing in the step S2 and the second eye diagram after the transform splicing in the step S4 to obtain a final perspective view.
In a further aspect, step S2 specifically includes:
s21: adopting the feature point set screened in the step S1, obtaining a homography matrix of the first eye diagram through iterative computation, and then transforming the first eye diagram to be transformed according to the homography matrix by taking the first eye diagram as a reference to obtain a transformed first eye diagram;
s22: and after the grid transformation is carried out on the transformed first eye diagram, splicing is carried out to obtain the transformed and spliced first eye diagram.
In a further aspect, step S4 specifically includes: according to the target disparity map, calculating depth information using the relation between disparity and depth together with the camera focal length, and clustering the feature point set of the second eye pattern by depth; then mapping the mesh vertices of the transformed and spliced first eye pattern into the second eye patterns through the target disparity map to obtain the coordinates of those mesh vertices in the second eye patterns, performing mesh transformation on all the second eye patterns, and splicing to obtain the transformed and spliced second eye pattern.
Compared with the prior art, the invention has the beneficial effects that: according to the binocular stereo image splicing method, seamless splicing can be achieved, double images are reduced, and splicing of randomly acquired stereo images is achieved regardless of the placement position and the angle of a camera.
In a further scheme, an optimized homography matrix is obtained by adding a parallax energy term, so that parallax is handled and the spliced images become more natural and continuous; moreover, by introducing a vertical-disparity energy term, the captured images only need about a 30% overlap region to be spliced, so the mounting position and angle of the camera no longer need to be fixed, which makes the method more convenient to use.
Drawings
Fig. 1 is a schematic flow chart of a binocular stereo image stitching method according to a preferred embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and preferred embodiments.
As shown in fig. 1, a preferred embodiment of the present invention provides a binocular stereo image stitching method, including the following steps:
s1: acquiring a plurality of groups of images by using a binocular camera, extracting characteristics and screening to obtain a characteristic point set;
specifically, the acquired images are denoted I_i, i = 1, 2, ..., n, with n >= 2; each group of stereo images I_i includes a first eye diagram I_{i,l} shot by the camera in the first direction and a second eye diagram I_{i,r} shot by the camera in the second direction. A feature extraction algorithm (such as the SIFT or SURF algorithm) extracts feature points between each first eye diagram to be transformed and the reference first eye diagram, (I_{j,l}, I_{1,l}), and between each first eye diagram to be transformed and its corresponding second eye diagram, (I_{j,l}, I_{j,r}); the extracted feature points are then screened with the RANSAC algorithm to obtain the final matched feature point sets (F_{j,l}, F_{1,l}) and (F_{j,l}, F_{j,r}), where j = 2, 3, ..., n and n >= 2.
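The RANSAC screening of step S1 can be sketched as follows — a minimal NumPy-only RANSAC built on a direct-linear-transform homography fit. The function names, threshold and iteration count are illustrative assumptions, not values from the patent; a real pipeline would feed in putative SIFT/SURF matches produced by a library matcher.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography mapping src points onto dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to an (N, 2) point array."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_screen(src, dst, thresh=2.0, iters=500, seed=0):
    """Keep only the putative matches consistent with the best 4-point model."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        with np.errstate(all="ignore"):
            H = fit_homography(src[idx], dst[idx])
            err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
            mask = err < thresh  # NaN/inf errors simply count as outliers
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

For example, corrupting 8 of 40 synthetic matches with large offsets and calling `ransac_screen` returns a mask that keeps exactly the 32 consistent matches.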
S2: transforming and splicing the first eye pattern according to the feature point set screened in the step S1 to obtain a transformed and spliced first eye pattern; the method specifically comprises the following steps:
s21: using the feature point set (F) calculated in step S1j,l,F1,l) And (F)j,l,Fj,r) Iteratively computing a homography matrix H of the first eye diagramLIn an iterative process, the first energy term E is madefHas a minimum value; then obtaining a homography matrix H according to calculationLFor the first eye pattern I needing to be transformedj,lWith a first eye diagram I1,lTransforming for reference to obtain transformed first eye diagramWherein the first energy term EfThe expression of (a) is:
in the formula, n1Is a set of feature points (F)j,l,F1,l) Number of middle feature points, n2Is a set of feature points (F)j,l,Fj,r) The number of the middle feature points, H is a homography matrix of the first eye diagram in the iteration process; w is amAnd wkThe weight value is related to the Gaussian distance from the current characteristic point to all the characteristic points on the image;
wherein the content of the first and second substances,representing the weight value of the mth characteristic point in the jth first eye diagram;representing the weight value of the kth characteristic point in the jth first eye diagram;the y coordinate of the k characteristic point after the j first eye diagram is transformed is shown,and representing the y coordinate of the k characteristic point after the j second eye diagram is transformed.
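A numeric reading of the first energy term can be sketched as below. Warping the left-right match pairs with the same candidate H, along with the argument names, is an illustrative assumption made for the sketch.

```python
import numpy as np

def warp_points(H, pts):
    """Apply homography H to an (N, 2) point array."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def first_energy_term(H, F_jl, F_1l, G_jl, G_jr, w_m, w_k):
    """E_f = sum_m w_m ||H F_jl^m - F_1l^m||^2
           + sum_k w_k (y_jl^k - y_jr^k)^2 after warping.
    F_jl / F_1l: matches against the reference first eye diagram;
    G_jl / G_jr: matches between the left and right views of group j."""
    reg = np.sum(w_m * np.sum((warp_points(H, F_jl) - F_1l) ** 2, axis=1))
    dy = warp_points(H, G_jl)[:, 1] - warp_points(H, G_jr)[:, 1]
    return float(reg + np.sum(w_k * dy ** 2))
```

With the identity homography and perfectly registered, vertically aligned matches the term is zero; any misregistration or vertical disparity raises it, which is what the iterative minimization exploits.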
S22: the transformed first eye diagram \hat{I}_{j,l} is mesh-transformed such that the first total energy term E_L is minimized; the splicing line of the overlapping area is then found based on a seam-cutting method, and the mesh-transformed first eye diagrams are spliced to obtain the final first eye diagram I_L.
The expression of the first total energy term E_L is as follows:
EL=αEgl+βEsl+Eyl+Edl
where E_gl denotes the global registration term of the first eye diagram, E_sl the shape-preservation term, E_yl the vertical-disparity limiting term, and E_dl the horizontal-disparity limiting term of the first eye diagram; α and β are weight terms, each taking a value between 0 and 1;
wherein the global registration term E_gl of the first eye diagram requires the transformed feature points to coincide with the corresponding feature points in the reference (first) eye diagram, and is expressed as follows:

E_gl = \sum_{m=1}^{n_1} \| \hat{F}_{j,l}^m - F_{1,l}^m \|^2

where \hat{F}_{j,l}^m represents the m-th feature point after the j-th first eye diagram is transformed.
Shape retention term E_sl of the first eye diagram penalizes deviation of each deformed mesh triangle from a similarity transformation of its original shape; its specific expression is:

E_sl = \sum_{i} \omega_i \| \hat{v}_i - ( \hat{v}_j + u (\hat{v}_k - \hat{v}_j) + v R (\hat{v}_k - \hat{v}_j) ) \|^2,  R = [[0, 1], [-1, 0]]

where \hat{v}_i, \hat{v}_j, \hat{v}_k are the three vertices after the mesh-unit transformation, \omega_i represents the saliency of the grid, and u, v are the local coordinates of v_i in the frame of the edge (v_j, v_k), computed from the three vertices v_i, v_j, v_k before the mesh-unit transformation.
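The shape-retention term can be checked numerically with the following sketch; the local-coordinate construction (`u`, `v` recovered from the undeformed triangle, `R90` as the 90-degree rotation R) is the standard similarity-preserving mesh term assumed here, and `omega` stands for the grid saliency ω_i.

```python
import numpy as np

R90 = np.array([[0.0, 1.0], [-1.0, 0.0]])  # the 90-degree rotation matrix R

def similarity_coords(vi, vj, vk):
    """Local coordinates (u, v) of vi in the frame of edge (vj, vk):
    vi = vj + u*(vk - vj) + v*R90@(vk - vj)."""
    e = vk - vj
    d = float(e @ e)
    u = float((vi - vj) @ e) / d
    v = float((vi - vj) @ (R90 @ e)) / d
    return u, v

def shape_term(tri_before, tri_after, omega=1.0):
    """One summand of E_sl: deviation of the deformed triangle from a
    similarity transform of its original shape."""
    u, v = similarity_coords(*tri_before)
    vi, vj, vk = tri_after
    pred = vj + u * (vk - vj) + v * (R90 @ (vk - vj))
    return omega * float(np.sum((vi - pred) ** 2))
```

A triangle that undergoes a pure similarity transform (rotation, uniform scale, translation) incurs zero energy, while shears and non-uniform stretches are penalized — exactly the behavior the mesh optimization relies on.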
vertical parallax limiting term E of first eye patternylThe ordinate representing the corresponding feature points in the first eye diagram and the second eye diagram should be as close as possible, which is expressed as follows:
in the formula (I), the compound is shown in the specification,representing the y coordinate of the transformed jth first eye diagram,representing the y coordinate of the j second eye diagram after transformation;
horizontal parallax limiting term E of first eye patterndlThe difference between the abscissa representing the feature points in the transformed first eye diagram and the transformed second eye diagram and the difference between the abscissas representing the feature points in the transformed first eye diagram and the transformed second eye diagram should be as close as possible, and is expressed as follows:
in the formula (I), the compound is shown in the specification,representing the x-coordinate of the transformed jth first eye diagram,x-coordinate, F, representing the j-th transformed second eye diagramj,l,xRepresenting the x-coordinate, F, of the jth first eye before transformationj,r,xRepresenting the x-coordinate of the jth second eye before transformation.
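The two disparity limiting terms reduce to simple sums of squares; the following sketch (argument names are illustrative) evaluates one instance of each:

```python
import numpy as np

def vertical_disparity_term(y_left_hat, y_right_hat):
    """E_yl: transformed y-coordinates of matched left/right features should agree."""
    return float(np.sum((y_left_hat - y_right_hat) ** 2))

def horizontal_disparity_term(x_left_hat, x_right_hat, x_left, x_right):
    """E_dl: the post-transform horizontal disparity of each matched pair should
    stay close to its pre-transform disparity."""
    return float(np.sum(((x_left_hat - x_right_hat) - (x_left - x_right)) ** 2))
```

Note the asymmetry that motivates the two terms: vertical disparity is driven toward zero (viewers cannot fuse it), while horizontal disparity is preserved rather than removed, since it carries the depth of the stereo pair.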
S3: obtaining the target disparity map: each group of stereo images (I_{i,l}, I_{i,r}) is down-sampled, dense optical-flow vectors are estimated with an optical-flow method, and the flow vectors are then magnified according to the sampling scale to obtain the disparity map D_i; the D_i are spliced to obtain the target disparity map D.
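The down-sample / estimate / magnify pattern of step S3 can be sketched as follows. A brute-force block matcher stands in for the optical-flow estimator — an illustrative substitution, not the patent's method:

```python
import numpy as np

def block_match_disparity(left, right, max_d=8, win=2):
    """Brute-force horizontal block matching (stand-in for dense optical flow)."""
    h, w = left.shape
    disp = np.zeros((h, w))
    L = np.pad(left, win, mode="edge")
    R = np.pad(right, win, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_d, x) + 1):
                cand = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = float(np.sum((patch - cand) ** 2))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def disparity_pyramid(left, right, scale=2, max_d=8):
    """Step S3 pattern: estimate on the down-sampled pair, then magnify both
    the map and its vectors by the sampling scale."""
    small = block_match_disparity(left[::scale, ::scale],
                                  right[::scale, ::scale], max_d=max_d)
    return np.kron(small, np.ones((scale, scale))) * scale
```

On a synthetic pair where the right view is the left view shifted by 4 pixels, the pyramid recovers a disparity of 4 in the image interior: matching on the half-resolution pair finds 2, and the ×2 magnification of the vectors restores the full-resolution value.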
S4: transforming and splicing the second eye diagrams according to the transformed and spliced first eye diagram and the target disparity map to obtain a transformed and spliced second eye diagram; the specific steps are as follows:
according to the target disparity map D obtained in step S3, depth information is calculated using the relation between disparity and depth together with the camera focal length, and the feature point set of the second eye diagram is clustered by depth (into two or more classes); corresponding transformation matrices are obtained for the classes in the overlapping areas, while a global homography matrix is used in the non-overlapping areas. The mesh vertices of the spliced first eye diagram I_L are mapped into the second eye diagrams through the target disparity map to obtain their coordinates in the second eye diagrams, and all the second eye diagrams are mesh-transformed such that the second total energy term E_R is minimized; finally, the splicing line of the overlapping area is found based on a seam-cutting method, and the final second eye diagram I_R is obtained by splicing.
Wherein the expression of the second total energy term E_R is as follows:
ER=Egr+Esr+Eyr+Edr
where E_gr denotes the global registration term of the second eye diagram, E_sr the shape-preservation term, E_yr the vertical-disparity limiting term, and E_dr the horizontal-disparity limiting term of the second eye diagram;
wherein the global registration term E_gr of the second eye diagram requires the coordinates of the second-eye mesh vertices before and after transformation to be as consistent as possible, and is expressed as follows:

E_gr = \sum_{i} \| \hat{v}_i - v_i \|^2

where \hat{v}_i represents the transformed coordinates of a mesh vertex and v_i the coordinates before the mesh-vertex transformation;
shape retention term E of the second eye diagramsrThe specific expression of (A) is as follows:
in the formula (I), the compound is shown in the specification,three vertexes, omega, after mesh unit transformationiRepresenting the significance of the grid, u-0,wherein v isi、vj、vkThree vertices before mesh unit transformation, R ═
Vertical parallax limiting term E_yr of the second eye diagram requires the y-coordinates of corresponding feature points in the first and second eye diagrams to be as close as possible, and is expressed as follows:

E_yr = \sum_{k} ( \hat{y}_{j,l}^k - \hat{y}_{j,r}^k )^2

where \hat{y}_{j,l}^k represents the y-coordinate of the transformed j-th first eye diagram and \hat{y}_{j,r}^k the y-coordinate of the transformed j-th second eye diagram;
horizontal parallax limiting term E of second eye patterndrThe expression of (a) is as follows:
in the formula (I), the compound is shown in the specification,representing the x coordinate of the vertex of the jth first eye diagram after transformation,x coordinate, v, representing the transformed jth second eye vertexj,l,xRepresenting the x-coordinate, v, of the vertex of the jth first eye diagram before transformationj,r,xRepresenting the x coordinate of the vertex of the jth second eye before transformation.
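The depth computation and clustering described in step S4 can be sketched as follows, assuming the usual pinhole relation Z = f·B/d and a tiny 1-D k-means; all parameter values here are illustrative:

```python
import numpy as np

def depth_from_disparity(disp, focal, baseline):
    """Disparity-to-depth relation assumed for step S4: Z = f * B / d."""
    return focal * baseline / np.maximum(disp, 1e-6)

def cluster_by_depth(depths, k=2, iters=20):
    """1-D k-means over feature-point depths (two or more classes)."""
    centers = np.linspace(depths.min(), depths.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(depths[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = depths[labels == c].mean()
    return labels, centers
```

Feature points falling in the same depth class then share one local transformation matrix in the overlap region, which is the per-class warping the step describes.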
S5: the first eye diagram I_L transformed and spliced in step S2 and the second eye diagram I_R transformed and spliced in step S4 are synthesized to form the final perspective view.
The first eye pattern and the second eye pattern are respectively images shot by cameras on two sides of the binocular camera, namely a left eye pattern shot by a left camera and a right eye pattern shot by a right camera; that is, the first eye diagram is a left eye diagram and the second eye diagram is a right eye diagram, or the first eye diagram is a right eye diagram and the second eye diagram is a left eye diagram.
The binocular stereo image stitching method according to the preferred embodiment of the present invention is further described below with specific examples.
A1: two groups of images are collected with a binocular camera, and features are extracted and screened. The collected images are denoted I_1 and I_2; each group of stereo images includes a left eye diagram I_{1,l}, I_{2,l} shot by the left camera and a right eye diagram I_{1,r}, I_{2,r} shot by the right camera. A feature extraction algorithm (such as the SIFT or SURF algorithm) extracts feature points between the left eye diagram to be transformed and the first left eye diagram, (I_{2,l}, I_{1,l}), and between the left eye diagram to be transformed and its corresponding right eye diagram, (I_{2,l}, I_{2,r}); the extracted feature points are screened with the RANSAC algorithm to obtain the final matched feature point sets (F_{2,l}, F_{1,l}) and (F_{2,l}, F_{2,r}).
A2: transform and splice the left eye diagrams, taking parallax into consideration. Using the feature point sets (F_{2,l}, F_{1,l}) and (F_{2,l}, F_{2,r}) obtained in step A1, the homography matrix H_L of the left eye diagram is computed iteratively such that the parallax energy term E_f reaches its minimum during the iteration; then, taking the first left eye diagram as reference, the remaining left eye diagram is transformed according to the computed H_L to obtain the transformed left eye diagram \hat{I}_{2,l}. The expression of the parallax energy term E_f is:

E_f = \sum_{m=1}^{n_1} \hat{w}_{2,l}^m \| H F_{2,l}^m - F_{1,l}^m \|^2 + \sum_{k=1}^{n_2} \hat{w}_{2,l}^k ( \hat{y}_{2,l}^k - \hat{y}_{2,r}^k )^2

where n_1 is the number of feature points in the set (F_{2,l}, F_{1,l}), n_2 is the number of feature points in the set (F_{2,l}, F_{2,r}), and H is the homography matrix of the left eye diagram in the iterative process; the weights \hat{w}_{2,l}^m and \hat{w}_{2,l}^k are related to the Gaussian distance from the current feature point to all the feature points on the image. Here \hat{w}_{2,l}^m represents the weight of the m-th feature point in the second left eye diagram; \hat{w}_{2,l}^k represents the weight of the k-th feature point in the second left eye diagram; \hat{y}_{2,l}^k is the y-coordinate of the k-th feature point after the second left eye diagram is transformed, and \hat{y}_{2,r}^k is the y-coordinate of the k-th feature point after the second right eye diagram is transformed.
The transformed left eye diagram \hat{I}_{2,l} is mesh-transformed such that the first total energy term E_L is minimized; the splicing line of the overlapping area is then found based on a seam-cutting method, and the final left eye diagram I_L is obtained by splicing. The expression of the first total energy term E_L is as follows:
EL=αEgl+βEsl+Eyl+Edl
where E_gl denotes the global registration term of the left eye diagram, E_sl the shape-preservation term, E_yl the vertical-disparity limiting term, and E_dl the horizontal-disparity limiting term of the left eye diagram; α and β are weight terms, each between 0 and 1; in some examples α = 0.7 and β = 0.4;
global registration term E for left eyeglThe specific expression is as follows:
in the formula (I), the compound is shown in the specification,representing a transformed feature point set, wherein the transformed feature points and the feature points in the reference picture (the first left eye picture) are consistent as much as possible;
shape retention term E for left eye diagramslThe specific expression of (A) is as follows:
in the formula (I), the compound is shown in the specification,three vertexes, omega, after mesh unit transformationiRepresenting the significance of the grid, u-0,wherein v isi、vj、vkRespectively three vertices before the mesh unit transformation,
vertical parallax limiting term E of left eye diagramylThe specific expression is as follows:
Eyl=||F2,l,y-F2,r,y||2
in the formula (I), the compound is shown in the specification,representing the y-coordinate of the transformed second left eye image,representing the y coordinate of the transformed second right eye diagram;
horizontal parallax limiting term E of left eye diagramdlThe specific expression is as follows:
in the formula (I), the compound is shown in the specification,representing the x-coordinate of the transformed second left eye image,x-coordinate, F, representing the transformed second right eye diagram2,l,xX-coordinate, F, representing the second left eye image before transformation2,r,xRepresenting the x-coordinate of the second right eye diagram before transformation.
A3: obtaining the disparity map: each group of stereo images (I_{1,l}, I_{1,r}) and (I_{2,l}, I_{2,r}) is down-sampled, dense optical-flow vectors are estimated with an optical-flow method, and the flow vectors are then magnified according to the sampling scale to obtain the disparity maps D_i, which are spliced to obtain the target disparity map D;
a4: transform and splice the right eye diagrams. According to the target disparity map D obtained in A3, depth information is calculated using the relation between disparity and depth together with the camera focal length, and the feature point set of the right eye diagrams is clustered by depth (into two or more classes); corresponding transformation matrices are obtained for the classes in the overlapping areas, while a global homography matrix is used in the non-overlapping areas. The mesh vertices of the spliced left eye diagram I_L are mapped into the right eye diagrams through the target disparity map to obtain their coordinates in the right eye diagrams, and all the right eye diagrams are mesh-transformed such that the second total energy term E_R is minimized; finally, the splicing line of the overlapped area is found based on a seam-cutting method, and the final right eye diagram I_R is obtained by splicing. The expression of the second total energy term E_R is as follows:
ER=Egr+Esr+Eyr+Edr
where E_gr denotes the global registration term of the right eye diagram, E_sr the shape-preservation term, E_yr the vertical-disparity limiting term, and E_dr the horizontal-disparity limiting term of the right eye diagram;
in the formula, the global registration term E_gr of the right eye diagram requires the coordinates of the right-eye mesh vertices before and after transformation to be as consistent as possible, and is defined as:

E_gr = \sum_{i} \| \hat{v}_i - v_i \|^2

where \hat{v}_i represents the transformed coordinates of a mesh vertex and v_i the coordinates before the mesh-vertex transformation;
horizontal parallax term E of right eye diagramdrThe expression is as follows:
in the formula (I), the compound is shown in the specification,representing the x-coordinate of the vertex of the transformed second left eye image,x-coordinate, v, representing the vertex of the second right-eye diagram after transformationj,l,xX-coordinate, v, representing the vertex of the second left eye image before transformationj,r,xRepresenting the x-coordinate of the vertex of the second right eye before transformation.
The other two terms, E_sr and E_yr, are defined consistently with the corresponding left-eye terms E_sl and E_yl.
A5: the left eye diagram spliced in step A2 and the right eye diagram spliced in step A4 are combined into a perspective view.
A stereo image is thus obtained by the above splicing method: when the homography matrix is computed iteratively, the feature points are constrained and optimized through the parallax energy term, and during splicing the mesh is optimized through energy terms that fully consider the vertical and horizontal disparities, so the spliced images are seamless, ghosting is reduced, and the method is not limited by the placement position and angle of the camera.
The foregoing describes the invention in further detail with reference to specific preferred embodiments, but the invention is not limited to these details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all of them shall be considered to fall within the protection scope of the invention.
Claims (8)
1. A binocular stereo image splicing method is characterized by comprising the following steps:
s1: acquiring a plurality of groups of images by using a binocular camera, extracting features and screening to obtain a feature point set, wherein each group of images comprises a first eye pattern acquired by a camera in a first direction and a second eye pattern acquired by a camera in a second direction;
s2: transforming and splicing the first eye pattern according to the feature point set screened in the step S1 to obtain a transformed and spliced first eye pattern;
s3: calculating to obtain a target disparity map according to the collected multiple groups of images;
s4: transforming and splicing the second eye pattern according to the transformed and spliced first eye pattern and the target disparity map to obtain a transformed and spliced second eye pattern;
s5: synthesizing the first eye diagram after the transform splicing in the step S2 and the second eye diagram after the transform splicing in the step S4 to obtain a final perspective view;
wherein:
step S1 specifically includes: a binocular camera is used to collect multiple groups of images denoted I_i, i = 1, 2, ..., n, n >= 2, each group including a first eye diagram I_{i,l} and a second eye diagram I_{i,r}; a feature extraction algorithm extracts feature points between each first eye diagram to be transformed and the reference first eye diagram, (I_{j,l}, I_{1,l}), and between each first eye diagram to be transformed and its corresponding second eye diagram, (I_{j,l}, I_{j,r}); the extracted feature points are screened with the RANSAC algorithm to obtain the final matched feature point sets (F_{j,l}, F_{1,l}) and (F_{j,l}, F_{j,r}), where j = 2, 3, ..., n and n >= 2;
the step S2 of transforming the first eye diagram according to the feature point set filtered in the step S1 specifically includes:
using the values calculated in step S1Feature point set (F)j,l,F1,l) And (F)j,l,Fj,r) Iteratively computing a homography matrix H of the first eye diagramLIn an iterative process, the first energy term E is madefHas a minimum value; then obtaining a homography matrix H according to calculationLFor the first eye pattern I needing to be transformedj,lWith a first eye diagram I1,lTransforming for reference to obtain transformed first eye diagramWherein the first energy term EfThe expression of (a) is:
in the formula, n_1 is the number of feature points in the set (F_{j,l}, F_{1,l}), n_2 is the number of feature points in the set (F_{j,l}, F_{j,r}), and H is the homography matrix of the first eye diagram during the iteration; w_m and w_k are weight values related to the Gaussian distances from the current feature point to all feature points in the image;
wherein w_m denotes the weight of the m-th feature point in the j-th first eye diagram, w_k denotes the weight of the k-th feature point in the j-th first eye diagram, ŷ_{j,l,k} denotes the y coordinate of the k-th feature point after the j-th first eye diagram is transformed, and ŷ_{j,r,k} denotes the y coordinate of the k-th feature point after the j-th second eye diagram is transformed.
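The claim does not spell out the inner step of the iterative minimization of E_f. One plausible realization of the registration part (the first sum only; the vertical-disparity sum is omitted) is a weighted direct linear transform, in which each point pair's equations are scaled by its Gaussian-distance weight w_m — a sketch, not the patent's stated algorithm:

```python
import numpy as np

def weighted_homography(src, dst, w):
    """Weighted DLT estimate of a homography H mapping src -> dst.

    src, dst: (N, 2) matched points; w: per-point weights (the patent's
    Gaussian-distance weights w_m). Each correspondence contributes two
    rows to the design matrix, scaled by sqrt(w) so the least-squares
    solution minimizes the weighted sum of squared residuals.
    """
    A = []
    for (x, y), (u, v), wi in zip(src, dst, w):
        s = np.sqrt(wi)
        A.append(s * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        A.append(s * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    # The homography is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```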
2. The binocular stereo image stitching method according to claim 1, wherein step S4 specifically includes: calculating depth information from the target disparity map using the relation between disparity and depth together with the focal length of the camera, and clustering the feature point set of the second eye diagram using the depth information; mapping the mesh vertices of the transformed and stitched first eye diagram into the second eye diagram through the target disparity map to obtain the coordinates of these mesh vertices in the second eye diagram, then mesh-transforming all the second eye diagrams, and stitching them to obtain the transformed and stitched second eye diagram.
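The disparity-to-depth relation invoked in step S4 follows from similar triangles in a rectified binocular rig: Z = f·B/d, with f the focal length in pixels, B the baseline, and d the disparity. A minimal sketch (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline):
    """Depth map from a disparity map via Z = f * B / d.

    focal_px: focal length in pixels; baseline: camera separation in the
    same unit the returned depth should have. Zero or negative disparity
    is mapped to infinite depth.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline / d[valid]
    return depth
```

The resulting depth values are what the claim clusters the second-eye feature points by.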
4. The binocular stereo image stitching method according to claim 1, wherein in step S22, in the process of mesh-transforming the transformed first eye diagram, the first total energy term E_L is minimized; wherein the expression of the first total energy term E_L is as follows:
E_L = α·E_gl + β·E_sl + E_yl + E_dl
in the formula, E_gl denotes the global registration term of the first eye diagram, E_sl the shape-preservation term of the first eye diagram, E_yl the vertical disparity limiting term of the first eye diagram, and E_dl the horizontal disparity limiting term of the first eye diagram; α and β are weight terms, each taking a value between 0 and 1.
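The weighted combination of the four terms can be written directly; the default values of α and β below are illustrative, since the claim only constrains them to lie between 0 and 1:

```python
def total_energy_left(E_gl, E_sl, E_yl, E_dl, alpha=0.5, beta=0.5):
    """First total energy E_L = alpha*E_gl + beta*E_sl + E_yl + E_dl.

    alpha and beta are the claim's weight terms, each in [0, 1]; the
    defaults here are assumed, not taken from the patent.
    """
    assert 0.0 <= alpha <= 1.0 and 0.0 <= beta <= 1.0
    return alpha * E_gl + beta * E_sl + E_yl + E_dl
```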
5. The binocular stereo image stitching method according to claim 4, wherein:
The global registration term E_gl of the first eye diagram is expressed as follows:

E_gl = Σ_{m=1..n_1} ||F̂_{j,l,m} − F_{1,l,m}||²

in the formula, F̂_{j,l,m} denotes the m-th feature point after the j-th first eye diagram is transformed;
The shape-preservation term E_sl of the first eye diagram is expressed as follows:

E_sl = Σ_i ω_i · ||v̂_i − (v̂_j + u·(v̂_k − v̂_j) + v·R·(v̂_k − v̂_j))||², R = [0 1; −1 0]

in the formula, v̂_i, v̂_j, v̂_k are the three vertices of a mesh unit after transformation, ω_i denotes the saliency of the mesh, and u and v are the local coordinates computed from v_i, v_j, v_k, the three vertices of the mesh unit before transformation;
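The variable list above matches the similarity-preserving triangle term of content-preserving warping; the sketch below is written under that assumption (R is the 90-degree rotation). If the transformed triangle differs from the original only by a similarity transform, the term vanishes:

```python
import numpy as np

def shape_term(tri0, tri1, omega=1.0):
    """Shape-preservation energy for one mesh triangle (a sketch).

    tri0: vertices (v_i, v_j, v_k) before transformation; tri1: the same
    vertices after transformation; omega: the mesh saliency weight.
    u, v are the local coordinates of v_i in the frame spanned by
    (v_k - v_j) and its 90-degree rotation.
    """
    R90 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    vi, vj, vk = (np.asarray(p, float) for p in tri0)
    e1 = vk - vj
    e2 = R90 @ e1
    # local coordinates (u, v) of v_i, solved from v_i = v_j + u*e1 + v*e2
    u, v = np.linalg.solve(np.column_stack([e1, e2]), vi - vj)
    wi, wj, wk = (np.asarray(p, float) for p in tri1)
    pred = wj + u * (wk - wj) + v * (R90 @ (wk - wj))
    return omega * np.sum((wi - pred) ** 2)
```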
The vertical disparity limiting term E_yl of the first eye diagram is expressed as follows:

E_yl = Σ (ŷ_{j,l} − ŷ_{j,r})²

in the formula, ŷ_{j,l} denotes the y coordinate of the transformed j-th first eye diagram and ŷ_{j,r} denotes the y coordinate of the transformed j-th second eye diagram;
The horizontal disparity limiting term E_dl of the first eye diagram is expressed as follows:

E_dl = Σ ((x̂_{j,l} − x̂_{j,r}) − (F_{j,l,x} − F_{j,r,x}))²

in the formula, x̂_{j,l} denotes the x coordinate of the transformed j-th first eye diagram, x̂_{j,r} the x coordinate of the transformed j-th second eye diagram, F_{j,l,x} the x coordinate of the j-th first eye diagram before transformation, and F_{j,r,x} the x coordinate of the j-th second eye diagram before transformation.
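Under the reconstruction assumed here, E_yl drives the vertical offset between corresponding left- and right-eye coordinates to zero, while E_dl keeps the transformed horizontal disparity close to its pre-transformation value. A sketch over per-feature coordinate arrays (the summed-squares form is an assumption, not quoted from the patent):

```python
import numpy as np

def disparity_energies(y_l, y_r, x_l, x_r, x_l0, x_r0):
    """Vertical (E_yl) and horizontal (E_dl) disparity limiting terms.

    y_l, y_r: transformed left/right y coordinates; x_l, x_r: transformed
    left/right x coordinates; x_l0, x_r0: the same x coordinates before
    transformation. All arguments are equal-length 1-D arrays.
    """
    y_l, y_r = np.asarray(y_l, float), np.asarray(y_r, float)
    x_l, x_r = np.asarray(x_l, float), np.asarray(x_r, float)
    x_l0, x_r0 = np.asarray(x_l0, float), np.asarray(x_r0, float)
    E_y = np.sum((y_l - y_r) ** 2)                    # vertical disparity -> 0
    E_d = np.sum(((x_l - x_r) - (x_l0 - x_r0)) ** 2)  # preserve horizontal disparity
    return E_y, E_d
```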
6. The binocular stereo image stitching method according to claim 3, wherein step S4 specifically includes: calculating depth information from the target disparity map using the relation between disparity and depth together with the focal length of the camera, and clustering the feature point set of the second eye diagram using the depth information; mapping the mesh vertices of the transformed and stitched first eye diagram into the second eye diagram through the target disparity map to obtain the coordinates of these mesh vertices in the second eye diagram, then mesh-transforming all the second eye diagrams, and stitching them to obtain the transformed and stitched second eye diagram.
7. The binocular stereo image stitching method according to claim 6, wherein in the process of mesh-transforming all the second eye diagrams, the second total energy term E_R is minimized, wherein the expression of the second total energy term E_R is as follows:
E_R = E_gr + E_sr + E_yr + E_dr
in the formula, E_gr denotes the global registration term of the second eye diagram, E_sr the shape-preservation term of the second eye diagram, E_yr the vertical disparity limiting term of the second eye diagram, and E_dr the horizontal disparity limiting term of the second eye diagram.
8. The binocular stereo image stitching method according to claim 6, wherein:
The specific expression of the global registration term E_gr of the second eye diagram is as follows:
The shape-preservation term E_sr of the second eye diagram is expressed as follows:

E_sr = Σ_i ω_i · ||v̂_i − (v̂_j + u·(v̂_k − v̂_j) + v·R·(v̂_k − v̂_j))||², R = [0 1; −1 0]

in the formula, v̂_i, v̂_j, v̂_k are the three vertices of a mesh unit after transformation, ω_i denotes the saliency of the mesh, and u and v are the local coordinates computed from v_i, v_j, v_k, the three vertices of the mesh unit before transformation;
The vertical disparity limiting term E_yr of the second eye diagram is expressed as follows:

E_yr = Σ (ŷ_{j,l} − ŷ_{j,r})²

in the formula, ŷ_{j,l} denotes the y coordinate of the transformed j-th first eye diagram and ŷ_{j,r} denotes the y coordinate of the transformed j-th second eye diagram;
The horizontal disparity limiting term E_dr is expressed as follows:

E_dr = Σ ((x̂_{v,j,l} − x̂_{v,j,r}) − (v_{j,l,x} − v_{j,r,x}))²

in the formula, x̂_{v,j,l} denotes the x coordinate of the transformed j-th first eye diagram vertex, x̂_{v,j,r} the x coordinate of the transformed j-th second eye diagram vertex, v_{j,l,x} the x coordinate of the j-th first eye diagram vertex before transformation, and v_{j,r,x} the x coordinate of the j-th second eye diagram vertex before transformation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710948182.9A CN107767339B (en) | 2017-10-12 | 2017-10-12 | Binocular stereo image splicing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107767339A CN107767339A (en) | 2018-03-06 |
CN107767339B true CN107767339B (en) | 2021-02-02 |
Family
ID=61267165
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108470324B (en) * | 2018-03-21 | 2022-02-25 | 深圳市未来媒体技术研究院 | Robust binocular stereo image splicing method |
CN109727194B (en) * | 2018-11-20 | 2023-08-04 | 广东智媒云图科技股份有限公司 | Method for obtaining nose patterns of pets, electronic equipment and storage medium |
CN110111255B (en) * | 2019-04-24 | 2023-02-28 | 天津大学 | Stereo image splicing method |
TWI743477B (en) * | 2019-05-07 | 2021-10-21 | 威盛電子股份有限公司 | Image processing device and method for image processing |
CN110458870B (en) * | 2019-07-05 | 2020-06-02 | 北京迈格威科技有限公司 | Image registration, fusion and occlusion detection method and device and electronic equipment |
CN110866868A (en) * | 2019-10-25 | 2020-03-06 | 江苏荣策士科技发展有限公司 | Splicing method of binocular stereo images |
CN111062873B (en) | 2019-12-17 | 2021-09-24 | 大连理工大学 | Parallax image splicing and visualization method based on multiple pairs of binocular cameras |
CN111028155B (en) * | 2019-12-17 | 2023-02-14 | 大连理工大学 | Parallax image splicing method based on multiple pairs of binocular cameras |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8989481B2 (en) * | 2012-02-13 | 2015-03-24 | Himax Technologies Limited | Stereo matching device and method for determining concave block and convex block |
CN103345736B (en) * | 2013-05-28 | 2016-08-31 | 天津大学 | A kind of virtual viewpoint rendering method |
CN105389787A (en) * | 2015-09-30 | 2016-03-09 | 华为技术有限公司 | Panorama image stitching method and device |
CN105678687A (en) * | 2015-12-29 | 2016-06-15 | 天津大学 | Stereo image stitching method based on content of images |
CN106127690A (en) * | 2016-07-06 | 2016-11-16 | *** | A kind of quick joining method of unmanned aerial vehicle remote sensing image |
CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
CN106910253B (en) * | 2017-02-22 | 2020-02-18 | 天津大学 | Stereo image cloning method based on different camera distances |
CN107240082B (en) * | 2017-06-23 | 2020-11-24 | 微鲸科技有限公司 | Splicing line optimization method and equipment |
Non-Patent Citations (1)
Title |
---|
Research on Multi-Homography Registration and Misalignment Elimination Algorithms in Image Stitching; Wang Ying; China Master's Theses Full-text Database, Information Science and Technology; 15 June 2017 (No. 6); pp. I138-1253 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||