CN103533266B - 360-degree spliced panoramic camera with a wide vertical field of view - Google Patents

360-degree spliced panoramic camera with a wide vertical field of view

Info

Publication number
CN103533266B
Authority
CN
China
Prior art keywords
image
overlap
pixel
image sensor
degree
Prior art date
Legal status
Active
Application number
CN201310459441.3A
Other languages
Chinese (zh)
Other versions
CN103533266A (en)
Inventor
张茂军
熊志辉
徐玮
李靖
谭树人
王炜
刘煜
张政
尹晓晴
彭杨
Current Assignee
Hunan Yuan Xin Electro-Optical Technology Inc
National University of Defense Technology
Original Assignee
Hunan Yuan Xin Electro-Optical Technology Inc
National University of Defense Technology
Filing date
Publication date
Application filed by Hunan Yuan Xin Electro-Optical Technology Inc and National University of Defense Technology
Priority to CN201310459441.3A
Publication of CN103533266A
Application granted
Publication of CN103533266B
Legal status: Active
Anticipated expiration


Abstract

The invention belongs to the field of panoramic image stitching and provides a 360-degree spliced panoramic camera with a wide vertical field of view, comprising several identical, vertically arranged image sensors; the sensor model, the number of sensors, and their relative positions are determined from the field-of-view requirements of the panoramic stitching application. The invention also provides a calibration method and a stitching method for the panoramic camera: calibration projects each image channel onto a common plane, and a dynamic-programming method together with a gradient-domain fusion method then completes the stitching of the panoramic image. Because the invention obtains a larger vertical angle of view with a single row of sensors, the stitching algorithm only needs to run in the horizontal direction, which simplifies its design and reduces its complexity. The stitching of each pair of adjacent images is independent and can be executed on parallel processors, so real-time stitching of multi-channel video can be realized on an FPGA, giving the invention broad market prospects.

Description

360-degree spliced panoramic camera with a wide vertical field of view
Technical field:
The invention belongs to the field of panoramic image stitching, and in particular relates to a 360-degree spliced panoramic camera with a wide vertical field of view.
Background art:
A spliced panoramic camera built by panoramic image stitching overcomes the hardware limitations of a single-image-sensor camera and captures a scene with both a large field of view and high resolution. Compared with fisheye and catadioptric panoramas, a spliced panorama has the advantages of low image distortion, high resolution, and uniform resolution at every position.
Spliced panoramic cameras have broad application prospects in both civilian and military fields.
In the civilian field, spliced panoramas receive increasing attention and favor because of their large field of view and high resolution. Spliced panoramic cameras are widely used in engineering practice for security surveillance, artistic photography, virtual reality, and street-view capture, and they are developing rapidly under growing demand.
In the military field, an airborne panoramic camera enables wide-area, high-accuracy reconnaissance of surface targets. The Advanced Wide Field-of-View Architectures for Image Reconstruction and Exploitation (AWARE) program led by the U.S. Defense Advanced Research Projects Agency (DARPA) stitched 300 miniature 14-megapixel cameras to build an image acquisition system with a 120-degree angle of view and a resolution of 2 gigapixels. A 360-degree panoramic camera based on panoramic stitching technology can replace the traditional optical periscope of a submarine; U.S. Type-18 submarines have already been formally equipped with a 360-degree panoramic periscope imaging system, which greatly improves the submarine's situational awareness and provides a tactical advantage. A similar system can also be applied to the fire-control system of a tank, replacing the traditional panoramic sight; this not only simplifies the mechanical structure but also yields a better observation effect.
However, the stitching schemes of current spliced panoramic cameras are mostly based on horizontally arranged image sensor arrays. Although a single row of such sensors can be stitched into a panoramic image with a 360-degree horizontal angle of view, the vertical angle of view is very small. As shown in Fig. 1, for a camera CMOS sensor with a 50-degree horizontal angle of view set to the standard 1080P mode, the vertical angle of view is only 28.125 degrees. To obtain a larger vertical angle of view, the current approach is to use multiple rows of image sensors. Seamless stitching of multiple rows of images, however, must consider the horizontal and vertical directions at the same time; because image data are essentially stored one-dimensionally, a two-direction stitching algorithm requires more random access and inevitably increases the time complexity of the algorithm.
In application scenarios such as security surveillance, street-view capture, and military tank fire-control aiming, the focus is a wide scene in the horizontal direction combined with a larger vertical angle of view, and a panoramic camera stitched from a single horizontal row can hardly satisfy this demand. The present invention performs panoramic stitching with ordinary image sensors arranged vertically. To obtain the same horizontal angle of view, a single vertically arranged row needs more image sensors, but it has a clear advantage over the horizontal arrangement in vertical angle of view. Take an application scenario that requires a 360-degree horizontal and a 50-degree vertical angle of view, using the CMOS sensor of Fig. 1: with the horizontally arranged scheme, two rows of CMOS sensors are needed to meet the demand, as shown in Fig. 2; with the vertically arranged scheme, a single row of CMOS sensors is sufficient, as shown in Fig. 3. It can be seen that, in such application scenarios, the 360-degree spliced panoramic camera with a wide vertical field of view proposed by the present invention is not only simpler in device structure, but also reduces the complexity of the corresponding stitching algorithm and thereby the processor cost.
Summary of the invention:
Starting from practical application requirements, and considering the complexity of the traditional horizontally arranged panoramic stitching scheme when building panoramic images with a larger vertical angle of view (namely the need to stitch multiple rows of images), the present invention proposes a panoramic stitching scheme based on vertically arranged image sensors and uses it to build a 360-degree spliced panoramic camera with a wide vertical field of view. The camera is intended for the capture of 360-degree panoramic images. Because a larger vertical angle of view is obtained with a single row of image sensors, the stitching algorithm only needs to run in the horizontal direction, which simplifies the design of the stitching algorithm and reduces its time complexity.
To achieve the above object, the technical solution of the present invention is:
A 360-degree spliced panoramic camera with a wide vertical field of view, comprising several identical, vertically arranged image sensors, wherein the angle of view corresponding to the long side of the effective sensor area of each image sensor is not less than the vertical angle of view required for the panoramic image, the width of the overlap between images formed by adjacent image sensors satisfies the requirement of image stitching, and the number of image sensors is such that the angles of view corresponding to the short sides of the effective sensor areas of all image sensors add up to not less than 360 degrees.
Further, the distance between the lens optical centers of any two of the image sensors is less than 1/10 of the minimum depth of field, and every image sensor uses identical exposure parameters.
The present invention also provides a calibration method for the above 360-degree spliced panoramic camera, a feature-point-based calibration method comprising the following steps:
S101. The 360-degree spliced panoramic camera collects one group of image data;
S102. Feature points are searched for and matched in the image formed by each image sensor in the image data;
S103. The projection relations between the images formed by the image sensors are solved from the feature-point matches obtained in step S102;
S104. All images formed by the image sensors are projected onto a common viewing plane; for the projection of the image formed by an image sensor in the viewing plane, the pixel value at any position is obtained by means of a look-up table, using bilinear interpolation from the corresponding position of the image formed by the corresponding image sensor.
The present invention also provides an image stitching method for the above 360-degree spliced panoramic camera, comprising the following steps:
S201. The image data collected by the 360-degree spliced panoramic camera are projected through the calibration to obtain projected images, and the overlap between the images formed by each group of adjacent image sensors is determined from the projected images;
S202. A splicing seam is determined in each overlap using a dynamic-programming method;
S203. The corresponding adjacent images are fused on both sides of each splicing seam using a gradient-domain fusion method;
S204. The fused images are integrated with the remaining parts of the corresponding adjacent images, and the panoramic image is output.
Further, step S202 specifically comprises:
(1) computing the image difference of each overlap in the Lab color space to obtain a difference matrix of the overlap;
(2) recomputing the difference matrix using information about the overlap from the previous and current frames: starting from the splicing seam position computed in this overlap at the previous moment and extending horizontally to both sides, a weight matrix is computed from the horizontal distance of each pixel in the overlap to that splicing seam; this weight matrix is summed with the difference matrix obtained in step (1), and the result is the final difference matrix of the overlap; the initial value of the splicing seam computed at the previous moment, i.e. the splicing seam of the overlap at the initial moment, is the center line of the overlap;
(3) computing the splicing seam with a dynamic-programming algorithm from the final difference matrix obtained in step (2).
Further, step (3) specifically comprises: searching, from the upper end to the lower end of the current overlap, for an optimal path that minimizes the sum of the image difference values at the pixel positions it passes through, where the image difference value is the value corresponding to that pixel position in the final difference matrix; this optimal path is the splicing seam of the overlap. When searching for the optimal path, for the current pixel position, not only the three pixels adjacent to the current position in the next row are searched, but also the two pixels adjacent to the current pixel in the same row.
Further, the weight matrix in step (2) is computed from the horizontal distance of each pixel to the splicing seam as follows: first the horizontal distance d from each pixel position to the splicing seam is computed; then the weight of the pixel is computed as c_2 = A·e^(b·d), where A and b are preset parameters; the weight matrix is then formed from the weights of all pixels.
Further, steps S201 to S204 are all implemented in an FPGA.
It can be seen that, in realizing a spliced panoramic camera, the present invention provides the following benefits: on the basis of a single-row image sensor architecture with relatively low stitching cost, the stitching performance is further exploited and improved. By arranging the image sensors vertically, at the cost of a moderate increase in the number of image sensors, the vertical angle of view of the panoramic image is effectively increased. The present invention restricts the processing range of image stitching to the overlaps, so that the stitching of each pair of adjacent images is independent and can be executed on parallel processors; the stitching speed is therefore not affected by the increase in the number of image sensors.
Brief description of the drawings:
Fig. 1 is a schematic diagram of a CMOS sensor with a 50-degree horizontal angle of view set to the standard 1080P mode.
Fig. 2 shows two horizontal rows of CMOS sensors stitched to form a 360 × 50 degree field of view; the CMOS sensors used for stitching are of the model shown in Fig. 1.
Fig. 3 shows a single row of vertically arranged CMOS sensors stitched to form a 360 × 50 degree field of view; the CMOS sensors used for stitching are of the model shown in Fig. 1.
Fig. 4 is a schematic diagram of updating the difference matrix of the current frame based on the stitching seam of the previous frame. L denotes the stitching seam computed for the previous video frame; the dashed lines indicate the horizontal distance from each pixel position to the stitching seam.
Fig. 5 is a schematic diagram of an image stitching seam. S denotes the start point of the seam, T its end point, and the thick line the seam itself; the sum of the difference values along the positions of the thick line is the smallest among all routes.
Fig. 6 is the mathematical model of the optimal stitching seam search.
Fig. 7 shows the search direction of the conventional dynamic-programming algorithm: only the three points in the next row adjacent to the current position can be searched.
Fig. 8 shows the search direction of the improved dynamic-programming algorithm: the three adjacent points in the next row and the two points in the same row adjacent to the current position can all be searched.
Detailed description of the invention
The detailed implementation of the present invention is described below in further detail with reference to the accompanying drawings and examples.
The present invention is a 360-degree spliced panoramic camera with a wide vertical field of view. Its advantage is that, using a single row of image sensors, it has a larger vertical angle of view than a traditional panoramic camera with horizontally arranged image sensors. In certain applications, a field of view that would require two rows of image sensors under the horizontally arranged scheme can be obtained with a single row of image sensors, and image stitching only needs to be performed in one direction, which simplifies the stitching algorithm.
First, the model, number, and relative positions of the image sensors are determined from the field-of-view requirements of the panoramic stitching application; the panoramic camera is then calibrated, projecting each image channel onto a common plane; finally, a dynamic-programming method and a gradient-domain fusion method complete the stitching of the panoramic image.
The invention provides a 360-degree spliced panoramic camera with a wide vertical field of view, comprising several identical, vertically arranged image sensors, wherein the angle of view corresponding to the long side of the effective sensor area of each image sensor is not less than the vertical angle of view required for the panoramic image, the width of the overlap between images formed by adjacent image sensors satisfies the requirement of image stitching, and the number of image sensors is such that the angles of view corresponding to the short sides of the effective sensor areas of all image sensors add up to not less than 360 degrees.
In the present invention, the distance between the lens optical centers of any two of the image sensors is less than 1/10 of the minimum depth of field, and every image sensor uses identical exposure parameters.
The design of the spliced panoramic camera in the present invention is based on the panoramic stitching requirements for a certain field-of-view size, including the required horizontal and vertical angles of view. For a 360-degree spliced panoramic camera, the required horizontal angle of view is 360 degrees. According to the required vertical angle of view of the panoramic image, a suitable lens and image sensor model is chosen so that the angle of view corresponding to the long side of the effective sensor area of the image sensor is not less than the required vertical angle of view.
Given the field-of-view requirement of the panoramic stitching application, let the required field of view be θ_W × θ_H, where θ_W is the horizontal angle of view (θ_W = 360° in this embodiment) and θ_H is the vertical angle of view. A suitable lens and image sensor model should then be chosen so that the angle of view corresponding to the long side of the effective sensor area is not less than the required vertical angle of view. Let the field of view corresponding to the effective sensor area of the image sensor be θ′_W × θ′_H, with θ′_W > θ′_H as in the traditional horizontally arranged scheme; the above angle-of-view constraint can then be expressed as
θ′_W ≥ θ_H    (1)
If the image sensor is set to the standard 16:9 mode, it further holds that
θ′_H = (9/16) θ′_W    (2)
According to the required horizontal angle of view of the panoramic image, the number of image sensors is determined: on the premise of leaving overlaps of suitable width, the angles of view corresponding to the short sides of the effective sensor areas of the image sensors must add up to not less than 360 degrees.
Let the overlap width required for seamless stitching, expressed as a ratio of the width of each image channel, be η. For the vertically arranged sensor scheme, the minimum width of each overlap is η·θ′_H, and the required number of image sensors N can be solved from
N(θ′_H − η·θ′_H) ≥ θ_W = 360°    (3)
i.e.
N ≥ 360° / ((1 − η)·θ′_H)    (4)
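As a numerical illustration of relation (4), the sketch below (a Python illustration added for clarity, not part of the patent disclosure) computes the minimum sensor count; the 28.125-degree short-side angle of the Fig. 1 CMOS and the 20% overlap ratio are assumed example values.

```python
import math

def min_sensor_count(theta_short_deg: float, eta: float) -> int:
    """Minimum sensor count from relation (4): N >= 360 / ((1 - eta) * theta'_H)."""
    return math.ceil(360.0 / ((1.0 - eta) * theta_short_deg))

# Example with assumed values: the Fig. 1 CMOS covers 28.125 degrees on its short
# side; with a 20 % overlap ratio, N >= 360 / (0.8 * 28.125) = 16 sensors.
print(min_sensor_count(28.125, 0.20))  # -> 16
```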
After the model and number of image sensors have been determined, the hardware structure of the panoramic imaging system is determined from the field-of-view size, so that the relative positions of the image sensors are fixed. Two further points require attention: to reduce parallax, the lens optical centers of the image acquisition channels should be as close together as possible; to reduce brightness differences, all image channels should be captured with the same exposure parameters.
The present invention also provides a calibration method for the above 360-degree spliced panoramic camera, a feature-point-based calibration method comprising the following steps:
S101. The 360-degree spliced panoramic camera collects one group of image data;
S102. Feature points are searched for and matched in the image formed by each image sensor in the image data;
S103. The projection relations between the images formed by the image sensors are solved from the feature-point matches obtained in step S102;
S104. All images formed by the image sensors are projected onto a common viewing plane; for the projection of the image formed by an image sensor in the viewing plane, the pixel value at any position is obtained by means of a look-up table, using bilinear interpolation from the corresponding position of the image formed by the corresponding image sensor.
The above calibration method can be described in detail as follows:
After the 360-degree spliced panoramic camera of the present invention has been built, the panoramic camera needs to be calibrated to obtain the distortion parameters and projection relations of each image channel, and all images are projected onto a common plane.
The panoramic camera can be calibrated with a feature-point-based calibration method: one or more groups of image data are collected in a real scene with relatively dense features, feature points are searched for and matched in each image channel, the projection relations between the channels are solved from the feature-point matches, and finally all images are projected onto a common viewing plane.
At this point, for the projection of each original image in the viewing plane, a one-to-one projection relation can be determined, i.e. from where in the original image the image information at any position of the projection should be obtained. This correspondence can be represented by look-up tables. Let the original images be I_1, I_2, …, I_N, the projected images P_1, P_2, …, P_N, and the corresponding look-up tables T_1, T_2, …, T_N. Each look-up table has the same width and height as the original image, and each entry contains three values [height, width, mask], representing respectively the height coordinate and width coordinate of the original-image position corresponding to the current position of the projected image and a validity flag. The generation of the projected image P_k, k = 1, 2, …, N, can then be expressed as follows.
For a position (h_P, w_P) in the projected image, if T_k(h_P, w_P, 3) ≠ 0, the position has data in the original image, and its coordinate position is
h_I = T_k(h_P, w_P, 1),  w_I = T_k(h_P, w_P, 2)    (5)
The coordinate position (h_I, w_I) is generally not an integer, and the pixel value of the projected image can be obtained from the original image by bilinear interpolation.
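A minimal sketch of this look-up-table projection with bilinear interpolation is given below; it assumes NumPy arrays and the [height, width, mask] table layout described above, and is an illustration rather than the patent's actual implementation.

```python
import numpy as np

def project_with_lut(original: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Build the projected image P_k from the original image I_k and look-up table T_k.

    original: (H, W, C) float image I_k
    lut:      (H_p, W_p, 3) table; channels are [height, width, mask] as in the text
    """
    h_p, w_p = lut.shape[:2]
    proj = np.zeros((h_p, w_p, original.shape[2]), dtype=original.dtype)
    for hp in range(h_p):
        for wp in range(w_p):
            if lut[hp, wp, 2] == 0:                    # mask: no source data here
                continue
            hi, wi = lut[hp, wp, 0], lut[hp, wp, 1]    # fractional source coordinates
            h0, w0 = int(np.floor(hi)), int(np.floor(wi))
            dh, dw = hi - h0, wi - w0
            h1 = min(h0 + 1, original.shape[0] - 1)
            w1 = min(w0 + 1, original.shape[1] - 1)
            # bilinear interpolation of the four surrounding source pixels
            proj[hp, wp] = ((1 - dh) * (1 - dw) * original[h0, w0]
                            + (1 - dh) * dw * original[h0, w1]
                            + dh * (1 - dw) * original[h1, w0]
                            + dh * dw * original[h1, w1])
    return proj
```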
The present invention also provides an image stitching method for the above 360-degree spliced panoramic camera, comprising the following steps:
S201. The image data collected by the 360-degree spliced panoramic camera are projected through the calibration to obtain projected images, and the overlap between the images formed by each group of adjacent image sensors is determined from the projected images;
S202. A splicing seam is determined in each overlap using a dynamic-programming method;
S203. The corresponding adjacent images are fused on both sides of each splicing seam using a gradient-domain fusion method;
S204. The fused images are integrated with the remaining parts of the corresponding adjacent images, and the panoramic image is output.
The detailed process can be described as follows:
After the calibration of the panoramic imaging system is complete, the overlap of any group of adjacent images can be obtained. Ideally the overlaps of the left and right images should be identical, but in practice, because of parallax and errors of the image acquisition devices, the images in the overlap always deviate to some extent. The stitching task of the panoramic image is precisely to process the images in the overlap, so as to complete a smooth transition between the left and right images in the overlap and eliminate visible deviations and discontinuities in the panoramic image.
First, a dynamic-programming method is used to search for the optimal splicing seam. The present invention provides two concrete implementations.
The first implementation of the optimal splicing seam search is as follows: the position of the optimal splicing seam should take into account both the texture information and the difference information of the images, and the seam should pass through regions where the image texture is not rich and the difference between the two images is small. The texture of the images is reflected by the gradient map of the two images, and their difference by the difference map in the gradient domain; the image data used in the seam search should combine the two. For the two original images I_1, I_2 of an overlap, the corresponding gradient map G is computed by
G = ‖∇_h I_1‖ + ‖∇_w I_1‖ + ‖∇_h I_2‖ + ‖∇_w I_2‖    (6)
The difference map D in the gradient domain is computed by
D = ‖∇_h I_1 − ∇_h I_2‖ + ‖∇_w I_1 − ∇_w I_2‖    (7)
where the operators ∇_h and ∇_w denote the gradients in the height and width directions respectively, and ‖·‖ denotes the norm.
The image S used to compute the seam position should be the weighted average of G and D, i.e.
S = αG + (1 − α)D    (8)
A suitable weight coefficient α is chosen according to the actual requirements, giving the image S that combines texture information with difference information; running the dynamic-programming algorithm on S yields the optimal splicing seam L of the overlap. The weight coefficient α generally lies between 0.1 and 0.9; in this embodiment α is taken as 0.7.
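The sketch below computes the seam-search cost image of equations (6)-(8); using simple backward differences for the gradient operators and per-pixel absolute values for the norm are assumptions of the sketch.

```python
import numpy as np

def seam_cost_image(i1: np.ndarray, i2: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Cost image S = alpha*G + (1 - alpha)*D of equations (6)-(8) for an overlap pair.

    i1, i2: (H, W) single-channel overlap images (float)
    """
    def grads(img):
        gh = np.zeros_like(img)
        gw = np.zeros_like(img)
        gh[1:, :] = img[1:, :] - img[:-1, :]   # gradient along the height direction
        gw[:, 1:] = img[:, 1:] - img[:, :-1]   # gradient along the width direction
        return gh, gw

    g1h, g1w = grads(i1)
    g2h, g2w = grads(i2)
    G = np.abs(g1h) + np.abs(g1w) + np.abs(g2h) + np.abs(g2w)   # texture term, eq. (6)
    D = np.abs(g1h - g2h) + np.abs(g1w - g2w)                   # gradient-domain difference, eq. (7)
    return alpha * G + (1.0 - alpha) * D                        # weighted combination, eq. (8)
```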
Then the fusion method in the gradient domain is used to fuse the overlaps of the left and right images. The gradient-domain fusion method requires the image information on the boundary of the overlap and the gradient information of the whole overlap. The image information on the overlap boundary is taken directly from the left and right images, and the gradient maps G_h, G_w are obtained from the position of the optimal splicing seam:
G_h(h, w) = ∇_h I_1 if w ≤ L(h), and ∇_h I_2 if w > L(h)    (9)
G_w(h, w) = ∇_w I_1 if w ≤ L(h), and ∇_w I_2 if w > L(h)    (10)
where L(h) denotes the abscissa of the optimal splicing seam at height h.
Solving for the fused image from the gradient maps is in essence solving the Poisson equation
ΔI = ∂G_h/∂h + ∂G_w/∂w,  with I|_∂Ω = I*|_∂Ω    (11)
where Ω denotes the overlap region, ∂Ω the boundary of the overlap region, I the unknown image data, and I* the known image boundary values. Solving the Poisson equation can be reduced to solving a system of linear equations with a sparse coefficient matrix, which can be solved with the fast-converging Gauss-Seidel iteration or the SOR iteration. Taking the original image as the iteration initial value accelerates the solution.
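A minimal Gauss-Seidel sketch for equation (11) on a single-channel overlap follows; the seam-composited gradients of (9)-(10), the fixed sweep count, and the use of the composited image as boundary condition and initial value are assumptions of the sketch, not the patent's exact implementation.

```python
import numpy as np

def fuse_gradient_domain(i1, i2, seam, n_iter=200):
    """Gauss-Seidel sketch for the Poisson equation (11) on a single-channel overlap.

    i1, i2 : (H, W) float overlap images; seam[h] = column L(h) of the splicing seam.
    Boundary pixels keep the seam-composited values (Dirichlet condition), and the
    composited image is used as the iteration initial value to speed convergence.
    """
    H, W = i1.shape
    left = np.arange(W)[None, :] <= np.asarray(seam)[:, None]   # pixels taken from I_1

    def bgrad(img, axis):
        # backward difference along the given axis (first row/column left at zero)
        g = np.zeros_like(img)
        if axis == 0:
            g[1:, :] = img[1:, :] - img[:-1, :]
        else:
            g[:, 1:] = img[:, 1:] - img[:, :-1]
        return g

    # seam-composited gradient field, equations (9) and (10)
    gh = np.where(left, bgrad(i1, 0), bgrad(i2, 0))
    gw = np.where(left, bgrad(i1, 1), bgrad(i2, 1))

    # right-hand side of (11): divergence of the composited gradient field
    div = np.zeros_like(i1)
    div[:-1, :] += gh[1:, :] - gh[:-1, :]
    div[:, :-1] += gw[:, 1:] - gw[:, :-1]

    I = np.where(left, i1, i2).astype(float)            # initial value and boundary
    for _ in range(n_iter):                             # Gauss-Seidel sweeps
        for h in range(1, H - 1):
            for w in range(1, W - 1):
                I[h, w] = (I[h + 1, w] + I[h - 1, w]
                           + I[h, w + 1] + I[h, w - 1] - div[h, w]) / 4.0
    return I
```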
The present invention also provides a second implementation of the optimal splicing seam search.
(1) The image difference of each overlap is computed in the Lab color space to obtain the difference matrix of the overlap.
Images are typically stored in the three RGB color channels; to better describe the difference between images, the present invention computes the image difference of the overlap in the Lab color space. The Lab color model is based on human perception of color and has two major advantages. First, the Lab color space describes how colors look rather than the amount of specific colorants needed to generate a color, so the Lab color model is regarded as a device-independent color model, eliminating the dependence of the color space on equipment. Second, its gamut is broad: it contains all the gamuts of RGB and CMYK, it can express colors they cannot, and any color the human eye can perceive can be expressed through the Lab model. The difference of the overlap can therefore be described more accurately in the Lab color space.
The Lab color model consists of a luminance component L and two color components a and b. L represents luminosity, a represents the range from magenta to green, and b represents the range from yellow to blue.
There is a corresponding conversion relation between the RGB color model and the Lab color model:
L = F(R, G, B)
a = G(R, G, B)
b = H(R, G, B)
where R, G, B denote the values of the three color channels of the RGB color space, and F(·), G(·), H(·) denote the corresponding conversion functions.
After the color-space conversion, the difference of the overlap is computed in the Lab color space. The overlap is scanned pixel by pixel, and the difference between the corresponding source images at each pixel position is computed; the present invention uses the Euclidean distance to quantitatively describe the difference at corresponding pixel positions of the overlap. For adjacent images I_1 and I_2, the difference of their overlap is computed by the formula
c_1 = (w_l·(L(I_1) − L(I_2))² + w_c·(a(I_1) − a(I_2))² + w_c·(b(I_1) − b(I_2))²)^(1/2)
This calculation yields the difference matrix. Here w_l and w_c denote the corresponding weights and can be preset as required; in this embodiment w_l and w_c are both taken as 1/3. L(·), a(·), b(·) denote the values of the three components of the Lab color model of the corresponding image, and c_1 denotes the image difference value computed at each pixel of the overlap. The difference matrix formed by the image difference values of the overlap is denoted C_1.
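The sketch below computes the difference matrix C_1 for an overlap pair; using skimage's rgb2lab for the conversion functions F(·), G(·), H(·) is an assumption of the sketch.

```python
import numpy as np
from skimage.color import rgb2lab   # assumed stand-in for the F(.), G(.), H(.) conversion

def lab_difference_matrix(i1_rgb, i2_rgb, wl=1/3, wc=1/3):
    """Difference matrix C_1 of the overlap: weighted Euclidean distance in Lab space.

    i1_rgb, i2_rgb: (H, W, 3) RGB overlap images with values in [0, 1]
    wl, wc:         luminance and chroma weights (1/3 in the embodiment)
    """
    lab1 = rgb2lab(i1_rgb)
    lab2 = rgb2lab(i2_rgb)
    dL = lab1[..., 0] - lab2[..., 0]
    da = lab1[..., 1] - lab2[..., 1]
    db = lab1[..., 2] - lab2[..., 2]
    return np.sqrt(wl * dL**2 + wc * da**2 + wc * db**2)   # per-pixel c_1 values
```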
(2) The difference matrix is recomputed using information about the overlap from the previous and current frames.
Starting from the splicing seam position computed in the overlap of the previous image frame and extending horizontally to both sides, a weight matrix is computed from the horizontal distance of each pixel in the overlap to that splicing seam. In Fig. 4, L denotes the splicing seam computed for the previous video frame, and the dashed lines indicate the horizontal distance from each pixel position to the seam. Concretely, the overlap is scanned pixel by pixel and the horizontal distance d from each pixel position to the seam is computed. Let (p_0, q) denote the pixel through which the splicing seam of the previous video frame passes in row q, and (p, q) the position of the current pixel; the horizontal distance d is then computed as
d = ‖p − p_0‖_1
In the present invention, the weight matrix computed from the horizontal distance of each pixel in the overlap to the splicing seam can use different models depending on the actual situation; in this embodiment an exponential function is used to compute the distance-dependent weight c_2 of the current pixel position:
c_2 = f(d) = A·e^(b·d)
where A and b are parameters set according to the actual situation and are set to 1 and 5 respectively in this embodiment, and d is the computed horizontal distance from the corresponding pixel position to the splicing seam.
This yields the weight matrix C_2 formed by the weights of all pixels in the overlap. This weight matrix is summed with the difference matrix obtained in step (1), and the result is the final difference matrix C of the overlap, C = C_1 + C_2.
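A small sketch of this temporal weighting follows; representing the previous seam as a per-row column index is an assumption of the sketch, and with the embodiment's values A = 1, b = 5 the exponential grows very quickly with distance, which strongly biases the new seam toward the previous one.

```python
import numpy as np

def temporal_weight_matrix(prev_seam, width, A=1.0, b=5.0):
    """Weight matrix C_2 with c_2 = A*exp(b*d), where d is the horizontal distance
    of each pixel to the previous frame's seam (prev_seam[h] = column p_0 in row h)."""
    cols = np.arange(width)[None, :]
    d = np.abs(cols - np.asarray(prev_seam)[:, None])   # d = |p - p_0| per pixel
    return A * np.exp(b * d)

def final_difference_matrix(C1, prev_seam, A=1.0, b=5.0):
    """Final difference matrix C = C_1 + C_2 used by the seam search."""
    # With the embodiment's b = 5 the exponential saturates for pixels far from the
    # previous seam, which simply keeps the new seam close to it.
    return C1 + temporal_weight_matrix(prev_seam, C1.shape[1], A, b)
```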
Regarding the initial value of the splicing seam in the present invention, i.e. the splicing seam of each image-frame overlap at the initial moment (the first frame), it can be set according to the actual situation; in this embodiment, the splicing seam of each image-frame overlap at the initial moment is the center line of the overlap.
(3) The splicing seam is computed from the final difference matrix obtained in step (2) using a dynamic-programming algorithm.
Computing the splicing seam essentially means finding, in the difference matrix C computed in step (2), a dividing line for which the sum of the difference values at the positions it passes through is minimal. As shown in Fig. 5, the thick dividing line has the minimal sum of difference values along the positions it passes through; it is exactly the splicing seam to be computed. The computation of the splicing seam can therefore be abstracted as the problem of finding an optimal path in a weighted undirected graph.
A graph model as shown in Fig. 6 is built: the position of each point in the overlap represents a node of the graph, and each corresponding image difference represents an edge. A dynamic-programming algorithm searches, from the upper end of the overlap to the lower end, for an optimal path for which the sum of difference values at the positions it passes through is minimal; this path is exactly the splicing seam to be found.
The present invention uses the dynamic-programming method to search for the optimal splicing seam; the advantage of the dynamic-programming algorithm is fast computation at low cost, meeting the real-time requirement of video stitching. The idea of the conventional dynamic-programming method is as follows: in the computed difference matrix, proceeding pixel by pixel and row by row from the lower end to the upper end, the minimum of the sum of overlap difference values accumulated from the bottommost row to the current pixel position is computed and recorded. During the computation at each pixel position, the search range of that pixel position contains only the three points adjacent to it in the next row. After the computation is complete, the pixel with the minimal sum of difference values in the topmost row of the overlap is taken, and a backtrace from the upper end toward the lower end records the pixels passed through, finally yielding the splicing seam. When the traversal starts from the lower end, the difference sums corresponding to the pixels of the initial (bottommost) row are the corresponding values of the bottommost row of the difference matrix C.
The conventional dynamic-programming algorithm above computes the splicing seam with a search range that, in the search from the lower end to the upper end, contains only the three points in the next row adjacent to the current pixel, as shown in Fig. 7; the search is therefore rather limited. The recurrence relation of the dynamic-programming algorithm can be described as
E_{i,j} = e_{i,j} + min(E_{i−1,j−1}, E_{i−1,j}, E_{i−1,j+1})
where E_{i,j} denotes the accumulated sum of difference values along the path from the bottommost row to position (i, j), and e_{i,j} denotes the difference value at position (i, j).
To improve the search effect, the present invention also provides an improved dynamic-programming algorithm. As shown in Fig. 8, the current pixel position can search not only the three points in the next row adjacent to the current pixel but also the two points in the same row adjacent to the current pixel, which enlarges the search range and improves the result of the splicing seam search. The recurrence relation is
E_{i,j} = e_{i,j} + min(E_{i−1,j−1}, E_{i−1,j}, E_{i−1,j+1}, E_{i,j−1}, E_{i,j+1})
where E_{i,j} denotes the accumulated sum of difference values along the path from the bottommost row to position (i, j), and e_{i,j} denotes the difference value at position (i, j).
Thus, starting from the bottommost row of the overlap, the improved dynamic-programming algorithm performs the splicing seam search and finally yields the splicing seam of the overlap at the current moment.
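A compact sketch of the seam search on the final difference matrix C is given below. It implements the conventional three-neighbour recurrence directly; the improved variant of Fig. 8, which also relaxes the two same-row neighbours, couples entries within a row and is typically handled with additional left-to-right and right-to-left passes per row (an implementation assumption beyond the patent text).

```python
import numpy as np

def find_seam(C: np.ndarray) -> np.ndarray:
    """Dynamic-programming seam search on the final difference matrix C.

    Accumulates path costs upward from the bottommost row using the conventional
    three-neighbour recurrence, then backtracks from the cheapest pixel of the
    topmost row down to the bottom.  Returns seam[h] = selected column in row h.
    """
    H, W = C.shape
    E = C.astype(float)
    for i in range(H - 2, -1, -1):                    # bottommost row is the base case
        below = E[i + 1]
        left = np.concatenate(([np.inf], below[:-1]))
        right = np.concatenate((below[1:], [np.inf]))
        E[i] += np.minimum(below, np.minimum(left, right))
    seam = np.empty(H, dtype=int)
    seam[0] = int(np.argmin(E[0]))                    # cheapest start in the topmost row
    for i in range(1, H):                             # backtrace toward the bottom row
        j = seam[i - 1]
        lo, hi = max(j - 1, 0), min(j + 1, W - 1)
        seam[i] = lo + int(np.argmin(E[i, lo:hi + 1]))
    return seam
```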

Claims (8)

1. A 360-degree spliced panoramic camera with a wide vertical field of view, characterized in that it comprises several identical, vertically arranged image sensors, wherein the angle of view corresponding to the long side of the effective sensor area of each image sensor is not less than the vertical angle of view required for the panoramic image, the width of the overlap between images formed by adjacent image sensors satisfies the requirement of image stitching, and the number of image sensors is such that the angles of view corresponding to the short sides of the effective sensor areas of all image sensors add up to not less than 360 degrees.
2. The 360-degree spliced panoramic camera with a wide vertical field of view according to claim 1, characterized in that: the distance between the lens optical centers of any two of the image sensors is less than 1/10 of the minimum depth of field; and every image sensor uses identical exposure parameters.
3. A calibration method for the 360-degree spliced panoramic camera according to claim 1, characterized in that it is a feature-point-based calibration method comprising the following steps:
S101. The 360-degree spliced panoramic camera collects one group of image data;
S102. Feature points are searched for and matched in the image formed by each image sensor in the image data;
S103. The projection relations between the images formed by the image sensors are solved from the feature-point matches obtained in step S102;
S104. All images formed by the image sensors are projected onto a common viewing plane; for the projection of the image formed by an image sensor in the viewing plane, the pixel value at any position is obtained by means of a look-up table, using bilinear interpolation from the corresponding position of the image formed by the corresponding image sensor.
4. An image stitching method for the 360-degree spliced panoramic camera according to claim 1, characterized by comprising the following steps:
S201. The image data collected by the 360-degree spliced panoramic camera are projected through the calibration to obtain projected images, and the overlap between the images formed by each group of adjacent image sensors is determined from the projected images;
S202. A splicing seam is determined in each overlap using a dynamic-programming method;
S203. The corresponding adjacent images are fused on both sides of each splicing seam using a gradient-domain fusion method;
S204. The fused images are integrated with the remaining parts of the corresponding adjacent images, and the panoramic image is output.
5. The image stitching method according to claim 4, characterized in that step S202 specifically comprises:
(1) computing the image difference of each overlap in the Lab color space to obtain a difference matrix of the overlap;
(2) recomputing the difference matrix using information about the overlap from the previous and current frames: starting from the splicing seam position computed in this overlap at the previous moment and extending horizontally to both sides, a weight matrix is computed from the horizontal distance of each pixel in the overlap to that splicing seam; this weight matrix is summed with the difference matrix obtained in step (1), and the result is the final difference matrix of the overlap; the initial value of the splicing seam position computed at the previous moment, i.e. the splicing seam of the overlap region at the initial moment, is the center line of the overlap region;
(3) computing the splicing seam with a dynamic-programming algorithm from the final difference matrix obtained in step (2).
6. The image stitching method according to claim 5, characterized in that step (3) specifically comprises: searching, from the upper end to the lower end of the current overlap, for an optimal path that minimizes the sum of the image difference values at the pixel positions it passes through, where the image difference value is the value corresponding to that pixel position in the final difference matrix; this optimal path is the splicing seam of the overlap; when searching for the optimal path, for the current pixel position, not only the three pixels adjacent to the current position in the next row are searched, but also the two pixels adjacent to the current pixel position in the same row.
7. The image stitching method according to claim 5, characterized in that the weight matrix in step (2) is computed from the horizontal distance of each pixel to the splicing seam as follows: first the horizontal distance d from each pixel position to the splicing seam is computed; then the weight of the pixel is computed as c_2 = A·e^(b·d), where A and b are preset parameters; the weight matrix formed by the weights of all pixels is then obtained.
8. The image stitching method according to claim 4, characterized in that each of the steps is implemented in an FPGA.
CN201310459441.3A 2013-10-01 360-degree spliced panoramic camera with a wide vertical field of view Active CN103533266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310459441.3A CN103533266B (en) 2013-10-01 360-degree spliced panoramic camera with a wide vertical field of view

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310459441.3A CN103533266B (en) 2013-10-01 360-degree spliced panoramic camera with a wide vertical field of view

Publications (2)

Publication Number Publication Date
CN103533266A CN103533266A (en) 2014-01-22
CN103533266B (en) 2016-11-30


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102510474A (en) * 2011-10-19 2012-06-20 中国科学院宁波材料技术与工程研究所 360-degree panorama monitoring system
CN202886832U (en) * 2012-09-27 2013-04-17 中国科学院宁波材料技术与工程研究所 360-degree panoramic camera
CN102905079A (en) * 2012-10-16 2013-01-30 北京小米科技有限责任公司 Method, device and mobile terminal for panorama shooting


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant