CN101442619A - Method for splicing non-control point image - Google Patents


Info

Publication number
CN101442619A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102374278A
Other languages
Chinese (zh)
Other versions
CN101442619B (en)
Inventor
李德仁
刘进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen D & W Spatial Information Technology Co., Ltd.
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN2008102374278A priority Critical patent/CN101442619B/en
Publication of CN101442619A publication Critical patent/CN101442619A/en
Application granted granted Critical
Publication of CN101442619B publication Critical patent/CN101442619B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for image stitching without control points, comprising the following steps: (1) acquiring an image sequence; (2) extracting a feature point set from each image in the sequence; (3) searching for corresponding point pairs of feature points between adjacent images; (4) computing the Homograph transformation between adjacent images with the RANSAC fault-tolerant algorithm; and (5) obtaining the stitching result by chained matrix multiplication and image fusion. For each feature point, the method automatically selects the color channel of highest feature significance, which not only greatly increases the number of feature points but also improves their significance and accuracy. The method can also serve as a human-computer-interactive correction method for aerial photographs, revising the small errors accumulated during stitching. It requires no control points and offers high accuracy, low application cost, and high efficiency.

Description

Method for splicing non-control point image
Technical field
The invention belongs to the field of computer vision and image processing, and in particular relates to a method for automatically and continuously stitching video frames of a changing scene into a single complete image. The technique requires neither camera pose nor ground control point information, accounts for arbitrary rotation, scale change, and perspective transformation of the video (or image sequence), and realizes fully automatic, seamless stitching of color video images.
Background technology
Automatic image stitching has broad demand and application in remote sensing, military reconnaissance, space exploration (e.g., stitching lunar and Martian surface images), 360° panorama generation, and automatic stitching of UAV remote-sensing images. Traditional stitching processes often require manual intervention as soon as rotation, scale, or illumination changes appear; they are laborious, of limited accuracy, and hard to automate. In the military field in particular, GPS is often unavailable and neither camera attitude nor position can be known accurately, so a real-time automatic video stitching method requiring neither control points nor camera attitude is urgently needed.
Image stitching is the process of spatially matching and aligning several mutually overlapping images, then resampling and fusing them into one new image that combines the information of every input image — wide in field of view, complete in scene, and high in definition. Stitching has become a research focus. Many traditional registration-based stitching methods suffer from low accuracy, low efficiency, and poor automation; some require the aircraft or camera attitude together with absolute GPS coordinates or ground control points, which in many circumstances cannot be provided (GPS may be entirely unobtainable in wartime, and control points of unexplored aerial regions such as the Martian surface are unknown); others cannot withstand the interference of rotation, scale change, and illumination. Remote sensing and military reconnaissance therefore urgently need a fast automatic video stitching method that accounts for rotation, scale, and illumination variation.
Automatic image stitching mainly involves the following three technologies:
1. Automatic extraction of image feature points: features must be extracted stably and effectively for all image types, independent of image rotation, size, resolution, illumination, and color saturation.
2. Automatic matching of corresponding feature points between the images or video frames to be stitched; this process must be fast and robust.
3. Computation of the perspective transformation matrix between two images from the corresponding point pairs; the algorithm must automatically discard erroneous pairs.
The gray-image SIFT method [LOWE D G. Distinctive image features from scale-invariant key points. International Journal of Computer Vision, 2004, 60(2): 91-110] determines stable, scale-invariant feature point positions by searching for extrema in the multi-scale pyramid of the image, and determines the feature orientation from a gradient-direction histogram. That work mainly realizes recognition of rotated and occluded objects, but does not discuss the following key problems of video image stitching:
1. How to extract rotation- and scale-invariant features from color images. Experiments show that simply merging the features of the three RGB channels produces very poor results.
2. How to use these feature points to realize stable, fully automatic stitching of images or video frames.
3. How to stitch a large image sequence continuously.
4. How to compute feature points in parallel so that feature extraction is substantially accelerated.
Solving these key problems of automatic stitching is the object of the present invention.
The binary-tree fast pattern classification algorithm [Beis, J. and Lowe, D.G. 1997. Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In Conference on Computer Vision and Pattern Recognition, Puerto Rico, pp. 1000-1006] automatically builds a so-called optimal binary classification tree from the distribution of the patterns to be classified in feature space, greatly improving matching efficiency. With further adaptation, the algorithm achieves fast matching of large numbers of image feature points.
Compared with feature matching, feature extraction is an equally important link in the stitching algorithm and the more time-consuming one. To improve efficiency further, the stitching process can employ parallel computation between frames, for two reasons:
1. There is no causal relationship between the computations of individual frames; they run independently, so the advantages of parallel computation can be fully exploited.
2. Multi-core computer hardware is ubiquitous: even laptops are now rarely sold single-core, with dual- and quad-core models standard, which greatly reduces the application cost of parallel stitching.
The RANSAC fault-tolerant technique was first used for line fitting and has since been extended to plane fitting and other fields requiring robust processing. The RANSAC algorithm randomly samples candidate models and maximizes the number of objects satisfying the model conditions, but it does not consider the error magnitude of the selected objects — an error condition that also matters in image stitching. An extended RANSAC technique that incorporates this conditional error can better solve the fault-tolerant feature matching computation in image stitching.
Summary of the invention
The present invention addresses the above problems by providing a method for stitching images without control points. The method continuously stitches video frames of a changing scene (e.g., frames from a continuously moving or rotating camera, or a sequence of overlapping photographs) into a single complete image; it requires neither camera pose nor ground control point information, accounts for arbitrary rotation, scale, and perspective transformation of the video (or image sequence), and achieves high stitching accuracy.
The technical scheme provided by the invention is as follows.
A method for stitching images without control points comprises the following steps:
(1) Acquiring an image sequence
A digital camera is used to acquire an image sequence, or a digital video camera to acquire video. For an image sequence, adjacent images must overlap by more than 40%; for video, the image sequence is obtained by decoding, again with more than 40% overlap between adjacent images.
(2) Extracting the feature point set of every image in the sequence
a. First build the multi-scale pyramid of the image; for a color image, the pyramid is built for each of the three RGB channels.
b. Apply z = 4-8 Gaussian blurs to each pyramid layer to obtain z+1 blurred images gauss_c[i], i = 0, 1, 2, ..., z, where gauss_c[0] is the original image of that layer; then form z difference-of-Gaussian images dog_c[e], e = 0, 1, 2, ..., z-1, where dog_c[e] = gauss_c[e+1] - gauss_c[e]. The subscript c denotes the gray image or, for a color image, c ∈ {R, G, B}, the red, green, and blue channels.
c. In every layer dog_c[e], e = 1, 2, ..., z-2, find the local extrema: points where dog_c[e][x, y] is larger than all six adjacent values dog_c[e][x+1, y], dog_c[e][x, y+1], dog_c[e][x-1, y], dog_c[e][x, y-1], dog_c[e-1][x, y], and dog_c[e+1][x, y]. Here x, y are pixel coordinates; (x-1, y), (x+1, y), (x, y+1), and (x, y-1) are the left, right, upper, and lower neighbors of (x, y); and dog_c[e][x, y] denotes the pixel value at coordinate (x, y) of blur layer e in channel c.
When the image is gray, the local extrema serve directly as feature points and the method proceeds to step (3); when the image is color, the feature points are first selected in step d, after which the method proceeds to step (3).
d. For each local extremum of step c, compute the feature significance C_cf[x, y, e] and keep the channel of maximum significance as the feature point, obtaining a scale-independent feature point set, where
C_cf[x, y, e] = dog_c[e][x, y] - (dog_c[e][x+1, y] + dog_c[e][x, y+1] + dog_c[e][x-1, y] + dog_c[e][x, y-1] + dog_c[e-1][x, y] + dog_c[e+1][x, y]) / 6.
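Step (2) can be sketched in pure Python. This is a minimal illustration, not the patent's implementation: the separable Gaussian blur, its kernel radius, and clamped-border handling are assumptions I have added for completeness.

```python
import math

def gaussian_blur(img, sigma):
    """Separable Gaussian blur on a 2-D list of floats, with clamped borders."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    def conv(rows):  # blur each row with the normalized 1-D kernel
        return [[sum(kernel[j + radius] * row[min(max(x + j, 0), len(row) - 1)]
                     for j in range(-radius, radius + 1)) for x in range(len(row))]
                for row in rows]
    tmp = conv(img)                                   # horizontal pass
    transposed = [list(col) for col in zip(*tmp)]
    return [list(col) for col in zip(*conv(transposed))]  # vertical pass

def dog_stack(img, z=4, sigma=1.6):
    """gauss[0] is the original; dog[e] = gauss[e+1] - gauss[e], e = 0..z-1."""
    gauss = [img]
    for _ in range(z):
        gauss.append(gaussian_blur(gauss[-1], sigma))
    return [[[gauss[e + 1][y][x] - gauss[e][y][x] for x in range(len(img[0]))]
             for y in range(len(img))] for e in range(z)]

def local_extrema_with_significance(dog):
    """Points larger than their 6 neighbors, with C = value - mean(neighbors)."""
    feats = []
    for e in range(1, len(dog) - 1):
        layer = dog[e]
        for y in range(1, len(layer) - 1):
            for x in range(1, len(layer[0]) - 1):
                v = layer[y][x]
                nbrs = [layer[y][x + 1], layer[y][x - 1], layer[y + 1][x],
                        layer[y - 1][x], dog[e - 1][y][x], dog[e + 1][y][x]]
                if all(v > n for n in nbrs):
                    feats.append((x, y, e, v - sum(nbrs) / 6.0))
    return feats
```

Note that, by construction, every extremum found has positive significance, since its value exceeds each of the six neighbors and hence their mean.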
(3) Searching for the corresponding point pairs of feature points between adjacent images
(4) Computing the Homograph transformation between adjacent images with the RANSAC fault-tolerant algorithm
The RANSAC fault-tolerant algorithm finds the correct corresponding point pairs (x_{t-1,k}, y_{t-1,k}) and (x_{t,k}, y_{t,k}), k = 1, ..., m with m ≥ 4, between adjacent frames t-1 and t, where t = 2, ..., N and N is the number of images in the sequence. The following equation is then solved:
$$
\begin{bmatrix}
x_{t-1,1} & y_{t-1,1} & 1 & 0 & 0 & 0 & -x_{t,1}x_{t-1,1} & -x_{t,1}y_{t-1,1} \\
0 & 0 & 0 & x_{t-1,1} & y_{t-1,1} & 1 & -y_{t,1}x_{t-1,1} & -y_{t,1}y_{t-1,1} \\
\vdots & & & & & & & \vdots \\
x_{t-1,m} & y_{t-1,m} & 1 & 0 & 0 & 0 & -x_{t,m}x_{t-1,m} & -x_{t,m}y_{t-1,m} \\
0 & 0 & 0 & x_{t-1,m} & y_{t-1,m} & 1 & -y_{t,m}x_{t-1,m} & -y_{t,m}y_{t-1,m}
\end{bmatrix}
\begin{bmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{bmatrix}
=
\begin{bmatrix} x_{t,1} \\ y_{t,1} \\ \vdots \\ x_{t,m} \\ y_{t,m} \end{bmatrix}
$$
This yields the Homograph transformation between adjacent frames t-1 and t,
$$
H_{t-1,t} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}, \qquad h_{33} = 1.
$$
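Leaving aside the RANSAC sampling, the over-determined linear system of step (4) can be solved in least squares. A pure-Python sketch via the normal equations, assuming well-conditioned, non-degenerate point configurations (a real implementation would normalize coordinates first):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a square system A h = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][k] * h[k] for k in range(r + 1, n))) / M[r][r]
    return h

def fit_homography(src, dst):
    """Least-squares homography (h33 = 1) from m >= 4 point pairs.

    Builds the 2m x 8 design matrix of the text's equation, then solves
    the normal equations (A^T A) h = A^T b.
    """
    A, b = [], []
    for (x0, y0), (x1, y1) in zip(src, dst):
        A.append([x0, y0, 1, 0, 0, 0, -x1 * x0, -x1 * y0]); b.append(x1)
        A.append([0, 0, 0, x0, y0, 1, -y1 * x0, -y1 * y0]); b.append(y1)
    AtA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(8)]
           for i in range(8)]
    Atb = [sum(A[r][i] * b[r] for r in range(len(A))) for i in range(8)]
    h = solve_linear(AtA, Atb)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]
```

For example, five point pairs related by a pure translation recover a homography whose last row is (0, 0, 1) and whose translation entries match the shift.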
(5) Obtaining the stitching result by chained multiplication and fusion
From the Homograph transformations between adjacent frames t-1 and t, the transformation H_{1,t} between the 1st image and the t-th image is obtained by the matrix product H_{1,t} = H_{1,t-1} H_{t-1,t}; for t = 2, H_{1,t-1} = H_{1,1} is the identity matrix.
First the 1st image I_1(x', y') is copied directly to F_1(x, y). The formula below then stitches F_1(x, y) with the 2nd image I_2(x', y') to obtain the stitched image F_2(x, y), stitches F_2(x, y) with the 3rd image to obtain F_3(x, y), stitches F_3(x, y) with the 4th image to obtain F_4(x, y), ..., and stitches F_{N-1}(x, y) with the N-th image to obtain F_N(x, y). Stitching image by image from the 1st to the N-th finally yields the result image F_N(x, y):
F_t(x, y) = α I_t(x', y') + (1 - α) F_{t-1}(x, y), where t = 2, 3, 4, ..., N and α = 0.3-1;
where x', y' are computed from the following formula:
$$
\lambda \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = H_{1,t} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad H_{1,t} = H_{1,t-1} H_{t-1,t}.
$$
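The chaining and fusion of step (5) reduce to 3×3 matrix products, a projective point mapping with the scale λ divided out, and a per-pixel alpha blend. A minimal sketch (the warping and resampling of whole images is omitted):

```python
def mat_mul(A, B):
    """3x3 matrix product, used to chain homographies: H_1t = H_1,t-1 * H_t-1,t."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_homography(H, x, y):
    """Map (x, y) through H with the projective scale lambda divided out."""
    lam = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / lam,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / lam)

def chain(homs):
    """homs[t] is H_{t-1,t}; returns [H_{1,1}, H_{1,2}, ...] starting from identity."""
    H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    out = [H]
    for Ht in homs:
        H = mat_mul(H, Ht)
        out.append(H)
    return out

def blend(mosaic_val, new_val, alpha=0.5):
    """Per-pixel fusion: F_t = alpha * I_t + (1 - alpha) * F_{t-1}."""
    return alpha * new_val + (1 - alpha) * mosaic_val
```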
The invention can use multithreading to extract the feature point sets of several images simultaneously: on a computer with N CPUs, N threads are first assigned to extract the invariant features of the first N images to be stitched, the threads handling images 0, 1, 2, ..., N-1 respectively. The thread that finishes first automatically takes charge of extracting the feature point set of image N (the (N+1)-th image to be stitched), the second to finish takes image N+1 (the (N+2)-th image), and so on, leapfrogging successively until the feature point sets of all images have been extracted.
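The leapfrog assignment above is what a standard work-stealing thread pool provides: each free worker takes the next pending frame. A sketch with Python's standard pool — the extractor is a hypothetical stand-in, and note that for pure-Python workloads the GIL limits true parallelism, so a real pipeline would push the per-frame work into native code or separate processes:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(frame_id):
    """Hypothetical stand-in for per-frame invariant-feature extraction."""
    # In the real pipeline this would build the DoG pyramid and collect extrema.
    return frame_id, [frame_id * 10 + k for k in range(3)]

def parallel_extract(num_frames, num_workers=4):
    """num_workers threads; each free thread takes the next frame ('leapfrog')."""
    results = {}
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for frame_id, feats in pool.map(extract_features, range(num_frames)):
            results[frame_id] = feats
    return results
```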
The search in step (3) for corresponding feature points between adjacent images is realized as follows.
For each feature point, the gradient direction ori is accumulated into a direction histogram over the n × n neighborhood of the gauss_c[e] image centered on (x, y), and the ori occurring most often is taken as the principal direction. Here ori = arctan(dy/dx), with dy = gauss_c[e][x, y+1] - gauss_c[e][x, y-1] and dx = gauss_c[e][x+1, y] - gauss_c[e][x-1, y]. The n × n gauss_c[e] pixel distribution around the feature point is then rectified according to its principal direction; the rectified n × n pixel distribution constitutes the descriptor f[1, 2, ..., n²] of the feature point, with n = 3-5.
Corresponding feature point pairs between adjacent images are then found by distance search on the descriptors in the n × n-dimensional descriptor space.
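The principal-direction computation might be sketched as follows; the 36-bin histogram is my assumption (the text fixes only n = 3-5), and atan2 is used so the direction covers the full circle:

```python
import math

def principal_direction(gauss, x, y, n=5, bins=36):
    """Most frequent gradient direction in the n x n neighborhood of (x, y).

    gauss is one blur layer as a 2-D list; returns the center angle (radians)
    of the winning histogram bin.
    """
    hist = [0] * bins
    half = n // 2
    for yy in range(y - half, y + half + 1):
        for xx in range(x - half, x + half + 1):
            dy = gauss[yy + 1][xx] - gauss[yy - 1][xx]
            dx = gauss[yy][xx + 1] - gauss[yy][xx - 1]
            ang = math.atan2(dy, dx) % (2 * math.pi)  # ori = arctan(dy/dx) as a full-circle angle
            hist[int(ang / (2 * math.pi) * bins) % bins] += 1
    return (hist.index(max(hist)) + 0.5) * 2 * math.pi / bins
```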
The corresponding feature point pairs between adjacent images are determined as follows: for a feature point p on one of the two adjacent images, image a, compute its distance to each feature point q on the other image b,
$$
D_{pq} = \sqrt{\sum_{i=1}^{n^2} \left( f_p[i] - f_q[i] \right)^2 }, \qquad q = 1, \ldots, r,
$$
where r is the total number of feature points on image b; the two feature points attaining the minimum D_pq constitute a corresponding pair. The remaining pairs between the two adjacent images are found in the same way.
Alternatively, the distance may be taken as
$$
D_{pq} = \sum_{i=1}^{n^2} \left| f_p[i] - f_q[i] \right|, \qquad q = 1, \ldots, r,
$$
with the corresponding pairs determined in the same manner.
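Both distance definitions and the brute-force nearest-neighbor pairing can be sketched directly (the BBF tree mentioned next would replace the linear scan in `match` for large point sets):

```python
import math

def l2_dist(fp, fq):
    """D_pq = sqrt(sum_i (f_p[i] - f_q[i])^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp, fq)))

def l1_dist(fp, fq):
    """D_pq = sum_i |f_p[i] - f_q[i]| -- the cheaper alternative the text allows."""
    return sum(abs(a - b) for a, b in zip(fp, fq))

def match(desc_a, desc_b, dist=l2_dist):
    """For each descriptor on image a, the index of the nearest descriptor on image b."""
    pairs = []
    for p, fp in enumerate(desc_a):
        q = min(range(len(desc_b)), key=lambda j: dist(fp, desc_b[j]))
        pairs.append((p, q))
    return pairs
```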
The invention can also use the binary-tree fast pattern classification algorithm to search for the corresponding points between adjacent images.
The invention is realized by fault-tolerant feature point matching and continuous seamless stitching computation over large numbers of color or gray image frames, together with the corresponding computer software. The stitching method needs neither ground control points nor any information about camera position and attitude; it is determined entirely by the relationships of image content between frames.
The invention proposes an automatic extraction method for maximum-significance feature points of color images: for each feature point, the color channel of maximum feature significance is collected automatically, which not only greatly increases the number of feature points but also improves their significance and accuracy. Compared with gray images, color images generate richer features and thus a more accurate and stable stitching result.
The invention can adopt parallel computation, extracting the feature point sets of several video frames simultaneously, which markedly improves stitching efficiency. The most time-consuming link of stitching — invariant feature extraction — is thereby parallelized. Meanwhile the main program continuously monitors newly extracted feature points for use in stitching the video frames, so that feature extraction, stitching computation, and output also run in parallel with one another.
The invention adopts a fault-tolerant matching method that considers conditional error, giving the stitching robustness and fault tolerance; with this method the stitching error is markedly reduced.
The invention can also serve as a human-computer-interactive correction method for aerial photographs, revising the small errors accumulated during stitching.
Advantages and effects of the invention:
1. No control points needed; high accuracy, low application cost, high efficiency
The invention automatically and continuously stitches video frames of a changing scene (e.g., frames from a continuously moving or rotating camera, or a sequence of overlapping photographs) into a single complete image, without camera pose or ground control point information, while accounting for arbitrary rotation, scale, and perspective transformation of the video (or image sequence). Stitching accuracy is high: extensive experiments show that the relative accuracy reaches the sub-pixel level (stitching error < 1 pixel for 1024×1024-pixel test images). Experimental results are shown in accompanying drawings 2 to 7; in drawing 3, the aircraft flight strip still registers with the earlier route imagery after the image rotates. Applications include UAV military reconnaissance, lunar and Martian surface image stitching, 360° panorama generation, land resource survey, and satellite remote-sensing Earth observation analysis. Because the method needs neither ground control points nor camera position and attitude information, being determined entirely by the relationships of image content between frames, its application cost in every field is low and its efficiency high.
The method realizes fully automatic, seamless color video stitching, making full use of the strongly significant relationships within the image sequence for automatic stitching computation; adjacent-frame stitching accuracy reaches the sub-pixel level, and parallel techniques markedly increase stitching speed.
2. High parallel efficiency
Because parallel techniques are adopted — concurrent feature extraction over multiple images, and feature extraction running concurrently with stitching — the method's computation speed on multi-core computers improves markedly, enabling real-time stitching of video streams under rotation, scale, and illumination variation.
Description of drawings
Fig. 1 is a flow chart of the invention;
Fig. 2 is the stitching result of the invention for a 600-frame video (one stitch every 10 frames, 60 images in total);
Fig. 3 is the stitching result of the invention for a 3200-frame video (one stitch every 10 frames, 320 images in total); the aircraft flight strip route is visible in the figure, showing that the algorithm handles the rotation present between the images of the sequence;
Fig. 4 is the stitching result of the invention for 3 satellite remote-sensing images;
Fig. 5 is the stitching result of the invention for an 80-frame video (one stitch every 10 frames, 8 images in total);
Fig. 6 is the stitching result of the invention for 5 photographs taken by an ordinary digital camera from 5 arbitrary angles.
Embodiment
1. Theoretical foundation
(1) Continuous stitching of multiple images
A sequence of images (e.g., aerially captured ground remote-sensing images, or panorama scene images to be stitched) can be stitched continuously into one complete, spatially extensive image. According to computer vision theory, the spatial transformation between adjacent frame images can be described by the Homograph matrix; matching mainly obtains the perspective transformation between two frames, so that for image points P_0 = (x_0, y_0, 1) and P_1 = (x_1, y_1, 1) on two adjacent frames the spatial transformation is expressed as:
P_1 = λ H P_0    (1)
where λ is a scale coefficient and H is the 3×3 matrix
$$
H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}.
$$
This is a spatial image transformation that accounts for perspective; with it, the positional relationship between two images and the spatial plane can be obtained. For a sequence of images j, k, l, m, n, ..., the transformations between adjacent frames are chained by matrix multiplication as:
H_jn = H_jk · H_kl · H_lm · H_mn    (2)
This not only yields the relative position of any (video) frame image but also its relative orientation; see the experimental result of Fig. 2, the stitching of a 600-frame video (one stitch every 10 frames, 60 images in total), with an average adjacent-frame stitching error < 0.15 pixel. The arrow coordinate systems in the figure mark the mapped rectangles corresponding to the initial and final frame images.
(2) Parallel feature extraction technique
Experiments show that invariant feature extraction is the time-consuming step of stitching; computing the invariant features of multiple images simultaneously with multiple threads multiplies the overall stitching efficiency on a multi-core computer. Because the invariant-feature computations of different frames are fully independent and do not depend on one another, the approach is both feasible and simple to implement. The overall idea is as follows: suppose the computer has n CPUs and one stitching frame is captured every m video frames. First assign n threads to extract the invariant features of the first n stitching frames of the video simultaneously, the threads handling images 0, 1, 2, ..., n-1 respectively. The thread that finishes first automatically takes charge of extracting the features of image n (the (n+1)-th stitching frame), the second to finish takes image n+1 (the (n+2)-th stitching frame), ..., leapfrogging successively, so that the most time-consuming step — feature extraction — is accelerated by nearly a factor of n.
(3) Color image feature extraction technique
The invention adopts an automatic extraction method for maximum-significance feature points of color images: for each feature point, the color channel of maximum feature significance is collected automatically, which not only greatly increases the number of feature points but also improves their significance and accuracy. If instead every feature of all channels were collected, not only would speed drop sharply as the amount of computation explodes, but for gray-like images the three channels repeat the same information, producing large numbers of duplicated features; the ratio of the feature-space distances of the best and second-best matches would then always equal 1 (whereas this ratio is generally required to be < 0.5-0.6), and the best/second-best ratio matching algorithm would finally fail.
The invention defines the significance as follows:
C_cf[x, y] = dog_c[ply][x, y] - (dog_c[ply][x+1, y] + dog_c[ply][x, y+1] + dog_c[ply][x-1, y] + dog_c[ply][x, y-1] + dog_c[ply-1][x, y] + dog_c[ply+1][x, y]) / 6
where c ∈ {r, g, b} denotes the color channel and ply indexes the interval layers of the dog images obtained at different degrees of Gaussian blur.
During extraction of the invariant feature point positions in the image pyramid, C_rf[x, y], C_gf[x, y], and C_bf[x, y] are compared by the above formula and the maximum is selected. Compared with gray images, color images generate richer features and thus a more accurate and stable stitching result.
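The channel selection described above reduces to an argmax over the three per-channel significances. A sketch, assuming the dog layers are stored per channel as 3-D lists indexed [layer][y][x]:

```python
def best_channel_significance(dog_rgb, x, y, e):
    """Among R, G, B, keep the channel whose significance C_cf is largest at (x, y, e).

    dog_rgb maps channel name -> list of dog layers (each a 2-D list).
    """
    def significance(dog):
        v = dog[e][y][x]
        nbrs = (dog[e][y][x + 1] + dog[e][y][x - 1] + dog[e][y + 1][x]
                + dog[e][y - 1][x] + dog[e - 1][y][x] + dog[e + 1][y][x])
        return v - nbrs / 6.0
    scores = {c: significance(dog_rgb[c]) for c in ("R", "G", "B")}
    best = max(scores, key=scores.get)
    return best, scores[best]
```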
(4) Fault-tolerant stitching with feature error
The invention uses the corresponding-point information selected by the BBF algorithm for the fault-tolerant stitching computation between adjacent stitching frames. In theory only 4 point pairs are needed to obtain the Homograph matrix between adjacent stitching frames; in practice the preliminary pairs far exceed 4, which allows an optimal least-squares solution. On the other hand, the pairs obtained by the preliminary BBF method cannot be 100% correct. The RANSAC fault-tolerant algorithm can weed out the erroneous pairs, but RANSAC usually only maximizes the number of points meeting the geometric error condition without considering the feature similarity of the corresponding points. For example, a BBF search may yield 150 corresponding pairs of which at most 100 meet a geometric error < 0.8 pixel, while 30 different combinations of 100 pairs all meet that condition, among which a wrong one might be selected. The invention further considers the feature-space distance between matched points, selecting from such combinations the point set of minimum feature-space distance. This technique considerably improves the accuracy and stability of stitching.
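The conditional-error idea — maximize the inlier count, then break ties by total error — can be illustrated on RANSAC's original line-fitting problem. This is a deliberate simplification of my own: the patent applies the same scoring to homography inliers, with feature-space distance as the tie-breaking error.

```python
import random

def ransac_line(points, thresh=0.5, iters=200, seed=0):
    """RANSAC for y = a*x + b that, among models tied on inlier count,
    prefers the one with the smallest total inlier residual."""
    rng = random.Random(seed)
    best = None  # ((inlier_count, -total_error), (a, b))
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample; skip for this simple y = a*x + b model
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = [abs(y - (a * x + b)) for x, y in points]
        inliers = [r for r in residuals if r < thresh]
        score = (len(inliers), -sum(inliers))  # error breaks ties between equal counts
        if best is None or score > best[0]:
            best = (score, (a, b))
    return best[1]
```

Tuple comparison does the work here: a model only wins on lower total error when it cannot win on inlier count, which is exactly the "conditional error" refinement described above.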
2. technical conditions
Stitching algorithm can under attitude and the ground control point condition, splice according to image information fully at unknown aircraft self coordinate, considers rotation when splicing is calculated, ratio, and illumination, contrast and a small amount of noise and fuzzy reach the sub-pixel precision by interpolation.Stitching algorithm is reliable and stable.
Suggestion contiguous concatenation frame has the above superimposed image of 40% (best more than 60%) district, just can reach good splicing effect.
Adopt DirectShow technical support various video form (to support two kinds of video resolution 320*240 of acquiescence, 640*480); Also support the splicing (size of each image is not limit) between many sequential images.
Can realize the video of ground video image software Real Time Observation unmanned plane airborne acquisition by radio transmitting device and video interface.
An important application of the invention is automatic control-point-free stitching of remote sensing imagery. This requires a miniature unmanned aerial vehicle fitted with a camera or pinhole camera mounted vertically downward, which may land by parachute to avoid damage. The vertically mounted camera captures video of the ground. Because camera resolution and field of view are limited, a complete large-area ground image cannot be obtained directly; this is exactly where the automatic video stitching technique proposed by the invention is needed.
3. Implementation procedure
As shown in Figure 1, the specific implementation steps of the invention are as follows:
(1) Obtain the sequence of image frames from the video
An image sequence is captured with a digital camera, or a video is captured with a digital video camera. For an image sequence, adjacent images must overlap by more than 40%; for a video, the image sequence is obtained by decoding, and adjacent frames must likewise overlap by more than 40%. When the image sequence is acquired by aerial photography from a camera-equipped unmanned aerial vehicle, no attitude or ground-control-point information is required.
(2) Extract the feature point set of each image in the sequence
a. First build the multi-scale pyramid of the image; when the image is a color image, a multi-scale pyramid is built for each of the three RGB channels.
b. Apply Gaussian blur z = 4~8 times to each pyramid layer, obtaining z+1 blurred images gauss_c[i], i = 0, 1, 2, ..., z, where gauss_c[0] is the original image of that layer. From these, obtain z difference-of-Gaussian images dog_c[e], e = 0, 1, 2, ..., z-1, where dog_c[e] = gauss_c[e+1] - gauss_c[e]. Here c denotes the channel of a gray or color image; when the image is a color image, c = {R, G, B}, the red, green, and blue channels.
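Step b can be sketched as follows. This is an illustrative NumPy snippet assuming a single gray channel; the separable-blur helper and all names are our own, not part of the patent:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable Gaussian: a 1-D kernel applied along columns, then rows
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, "valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, "valid"), 1, tmp)

def dog_stack(img, z=4, sigma=1.6):
    """gauss[0] is the original layer; each further level is blurred once more;
    dog[e] = gauss[e+1] - gauss[e], giving z difference-of-Gaussian images."""
    gauss = [img.astype(float)]
    for _ in range(z):
        gauss.append(gaussian_blur(gauss[-1], sigma))
    dog = [gauss[e + 1] - gauss[e] for e in range(z)]
    return gauss, dog
```

For a color image the same stack would simply be built once per RGB channel.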
c. For e = 1, 2, ..., z-2, search each dog_c[e] image for local extremum points where dog_c[e][x, y] is larger than all 6 adjacent values dog_c[e][x+1, y], dog_c[e][x, y+1], dog_c[e][x-1, y], dog_c[e][x, y-1], dog_c[e-1][x, y], and dog_c[e+1][x, y]. Here (x, y) is a pixel coordinate; (x-1, y), (x+1, y), (x, y+1), and (x, y-1) are its left, right, upper, and lower neighbors, and dog_c[e][x, y] denotes the pixel value at coordinate (x, y) of the e-th blurred-difference image of channel c.
When the image is a gray image, the local extremum points are taken as feature points and the method proceeds to step e; when the image is a color image, feature points are first selected in step d, after which the method proceeds to step e.
d. For the local extremum points found in each dog_c[e] layer of step c, compute the feature saliency and keep the point whose saliency C_cf[x, y, e] is maximal as a feature point, obtaining a scale-independent feature point set, where

C_cf[x, y, e] = dog_c[e][x, y] - (dog_c[e][x+1, y] + dog_c[e][x, y+1] + dog_c[e][x-1, y] + dog_c[e][x, y-1] + dog_c[e-1][x, y] + dog_c[e+1][x, y]) / 6;
Among C_Rf[x, y], C_Gf[x, y], and C_Bf[x, y], select the maximum of the three to determine the feature point; if only one of the three is a local extremum, take that one as the feature point.
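Steps c and d can be sketched together as follows. This is an illustrative NumPy snippet assuming a DoG stack as in step b (a list of same-sized arrays, with 1 <= e <= len(dog)-2); the 6-neighbor test and the saliency follow the C_cf formula above:

```python
import numpy as np

def local_extrema_with_saliency(dog, e):
    """Return (x, y, saliency) for pixels where dog[e][x, y] exceeds its 4
    spatial neighbours and the same pixel in the scale layers above and below;
    saliency is the value minus the mean of those 6 neighbours (C_cf)."""
    d = dog[e]
    pts = []
    for x in range(1, d.shape[0] - 1):
        for y in range(1, d.shape[1] - 1):
            nbrs = [d[x + 1, y], d[x, y + 1], d[x - 1, y], d[x, y - 1],
                    dog[e - 1][x, y], dog[e + 1][x, y]]
            if d[x, y] > max(nbrs):
                pts.append((x, y, d[x, y] - sum(nbrs) / 6.0))
    return pts
```

For a color image this would run per channel, and step d would keep the channel whose saliency is greatest at each candidate point.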
e. For each feature point, compute the gradient direction ori over the n×n pixel neighborhood of the gauss_c[e] image centered at (x, y), build a histogram of the directions over the neighborhood, and take the most frequent ori as the principal direction, where ori = arctan(dy/dx), dy = gauss_c[e][x, y+1] - gauss_c[e][x, y-1], dx = gauss_c[e][x+1, y] - gauss_c[e][x-1, y]. The n×n gauss_c[e] pixel distribution around the feature point is then rectified according to its principal direction; the rectified n×n neighborhood constitutes the descriptor f[1, 2, ..., n^2] of the feature point, with n = 3~5 (e.g. n = 4).
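Step e might be sketched as follows. This is illustrative only: arctan2 is used for a full-range direction, nearest-neighbor rounding stands in for proper resampling, and a production implementation would interpolate sub-pixel samples when rotating the patch:

```python
import numpy as np

def descriptor(img, x, y, n=4, bins=36):
    """Principal-direction histogram over the n x n neighbourhood, then the
    neighbourhood is rotated by -ori (nearest neighbour) and flattened to
    f[1..n^2]; (x, y) must lie well inside the image."""
    half = n // 2
    oris = []
    for i in range(x - half, x + half):
        for j in range(y - half, y + half):
            dyv = img[i, j + 1] - img[i, j - 1]
            dxv = img[i + 1, j] - img[i - 1, j]
            oris.append(np.arctan2(dyv, dxv))
    hist, edges = np.histogram(oris, bins=bins, range=(-np.pi, np.pi))
    b = int(hist.argmax())
    ori = 0.5 * (edges[b] + edges[b + 1])     # principal direction (bin centre)
    c, s = np.cos(-ori), np.sin(-ori)          # rotate patch by -ori
    f = np.empty(n * n)
    k = 0
    for i in range(-half, half):
        for j in range(-half, half):
            u = int(round(x + c * i - s * j))
            v = int(round(y + s * i + c * j))
            f[k] = img[u, v]
            k += 1
    return f
```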
For color images the invention uses an automatic maximum-saliency feature extraction method: for each feature point, the color channel with the greatest feature saliency is selected automatically. This not only greatly increases the number of feature points but also improves their saliency and accuracy.
The invention can exploit computer multithreading to extract the feature point sets of several images simultaneously. On a computer with N CPUs, N threads are first assigned to extract the invariant features of the first N images to be stitched, numbered 0, 1, 2, ..., N-1. The thread that finishes first automatically takes on the (N+1)-th image (numbered N), the next thread to finish takes the (N+2)-th image (numbered N+1), and so on in leapfrog fashion until the feature point sets of all images to be stitched have been extracted.
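The leapfrog scheduling described above is what a standard thread pool provides: whichever worker finishes first picks up the next frame number. A minimal sketch using Python's standard library, with feature extraction stubbed out (all names illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import os

def extract_features(frame_id):
    # stand-in for the invariant-feature extraction of one frame
    return frame_id, f"features-of-{frame_id}"

# N worker threads; the pool hands out frame numbers in order, so the first
# thread to finish automatically receives the next unprocessed frame
N = os.cpu_count() or 4
with ThreadPoolExecutor(max_workers=N) as pool:
    results = dict(pool.map(extract_features, range(12)))
print(len(results))  # → 12
```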
(3) Search for matching homonymous point pairs
Search for homonymous point pairs of feature points between two adjacent images: pairs are found by searching the distances between feature descriptors in feature space (i.e. the two feature points, one on each of the adjacent images, whose descriptor distance is minimal constitute a homonymous point pair). The search may be carried out with existing prior-art methods, or with the following method:
For each feature point, compute the gradient direction ori over the n×n pixel neighborhood of the gauss_c[e] image centered at (x, y), build a histogram of the directions over the neighborhood, and take the most frequent ori as the principal direction, where ori = arctan(dy/dx), dy = gauss_c[e][x, y+1] - gauss_c[e][x, y-1], dx = gauss_c[e][x+1, y] - gauss_c[e][x-1, y]. The n×n gauss_c[e] pixel distribution around the feature point is then rectified according to its principal direction; the rectified n×n neighborhood constitutes the descriptor f[1, 2, ..., n^2] of the feature point, with n = 3~5;
Homonymous point pairs between two adjacent images are then found by searching descriptor distances in the n^2-dimensional descriptor space:
Compute the distance between feature point p on image a (one of the two adjacent images) and feature point q on the other image b as D_pq = sqrt( Σ_{i=1}^{n^2} (f_ap[i] - f_bq[i])^2 ), or D_pq = Σ_{i=1}^{n^2} | f_ap[i] - f_bq[i] |, where f_ap[i] and f_bq[i] are the descriptors of p and q, q = 1, ..., r, and r is the total number of feature points on image b. The two feature points corresponding to the minimum D_pq constitute a homonymous point pair. The other homonymous point pairs in the two adjacent images are found in the same way (their number should at least suffice to yield no fewer than 4 correct pairs in step (4) below).
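The descriptor-distance matching can be sketched as follows (illustrative NumPy, using the Euclidean variant of D_pq; a brute-force distance matrix stands in for the BBF tree search mentioned below):

```python
import numpy as np

def match_homonymous_points(fa, fb):
    """For each descriptor on image a, the feature on image b with minimal
    Euclidean distance D_pq forms a homonymous (same-name) point pair."""
    # pairwise distances: |fa| x |fb| matrix of D_pq
    d = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=2)
    return [(p, int(d[p].argmin())) for p in range(len(fa))]

fa = np.array([[0.0, 0.0], [1.0, 1.0]])
fb = np.array([[1.1, 0.9], [0.1, -0.1]])
print(match_homonymous_points(fa, fb))  # → [(0, 1), (1, 0)]
```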
To improve matching efficiency, a BBF binary-tree fast-search algorithm can be used to find homonymous point pairs between adjacent frames quickly, in preparation for the fault-tolerant stitching computation of the next step. Steps (3)-(6) continuously consume the frame feature sets produced by step (2) to complete the stitching. Except for the first two frames, steps (3)-(6) run in parallel with step (2), so features of later frames are extracted while earlier frames are stitched and the current partial stitching result is output.
(4) Fault-tolerant computation of the Homograph (homography) relation between adjacent frames
On top of the RANSAC fault-tolerant algorithm, the feature-space distance between matched points is additionally considered: among the optimal combinations with minimal geometric error, the point set with the minimum feature-space distance is selected. This markedly improves the precision and stability of the stitching.
Compute the Homograph transformation between adjacent images using the RANSAC fault-tolerant algorithm:
Use the RANSAC fault-tolerant algorithm to find the correct homonymous point pairs (x_{t-1,k}, y_{t-1,k}) and (x_{t,k}, y_{t,k}), k = 1, ..., m, between adjacent frames t-1 and t, with m ≥ 4 correct pairs (in theory 4 correct pairs suffice to realize the invention, but the more correct pairs the better the result); t = 2~N, where N is the number of images in the sequence.
Solve the following equation:

\[
\begin{bmatrix}
x_{t-1,1} & y_{t-1,1} & 1 & 0 & 0 & 0 & -x_{t,1}x_{t-1,1} & -x_{t,1}y_{t-1,1} \\
0 & 0 & 0 & x_{t-1,1} & y_{t-1,1} & 1 & -y_{t,1}x_{t-1,1} & -y_{t,1}y_{t-1,1} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_{t-1,m} & y_{t-1,m} & 1 & 0 & 0 & 0 & -x_{t,m}x_{t-1,m} & -x_{t,m}y_{t-1,m} \\
0 & 0 & 0 & x_{t-1,m} & y_{t-1,m} & 1 & -y_{t,m}x_{t-1,m} & -y_{t,m}y_{t-1,m}
\end{bmatrix}
\begin{bmatrix} h_{11}\\ h_{12}\\ h_{13}\\ h_{21}\\ h_{22}\\ h_{23}\\ h_{31}\\ h_{32} \end{bmatrix}
=
\begin{bmatrix} x_{t,1}\\ y_{t,1}\\ \vdots\\ x_{t,m}\\ y_{t,m} \end{bmatrix}
\]

to obtain the Homograph transformation between adjacent frames t-1 and t:

\[
H_{t-1,t} = \begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33} \end{bmatrix}, \qquad h_{33} = 1.
\]
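Solving the 2m × 8 system in the least-squares sense can be sketched as follows (illustrative NumPy; in the full method this fit runs inside the RANSAC loop on the inlier pairs):

```python
import numpy as np

def homography_from_pairs(p, q):
    """Least-squares DLT solve of the 2m x 8 system: p are points (x, y) on
    frame t-1, q the homonymous points on frame t; returns 3x3 H with h33 = 1."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(p, q):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With m = 4 non-degenerate pairs the system is exactly determined; with m > 4 the least-squares solution uses all pairs.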
(5) Orthonormalize the upper-left 2*2 submatrix of the Homograph matrix (this step is optional)

If the image is severely tilted, the Homograph matrix can be orthogonalized to rectify the image toward an orthographic view: the matrix

\[
H = \begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33} \end{bmatrix}
\]

is converted to

\[
H' = \begin{bmatrix} c/\sqrt{c^2+s^2} & -s/\sqrt{c^2+s^2} & h_{13}\\ s/\sqrt{c^2+s^2} & c/\sqrt{c^2+s^2} & h_{23}\\ h_{31} & h_{32} & h_{33} \end{bmatrix},
\qquad c = (h_{11}+h_{22})/2, \quad s = (h_{21}-h_{12})/2.
\]

This guarantees that the upper-left 2*2 submatrix of the Homograph matrix is a unit orthogonal (rotation) matrix.
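The orthonormalization of step (5) can be sketched as follows (illustrative NumPy, following the c, s formulas above; the minus sign on s in the upper-right entry is what makes the block a rotation):

```python
import numpy as np

def orthogonalize_homography(H):
    """Replace the upper-left 2x2 block of H by the rotation
    [[c, -s], [s, c]] / sqrt(c^2 + s^2), with c = (h11 + h22)/2 and
    s = (h21 - h12)/2; the remaining entries are kept."""
    c = (H[0, 0] + H[1, 1]) / 2.0
    s = (H[1, 0] - H[0, 1]) / 2.0
    r = np.hypot(c, s)
    Hp = H.copy()
    Hp[:2, :2] = np.array([[c, -s], [s, c]]) / r
    return Hp
```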
(6) Use the cumulative multiplication formula and fusion technique to obtain the sequence-frame transformations and the stitching result
From the Homograph transformations between adjacent frames t-1 and t, the transformation H_{1,t} between the 1st image and the t-th image is obtained by the cumulative matrix multiplication H_{1,t} = H_{1,t-1} H_{t-1,t}; when t = 2, H_{1,t-1} = H_{1,1} is the identity matrix.
First copy the 1st image I_1(x', y') directly into F_1(x, y). Then, using the formula below, stitch F_1(x, y) with the 2nd image I_2(x', y') to obtain the stitched image F_2(x, y); stitch F_2(x, y) with the 3rd image I_3(x', y') to obtain F_3(x, y); stitch F_3(x, y) with the 4th image I_4(x', y') to obtain F_4(x, y); and so on, until F_{N-1}(x, y) is stitched with the N-th image I_N(x', y') to obtain F_N(x, y). Stitching image by image from the 1st to the N-th in this way yields the final stitched result F_N(x, y):
F_t(x, y) = α I_t(x', y') + (1-α) F_{t-1}(x, y), where t = 2, 3, 4, ..., N and α = 0.3~1;

where x', y' are computed from

λ [x', y', 1]^T = H_{1,t} [x, y, 1]^T, with H_{1,t} = H_{1,t-1} H_{t-1,t}

(the equation contains three unknowns λ, x', y' and provides three equations, so it can be solved).
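The cumulative product H_{1,t} = H_{1,t-1} H_{t-1,t} and the solution of λ[x', y', 1]^T = H_{1,t}[x, y, 1]^T can be sketched as follows (illustrative NumPy; the α-blend of the fusion formula is noted in a comment rather than applied to full images):

```python
import numpy as np

def chain(Hs):
    """Accumulate H_{1,t} = H_{1,t-1} H_{t-1,t}, starting from the identity
    (H_{1,1}); Hs[t-2] is the adjacent-frame Homograph H_{t-1,t}."""
    H = np.eye(3)
    out = [H]
    for Ht in Hs:
        H = H @ Ht
        out.append(H)
    return out

def warp_point(H, x, y):
    """Solve lambda * [x', y', 1]^T = H [x, y, 1]^T: lambda is the third
    component, and dividing by it gives the warped coordinates (x', y')."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# two translations chained: H_{1,3} shifts by (1, 2)
T = lambda tx, ty: np.array([[1.0, 0, tx], [0, 1.0, ty], [0, 0, 1.0]])
H1t = chain([T(1, 0), T(0, 2)])
print(warp_point(H1t[2], 0, 0))  # → (1.0, 2.0)
# fusion per pixel would then be F_t = alpha * I_t(x', y') + (1 - alpha) * F_{t-1}(x, y)
```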
Stitching results obtained with the above method, on image sequences from a digital camera and videos from a digital video camera, are shown in Figures 2-6. Figure 2 shows the stitching result for a 600-frame video (stitching every 10th frame, 60 images in total); Figure 3 shows the result for a 3200-frame video (every 10th frame, 320 images in total), in which the aircraft's flight strips are clearly visible and the algorithm evidently handles the rotation present between images of the sequence; Figure 4 shows the result of stitching 3 satellite remote sensing images; Figure 5 shows the result for an 80-frame video (every 10th frame, 8 images in total); Figure 6 shows the result of stitching 5 photographs taken by an ordinary digital camera from 5 arbitrary angles. As Figures 2-6 show, the method stitches images without control points with high precision, withstands rotation, scale change, illumination change, and a small amount of noise during video or image-sequence capture, and produces stable stitching results.

Claims (6)

1. A method for stitching images without control points, comprising the following steps:
(1) Obtain an image sequence
An image sequence is captured with a digital camera, or a video is captured with a digital video camera. For an image sequence, adjacent images have more than 40% overlap; for a video, the image sequence is obtained by decoding, and adjacent images have more than 40% overlap.
(2) Extract the feature point set of each image in the sequence
a. First build the multi-scale pyramid of the image; when the image is a color image, a multi-scale pyramid is built for each of the three RGB channels.
b. Apply Gaussian blur z = 4~8 times to each pyramid layer, obtaining z+1 blurred images gauss_c[i], i = 0, 1, 2, ..., z, where gauss_c[0] is the original image of that layer. From these, obtain z difference-of-Gaussian images dog_c[e], e = 0, 1, 2, ..., z-1, where dog_c[e] = gauss_c[e+1] - gauss_c[e]. Here c denotes the channel of a gray or color image; when the image is a color image, c = {R, G, B}, the red, green, and blue channels.
c. For e = 1, 2, ..., z-2, search each dog_c[e] image for local extremum points where dog_c[e][x, y] is larger than all 6 adjacent values dog_c[e][x+1, y], dog_c[e][x, y+1], dog_c[e][x-1, y], dog_c[e][x, y-1], dog_c[e-1][x, y], and dog_c[e+1][x, y]. Here (x, y) is a pixel coordinate; (x-1, y), (x+1, y), (x, y+1), and (x, y-1) are its left, right, upper, and lower neighbors, and dog_c[e][x, y] denotes the pixel value at coordinate (x, y) of the e-th blurred-difference image of channel c.
When the image is a gray image, the local extremum points are taken as feature points and the method proceeds to step (3); when the image is a color image, feature points are first selected in step d, after which the method proceeds to step (3).
d. For the local extremum points found in each dog_c[e] layer of step c, compute the feature saliency and keep the point whose saliency C_cf[x, y, e] is maximal as a feature point, obtaining a scale-independent feature point set, where

C_cf[x, y, e] = dog_c[e][x, y] - (dog_c[e][x+1, y] + dog_c[e][x, y+1] + dog_c[e][x-1, y] + dog_c[e][x, y-1] + dog_c[e-1][x, y] + dog_c[e+1][x, y]) / 6;
(3) Search for homonymous point pairs of feature points between two adjacent images;
(4) Compute the Homograph transformation between adjacent images using the RANSAC fault-tolerant algorithm
Use the RANSAC fault-tolerant algorithm to find the correct homonymous point pairs (x_{t-1,k}, y_{t-1,k}) and (x_{t,k}, y_{t,k}), k = 1, ..., m, between adjacent frames t-1 and t, where m ≥ 4 is the number of correct pairs; t = 2~N, N being the number of images in the sequence. Solve the following equation:
\[
\begin{bmatrix}
x_{t-1,1} & y_{t-1,1} & 1 & 0 & 0 & 0 & -x_{t,1}x_{t-1,1} & -x_{t,1}y_{t-1,1} \\
0 & 0 & 0 & x_{t-1,1} & y_{t-1,1} & 1 & -y_{t,1}x_{t-1,1} & -y_{t,1}y_{t-1,1} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_{t-1,m} & y_{t-1,m} & 1 & 0 & 0 & 0 & -x_{t,m}x_{t-1,m} & -x_{t,m}y_{t-1,m} \\
0 & 0 & 0 & x_{t-1,m} & y_{t-1,m} & 1 & -y_{t,m}x_{t-1,m} & -y_{t,m}y_{t-1,m}
\end{bmatrix}
\begin{bmatrix} h_{11}\\ h_{12}\\ h_{13}\\ h_{21}\\ h_{22}\\ h_{23}\\ h_{31}\\ h_{32} \end{bmatrix}
=
\begin{bmatrix} x_{t,1}\\ y_{t,1}\\ \vdots\\ x_{t,m}\\ y_{t,m} \end{bmatrix}
\]
to obtain the Homograph transformation between adjacent frames t-1 and t:

\[
H_{t-1,t} = \begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33} \end{bmatrix}, \qquad h_{33} = 1;
\]
(5) Use the cumulative multiplication formula and fusion technique to obtain the stitching result
From the Homograph transformations between adjacent frames t-1 and t, the transformation H_{1,t} between the 1st image and the t-th image is obtained by the cumulative matrix multiplication H_{1,t} = H_{1,t-1} H_{t-1,t}; when t = 2, H_{1,t-1} = H_{1,1} is the identity matrix.
First copy the 1st image I_1(x', y') directly into F_1(x, y). Then, using the formula below, stitch F_1(x, y) with the 2nd image I_2(x', y') to obtain the stitched image F_2(x, y); stitch F_2(x, y) with the 3rd image I_3(x', y') to obtain F_3(x, y); stitch F_3(x, y) with the 4th image I_4(x', y') to obtain F_4(x, y); and so on, until F_{N-1}(x, y) is stitched with the N-th image I_N(x', y') to obtain F_N(x, y). Stitching image by image from the 1st to the N-th in this way yields the final stitched result F_N(x, y):
F_t(x, y) = α I_t(x', y') + (1-α) F_{t-1}(x, y), where t = 2, 3, 4, ..., N and α = 0.3~1; x', y' are computed from

λ [x', y', 1]^T = H_{1,t} [x, y, 1]^T, with H_{1,t} = H_{1,t-1} H_{t-1,t}.
2. The method according to claim 1, characterized in that computer multithreading is used to extract the feature point sets of several images simultaneously: on a computer with N CPUs, N threads are first assigned to extract the invariant features of the first N images to be stitched, numbered 0, 1, 2, ..., N-1; the thread that finishes first automatically takes on the (N+1)-th image to be stitched (numbered N), the next thread to finish takes the (N+2)-th image (numbered N+1), and so on in leapfrog fashion until the feature point sets of all images to be stitched have been extracted.
3. The method according to claim 1 or 2, characterized in that the search in step (3) for homonymous point pairs of feature points between two adjacent images is carried out as follows:
For each feature point, compute the gradient direction ori over the n×n pixel neighborhood of the gauss_c[e] image centered at (x, y), build a histogram of the directions over the neighborhood, and take the most frequent ori as the principal direction, where ori = arctan(dy/dx), dy = gauss_c[e][x, y+1] - gauss_c[e][x, y-1], dx = gauss_c[e][x+1, y] - gauss_c[e][x-1, y]. The n×n gauss_c[e] pixel distribution around the feature point is then rectified according to its principal direction; the rectified n×n neighborhood constitutes the descriptor f[1, 2, ..., n^2] of the feature point, with n = 3~5;
Homonymous point pairs between the two adjacent images are then found by searching descriptor distances in the n^2-dimensional descriptor space.
4. The method according to claim 3, characterized in that the homonymous point pairs between two adjacent images are determined by the following method: compute the distance between feature point p on image a (one of the two adjacent images) and feature point q on the other image b as D_pq = sqrt( Σ_{i=1}^{n^2} (f_ap[i] - f_bq[i])^2 ), q = 1, ..., r, where r is the total number of feature points on image b; the two feature points corresponding to the minimum D_pq constitute a homonymous point pair; the other homonymous point pairs in the two adjacent images are found in the same way.
5. The method according to claim 3, characterized in that the homonymous point pairs between two adjacent images are determined by the following method: compute the distance between feature point p on image a (one of the two adjacent images) and feature point q on the other image b as D_pq = Σ_{i=1}^{n^2} | f_ap[i] - f_bq[i] |, q = 1, ..., r, where r is the total number of feature points on image b; the two feature points corresponding to the minimum D_pq constitute a homonymous point pair; the other homonymous point pairs in the two adjacent images are found in the same way.
6. The method according to claim 1 or 2, characterized in that a binary-tree fast-search algorithm is used to find the homonymous point pairs between two adjacent images.
CN2008102374278A 2008-12-25 2008-12-25 Method for splicing non-control point image Active CN101442619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102374278A CN101442619B (en) 2008-12-25 2008-12-25 Method for splicing non-control point image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008102374278A CN101442619B (en) 2008-12-25 2008-12-25 Method for splicing non-control point image

Publications (2)

Publication Number Publication Date
CN101442619A true CN101442619A (en) 2009-05-27
CN101442619B CN101442619B (en) 2010-08-18

Family

ID=40726843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102374278A Active CN101442619B (en) 2008-12-25 2008-12-25 Method for splicing non-control point image

Country Status (1)

Country Link
CN (1) CN101442619B (en)


Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101839722A (en) * 2010-05-06 2010-09-22 南京航空航天大学 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN101916452A (en) * 2010-07-26 2010-12-15 中国科学院遥感应用研究所 Method for automatically stitching unmanned aerial vehicle remote sensing images based on flight control information
CN101976464A (en) * 2010-11-03 2011-02-16 北京航空航天大学 Multi-plane dynamic augmented reality registration method based on homography matrix
CN101976464B (en) * 2010-11-03 2013-07-31 北京航空航天大学 Multi-plane dynamic augmented reality registration method based on homography matrix
CN102607532A (en) * 2011-01-25 2012-07-25 吴立新 Quick low-level image matching method by utilizing flight control data
US8781223B2 (en) 2011-05-26 2014-07-15 Via Technologies, Inc. Image processing system and image processing method
CN102402855A (en) * 2011-08-29 2012-04-04 深圳市蓝盾科技有限公司 Method and system of fusing real-time panoramic videos of double cameras for intelligent traffic
CN103020934B (en) * 2012-12-12 2015-10-21 武汉大学 The image seamless method for automatically split-jointing of anti-captions interference
CN103020934A (en) * 2012-12-12 2013-04-03 武汉大学 Seamless automatic image splicing method resistant to subtitle interference
CN106030583A (en) * 2014-03-27 2016-10-12 英特尔公司 Techniques for parallel execution of RANSAC algorithm
CN106030583B (en) * 2014-03-27 2020-01-14 英特尔公司 Techniques for parallel execution of RANSAC algorithms
US10936766B2 (en) 2014-03-27 2021-03-02 Intel Corporation Techniques for parallel execution of RANSAC algorithm
CN104820965A (en) * 2015-04-30 2015-08-05 武汉大学 Geocoding-free rapid image splicing method of low-altitude unmanned plane
US11893738B2 (en) 2016-07-14 2024-02-06 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
US11416993B2 (en) 2016-07-14 2022-08-16 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
US10580135B2 (en) 2016-07-14 2020-03-03 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
CN106447601A (en) * 2016-08-31 2017-02-22 中国科学院遥感与数字地球研究所 Unmanned aerial vehicle remote image mosaicing method based on projection-similarity transformation
CN106447664A (en) * 2016-09-30 2017-02-22 上海联影医疗科技有限公司 Matching pair determination method and image capturing method
CN106791780A (en) * 2016-12-14 2017-05-31 天津温茂科技有限公司 The unmanned plane image processing system and processing method of a kind of electronic information field
CN106960027A (en) * 2017-03-20 2017-07-18 武汉大学 The UAV Video big data multidate association analysis method of spatial information auxiliary
CN106960027B (en) * 2017-03-20 2019-06-25 武汉大学 The UAV Video big data multidate association analysis method of spatial information auxiliary
TWI632528B (en) * 2017-09-15 2018-08-11 林永淵 System and method for unmanned aircraft image analysis
CN108445505A (en) * 2018-03-29 2018-08-24 南京航空航天大学 Feature significance detection method based on laser radar under thread environment
CN109658450A (en) * 2018-12-17 2019-04-19 武汉天乾科技有限责任公司 A kind of quick orthography generation method based on unmanned plane
CN109934093A (en) * 2019-01-21 2019-06-25 创新奇智(南京)科技有限公司 A kind of method, computer-readable medium and identifying system identifying commodity on shelf
CN109934093B (en) * 2019-01-21 2021-03-30 创新奇智(南京)科技有限公司 Method for identifying goods on shelf, computer readable medium and identification system
CN109741381A (en) * 2019-01-23 2019-05-10 张过 Spaceborne push-broom type optical sensor high frequency error removing method based on parallel observation
CN110084743A (en) * 2019-01-25 2019-08-02 电子科技大学 Image mosaic and localization method based on more air strips starting track constraint
CN110084743B (en) * 2019-01-25 2023-04-14 电子科技大学 Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN112106008A (en) * 2019-09-27 2020-12-18 深圳市大疆创新科技有限公司 Landing control method of unmanned aerial vehicle and related equipment
WO2021056432A1 (en) * 2019-09-27 2021-04-01 深圳市大疆创新科技有限公司 Landing control method for unmanned aerial vehicle, and related device
CN112991487A (en) * 2021-03-11 2021-06-18 中国兵器装备集团自动化研究所有限公司 System for multithreading real-time construction of orthoimage semantic map
CN112991487B (en) * 2021-03-11 2023-10-17 中国兵器装备集团自动化研究所有限公司 System for multithreading real-time construction of orthophoto semantic map

Also Published As

Publication number Publication date
CN101442619B (en) 2010-08-18

Similar Documents

Publication Publication Date Title
CN101442619B (en) Method for splicing non-control point image
Xie et al. Linking points with labels in 3D: A review of point cloud semantic segmentation
Yahyanejad et al. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs
CN101604018B (en) Method and system for processing high-definition remote sensing image data
Brigot et al. Adaptation and evaluation of an optical flow method applied to coregistration of forest remote sensing images
Li et al. Current issues in high-resolution earth observation technology
CN107808362A (en) A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN107480727A (en) The unmanned plane image fast matching method that a kind of SIFT and ORB are combined
CN104732482A (en) Multi-resolution image stitching method based on control points
CN102509287B (en) Finding method for static target based on latitude and longitude positioning and image registration
Jiang et al. Unmanned Aerial Vehicle-Based Photogrammetric 3D Mapping: A survey of techniques, applications, and challenges
CN102435188A (en) Monocular vision/inertia autonomous navigation method for indoor environment
CN105844587A (en) Automatic stitching method for low-altitude UAV-borne hyperspectral remote sensing images
CN105389774A (en) Method and device for aligning images
CN105004337B (en) Autonomous navigation method for agricultural UAVs based on line segment matching
CN112991487B (en) System for multithreading real-time construction of orthophoto semantic map
CN103020934B (en) Automatic seamless image stitching method resistant to caption interference
CN104134208A (en) Coarse-to-fine registration method for infrared and visible light images using geometric construction features
JP2023530449A (en) Systems and methods for air and ground alignment
CN104751451B (en) Dense point cloud extraction method based on UAV low-altitude high-resolution imagery
Zhao et al. A review of 3D reconstruction from high-resolution urban satellite images
Majdik et al. Micro air vehicle localization and position tracking from textured 3d cadastral models
Karantzalos et al. Model-based building detection from low-cost optical sensors onboard unmanned aerial vehicles
Maurer et al. Automated inspection of power line corridors to measure vegetation undercut using UAV-based images
Zeng et al. Urban land-use classification using integrated airborne laser scanning data and high resolution multi-spectral satellite imagery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHENZHEN D + W SPATIAL INFORMATION TECHNOLOGY CO.,

Free format text: FORMER OWNER: WUHAN UNIVERSITY

Effective date: 20120327

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 430072 WUHAN, HUBEI PROVINCE TO: 518063 SHENZHEN, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20120327

Address after: 518063 Shenzhen City, Nanshan District Keyuan Road, Wuhan University Shenzhen research building B block 7 layer

Patentee after: Shenzhen D & W Spatial Information Technology Co., Ltd.

Address before: 430072 Hubei city of Wuhan province Wuchang Luojiashan

Patentee before: Wuhan University