CN104794490A - Slanted image homonymy point acquisition method and slanted image homonymy point acquisition device for aerial multi-view images - Google Patents


Info

Publication number
CN104794490A
CN104794490A (application CN201510208194.9A, granted publication CN104794490B)
Authority
CN
China
Prior art keywords
image
object space
feature point
fixed reference
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510208194.9A
Other languages
Chinese (zh)
Other versions
CN104794490B (en)
Inventor
李英成
蔡沅钢
刘晓龙
朱祥娥
罗祥勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA TOPRS (BEIJING) Co Ltd
Original Assignee
CHINA TOPRS (BEIJING) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHINA TOPRS (BEIJING) Co Ltd filed Critical CHINA TOPRS (BEIJING) Co Ltd
Priority to CN201510208194.9A
Publication of CN104794490A
Application granted
Publication of CN104794490B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/76: Organisation of the matching processes based on eigen-space representations, e.g. from pose or different illumination conditions; Shape manifolds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a slanted (oblique) image homonymy point acquisition method and device for aerial multi-view images, and relates to the field of aerial photography. In the method, an optimized object-space patch is determined from multiple reference images sharing the same viewing angle, and the homonymy point of a reference feature point is then determined from this optimized patch and the attitude information of the target oblique image. Because the textures of same-angle reference images are similar, the optimized object-space patch is obtained with high accuracy. Once it has been determined, each subsequent match only needs to resample according to the optimized patch, instead of adjusting the patch orientation every time to search for the feature point with the largest SNCC coefficient on the target oblique image. This shortens the search time and improves matching efficiency.

Description

Method and device for acquiring homonymy points of oblique images in aerial multi-view imagery
Technical field
The present invention relates to the field of aerial photography, and in particular to a method and a device for acquiring homonymy points of oblique images in aerial multi-view imagery.
Background art
Aerial photography, also known as aerophotography, is the technique of capturing photographs of ground scenery with an aerial camera carried on an aircraft or other airborne vehicle. Classified by tilt angle (the angle between the principal optical axis of the aerial camera and the plumb line through the lens centre), photography is divided into vertical photography and oblique photography.
Oblique photography is a high technology developed in the international surveying field in recent years. It overcomes the limitation that conventional orthophotos can only be taken from a vertical angle: by carrying multiple sensors on the same flying platform (the principal optical axes of the cameras forming different angles with the ground), multi-view images (vertical images and oblique images) are obtained simultaneously from the vertical direction and several oblique directions during a single flight.
The purpose of capturing multi-view images is to obtain more ground information (e.g. for map compilation or stereo image synthesis), and the basis of both is image matching, that is, stitching two or more images together. Image matching is one of the fundamental problems of photogrammetry: whether for obtaining tie points for block adjustment in the aerotriangulation stage, or for dense matching in later modelling, image matching is the foundation. Oblique aerial multi-view images cover multiple viewing angles, which on the one hand greatly increases the redundancy of information, and on the other hand makes multi-view matching considerably harder because of the differences in imaging angle. How to obtain the coordinates of homonymy points (corresponding points) on multi-view images quickly and accurately, and thereby the three-dimensional information of ground objects, is therefore the key to multi-view image matching.
Generally speaking, image matching involves three elements: matching primitives, a search strategy, and a decision criterion. Specifically:
1. Matching primitives.
A matching primitive is the element (point, line or surface) selected for the matching process. The methods described here are all based on point primitives (all image matching referred to below is point-primitive matching). In theory every pixel on an image is a point primitive, but in practice only pixels whose texture is distinctive and repeatable (such as the corner of a house) are chosen as point primitives (hereinafter "feature points"); on each image, a feature detection operator extracts a large number of feature points to serve as matching primitives.
2. Search strategy.
For any feature point on one image, the process of finding its possible homonymy point on another image (the point at which the same geographic coordinate is imaged; the other image and its points are referred to below as the search image and candidate points) is the search strategy. Broadly speaking, the differences between image matching algorithms are concentrated in the search strategy.
The simplest search strategy is exhaustion, taking every feature point on the search image as a candidate; this is the most obvious solution. In practice, the camera usually follows a known trajectory during image acquisition, or other auxiliary data (position and attitude) are available, and these known geometric relations are commonly used to narrow the search range of candidate points. The epipolar geometry constraint is one example, as shown in Fig. 1:
By the principle that corresponding image rays intersect: let S1 and S2 be the exposure stations of images I1 and I2, let p1 be a feature point on image I1 and p2 its homonymy point on I2; then the corresponding rays S1p1 and S2p2 intersect at the spatial point P.
Clearly the three rays S1p1, S2p2 and S1S2 are coplanar, so the homonymy point p2 of p1 must fall on the intersection of this plane with image I2. This intersection line is called the epipolar line of point p1 on image I2, and it can be used to constrain the search range of candidate points; this is the epipolar geometry constraint.
3. Decision criterion.
The decision criterion determines whether feature points on two images are homonymy points. In general, image matching compares the texture information of N*N image windows (centred on the feature points) around the point to be matched and the candidate point, as shown in Fig. 2, where the similarity of the image content in the left and right windows is compared. The centres of the two windows are the candidate pair of homonymy points, and the correlation coefficient is computed as:
$\rho(I_1,I_2)=\dfrac{\sum_{i=1}^{m}\sum_{j=1}^{n}(g_{i,j}-\bar g)(g'_{i,j}-\bar g')}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}(g_{i,j}-\bar g)^2\cdot\sum_{i=1}^{m}\sum_{j=1}^{n}(g'_{i,j}-\bar g')^2}},\qquad \bar g=\dfrac{1}{m\cdot n}\sum_{i=1}^{m}\sum_{j=1}^{n}g_{i,j},\qquad \bar g'=\dfrac{1}{m\cdot n}\sum_{i=1}^{m}\sum_{j=1}^{n}g'_{i,j}$ (Formula 1)
where $g_{i,j}$ is the grey value of each pixel in the main-image window and $g'_{i,j}$ the grey value of each pixel in the corresponding window on the search image. After the correlation coefficients are computed, the candidate point with the largest coefficient is generally selected as the homonymy point of the point to be matched.
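The correlation coefficient of Formula 1 can be sketched in Python; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def ncc(win_main: np.ndarray, win_search: np.ndarray) -> float:
    """Correlation coefficient of Formula 1: the covariance of the two
    demeaned grey-value windows divided by the product of their standard
    deviations. Returns a value in [-1, 1]."""
    a = win_main.astype(float) - win_main.mean()
    b = win_search.astype(float) - win_search.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0  # textureless window: correlation is undefined
    return float((a * b).sum() / denom)
```

Because the windows are demeaned and scale-normalised, the coefficient is invariant to linear radiometric changes between the two images, which is why it is preferred over a plain grey-value difference.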
However, if homonymy points in multi-view imagery are all searched for with the matching procedure above, search efficiency is difficult to guarantee.
Summary of the invention
In view of this, the object of the embodiments of the present invention is to provide a method and a device for acquiring homonymy points in aerial multi-view imagery, so as to improve the efficiency of homonymy point matching on oblique images.
In a first aspect, an embodiment of the present invention provides a method for acquiring homonymy points in aerial multi-view imagery, comprising:
obtaining multiple reference images with the same viewing angle;
determining multiple reference object-space patches, each reference object-space patch being obtained from the reference feature points of two designated reference images among the multiple reference images;
if the similarities of all reference object-space patches are below a predetermined threshold, determining an optimized object-space patch from all the reference object-space patches;
determining, according to the optimized object-space patch and the attitude information of the target oblique image, the feature point on the target oblique image with the highest similarity to the reference feature point as the homonymy point of the reference feature point.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, wherein determining the multiple reference object-space patches comprises:
determining multiple associated image pairs, each pair comprising a first associated image and a second associated image, the second associated image of a pair being the image in a first image group that contains the feature point with the highest similarity to the reference feature point on the first associated image, the first image group comprising all reference images other than the first associated image;
taking, for each pair, the feature point on the second associated image with the highest similarity to the reference feature point on the first associated image as the reference feature point of the second associated image;
determining a reference object-space patch from the reference feature points of the two associated images of each pair.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, wherein the reference images are nadir (vertically viewed) images.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, wherein determining the multiple associated image pairs comprises:
associating the multiple reference images according to the similarity of their reference feature points to determine an image association sequence, in which each subsequent reference image is the image in a second image group that contains the feature point with the highest similarity to the reference feature point on the previous reference image, the second image group comprising all reference images other than the previous one;
forming an associated image pair from each two adjacent reference images in the sequence.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, wherein associating the multiple reference images according to the similarity of their reference feature points to determine the image association sequence comprises:
selecting a designated reference image as the main image;
selecting a designated feature point on the main image as the reference feature point of the main image;
determining, by epipolar constraint and local texture optimum search, the feature point on each of the other reference images with the greatest similarity to the reference feature point of the main image as a point to be determined;
selecting, among the other reference images, the image containing the point to be determined with the greatest similarity as the associated image of the main image;
taking the associated image as the new main image, taking the point to be determined with the highest similarity to the reference feature point of the previous main image as the reference feature point of the current main image, and repeating the above steps of epipolar constraint and local texture optimum search, until the associated image obtained has already served as a main image;
building the image association sequence from the successive relations between main images and associated images.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, wherein determining, by epipolar constraint and local texture optimum, the feature point on each of the other reference images with the greatest similarity to the reference feature point of the main image comprises:
taking one of the remaining reference images other than the main image as the image to be determined;
determining, from the exposure station position of the main image, the position of the reference feature point on the main image and the exposure station position of the image to be determined, the epipolar line of the reference feature point relative to the image to be determined, and the initial object-space patch of the two images with respect to the reference feature point of the main image;
by adjusting the normal vector and the elevation of the initial object-space patch, computing, for each feature point on the image to be determined whose distance to the epipolar line is within a preset range, its maximum SNCC coefficient with the reference feature point of the main image;
taking, among these maximum SNCC coefficients, the feature point on the image to be determined corresponding to the numerically largest one as the reference feature point of that image;
repeating the above steps with each of the remaining reference images as the image to be determined, until reference feature points have been determined on all reference images other than the main image.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation of the first aspect, wherein determining the feature point on the target oblique image with the highest similarity to the reference feature point as the homonymy point comprises:
taking the optimized object-space patch as the accurate object-space patch of the reference feature point, and resampling within a designated sample range on the target oblique image according to the attitude information of the target oblique image and the accurate patch, to determine a target image sample window;
selecting, within the target sample window, the feature point with the highest similarity to the reference feature point as the homonymy point of the reference feature point.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation of the first aspect, wherein before the step of determining, according to the optimized object-space patch and the attitude information of the target oblique image, the homonymy point of the reference feature point, the method further comprises:
performing bundle adjustment on the reference images to determine the attitude information of the reference images;
determining the attitude information of the target oblique image from the attitude information of the reference images and the camera attitude.
With reference to the first aspect, an embodiment of the present invention provides an eighth possible implementation of the first aspect, wherein determining the homonymy point of the reference feature point further comprises:
determining the epipolar line of each reference image relative to the target oblique image, from the exposure station position of each reference image, the position of the reference feature point on each reference image, and the exposure station position of the target oblique image;
determining the sample range on the target oblique image from these multiple epipolar lines.
In a second aspect, an embodiment of the present invention further provides a device for acquiring homonymy points in aerial multi-view imagery, comprising:
an acquisition module, for obtaining multiple reference images with the same viewing angle;
a first determination module, for determining multiple reference object-space patches, each obtained from the reference feature points of two designated reference images;
a second determination module, for determining an optimized object-space patch from all reference object-space patches if their similarities are all below a predetermined threshold;
a resolving module, for taking, according to the optimized object-space patch and the attitude information of the target oblique image, the feature point on the target oblique image with the highest similarity to the reference feature point as the homonymy point of the reference feature point.
In the method for acquiring homonymy points in aerial multi-view imagery provided by the embodiments of the present invention, an optimized object-space patch is first determined from multiple reference images with the same viewing angle. In the prior art, every homonymy point match requires adjusting and matching the object-space patch image by image, which lowers the overall matching efficiency. Here, by contrast, the optimized object-space patch is determined once from the same-angle reference images, and the homonymy point of the reference feature point is then determined from the optimized patch and the attitude information of the target oblique image. Because the textures of same-angle reference images are similar, the optimized object-space patch is obtained with high accuracy; thereafter, each match only needs to resample according to the optimized patch, rather than adjusting the patch orientation every time to search for the feature point with the largest SNCC coefficient on the target oblique image. This shortens the search time and thus improves matching efficiency.
To make the above objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present invention and should not be regarded as limiting its scope; from these drawings, those of ordinary skill in the art can derive other relevant drawings without creative effort.
Fig. 1 shows a schematic diagram of the epipolar geometry constraint in the related art;
Fig. 2 shows a schematic diagram of the homonymy point texture windows of the method provided by the present invention;
Fig. 3 shows a schematic diagram of the epipolar geometry constraint for multi-view images of the method provided by the present invention;
Fig. 4 shows a schematic diagram of texture sampling window matching for multi-view images of the method provided by the present invention;
Fig. 5 shows a schematic diagram of the geometric correlation matching algorithm for multi-view images of the method provided by the present invention;
Fig. 6 shows a schematic diagram of the epipolar-constrained search range of the method provided by the present invention;
Fig. 7 shows a basic flow chart of the method provided by the present invention;
Fig. 8 shows a schematic diagram of the spatial relations for determining the reference object-space patch in the method provided by the present invention;
Fig. 9 shows a schematic diagram of the multi-epipolar-line constraint of the method provided by the present invention;
Fig. 10 shows a schematic diagram of the candidate point search range under the multi-epipolar-line constraint of the method provided by the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. The following detailed description of the embodiments is therefore not intended to limit the claimed scope of the present invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
In the related art, multi-view image matching uses the traditional matching technique: for both oblique and vertical images, homonymy points are searched for by feature point matching. Specifically, after a designated feature point on a reference image (a feature point is normally a pixel whose texture differs sufficiently from that of the surrounding pixels, and is therefore easy to observe, such as a white point on a fully shadowed image) has been chosen as the reference feature point, the target image is searched for the feature point that matches it. There are normally many feature points on the target image, so the similarity (SNCC coefficient) between each target-image feature point and the reference feature point must be computed, and a feature point of sufficiently high similarity is selected as the homonymy point of the reference feature point.
Furthermore, when a ground object is photographed, differences in shooting angle change its imaged texture (just as the colour and structure of the same drinking glass look different when viewed from different angles). Hence the elevation and the normal vector of the object-space patch must be adjusted (i.e. the observation direction is adjusted), and after each adjustment the target image is searched again. Each search still computes the similarity (SNCC coefficient) between target-image feature points and the reference feature point; the largest SNCC value attained for each feature point is taken as the optimized SNCC coefficient of that feature point, and the numerically largest among all the optimized SNCC coefficients is finally selected as the optimal coefficient. The feature point corresponding to the optimal coefficient is the homonymy point of the reference feature point, and the object-space patch determined by this pair of points is the optimized object-space patch (also called the accurate object-space patch) of the designated object point. Concretely, changing the elevation (Z) of the patch centre shifts the centre of its sample window on each image, while changing the orientation of its normal vector directly affects the shape of its resampling window on each image.
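The per-match adjustment described above amounts to maximising a similarity score over the patch's elevation and normal direction. A minimal sketch follows, with a hypothetical `score` callable standing in for the SNCC evaluation; the patent does not specify the search scheme, so a coarse grid search is assumed here purely for illustration:

```python
import itertools

import numpy as np

def optimize_patch(score, z_range, tilt_range, steps=5):
    """Coarse grid search over patch elevation Z and the two normal tilt
    angles (alpha, beta), returning the combination with the highest
    score. `score(z, alpha, beta) -> float` is assumed to wrap the SNCC
    computation for the resampled windows."""
    zs = np.linspace(z_range[0], z_range[1], steps)
    ts = np.linspace(tilt_range[0], tilt_range[1], steps)
    return max(itertools.product(zs, ts, ts), key=lambda p: score(*p))
```

In the patent's scheme, this expensive adjustment is what the optimized object-space patch makes unnecessary on every subsequent match.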
The process of determining the homonymy point of a reference feature point in the related art is described below with a concrete example:
I. First, the basic theory:
1. Matching primitive: feature point.
2. Search strategy:
Homonymy points on different images necessarily correspond to one and the same spatial point in real three-dimensional space (hereinafter the object-space point), as shown in Fig. 3:
Once the position and attitude information of images I1, I2 and I3 is determined, fixing the spatial coordinates (X, Y, Z) of the object-space point P essentially fixes its position on each of the images. In fact, the point to be matched on the main image determines a spatial ray, and the object-space point P can only vary within a certain range along this ray (e.g. the position of P' moves along the line S1P1); the search range of the candidate points on each search image (the reference images other than the main image) is thereby also determined.
3. Decision criterion:
The texture sampling windows of the homonymy points on the multiple images are determined jointly. As shown in Fig. 4, P is a point on the photographic ray S1p1; a sample window of N*N is determined centred on P. Assuming this texture window is a plane, the object-space patch is defined as Patch(X, Y, Z, α, β, γ), where (X, Y, Z) are the spatial coordinates of the object-space point P and (α, β, γ) is the normal vector of the plane through P. The patch is then sampled on the n images to obtain n texture windows, and the SNCC coefficient of the point to be matched p1 is defined as:
$\mathrm{SNCC}=\dfrac{1}{n-1}\sum_{i=1}^{n-1}\rho(I_0,I_i)(P)$ (Formula 2)
where $\rho(I_0,I_i)(P)$ is the correlation coefficient, at the resampling window of P, between the main image I0 (one of the reference images) and the search image Ii (one of the reference images other than the main image); the sum runs over the n-1 search images.
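Formula 2 is simply the mean of the pairwise correlation coefficients between the main-image window and the n-1 search-image resampling windows. A self-contained sketch (names illustrative):

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Correlation coefficient of Formula 1 for two grey-value windows."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d else 0.0

def sncc(main_win: np.ndarray, search_wins: list) -> float:
    """Formula 2: average the correlation of the main-image window with
    each of the n-1 search-image windows resampled through the patch."""
    if not search_wins:
        return 0.0
    return sum(ncc(main_win, w) for w in search_wins) / len(search_wins)
```

Averaging over all search images is what makes the score robust on multi-view blocks: a candidate must agree with every view of the patch, not just one.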
The resampling window is obtained as follows:
1) take the texture window of size N*N centred on the point to be matched on the main image;
2) each pixel in this window defines a photographic ray;
3) intersect each ray with the object-space patch (at this stage, the optimized, or accurate, object-space patch) to obtain the corresponding spatial coordinates;
4) from these spatial coordinates, compute the position of the patch on each search image (the reference images other than the main image).
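The steps above can be sketched under a simple pinhole model (world-to-camera rotation R, projection centre C, focal length f, optical axis along +z); all names and the camera convention are assumptions for illustration, not from the patent:

```python
import numpy as np

def resample_positions(p0, R0, C0, f0, Ri, Ci, fi, patch_pt, patch_n, N):
    """For each pixel of the N*N window around p0 on the main image, cast
    its photographic ray, intersect the ray with the object-space patch
    (the plane through patch_pt with normal patch_n), and project the
    intersection into search image Ii. The result is the geometrically
    warped resampling window (steps 1-4)."""
    half = N // 2
    out = np.empty((N, N, 2))
    for r in range(N):
        for c in range(N):
            px = np.array([p0[0] + c - half, p0[1] + r - half, f0])
            d = R0.T @ px                          # step 2: ray direction in object space
            t = patch_n @ (patch_pt - C0) / (patch_n @ d)
            P = C0 + t * d                         # step 3: intersection with the patch
            Xc = Ri @ (P - Ci)                     # step 4: project into image Ii
            out[r, c] = fi * Xc[:2] / Xc[2]
    return out
```

For a horizontal patch and a nadir camera this degenerates to the original square window; for an oblique search image the same call yields the sheared window that makes the correlation comparison geometrically meaningful.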
II. The main flow, divided into the following steps.
Step 1: feature point extraction.
Feature points, as matching primitives, are the basis of the whole matching algorithm. Feature point extraction applies a series of pixel operations to the image to obtain distinctive texture features, such as the corners of houses (intersections of edges).
Point-primitive matching must take into account both the positioning accuracy of feature points and their repeatability (the probability that the same feature point is detected by the feature operator on both images simultaneously); weighing these, this process adopts Harris feature points.
The choice of feature point type need not be over-constrained: a single type may be used, or several types combined with other factors, provided the matching requirement (evenly distributed match points on each image) is met.
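As an illustration of the feature extraction step, a minimal Harris response in plain NumPy (box smoothing instead of the usual Gaussian weighting, and no non-maximum suppression; a sketch of the principle, not the detector implementation used by the patent):

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.04) -> np.ndarray:
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor of the image gradients, box-smoothed over 3x3
    neighbourhoods (np.roll wraps at the borders, so only interior
    responses are meaningful)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)  # central-difference gradients, axis 0 then axis 1

    def box3(a):
        s = np.zeros_like(a)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                s += np.roll(np.roll(a, dr, axis=0), dc, axis=1)
        return s / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr
```

On a synthetic step corner the response is positive at the corner, negative along the edges and zero in flat areas, which is exactly the distinctive-and-repeatable behaviour the text asks of a point primitive.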
Step 2: select the main image.
All reference images are traversed, and each can serve as the main image; this step fixes one main image.
Step 3: determine the search images.
To decide which images need to be matched with the main image, the visible images can be computed. There are several methods: for example, project all images onto the ground (the mean elevation plane of the photographed area) and take the images that overlap the main image; or, from the position and attitude information of the images, select the few images closest to the main image in space; or use other auxiliary information.
The purpose of computing visible images is to avoid unnecessary searching; in the extreme case, each main image would have to be matched with all remaining images.
Step 4: select a feature point.
All feature points on the master image are traversed, and a matching procedure is carried out for each one (this step determines at least one feature point as the reference feature point).
Step 5: compute the epipolar line equation.
Epipolar geometry is used to constrain the search range on each image. Each pixel on an image corresponds to a ray in space, and corresponding image rays (the rays formed by corresponding points) must intersect at a common point in space. As shown in Figure 5, for feature point p0 on master image I0, its corresponding point pi on search image Ii must lie on the projection of the space ray of p0 onto image Ii; this projected line is the epipolar line of p0 on image Ii. The epipolar line on I2 is determined similarly.
From the position and orientation of I0 and the image coordinates of p0, the space ray of p0 is computed; the spatial coordinates of Pmin and Pmax are computed from the minimum and maximum elevations of the survey area; from the spatial coordinates of Pmin and Pmax and the position and orientation of Ii, the projected segment ai*x + bi*y + ci = 0 of Pmin and Pmax on image Ii is computed, with x ∈ [xmini, xmaxi] and y ∈ [ymini, ymaxi].
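Under the standard collinearity model this computation can be sketched as follows (a hypothetical illustration: the camera pose, focal length and principal-point-free image frame below are assumed for the example, not given by the patent):

```python
import numpy as np

def project(P, Xs, R, f):
    """Collinearity equations: image coordinates of object point P for a
    camera with projection centre Xs, rotation matrix R, focal length f."""
    u = R.T @ (np.asarray(P, float) - np.asarray(Xs, float))
    return np.array([-f * u[0] / u[2], -f * u[1] / u[2]])

def epipolar_segment(P_min, P_max, Xs_i, R_i, f):
    """Project the two ends of the ray of p0 (cut off at the minimum and
    maximum terrain elevations, Pmin and Pmax) into search image Ii and
    return the epipolar line a*x + b*y + c = 0 through the projections,
    together with the x-range of the segment."""
    p1 = project(P_min, Xs_i, R_i, f)
    p2 = project(P_max, Xs_i, R_i, f)
    a, b = p2[1] - p1[1], p1[0] - p2[0]
    c = -(a * p1[0] + b * p1[1])
    return (a, b, c), (min(p1[0], p2[0]), max(p1[0], p2[0]))

# Example: a nadir-looking camera 1000 units above the datum, f = 100.
line, xr = epipolar_segment((100, 50, 0), (100, 50, 80),
                            (0, 0, 1000), np.eye(3), 100.0)
```

Both projected endpoints satisfy the returned line equation, and any feature point on Ii can then be tested against it.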
Step 6: search the candidate point set.
On each search image, within the range of the corresponding epipolar segment, all feature points whose distance to the epipolar line is less than a given threshold are collected into a candidate point set, as shown in Figure 6:
The solid rectangle is the search region formed along the epipolar line on the image; the two dashed lines above and below it mark the distance threshold of the epipolar search (the two dashed lines are parallel to the epipolar line, and the solid rectangle lies between them). Specifically, the solid rectangle is the search range determined by the minimum and maximum elevations, and only feature points falling inside it can become candidate points.
Step 7: determine the match point.
Following the principle of local texture optimality (maximum SNCC coefficient), the best candidate is selected from the candidate point set as the final match point. Specifically:
1') Determine the initial object-space patch.
An object-space patch can be thought of as a spatial point (X, Y, Z) together with a series of spatial points in its neighborhood. In practice, the ground object represented by the spatial point (X, Y, Z) is assumed to be planar within a certain neighborhood; this plane is the object-space patch, and it can be represented by a spatial point (X, Y, Z) and a normal vector (α, β, γ).
For each candidate point pij (the j-th feature point on the i-th reference image), a spatial point can be determined by forward intersection with the reference feature point p0. Centered on this spatial point, an N*N object-space patch is generated, with the initial normal vector set to (0, 0, 1); this determines an object-space patch Patch(X, Y, Z, α, β, γ).
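The N*N patch of step 1' can be sketched as a grid of object-space points in the plane defined by the centre and normal (a minimal sketch; the grid spacing and the in-plane axis construction are assumptions):

```python
import numpy as np

def patch_grid(center, normal, n=5, step=1.0):
    """An n*n grid of object-space points in the plane through `center`
    with the given normal -- the object-space patch
    Patch(X, Y, Z, alpha, beta, gamma) in sampled form."""
    center = np.asarray(center, float)
    nvec = np.asarray(normal, float)
    nvec = nvec / np.linalg.norm(nvec)
    # Two orthonormal axes spanning the patch plane.
    helper = np.array([1.0, 0.0, 0.0]) if abs(nvec[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(nvec, helper)
    u /= np.linalg.norm(u)
    v = np.cross(nvec, u)
    offs = (np.arange(n) - n // 2) * step
    return np.array([[center + du * u + dv * v for dv in offs] for du in offs])

# Initial patch: centred on the forward-intersected point, normal (0, 0, 1).
grid = patch_grid((500.0, 300.0, 42.0), (0, 0, 1), n=5, step=0.5)
```

With the initial normal (0, 0, 1) the whole grid lies at the centre's elevation; step 3' then varies Z and the normal to maximize the SNCC coefficient.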
2') Resample the object-space patch into image windows.
Given the object-space patch plane and the spatial positions of the master image and the search images, the patch can be resampled onto each image via the collinearity equations, yielding n texture windows (with n-1 search images and one master image, each image contributes one texture window).
3') Refine the object-space patch.
The correlation coefficients (NCC) between the n-1 search-image texture windows and the master-image texture window are computed, along with the final SNCC coefficient.
By varying the elevation (Z) and the normal vector of the object-space patch, the SNCC coefficient is maximized through least-squares optimization, and the candidate point with the maximum SNCC coefficient is selected as the final match point (i.e., the corresponding point of the reference feature point on the master image).
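Step 3' ranks candidates by a correlation score; a minimal sketch of NCC between texture windows follows (the patent does not spell out how the per-image NCCs combine into the SNCC coefficient, so the averaging below is an assumption):

```python
import numpy as np

def ncc(w0, wi):
    """Normalized cross-correlation between the master-image texture
    window w0 and one search-image window wi (equal-sized arrays);
    invariant to linear brightness/contrast changes."""
    a = w0 - w0.mean()
    b = wi - wi.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def sncc(w0, search_windows):
    """Combined score over the n-1 search-image windows (assumed here
    to be the mean of the individual NCCs)."""
    return sum(ncc(w0, w) for w in search_windows) / len(search_windows)
```

The candidate whose resampled windows give the largest sncc(...) is kept as the match point; the contrast invariance is what lets windows from differently exposed images be compared.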
On reference images taken from different viewing angles, the texture presented by the same object differs. For example, a cup viewed from the front and from the side shows different textures, and the height of the viewpoint also affects the observation. Consequently, when matching multi-view images, matching errors can occur: points on different physical objects may be mistaken for corresponding points. For instance, consider two nearby objects A and B: if a face of object A carries a feature point with texture X, and object B also carries a feature point with texture X, then, owing to the shooting angles, the two objects may well be confused.
In view of this, the present application provides a corresponding-point acquisition method for aerial multi-view images which, as shown in Figure 7, comprises the following steps:
S101: acquire multiple reference images with the same viewing angle;
S102: determine multiple reference object-space patches, each reference object-space patch being obtained from two reference feature points specified on the multiple reference images;
S103: if all the reference object-space patches agree with one another to within a predetermined threshold, determine an optimized object-space patch from all the reference object-space patches;
S104: according to the optimized object-space patch and the orientation information of the target oblique image, determine the feature point on the target oblique image with the highest similarity to the reference feature point as the corresponding point of the reference feature point.
In step S101, multiple images with the same viewing angle must be selected from the multi-view aerial stereo images as reference images (the images are reference images with respect to one another). Specifically, the reference images may be nadir images, or forward-looking images.
In step S102, multiple reference object-space patches are determined from the multiple reference images. It should be noted that, in the process of determining corresponding points, one of the reference images must first serve as the master image (the reference images other than the master image should be visible images of the master image; visible images of the master image may also be selected in advance, by elimination, as the reference images). A feature point on the master image is then selected as the reference feature point, and the feature point most similar to this reference feature point is sought on the other reference images as its corresponding point (this corresponding point may also be called the optimal match point of the reference feature point of the master image).
When determining the reference object-space patches, the reference feature points on the remaining images are first found from the reference feature point on the master image (that is, on each reference image other than the master image, the feature point with the highest correlation coefficient with the reference feature point on the master image is determined), and the reference object-space patches are then determined step by step.
A reference object-space patch can be determined in the following two ways:
1. An object-space patch (namely, the object-space patch determined by maximizing the SNCC coefficient) is determined from the master image with its reference feature point together with any one of the remaining images (a reference image other than the master image) with its reference feature point. Specifically, the initial normal vector and elevation of the object-space patch are adjusted to find the feature point with the highest correlation coefficient as the reference feature point of that remaining image. The specific search procedure has been described above and is not repeated here.
The above step is repeated until every remaining image has been matched against the master image to yield a reference object-space patch.
2. That is, determining multiple reference object-space patches in step S102 comprises:
determining multiple association image pairs, each comprising a first association image and a second association image, where the second association image of a pair is the image in the first image group containing the feature point with the highest similarity to the reference feature point on the first association image, the first image group comprising all reference images other than the first association image;
taking, within each association image pair, the feature point on the second association image with the highest similarity to the reference feature point on the first association image as the reference feature point of the second association image;
determining a reference object-space patch from the reference feature points of the two association images of each pair.
An association image pair comprises two mutually associated images (the first association image and the second association image), both drawn from the multiple reference images. It should be noted that, when determining an association image pair, the second association image is found from the first association image according to the reference feature point on the first association image.
Suppose there are six reference images A-F. First, the first association image of the first pair is chosen as A (the choice may be arbitrary), and a reference feature point A' is determined on image A. Then, among the remaining images B-F, the feature point with the highest similarity to A' (highest SNCC coefficient) is sought: on each of B-F, the feature point with the highest SNCC coefficient relative to A' is determined, say B'-F' respectively. If, among these, the point F' on image F has the highest coefficient, then F' is the feature point most similar to A', and the first association image pair is determined as images A and F. Next, F serves as the first association image of the next pair, and the feature point most similar to F' is sought on images A-E, in the same way as for the pair starting from A. Since the primary purpose of corresponding-point matching is to determine the same point across multiple images, once the first association image is determined, the search for the second association image always uses the already determined reference feature point (e.g., when image B serves as the first association image, the point B' determined from A' is used as the reference feature point), rather than redefining a reference feature point.
The multiple association image pairs thus determined may be, for example: A and F as the first pair, F and B as the second, B and C as the third, and C and A as the fourth. Once these pairs are determined, the search need not continue, since continuing would only rediscover association image pairs already determined.
Then, within each association image pair, the feature point on the second association image with the highest similarity to the reference feature point on the first association image is simply taken as the reference feature point of the second association image. In fact, this reference feature point was already determined when the second association image was found from the first; it need not be computed again and can be taken from the results already obtained.
Finally, a reference object-space patch is determined from the reference feature points of the two association images of each pair.
In step S103, since the reference object-space patches found should in theory all describe the same photographed object, they should agree closely with one another: the normal vectors and elevations of the reference object-space patches should all be approximately equal. If the normal vectors and elevations vary widely, i.e., the patches do not agree to within the predetermined threshold, then an error occurred during the search; all previously found reference feature points are discarded, and step S101 is executed again.
If all the reference object-space patches agree to within the predetermined threshold, an optimized object-space patch is determined from all of them: a relatively accurate optimized object-space patch is determined from the multiple reference object-space patches by error elimination. Specifically, the optimized object-space patch may be determined by bundle adjustment, by a weighted average of the multiple reference object-space patches, or simply by taking one of the reference object-space patches directly.
Finally, in step S104, once the optimized object-space patch is determined, matching the reference feature point of the master image against any further image no longer requires adjusting the initial normal vector and elevation of an object-space patch: since the optimal object-space patch is known, the optimized object-space patch together with the orientation information of the target oblique image suffices to determine whether a feature point on any oblique image is a corresponding point matching the reference feature point.
It should be noted that the reference feature point in step S104 may be the reference feature point on the master image, or the reference feature point on any reference image (when all the reference object-space patches agree to within the predetermined threshold, the reference feature points are established as the same physical point).
Specifically, the reference images in step S101 are nadir images. In theory, nadir, backward-looking, forward-looking, left-looking and right-looking images can all serve the purpose (the texture features of the photographed object are similar across images of the same viewing angle, so whether a feature point is the sought reference feature point can be determined more reliably, without misidentification caused by shooting angle).
Specifically, determining multiple association image pairs comprises:
associating the multiple reference images according to the similarity of reference feature points to determine an image association sequence, in which each subsequent reference image is the image in the second image group containing the feature point with the highest similarity to the reference feature point on the preceding reference image, the second image group comprising all reference images other than the preceding one;
forming an association image pair from each two adjacent reference images in the sequence.
That is, before determining the association image pairs, an image association sequence of the reference images is determined: each subsequent reference image is determined from the preceding one, the association being made according to the similarity of reference feature points as described above. In other words, which of the remaining images (the reference images other than the preceding one) becomes the next image is decided by the reference feature point of the preceding image: among the remaining images, the one containing the feature point most similar to that reference feature point becomes the next image. If the image association sequence determined is, say, ADCBFA, then each two adjacent images in the sequence form an association image pair, yielding the five pairs AD, DC, CB, BF and FA.
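The pairing of adjacent images in the association sequence is straightforward; a small sketch reproducing the ADCBFA example from the text:

```python
def sequence_to_pairs(seq):
    """Form an association image pair from each two adjacent reference
    images in an image association sequence."""
    return [(seq[i], seq[i + 1]) for i in range(len(seq) - 1)]

pairs = sequence_to_pairs("ADCBFA")
# pairs is AD, DC, CB, BF, FA, as in the text.
```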
Further, associating the multiple reference images according to the similarity of reference feature points to determine the image association sequence comprises:
selecting a designated reference image as the master image;
selecting a designated feature point on the master image as the reference feature point of the master image;
determining, by means of epipolar constraint (the shooting point of the master image, the reference feature point of the master image and the shooting position of any selected reference image together determine the epipolar line of the master image's reference feature point on that reference image, and the reference feature point of that image must lie near this epipolar line) and local texture optimality search (on each remaining reference image, the reference feature point is the one whose texture is most similar to that of the reference feature point on the master image, i.e., with maximum SNCC coefficient), the feature point on each reference image other than the master image with the highest similarity to the reference feature point of the master image, as a point to be determined;
selecting, among the reference images other than the master image, the one containing the point to be determined with the highest similarity as the association image of the master image (i.e., after the point to be determined is found on each reference image other than the master image, the SNCC coefficients of these points are compared, and the reference image whose point has the maximum SNCC coefficient is selected as the association image of the master image);
taking the association image as the new master image and the point to be determined with the highest similarity to the reference feature point of the previous master image as the reference feature point of the current master image, and repeating the step of determining, by epipolar constraint and local texture optimality, the feature point on each other reference image with the highest similarity to the reference feature point of the master image, until the association image obtained has already served as a master image;
establishing the image association sequence from the succession of master images and their association images.
Each master image yields one association image; the newest association image then serves as the next master image to find a further association image, finally forming a sequentially associated sequence.
Specifically, determining, by epipolar constraint and local texture optimality, the feature point on each reference image other than the master image with the highest similarity to the reference feature point of the master image comprises:
taking one of the remaining reference images other than the master image as the image to be determined;
from the position of the shooting point of the master image, the position of the reference feature point of the master image and the position of the shooting point of the image to be determined, computing the epipolar line of the reference feature point on the image to be determined, and determining an initial object-space patch, relating the master image and the image to be determined, for the reference feature point of the master image;
by adjusting the normal vector and elevation of the initial object-space patch, computing the maximum SNCC coefficient between the reference feature point of the master image and each feature point on the image to be determined whose distance to the epipolar line lies within a preset range;
taking, among the multiple maximum SNCC coefficients, the feature point on the image to be determined corresponding to the numerically largest one as the reference feature point of that image;
repeating the above steps with each of the remaining reference images other than the master image as the image to be determined, until reference feature points have been determined on all reference images other than the master image.
Specifically, determining, from the optimized object-space patch and the orientation information of the target oblique image, the feature point on the target oblique image with the highest similarity to the reference feature point as the corresponding point of the reference feature point comprises:
taking the optimized object-space patch as the refined object-space patch of the reference feature point, and resampling, according to the orientation information of the target oblique image and the refined object-space patch, within a designated sampling range on the target oblique image, to determine a target image sampling window;
selecting, within the target sampling window, the feature point with the highest similarity to the reference feature point as the corresponding point of the reference feature point.
Specifically, before the step of determining, from the optimized object-space patch and the orientation information of the target oblique image, the feature point on the target oblique image with the highest similarity to the reference feature point as the corresponding point, the method further comprises:
performing bundle adjustment on the reference images to determine the orientation information of the reference images;
determining the orientation information of the target oblique image from the orientation information of the reference images and the camera attitudes.
The purpose of the bundle adjustment is to determine more accurate orientation information for the reference images, thereby ensuring that the determined object-space patch information is comparatively accurate.
Specifically, determining, from the optimized object-space patch and the orientation information of the target oblique image, the feature point on the target oblique image with the highest similarity to the reference feature point as the corresponding point further comprises:
determining the epipolar line of each reference image on the target oblique image from the position of each reference image's shooting point, the position of the reference feature point on each reference image, and the position of the shooting point of the target oblique image;
determining the sampling range on the target oblique image from these multiple epipolar lines.
The above steps are where this application differs most from the related art. In the scheme provided here, since the optimized object-space patch is confirmed from multiple reference images, each reference image produces an epipolar line on the target oblique image, and the corresponding point on the target oblique image must lie at the intersection of these multiple epipolar lines, which greatly narrows the search range.
Below, the corresponding-point acquisition method for aerial multi-view images provided by this application is described with a concrete example.
Step 1: feature point extraction. One of the multiple images is selected as the master image, and the feature points on it are determined.
Step 2: image grouping.
According to the angles between the principal optical axes of the images, the images are sorted into different groups, so that images with consistent principal optical axes (i.e., images of the same viewing angle) are assigned to the same group.
Unlike the prior art, this technique does not process the images of all viewing angles uniformly; instead, the images of the multiple viewing angles are sorted into different groups and processed separately, step by step.
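The grouping by principal optical axis can be sketched as follows (a hypothetical illustration: representing the axes as direction vectors and using a 10-degree tolerance are assumptions, not values from the patent):

```python
import numpy as np

def group_by_view(axes, max_angle_deg=10.0):
    """Group image indices so that images whose principal optical axes
    lie within max_angle_deg of a group's first member share a group."""
    groups = []
    unit = [np.asarray(a, float) / np.linalg.norm(a) for a in axes]
    for i, v in enumerate(unit):
        for g in groups:
            cosang = np.clip(v @ unit[g[0]], -1.0, 1.0)
            if np.degrees(np.arccos(cosang)) <= max_angle_deg:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Two nadir shots, one slightly perturbed nadir shot, one 45-degree oblique.
groups = group_by_view([(0, 0, -1), (0.01, 0, -1), (0.7, 0, -0.7), (0, 0, -1)])
```

Images 0, 1 and 3 end up in the nadir group, while the oblique image 2 forms its own group and is processed later.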
Step 3: obtain the object-space patch set.
Following the step-by-step strategy, this step processes only the vertical images (the vertical images being the chosen reference images); thanks to their relatively accurate position and orientation information, image matching can be completed by the existing procedure, as shown in Figure 8:
With image A as the master image and a its feature point, suppose the existing matching flow yields b on image B as the optimal match point of a; the object-space patch PA{a, b} is then determined (formed by the intersection of rays Aa and Bb). Likewise, with image B as the master image and b its feature point, the optimal match point of b may be feature point c on image C (not necessarily the point on image A most similar to b), giving the object-space patch PB{b, c}. (Note that in theory PA and PB are the same point, but in practice they may not be; the following step is needed to obtain a more stable result and add it to the object-space patch set. The object-space patch set contains multiple reference object-space patches; here the patches PA and PB determined above are the reference object-space patches.)
After the matching of all feature points on all images is completed, homonymous object-space patches (those sharing a common feature point) must be merged. For example, PA{a, b} and PB{b, c} share the point b. The merging rule is: if and only if the spatial distance between PA and PB is less than a certain threshold and the angle between their normal vectors is less than a certain threshold, (a, b, c) are regarded as corresponding points; the intersection of the three rays Aa, Bb and Cc then generates the object-space patch P{a, b, c}, which is added to the object-space patch set Patch. (In practice, as the number of object-space patches grows, say patches PA, PB, PC, PD, PE, all of them are merged.)
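The merging rule can be sketched as below (patches as small dicts; the field names, the averaging of position and normal, and the threshold values are illustrative assumptions):

```python
import numpy as np

def merge_patches(pa, pb, dist_tol=1.0, angle_tol_deg=10.0):
    """Merge two patches that share a feature point, if their spatial
    positions and normals agree within the given thresholds; return the
    merged patch, or None if they must stay separate."""
    if not set(pa["points"]) & set(pb["points"]):
        return None  # no common feature point -> not homonymous
    if np.linalg.norm(np.subtract(pa["X"], pb["X"])) > dist_tol:
        return None  # spatial positions disagree
    na = np.asarray(pa["n"], float); na /= np.linalg.norm(na)
    nb = np.asarray(pb["n"], float); nb /= np.linalg.norm(nb)
    if np.degrees(np.arccos(np.clip(na @ nb, -1.0, 1.0))) > angle_tol_deg:
        return None  # normals disagree
    return {"X": tuple(np.add(pa["X"], pb["X"]) / 2.0),
            "n": tuple((na + nb) / 2.0),
            "points": sorted(set(pa["points"]) | set(pb["points"]))}

pa = {"X": (10.0, 20.0, 5.0), "n": (0.0, 0.0, 1.0), "points": ["a", "b"]}
pb = {"X": (10.2, 20.0, 5.0), "n": (0.0, 0.05, 1.0), "points": ["b", "c"]}
merged = merge_patches(pa, pb)
```

Here PA{a, b} and PB{b, c} share b and agree closely, so the merged patch carries the tie-point set {a, b, c}; patches that disagree in position or normal are kept apart.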
The purpose of this step is to obtain the object-space patch set Patch{0, 1, 2, ..., n} to serve as the benchmark for oblique image matching. (The patch set determined here is based on reference feature points selected on multiple master images; each reference feature point can determine one object-space patch, the set being composed of these multiple patches.) Each object-space patch here is formed by the intersection of three or more rays (each image contributes one ray, so three images give three rays), which on the one hand increases the reliability of the patch's spatial position, and on the other hand adds multiple epipolar-line constraints during oblique image matching.
Step 4: refine the object-space patch set.
The object-space patch set obtained in the previous step will serve as the matching benchmark for completing the matching of the oblique images; before oblique image matching, more accurate object-space patches must be obtained:
1) An integrated bundle adjustment is performed on all vertical images, yielding accurate position and orientation information for the vertical images and the spatial coordinates of each object-space patch;
2) From the position and orientation of the vertical images and the design attitudes of the cameras, the position and orientation of the oblique images are recomputed, making them more accurate;
3) For each object-space patch Patch(X, Y, Z, α, β, γ), according to the new spatial position (X, Y, Z) and the resolved image positions, (α, β, γ) are recomputed on the principle of local texture optimality, determining a more accurate object-space patch.
The purpose of this step is to have more accurate object-space patches and more accurate position and orientation information (for the oblique images) when performing oblique image matching. The position and orientation of the vertical images are usually comparatively accurate, so the position and orientation of the oblique images can be calibrated from the position and orientation of the vertical images together with the relative positions and parameters of the cameras.
Step 5: compute visible images.
From the position (X, Y, Z) of an object-space patch and the position (Xs, Ys, Zs) of each oblique image, the position of the patch on each image can be computed via the collinearity equations; if this position is valid (falls within the extent of the target oblique image), the oblique image is regarded as visible.
In this way, the n groups of oblique images for the current object-space patch are obtained (here n = 4: the forward-, backward-, left- and right-looking oblique images).
Step 6: compute the epipolar line equations.
The computation of the epipolar line equations is similar to the prior art, except that the position (X, Y, Z) of the object-space patch is known and the patch is formed by three or more object-space rays, so that three or more epipolar lines are available simultaneously.
The epipolar line equations mainly serve as a geometric constraint: evidently, each additional epipolar line adds one more constraint, making the result more reliable.
Step 7: search the candidate point set.
The search strategy is the same as in the existing technique, except that the geometric restriction here (the restriction of multiple epipolar lines) is stricter, as shown in Figures 9 and 10:
In Figure 9, suppose the object-space patch P is formed by the intersection of three rays (the three solid lines); it then has three epipolar lines on oblique image I4 (the three dashed lines on I4 in Figure 9; the three crossing thin lines in Figure 10), as shown in Figure 10:
Each of the three crossing solid lines represents an epipolar line, and the dashed lines on either side of each epipolar line mark its search range (the constraint of that epipolar line). For each feature point in the search region, the distance to each epipolar line is computed, as shown by the bold solid lines in Figure 10; only when the distance di of a feature point to every epipolar line is less than a certain threshold (2-10 pixels) can it be taken as a candidate point. With the small circles in Figure 10 as feature points, only feature points 3 and 4 (which lie near the intersection of the three epipolar lines) qualify as candidate match points for the reference feature point on the master image: only the SNCC coefficients of feature points 3 and 4 need be computed, and the other feature points (1, 2 and 5) need not. Specifically, after the intersections of the three epipolar lines are determined, a polygonal search range can first be computed, such as the hexagon in the middle of Figure 10, bounded by the dashed lines along the three epipolar lines, and the SNCC coefficients computed only for feature points within it.
As Figure 10 shows, with only one epipolar-line constraint, points 1, 2, 3, 4 and 5 would all proceed as candidates to the later matching stage; with three epipolar-line constraints, only points 3 and 4 remain candidates.
The multi-epipolar-line constraint reduces the number of candidate points, improving matching efficiency while also lowering the probability of mismatches.
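The multi-epipolar-line test of step 7 can be sketched as follows (lines in a*x + b*y + c = 0 form; the coordinates and the 3-pixel threshold are illustrative):

```python
import math

def multi_line_candidates(features, lines, threshold=3.0):
    """Keep a feature point as a candidate only if its distance to EVERY
    epipolar line is below the threshold (Fig. 10: only the points near
    the common intersection survive)."""
    def dist(pt, line):
        a, b, c = line
        return abs(a * pt[0] + b * pt[1] + c) / math.hypot(a, b)
    return [p for p in features
            if all(dist(p, ln) <= threshold for ln in lines)]

# Three epipolar lines crossing at the origin: y = 0, x = 0, y = x.
lines = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (1.0, -1.0, 0.0)]
features = [(1.0, 1.0), (10.0, 0.0), (0.5, -0.5), (40.0, 40.0)]
kept = multi_line_candidates(features, lines)
```

With a single line constraint, (10, 0) and (40, 40) would also survive (each lies exactly on one of the lines); requiring all three lines leaves only the points near the common intersection.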
8th step: determine match point
1) Because the optimized object space unit from the 4th step is already known, there is no need to perform initialization again here (i.e., the normal vector and elevation are taken from the adjusted object space unit, unlike in the related art). This is a crucial factor: it avoids the large differences between resampling-window textures that an inaccurate initial normal vector would cause, allowing the texture optimization procedure to converge quickly.
2) The resampling window then resamples the object space unit separately in the image groups of the different viewing angles; thanks to 1), this step becomes simple and effective.
3) For the image at each viewing angle, the principle of local texture optimization is likewise applied (the spatial position of the object space unit can be regarded as accurate, so only the normal vector needs to be considered), and the candidate point with the largest SNCC coefficient is selected as the match point. This completes the search for the same place (match point). The match point is then added to the object space unit, so that the next time step 7 is performed, this object space unit again provides the multi-epipolar-line geometric constraint, further improving accuracy.
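The SNCC coefficient itself is not defined in this excerpt; as a hedged sketch of the "select the candidate with the largest similarity coefficient" step, plain normalized cross-correlation (NCC) between grey-value patches can stand in for the similarity score (the patch contents below are illustrative, not from the patent):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally-sized grey-value patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(reference_patch, candidate_patches):
    """Return the index and score of the candidate with the highest correlation."""
    scores = [ncc(reference_patch, p) for p in candidate_patches]
    idx = int(np.argmax(scores))
    return idx, scores[idx]

# A linear intensity change (gain/offset) leaves NCC at 1.0, so the resampled
# window need only match the reference up to illumination.
ref = np.arange(25.0).reshape(5, 5)
candidates = [np.ones((5, 5)), 2.0 * ref + 3.0, ref[::-1].copy()]
idx, score = best_match(ref, candidates)
print(idx)  # → 1
```

Because the normal vector of the object space unit is already fixed, each candidate patch is resampled from a stable window, which is what makes a simple correlation maximum a reliable decision rule here.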
It should be noted that:
1. The present application provides an image grouping strategy (i.e., images with the same viewing angle are first identified as reference images before matching). During matching this reduces, on the one hand, the number of candidate match points to be searched — in particular, when the position and attitude accuracy of the oblique images is low, it reduces the possibility of mismatches. On the other hand, because the images of the same group were taken at consistent angles, their texture deformations remain relatively consistent with one another, which avoids matching failures caused by texture deformation when solving the object space unit.
2. A step-wise image matching strategy. Unlike the prior art, the flow of the present application does not use all relevant images (vertical and oblique) to complete matching in a single pass. Instead, it adopts a step-wise strategy: after image grouping is completed, and exploiting the characteristics of multi-view oblique aerial imagery itself, the matching of the vertical images is completed first; the position and attitude information of the relevant images is then resolved through bundle adjustment, yielding a set of accurate object space units; and these object space units are then used to complete the matching of the images at the other viewing angles. With this strategy, first, the vertical images can be processed directly with prior-art techniques, avoiding interference from the oblique images (position/attitude errors and texture deformation). Second, the results of vertical-image matching can refine the position and attitude of the oblique images, improving the reliability of oblique-image matching. Finally, during oblique-image matching, because the object space units determined from the vertical images are used, reliability is improved on the one hand by the strengthened geometric constraint in the candidate-point search (an object space unit is formed by three or more rays, meaning three or more epipolar lines constrain the search), and on the other hand, when determining the match point, because the normal vector of the object space unit is already determined, the resampled texture window on the oblique image is also relatively stable.
The embodiment of the present invention further provides an image same place acquisition device for aerial multi-view images, comprising:
An acquisition module, for obtaining multiple reference images having the same viewing angle;
A first determination module, for determining multiple reference object space units, each reference object space unit being obtained from the fixed reference feature points of two designated reference images among the multiple reference images;
A second determination module, for determining an optimized object space unit according to all the reference object space units if the similarities of all the reference object space units are less than a predetermined threshold;
A resolving module, for taking, according to the optimized object space unit and the attitude information of the target tilt image, the feature point on the target tilt image with the highest similarity to the fixed reference feature point as the same place of the fixed reference feature point.
The device provided by the embodiment of the present invention realizes the same principle and produces the same technical effect as the foregoing method embodiment. For the sake of brevity, for anything not mentioned in this device embodiment, reference may be made to the corresponding content in the foregoing method embodiment.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiment and are not repeated here.
The above is only the specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that those skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (10)

1. An image same place acquisition method for aerial multi-view images, characterized in that it comprises:
Obtaining multiple reference images having the same viewing angle;
Determining multiple reference object space units, each said reference object space unit being obtained from the fixed reference feature points of two designated reference images among the multiple reference images;
If the similarities of all said reference object space units are less than a predetermined threshold, determining an optimized object space unit according to all said reference object space units;
Determining, according to said optimized object space unit and the attitude information of a target tilt image, the feature point on the target tilt image with the highest similarity to said fixed reference feature point as the same place of said fixed reference feature point.
2. The image same place acquisition method for aerial multi-view images according to claim 1, characterized in that determining the multiple reference object space units comprises:
Determining multiple association image pairs respectively, each said association image pair including a first association image and a second association image, the second association image of an association image pair being the image, in a first image group, that includes the feature point with the highest similarity to the fixed reference feature point on said first association image, said first image group including all reference images except said first association image;
Taking the feature point on the second association image of the same association image pair with the highest similarity to the fixed reference feature point on said first association image as the fixed reference feature point of said second association image;
Determining said reference object space units respectively according to the fixed reference feature points of the two association images of each association image pair.
3. The image same place acquisition method for aerial multi-view images according to claim 2, characterized in that said reference image is a nadir (downward-looking) image.
4. The image same place acquisition method for aerial multi-view images according to claim 2, characterized in that determining the multiple association image pairs respectively comprises:
Associating the multiple said reference images according to the similarity of fixed reference feature points to determine an image association sequence, wherein in said image association sequence a latter reference image is the image, among the multiple reference images of a second image group, that includes the fixed reference feature point with the highest similarity to the fixed reference feature point on the previous reference image, said second image group including all reference images except said previous reference image;
Forming an association image pair from every two adjacent reference images in said image association sequence.
5. The image same place acquisition method for aerial multi-view images according to claim 4, characterized in that associating the multiple said reference images according to the similarity of fixed reference feature points to determine the image association sequence comprises:
Selecting a designated reference image as the main image;
Selecting a designated feature point on said main image as the fixed reference feature point of the main image;
Determining respectively, by means of epipolar line constraint and local texture optimal search, on each of the other reference images except said main image, the feature point with the highest similarity to the fixed reference feature point of said main image as a feature point to be determined;
Selecting, from the other reference images except said main image, the image that includes the feature point to be determined with the highest similarity as the association image of said main image;
Taking said association image as the main image and taking the feature point to be determined with the highest similarity to the fixed reference feature point on the previous main image as the fixed reference feature point of the current main image, and repeatedly performing the step of determining respectively, by means of epipolar line constraint and local texture optimal search, on each of the other reference images except said main image, the feature point with the highest similarity to the fixed reference feature point of said main image, until the association image obtained has already served as a main image;
Establishing the image association sequence according to the association relationships between each successive main image and its association image.
6. The image same place acquisition method for aerial multi-view images according to claim 5, characterized in that determining respectively, by means of epipolar line constraint and local texture optimal search, on each of the other reference images except said main image, the fixed reference feature point with the highest similarity to the fixed reference feature point of said main image comprises:
Taking one reference image among the remaining reference images except said main image as the reference image to be determined;
According to the positional information of the photographing point of said main image, the positional information of the fixed reference feature point of said main image and the positional information of the photographing point of said reference image to be determined, determining the epipolar line of the fixed reference feature point on said main image relative to said reference image to be determined, and determining the initial object space unit of said main image and said reference image to be determined with respect to the fixed reference feature point of said main image;
By adjusting the normal vector and the elevation of said initial object space unit, calculating respectively the maximum SNCC coefficient, with the fixed reference feature point of said main image, of each feature point on said reference image to be determined whose distance to said epipolar line is within a preset range;
Taking the feature point on the reference image to be determined corresponding to the largest of the multiple maximum SNCC coefficients as the fixed reference feature point of said reference image to be determined;
Repeatedly performing the step of taking one of the remaining reference images except said main image as the reference image to be determined, until the fixed reference feature points of all reference images except said main image have been determined.
7. The image same place acquisition method for aerial multi-view images according to claim 3, characterized in that determining, according to said optimized object space unit and the attitude information of the target tilt image, the feature point on the target tilt image with the highest similarity to said fixed reference feature point as the same place of said fixed reference feature point comprises:
Taking said optimized object space unit as the accurate object space unit of said fixed reference feature point, and carrying out resampling within a designated sample range on said target tilt image according to the attitude information of said target tilt image and said accurate object space unit, to determine a target image sample window;
Selecting, within said target sample window, the feature point with the highest similarity to said fixed reference feature point as the same place of said fixed reference feature point.
8. The image same place acquisition method for aerial multi-view images according to claim 7, characterized in that, before the step of determining, according to said optimized object space unit and the attitude information of the target tilt image, the feature point on the target tilt image with the highest similarity to said fixed reference feature point as the same place of said fixed reference feature point, the method further comprises:
Carrying out bundle adjustment processing on said reference images to determine the attitude information of said reference images;
Determining the attitude information of said target tilt image according to the attitude information of said reference images and the camera attitude.
9. The image same place acquisition method for aerial multi-view images according to claim 7, characterized in that determining, according to said optimized object space unit and the attitude information of the target tilt image, the feature point on the target tilt image with the highest similarity to said fixed reference feature point as the same place of said fixed reference feature point further comprises:
Determining respectively the epipolar line of each said reference image relative to said target tilt image, according to the positional information of the photographing point of each reference image, the positional information of the fixed reference feature point on each said reference image and the positional information of the photographing point of the target tilt image;
Determining the sample range of said target tilt image according to the multiple epipolar lines on said target tilt image.
10. An image same place acquisition device for aerial multi-view images, characterized in that it comprises:
An acquisition module, for obtaining multiple reference images having the same viewing angle;
A first determination module, for determining multiple reference object space units, each said reference object space unit being obtained from the fixed reference feature points of two designated reference images among the multiple reference images;
A second determination module, for determining an optimized object space unit according to all said reference object space units if the similarities of all said reference object space units are less than a predetermined threshold;
A resolving module, for taking, according to said optimized object space unit and the attitude information of the target tilt image, the feature point on the target tilt image with the highest similarity to said fixed reference feature point as the same place of said fixed reference feature point.
CN201510208194.9A 2015-04-28 2015-04-28 The inclination image same place acquisition methods and device of aviation multi-view images Active CN104794490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510208194.9A CN104794490B (en) 2015-04-28 2015-04-28 The inclination image same place acquisition methods and device of aviation multi-view images


Publications (2)

Publication Number Publication Date
CN104794490A true CN104794490A (en) 2015-07-22
CN104794490B CN104794490B (en) 2018-10-02

Family

ID=53559277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510208194.9A Active CN104794490B (en) 2015-04-28 2015-04-28 The inclination image same place acquisition methods and device of aviation multi-view images

Country Status (1)

Country Link
CN (1) CN104794490B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206438A1 (en) * 2011-02-14 2012-08-16 Fatih Porikli Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images
CN103390102A (en) * 2013-07-16 2013-11-13 中交第二公路勘察设计研究院有限公司 Method for calculating three-dimensional intersection angle of satellite images
CN104318566A (en) * 2014-10-24 2015-01-28 南京师范大学 Novel multi-image plumb line track matching method capable of returning multiple elevation values
CN104501779A (en) * 2015-01-09 2015-04-08 中国人民解放军63961部队 High-accuracy target positioning method of unmanned plane on basis of multi-station measurement


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106248055A (en) * 2016-08-31 2016-12-21 中测新图(北京)遥感技术有限责任公司 A kind of inclination view stereoscopic plotting method
CN106248055B (en) * 2016-08-31 2019-05-10 中测新图(北京)遥感技术有限责任公司 A kind of inclination view stereoscopic plotting method
CN106846384A (en) * 2016-12-30 2017-06-13 中国人民解放军61540部队 A kind of various visual angles incline greatly linear array image matching method and device
CN110148205A (en) * 2018-02-11 2019-08-20 北京四维图新科技股份有限公司 A kind of method and apparatus of the three-dimensional reconstruction based on crowdsourcing image
CN108399631A (en) * 2018-03-01 2018-08-14 北京中测智绘科技有限公司 A kind of inclination image of scale invariability regards dense Stereo Matching method more
CN110392244A (en) * 2018-04-18 2019-10-29 长光卫星技术有限公司 A kind of three line scanner camera image synthesis chromatic image method
CN110135474A (en) * 2019-04-26 2019-08-16 武汉市土地利用和城市空间规划研究中心 A kind of oblique aerial image matching method and system based on deep learning
CN111222586A (en) * 2020-04-20 2020-06-02 广州都市圈网络科技有限公司 Inclined image matching method and device based on three-dimensional inclined model visual angle
CN111222586B (en) * 2020-04-20 2020-09-18 广州都市圈网络科技有限公司 Inclined image matching method and device based on three-dimensional inclined model visual angle
CN112598740A (en) * 2020-12-29 2021-04-02 中交第二公路勘察设计研究院有限公司 Rapid and accurate matching method for large-range multi-view oblique image connection points

Also Published As

Publication number Publication date
CN104794490B (en) 2018-10-02

Similar Documents

Publication Publication Date Title
CN104794490A (en) Slanted image homonymy point acquisition method and slanted image homonymy point acquisition device for aerial multi-view images
US8427472B2 (en) Multidimensional evidence grids and system and methods for applying same
CN109509230A (en) A kind of SLAM method applied to more camera lens combined type panorama cameras
Wan et al. Illumination-invariant image matching for autonomous UAV localisation based on optical sensing
CN104966281B (en) The IMU/GNSS guiding matching process of multi-view images
CN112598740B (en) Rapid and accurate matching method for large-range multi-view oblique image connection points
CN105869136A (en) Collaborative visual SLAM method based on multiple cameras
CN105069843A (en) Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling
AU2020315519B2 (en) 3D view model generation of an object utilizing geometrically diverse image clusters
CN109827548A (en) The processing method of aerial survey of unmanned aerial vehicle data
CN102750537A (en) Automatic registering method of high accuracy images
CN109443359A (en) A kind of geographic positioning of ground full-view image
Gómez et al. An experimental comparison of multi-view stereo approaches on satellite images
US11568638B2 (en) Image targeting via targetable 3D data
CN105761257A (en) Elimination method for gross error in unmanned aerial vehicle image matching on cross air strip and device thereof
Fei et al. Ossim: An object-based multiview stereo algorithm using ssim index matching cost
Haala et al. Hybrid georeferencing, enhancement and classification of ultra-high resolution UAV lidar and image point clouds for monitoring applications
Gerke et al. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images
Marí et al. To bundle adjust or not: A comparison of relative geolocation correction strategies for satellite multi-view stereo
Yong-guo et al. The navigation of mobile robot based on stereo vision
Khezrabad et al. A new approach for geometric correction of UAV-based pushbroom images through the processing of simultaneously acquired frame images
Zhao et al. An ORB-SLAM3 autonomous positioning and orientation approach using 360-degree panoramic video
Fanta‐Jende et al. Co‐registration of panoramic mobile mapping images and oblique aerial images
Yadav et al. Hybrid adjustment of UAS-based LiDAR and image data
US9709395B2 (en) Method and system for analyzing images from satellites

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant