CN103578093A - Image registration method and device and augmented reality system - Google Patents

Image registration method and device and augmented reality system

Info

Publication number
CN103578093A
CN103578093A (application CN201210247979.3A; granted publication CN103578093B)
Authority
CN
China
Prior art keywords
feature point
scale
matrix
input image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210247979.3A
Other languages
Chinese (zh)
Other versions
CN103578093B (en)
Inventor
柳寅秋
李薪宇
宋海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Idealsee Technology Co Ltd
Original Assignee
Chengdu Idealsee Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Idealsee Technology Co Ltd filed Critical Chengdu Idealsee Technology Co Ltd
Priority to CN201210247979.3A priority Critical patent/CN103578093B/en
Priority to CN201610443680.3A priority patent/CN106127748B/en
Publication of CN103578093A publication Critical patent/CN103578093A/en
Application granted granted Critical
Publication of CN103578093B publication Critical patent/CN103578093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image registration method comprising: detecting and extracting feature points from an input image; describing each extracted feature point as a matrix according to a sparse sampling model, thereby obtaining a binary description matrix for each feature point of the input image; and performing matching calculations between the binary description matrices of the feature points of the input image and the binary description matrices of the feature points in a feature sample database, thereby obtaining an image registration result. Correspondingly, the invention also discloses an image registration device, an augmented reality system, and a mobile terminal. The invention solves the problem that existing image registration techniques cannot achieve accurate, real-time image matching on mobile terminals, and provides an image registration method and device with a small memory footprint and high execution efficiency.

Description

Image registration method and device, and augmented reality system
Technical field
The present invention relates to the field of image processing, and in particular to an image registration method and device, an augmented reality system using the method and device, and a mobile terminal comprising the augmented reality system.
Background art
Mobile augmented reality, that is, augmented reality based on mobile terminals, is a research direction arising from the combination of augmented reality and mobile computing, and has been one of the most active topics in human-computer interaction in recent years. Mobile augmented reality retains the essence of traditional augmented reality: using computer graphics and visualization techniques, virtual information is "seamlessly" fused in real time with the scene presented by the real environment, so that the virtual information supplements and enhances the real scene. At the same time, the combination with mobile-terminal platforms exploits the "mobility" of augmented reality to the greatest extent, giving users a brand-new sensory experience and interaction mode entirely different from traditional computer platforms.
In mobile augmented reality, image registration is a technical difficulty. Most existing image registration techniques are designed for ordinary computers. If such a technique is grafted directly onto a resource-constrained system such as a mobile intelligent terminal (e.g. a smartphone or tablet computer), the differences in system architecture and performance between mobile intelligent terminals and ordinary computers mean that the transplanted algorithm cannot meet the real-time and accuracy requirements of system operation.
For example, the image registration technique described in "Daniel Wagner, Gerhard Reitmayr, Alessandro Mulloni, et al. Pose Tracking from Natural Features on Mobile Phones. 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 125-134, 2008" uses an improved SIFT algorithm to achieve image registration, and specifically comprises the following steps:
Step A: use the FAST algorithm to perform corner detection on the image and extract image feature points. FAST is a corner detection algorithm proposed by Edward Rosten and Tom Drummond: if, among the 16 pixels on a circle around a point P, there are at least 12 contiguous pixels whose gray values all differ from the gray value of P by more than a threshold t, then P is judged to be a corner.
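The FAST corner criterion just described can be sketched as follows. This is a minimal illustrative check under stated assumptions (the helper names, threshold value, and run-counting code are my own; real FAST implementations are heavily optimized with early-exit tests):

```python
# 16 offsets of the radius-3 Bresenham circle around a pixel, in order.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """img: 2-D list of gray values; (x, y): candidate pixel; t: threshold.
    Returns True if n contiguous circle pixels are all brighter than
    center+t or all darker than center-t."""
    center = img[y][x]
    # 1 if the circle pixel is brighter than center+t, -1 if darker, else 0.
    signs = []
    for dx, dy in CIRCLE:
        v = img[y + dy][x + dx]
        signs.append(1 if v > center + t else (-1 if v < center - t else 0))
    # Look for n contiguous equal nonzero signs, wrapping around the circle.
    doubled = signs + signs
    run, prev = 0, 0
    for s in doubled:
        if s != 0 and s == prev:
            run += 1
        elif s != 0:
            run = 1
        else:
            run = 0
        prev = s
        if run >= n:
            return True
    return False
```

A flat patch yields no corner, while a pixel darker than its whole surrounding circle does.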
Step B: describe the features of the feature points with the SIFT (Scale-Invariant Feature Transform) algorithm, as follows:
First, the principal direction of each feature point is determined to guarantee its rotation invariance: the gradient direction and gradient magnitude are computed for every pixel in the neighborhood of the feature point, as in Fig. 1(a). These gradient values form an orientation histogram, as shown in Fig. 1(b). Formula (1-1) gives the gradient magnitude of a neighborhood point L(x, y), and formula (1-2) gives its gradient direction. The values computed by these two formulas are placed, indexed by direction θ, into a 36-dimensional histogram in which each bin covers 10 degrees. The peak of the histogram is the principal direction of the feature point.
$$m(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2+\big(L(x,y+1)-L(x,y-1)\big)^2}\qquad(1\text{-}1)$$
$$\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}\qquad(1\text{-}2)$$
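Formulas (1-1) and (1-2) can be sketched directly. One assumption here: a quadrant-aware arctangent (`atan2`) is used in place of the plain arctangent ratio of (1-2), which is common practice but not something the text specifies:

```python
import math

def gradient(L, x, y):
    """Gradient magnitude and orientation (degrees, in [0, 360)) of the
    smoothed image L at (x, y), using the central differences of (1-1)/(1-2)."""
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    m = math.hypot(dx, dy)                            # formula (1-1)
    theta = math.degrees(math.atan2(dy, dx)) % 360.0  # formula (1-2), atan2
    return m, theta

def orientation_bin(theta):
    # Each of the 36 histogram bins covers 10 degrees.
    return int(theta // 10) % 36
```

On a horizontal gray ramp the orientation falls in bin 0; on a vertical ramp, in bin 9 (90 degrees).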
Then, taking the feature point as the center, 3×3 subregions are established, each subregion being a 5×5 pixel matrix. Each subregion is described by gradient values in 4 directions, the weight of each pixel's gradient value being determined by the distance from that pixel to the subregion center: the larger the distance, the smaller the weight. Each subregion is thus described by a 4-dimensional vector, and the 9 subregions together form a 36-dimensional descriptor of the feature point, as in Fig. 2. Alternatively, 4×4 subregions may be established, each described by gradient values in 8 directions, so that a feature point is described by a 128-dimensional vector (as in Fig. 3).
Step C: construct a spill forest (Spill Forest) and perform feature matching.
The root of every spill tree in the constructed spill forest comprises 50 to 80 leaf nodes. The vector of a feature point is searched and matched in every tree, and each tree returns the leaf node with the highest matching degree; the sums of squared differences between the feature-point vector and these leaf nodes are compared, and the minimum is judged to be the successful match.
Step D: eliminate bad points.
Although the SIFT descriptor is very powerful, it still produces bad points, which must be excluded before pose estimation. Bad points are first excluded according to the principal direction of the feature points, removing feature points whose gradient direction differs greatly from the principal direction. The remaining feature points are then subjected to a geometric test: all feature points are sorted by matching degree, and starting from the two points with the highest matching degree, the straight line through the two points is determined; if most of the remaining feature points lie on the same side of the line, the two points are judged good, otherwise one of them is a bad point. About 30 such tests are performed to exclude all bad points. Finally, a homography matrix is used to exclude the remaining bad points.
Step E: perform pose estimation.
The image registration technique described above has the following shortcomings:
1) Because the FAST algorithm is used to extract image feature points, the scale and direction information of the original SIFT is lost, so the feature points of the input image must be described in real time at different scales, which occupies several times the memory space.
2) Constructing the spill forest occupies a large amount of memory; Fig. 4 shows the memory occupied by spill forests of different scales built on a typical data set.
3) Although computing the principal direction gives the feature points direction invariance, it adds a certain amount of running time.
4) In the SIFT algorithm each feature point is described by a 128-dimensional or 36-dimensional feature vector; the information redundancy is high and the space complexity of the algorithm is high.
5) Excluding bad points significantly increases the running time.
The conventional image registration technique described above may achieve good registration results on an ordinary computer, but mobile terminals such as smartphones are constrained in computing performance and memory space, so the technique is no longer applicable on a mobile terminal. The main manifestations are: the computation is complex, which causes the system response time to increase sharply; and the data volume is huge, which makes memory usage high. Therefore, simply transplanting the algorithm cannot achieve accurate, real-time registration of images on a mobile terminal.
Summary of the invention
The object of the present invention is to provide an image registration method and device, an augmented reality system using the method and device, and a mobile terminal comprising the augmented reality system, thereby solving the problem that conventional image registration techniques are not suitable for accurate, real-time image matching on mobile terminals, and providing an image registration method and device with a small memory footprint and high execution efficiency.
To achieve the above object, the present invention provides an image registration method, comprising:
detecting and extracting feature points from an input image;
describing each extracted feature point as a matrix according to a sparse sampling model, thereby obtaining a binary description matrix for each feature point of the input image, wherein the sparse sampling model is an N*N pixel array and N is an integer greater than or equal to 2 and less than or equal to 64; preferably N ranges from 5 to 9, and N = 8 is optimal; and
performing matching calculations between the binary description matrix of each feature point of the input image and the binary description matrix of each feature point in a feature sample database, thereby obtaining an image registration result.
Describing each extracted feature point as a matrix according to the sparse sampling model to obtain the binary description matrix of each feature point of the input image further comprises:
performing sparse sampling on the pixels in the neighborhood of each feature point extracted from the input image, obtaining an N*N pixel array;
extracting the gray values of the N*N pixel array of each feature point, obtaining an N*N gray matrix;
performing grayscale quantization of K different orders on the gray matrix of each feature point, the quantization matrix of each order being described by one N²-dimensional vector, wherein K is an integer greater than or equal to 4 and less than or equal to 10; and
dividing the whole gray range from white to black into K sub-intervals, and, according to whether the quantized gray value of each pixel of the N*N pixel array of the feature point falls into each gray sub-interval at the various orders, describing each feature point of the input image by an N²×K matrix, thereby obtaining the binary description matrix of each feature point of the input image; in practice, K is preferably 5 or 6.
The binary description matrix of each feature point of the input image may specifically take the following form:
$$R=\begin{pmatrix}R_{0,0}&R_{0,1}&\cdots&R_{0,K-1}\\R_{1,0}&R_{1,1}&\cdots&R_{1,K-1}\\\vdots&\vdots&\ddots&\vdots\\R_{N^2-1,0}&R_{N^2-1,1}&\cdots&R_{N^2-1,K-1}\end{pmatrix}$$
wherein each row $R_{i,0}\,R_{i,1}\cdots R_{i,K-1}$ indicates whether the corresponding pixel i falls into each gray sub-interval, and
$$R_{i,j}=\begin{cases}1,&B_j<I_{i,j}<B_{j+1}\\0,&I_{i,j}\le B_j\ \text{or}\ I_{i,j}\ge B_{j+1}\end{cases}$$
wherein $I_{i,j}$ is the gray value of pixel i of the sparse sampling pixel array of the input image under the j-th quantization order, and $B_j$ is the minimum gray value of gray sub-interval j.
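The description steps above can be sketched as follows. This is one plausible reading of the text, not the patent's implementation: the K quantization orders are collapsed into direct sub-interval membership of the raw gray value, and the sampling stride and gray range are invented for illustration:

```python
def binary_descriptor(gray, cx, cy, N=8, K=5, step=2, max_gray=256):
    """Build an N^2 x K binary description matrix for the feature point at
    (cx, cy) of the gray image (2-D list of gray values)."""
    # Sparse-sample an N*N pixel array around the feature point, taking
    # every `step`-th pixel instead of a dense patch.
    half = (N - 1) * step // 2
    samples = [gray[cy - half + r * step][cx - half + c * step]
               for r in range(N) for c in range(N)]
    # Sub-interval boundaries B_0..B_K over the whole gray range.
    B = [j * max_gray / K for j in range(K + 1)]
    # Row i, column j is 1 iff B_j <= I_i < B_{j+1}, cf. the formula above.
    return [[1 if B[j] <= I < B[j + 1] else 0 for j in range(K)]
            for I in samples]
```

Under this reading, each row of the matrix contains exactly one 1, marking the sub-interval the pixel's gray value falls into.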
Before the step of detecting and extracting feature points from the input image, the method further comprises:
detecting and extracting feature points from a sample picture;
describing each extracted feature point as a matrix according to the sparse sampling model, thereby obtaining a binary description matrix for each feature point of the sample picture, wherein the sparse sampling model is an N*N pixel array; and
storing the binary description matrix of each feature point of the sample picture in the feature sample database.
Describing each extracted feature point as a matrix according to the sparse sampling model to obtain the binary description matrix of each feature point of the sample picture further comprises:
performing sparse sampling on the pixels in the neighborhood of each feature point extracted from the sample picture, obtaining an N*N pixel array;
extracting the gray values of the N*N pixel array of each feature point, obtaining an N*N gray matrix;
performing grayscale quantization of K different orders on the gray matrix of each feature point, the quantization matrix of each order being described by one N²-dimensional vector, wherein K is an integer greater than or equal to 4 and less than or equal to 10; and
dividing the whole gray range from white to black into K sub-intervals, and, according to whether the quantized gray value of each pixel of the N*N pixel array of the feature point falls into each gray sub-interval at the various orders, describing each feature point of the sample picture by an N²×K matrix, thereby obtaining the binary description matrix of each feature point of the sample picture.
The binary description matrix of each feature point of the sample picture may specifically take the following form:
$$D=\begin{pmatrix}D_{0,0}&D_{0,1}&\cdots&D_{0,K-1}\\D_{1,0}&D_{1,1}&\cdots&D_{1,K-1}\\\vdots&\vdots&\ddots&\vdots\\D_{N^2-1,0}&D_{N^2-1,1}&\cdots&D_{N^2-1,K-1}\end{pmatrix}$$
wherein each row $D_{i,0}\,D_{i,1}\cdots D_{i,K-1}$ indicates whether the corresponding pixel i falls into each gray sub-interval, and
$$D_{i,j}=\begin{cases}1,&B_j<G_{i,j}<B_{j+1}\\0,&G_{i,j}\le B_j\ \text{or}\ G_{i,j}\ge B_{j+1}\end{cases}$$
wherein $G_{i,j}$ is the gray value of pixel i of the sparse sampling pixel array of the sample image under the j-th quantization order, and $B_j$ is the minimum gray value of gray sub-interval j.
Performing matching calculations between the binary description matrix of each feature point of the input image and the binary description matrix of each feature point in the feature sample database to obtain the image registration result further comprises:
performing an AND operation between said $D_{i,j}$ and said $R_{i,j}$, which are elements of the binary description matrices, thereby obtaining the dissimilarity between each feature point of the input image and each feature point in the feature sample database;
when the dissimilarity is less than a set threshold, judging that the feature points match successfully; and
when the number of successfully matched feature points between the input image and a certain sample picture in the feature sample database is greater than a set threshold, judging that the input image and that sample picture are registered successfully.
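The matching step above can be sketched as follows, under the assumption that each pixel-row of a descriptor holds exactly one 1: AND-ing two descriptors and counting the surviving 1s gives the number of agreeing pixels, so dissimilarity is the number of pixels whose gray sub-interval differs. The threshold values are invented for illustration:

```python
def dissimilarity(D, R):
    """D, R: binary description matrices (lists of 0/1 rows) of equal shape."""
    agree = sum(d & r for drow, rrow in zip(D, R) for d, r in zip(drow, rrow))
    return len(D) - agree      # pixels whose sub-interval bits disagree

def match_image(input_descs, sample_descs, point_thresh=5, count_thresh=8):
    """input_descs: descriptors of the input image's feature points;
    sample_descs: descriptors of one sample picture. Registration succeeds
    when enough feature points match below point_thresh."""
    matches = sum(
        1 for R in input_descs
        if any(dissimilarity(D, R) < point_thresh for D in sample_descs))
    return matches > count_thresh
```

Because the AND and the sum operate on bits, this comparison is far cheaper than the squared-difference sums used for SIFT vectors.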
Preferably, the image registration method further comprises: establishing a feature index for each feature point in the feature sample database, each feature index corresponding to one feature index value, and building an index tree from all the feature points in the feature sample database. Before the matching calculations between the binary description matrix of each feature point of the input image and the binary description matrices of the feature points in the feature sample database, the method then further comprises:
establishing a feature index for each feature point of the input image;
searching the index tree for the feature index value of each feature point of the input image; and
if an identical feature index value is found in the index tree, performing the matching calculation between the binary description matrix of that feature point of the input image and the binary description matrix of the corresponding feature point in the feature sample database.
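The lookup flow above can be sketched with a plain dict standing in for the B+ index tree (a simplification for brevity; the helper names and the index function are placeholders):

```python
from collections import defaultdict

def build_index(sample_points, index_fn):
    """index_fn maps a feature point to its feature index value."""
    tree = defaultdict(list)        # dict stands in for the B+ index tree
    for p in sample_points:
        tree[index_fn(p)].append(p)
    return tree

def candidates(tree, input_point, index_fn):
    # Only sample feature points sharing the index value are sent on to the
    # full description-matrix comparison; all others are skipped outright.
    return tree.get(index_fn(input_point), [])
```

The effect is that each input feature point is compared against a small bucket instead of the whole database.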
Preferably, establishing a feature index for each feature point of the feature sample database or the input image comprises:
randomly choosing 5 to 21 pixels, including the feature point itself, from the sparse sampling model of the feature point as index points;
if the gray value of an index point is greater than the average gray value of all pixels in the sparse sampling model, recording the value of that index point as 1, and otherwise as 0; and
quantizing the sequence of 5 to 21 index points into a feature index value of 5 to 21 bits.
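The index construction above can be sketched as follows. The seeded random choice of index pixels is an assumption (the same positions must of course be reused for every feature point so that index values are comparable), and treating the middle sample as the feature point itself is illustrative:

```python
import random

def feature_index(samples, n_bits=8, seed=42):
    """samples: flat list of the N*N sparse-sampled gray values.
    Returns an n_bits-bit integer index value: each chosen pixel contributes
    bit 1 iff its gray value exceeds the mean gray of the whole array."""
    rng = random.Random(seed)
    positions = [len(samples) // 2]                  # the feature point itself
    positions += rng.sample(range(len(samples)), n_bits - 1)
    mean = sum(samples) / len(samples)
    value = 0
    for p in positions:
        value = (value << 1) | (1 if samples[p] > mean else 0)
    return value
```

A flat patch indexes to 0, and the same sample array always yields the same value, which is what lets equal index values be matched in the tree.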
Preferably, the index tree is a B+ tree structure or a variant of the B+ tree structure.
Preferably, in the step of detecting and extracting feature points from the input image, if more than M feature points are detected, M feature points are chosen at random and extracted, wherein M is an integer greater than or equal to 100 and less than or equal to 700 (M is preferably 200 to 400); if fewer than M feature points are detected, the detected feature points are extracted, an image pyramid with a scale factor of 2 to 6 is built for the input image (generally a scale factor of 2 or 4 is chosen), and feature points are extracted from the next layer of the image pyramid until M feature points have been extracted.
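The capped extraction just described can be sketched as follows, with `detect` as a placeholder for any corner detector (e.g. FAST) and a naive subsampling downscale standing in for proper pyramid construction:

```python
import random

def downscale(img, factor):
    # Naive pyramid layer: keep every `factor`-th row and column.
    return [row[::factor] for row in img[::factor]]

def extract_features(img, detect, M=300, scale_factor=2, max_layers=4):
    """Collect up to M feature points, descending the image pyramid while
    fewer than M have been found. May return fewer than M if every layer
    is exhausted."""
    points, layer = [], img
    for _ in range(max_layers):
        found = detect(layer)
        if len(found) >= M - len(points):
            # Enough points on this layer: take a random subset to reach M.
            points += random.sample(found, M - len(points))
            break
        points += found
        layer = downscale(layer, scale_factor)   # next pyramid layer
    return points
```

With a detector that returns 10 points per layer and M = 25, three layers are visited and exactly 25 points come back.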
Accordingly, the present invention also provides an image registration device, comprising:
a feature point extraction module, for detecting and extracting feature points from an input image;
a matrix description module, for describing each extracted feature point as a matrix according to a sparse sampling model, thereby obtaining a binary description matrix for each feature point of the input image, wherein the sparse sampling model is an N*N pixel array and N is an integer greater than or equal to 2 and less than or equal to 64; and
a matching calculation module, for performing matching calculations between the binary description matrix of each feature point of the input image and the binary description matrix of each feature point in a feature sample database, thereby obtaining an image registration result.
The matrix description module comprises:
a sparse sampling unit, for performing sparse sampling on the pixels in the neighborhood of each feature point extracted by the feature point extraction module, obtaining an N*N pixel array;
a gray value extraction unit, for extracting gray values from the N*N pixel array of each feature point obtained by the sparse sampling unit, obtaining an N*N gray matrix;
a quantization unit, for performing grayscale quantization of K different orders on the gray matrix of each feature point obtained by the gray value extraction unit, describing the quantization matrix of each order by one N²-dimensional vector, wherein K is an integer greater than or equal to 4 and less than or equal to 10; and
a description matrix generation unit, which divides the whole gray range from white to black into K sub-intervals and, according to whether the quantized gray value of each pixel of the N*N pixel array of the feature point falls into each gray sub-interval at the various orders, describes each feature point of the input image by an N²×K matrix, thereby obtaining the binary description matrix of each feature point of the input image.
The image registration device further comprises a storage module for storing the feature sample database, in which the binary description matrices of the feature points of the sample pictures are stored.
The matching calculation module comprises:
a feature point dissimilarity calculation unit, for performing an AND operation between the elements of the binary description matrix of each feature point of the input image and the elements of the binary description matrices of the feature points in the feature sample database, thereby obtaining the dissimilarity between each feature point of the input image and each feature point in the feature sample database;
a feature point matching unit, for judging whether feature points match successfully according to the dissimilarity calculated by the feature point dissimilarity calculation unit; and
an image matching unit, for judging whether the input image and a sample picture are registered successfully according to the number of successfully matched feature points between the input image and that sample picture in the feature sample database.
Preferably, the storage module is also used for storing the feature indexes of all feature points in the feature sample database, each feature index corresponding to one feature index value, all feature index values being stored in the storage module in the form of an index tree.
The image registration device then further comprises:
an index generation unit, for establishing a feature index for each feature point of the input image extracted by the feature point extraction module; and
a search unit, for finding, by searching the index tree according to the feature index value of each feature point of the input image, the sample-picture feature points with the same feature index value, and sending the binary description matrix of that input image feature point and the binary description matrix of the corresponding sample feature point to the matching calculation module for matching calculation.
Preferably, the feature point extraction module comprises a detection unit, a judging unit, an extraction unit, and a pyramid building unit, wherein:
the detection unit is used for performing feature point detection on the input image;
the judging unit is used for judging whether the number of feature points detected by the detection unit is greater than M; when the number of detected feature points is greater than M, the judging unit orders the extraction unit to choose M feature points at random from all the detected feature points and extract them, wherein M is an integer greater than or equal to 100 and less than or equal to 700; when the number of detected feature points is less than M, the judging unit orders the extraction unit to extract the detected feature points, orders the pyramid building unit to build an image pyramid with a scale factor of 2 to 6 for the input image, and orders the extraction unit to extract feature points from the next layer of the image pyramid until M feature points have been extracted.
Accordingly, the present invention also provides an augmented reality system, comprising a camera assembly, an image format conversion assembly, an image registration assembly, and a virtual-real fusion assembly, wherein:
the camera assembly is used for capturing the scene image shot by the camera;
the image format conversion assembly is used for converting the image captured by the camera assembly into an RGB image and a grayscale image;
the image registration assembly is the image registration device described above, used for registering the image captured by the camera assembly against the sample pictures in the sample database; and
the virtual-real fusion assembly is used for fusing the RGB image converted by the image format conversion assembly with the virtual information mapped from the sample picture registered by the image registration device, completing the rendering and presentation of the graphics.
Accordingly, the present invention also provides a mobile terminal comprising the augmented reality system described above.
Compared with the prior art, the present invention has the following beneficial effects:
1) The present invention describes feature points as matrices through a sparse sampling model, obtaining binary description matrices of the feature points. This sparse sampling model and binary feature description method significantly reduce the information redundancy of the feature description, so that the memory space occupied by feature descriptions drops significantly.
2) Because the features are described in binary form, the present invention adopts the more efficient logical AND operation when performing feature matching, which effectively reduces the running time of feature matching.
3) Building a B+ tree over the feature indexes solves the problem that the spill-tree data structure occupies a large amount of memory space.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a schematic diagram of principal-direction selection for a feature point in the prior art, with the corresponding orientation histogram;
Fig. 2 is a schematic diagram of the feature point description model in the prior art;
Fig. 3 is a schematic diagram of the gradient-direction description of a feature point in the prior art;
Fig. 4 is a schematic diagram of the memory occupied by spill forests of different scales built on a typical data set;
Fig. 5 is the first schematic flowchart of the image registration method in an embodiment of the present invention;
Fig. 6 is a schematic flowchart of the process of building the feature sample database in an embodiment of the present invention;
Fig. 7 is a schematic diagram of a feature point sparse sampling array in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the sparse sampling array used to build an index for a feature point in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the B+ index tree storage structure in an embodiment of the present invention;
Fig. 10 is the first schematic structural diagram of the image registration device of an embodiment of the present invention;
Fig. 11 is the second schematic structural diagram of the image registration device of an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of the augmented reality system of an embodiment of the present invention;
Fig. 13 is a schematic workflow diagram of the augmented reality system of an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Because of the differences in system architecture and performance between ordinary computers and mobile intelligent terminals, an image registration algorithm suited to ordinary computers cannot meet the real-time and accuracy requirements of system operation if it is simply transplanted onto a mobile intelligent terminal. The present invention mainly addresses the insufficient real-time performance of natural feature matching in augmented reality based on mobile intelligent terminals, achieving fast and accurate detection and description of natural features on resource-constrained systems such as mobile intelligent terminals (e.g. smartphones and tablet computers).
Referring to Fig. 5, the first schematic flowchart of the image registration method in an embodiment of the present invention, the image registration method in this embodiment comprises the following steps:
S101: detect and extract feature points from the input image.
The feature points may be detected by the FAST corner detection algorithm, or of course by any other suitable detection algorithm. In addition, in this step the input image is preferably a grayscale image; if the input image is not a grayscale image, it should first be converted into a grayscale image before feature point detection and extraction.
S102: describe each extracted feature point as a matrix according to the sparse sampling model, obtaining the binary description matrix of each feature point of the input image, wherein the sparse sampling model is an N*N pixel array and N is an integer greater than or equal to 2 and less than or equal to 64. It should be noted that results are better when N ranges from 5 to 9 (see Fig. 7 for the feature point sparse sampling pixel array when N = 8).
S103: perform matching calculations between the binary description matrix of each feature point of the input image and the binary description matrix of each feature point in the feature sample database, obtaining the image registration result. The feature sample database is established before image registration is performed; that is, before step S101 the method also comprises establishing the feature sample database, the specific establishment of which is described in detail later.
The image registration method in this embodiment is applicable to all applications that require image registration, and is particularly suitable for image registration on mobile terminals. The flow of the image registration method in the embodiment of the present invention has been summarized above; each step is described in detail below with concrete examples.
First, how the feature sample database is built:
Vision-based tracking registration requires a feature sample database with a huge amount of data, containing the natural features of sample pictures of every trackable target at various viewing angles. The sample pictures needed to build this feature sample database can be obtained by photographing the real target from multiple angles, or by applying affine transformations of different scales and rotations to a reference image. Since photographing the real target from multiple angles is comparatively cumbersome and cannot easily cover all viewing angles, it is preferable to obtain the target's sample set by applying affine transformations to the reference image. In addition, adding random noise and distortion to the sample pictures makes the trained features more robust.
Building a feature sample database for a target requires feature samples at every viewing angle of the target (one viewing angle of the target corresponds to one affine transformation of the reference image). However, the range of affine transformations of the target is large, and matching features against the whole range at once has high time complexity, so the whole range of viewpoint variation can be divided into several subsets, each covering a small range of viewpoint change, and stored as a tree. Feature detection performs corner detection on each affine transformation subset separately, and the position of each corner in the reference image is obtained by the inverse of the affine transformation. Correspondingly, the feature sample database is also divided into relatively independent subsets by viewing angle. All images in a subset jointly construct the feature subset: new feature points found in the currently detected image are added to the feature subset, and feature detection then proceeds to the next image in the subset. When all images in the subset have been processed, the n feature points with the highest repetition rate are selected as the feature set of that viewpoint subset (a higher repetition rate means a more stable feature point). When implementing the feature samples in practice, considering the limited memory of smartphones, feature subsets are built for four directions of the reference image, and affine transformations then generate all images in each feature subset; in this way good robustness to affine change is retained while the feature samples are kept at an order of magnitude suitable for a smartphone.
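The subset construction described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: corner detection itself is represented by precomputed detections, and the helper names `inverse_affine` and `select_stable_points` are hypothetical. Each warped sample carries its affine transform (A, t); detected corners are mapped back to reference coordinates through the inverse transform, and the n positions re-detected most often are kept as the stable feature set.

```python
from collections import Counter

def inverse_affine(A, t, p):
    """Map a corner detected in a warped image back to reference coordinates.
    A is a 2x2 affine matrix and t a translation; solves A*x + t = p for x."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    bx, by = p[0] - t[0], p[1] - t[1]
    x = ( A[1][1] * bx - A[0][1] * by) / det
    y = (-A[1][0] * bx + A[0][0] * by) / det
    return (round(x), round(y))

def select_stable_points(detections, n):
    """detections: one (A, t, corners) triple per warped sample picture.
    Returns the n reference positions detected most often, i.e. the
    highest-repetition-rate (most stable) feature points."""
    counts = Counter()
    for A, t, corners in detections:
        for p in corners:
            counts[inverse_affine(A, t, p)] += 1
    return [pt for pt, _ in counts.most_common(n)]
```

In a real pipeline the corner lists would come from running FAST on each affine-warped (and noise-perturbed) image of the viewpoint subset.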
Referring to Fig. 6, a schematic flowchart of the process of building the feature sample database in an embodiment of the present invention, the process comprises the following steps:
Step S201: performing feature point detection and extraction on the sample pictures; the sample feature points extracted in this step are the several feature points with the highest repetition rate, selected after all images in a sample picture subset have been processed as described above.
Step S202: describing each extracted feature point as a matrix according to the sparse sampling model, to obtain the binary description matrix of each feature point of the sample pictures, the sparse sampling model being an N×N pixel array (within one embodiment, N always takes the same value as in the aforementioned step S102; the binary matrix description of sample picture feature points is identical to that of input image feature points, except that the assignment is reversed).
Specifically, in step S202, describing each feature point of the sample pictures as a binary matrix further comprises the following steps:
A1: sparsely sampling the pixels in the neighborhood of each feature point extracted from the sample picture, to obtain an N×N pixel array G; for N=8, the 8×8 sparse sampling pixel array G may take the form shown in Fig. 7 (the diamond-shaped hole in the middle represents the feature point).
A2: extracting gray values from the N×N pixel array of each feature point, to obtain an N×N gray matrix;
A3: quantizing the gray matrix of each feature point at K different quantization levels, and describing the quantized matrix of each level with an N²-dimensional vector, where K is an integer greater than or equal to 4 and less than or equal to 10, preferably 5 or 6.
A4: dividing the whole gray range from white to black into K subranges and, according to whether each pixel of the feature point's N×N pixel array falls into each gray subrange at the various quantization levels, describing each feature point of the sample picture with an N²×K matrix, to obtain the binary description matrix D of each feature point of the sample picture. The binary description matrix D may specifically take the following form:
D = | D_{0,0}      D_{0,1}      …  D_{0,K-1}    |
    | D_{1,0}      D_{1,1}      …  D_{1,K-1}    |
    | ⋮                                          |
    | D_{N²-1,0}   D_{N²-1,1}   …  D_{N²-1,K-1} |
where each row D_{i,0} D_{i,1} … D_{i,K-1} indicates whether the corresponding pixel i falls into each gray subrange, and

D_{i,j} = 1, if B_j < G_{i,j} < B_{j+1}
D_{i,j} = 0, if G_{i,j} ≤ B_j or G_{i,j} ≥ B_{j+1}

where G_{i,j} is the gray value of pixel i in the sample image's sparse sampling pixel array at the j-th quantization level, and B_j is the minimum gray value of gray subrange j. With this description each pixel requires K bits, so the description of the N×N pixel array occupies K·N²/8 bytes; adding the 4 bytes occupied by the position of the feature point in the reference image, the description of each feature point occupies K·N²/8 + 4 bytes of storage.
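Steps A1 to A4 can be sketched as follows. This is a minimal illustration, not the embodiment itself: it reuses the raw gray value of each sampled pixel at every quantization level, and simply takes the subrange boundaries B_j as uniform steps of 256/K; the function names are the author's.

```python
def binary_descriptor(gray, K=5):
    """Build the N^2 x K binary description matrix for one feature point.
    gray: flat list of the N*N sparsely sampled gray values (0..255) around
    the point. Following D[i][j] = 1 iff B_j < g_i < B_{j+1}, row i marks
    which of the K gray subranges sampled pixel i falls into (a simplified
    reading that uses the same raw gray value at every quantization level)."""
    step = 256.0 / K
    return [[1 if j * step < g < (j + 1) * step else 0 for j in range(K)]
            for g in gray]

def pack_columns(D):
    """View each column j as an N^2-bit integer d_j, the form used by the
    bitwise dissimilarity of formula (1-5)."""
    K = len(D[0])
    return [sum(D[i][j] << i for i in range(len(D))) for j in range(K)]
```

With N=8 and K=5 the packed columns occupy K·N²/8 = 40 bytes per feature point, consistent with the storage figure above.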
Step S203: storing the binary description matrix of each feature point of the sample pictures in the feature sample database.
The following describes how, in step S102, each feature point of the input image is described as a binary matrix:
The binary matrix description of input image feature points is identical to that of the sample pictures (with the assignment reversed), and comprises the following steps:
B1: sparsely sampling the pixels in the neighborhood of each feature point extracted from the input image, to obtain an N×N pixel array; for N=8, the 8×8 sparse sampling pixel array I may likewise take the form shown in Fig. 7.
B2: extracting gray values from the N×N pixel array of each feature point, to obtain an N×N gray matrix;
B3: quantizing the gray matrix of each feature point at K different quantization levels, and describing the quantized matrix of each level with an N²-dimensional vector, where K is an integer greater than or equal to 4 and less than or equal to 10, preferably 5 or 6.
B4: dividing the whole gray range from white to black into K subranges and, according to whether each pixel of the feature point's N×N pixel array falls into each gray subrange at the various quantization levels, describing each feature point of the input image with an N²×K matrix, to obtain the binary description matrix of each feature point of the input image. The binary description matrix R of each feature point of the input image may specifically take the following form:
R = | R_{0,0}      R_{0,1}      …  R_{0,K-1}    |
    | R_{1,0}      R_{1,1}      …  R_{1,K-1}    |
    | ⋮                                          |
    | R_{N²-1,0}   R_{N²-1,1}   …  R_{N²-1,K-1} |
where each row R_{i,0} R_{i,1} … R_{i,K-1} indicates whether the corresponding pixel i falls into each gray subrange, and

R_{i,j} = 1, if B_j < I_{i,j} < B_{j+1}
R_{i,j} = 0, if I_{i,j} ≤ B_j or I_{i,j} ≥ B_{j+1}

where I_{i,j} is the gray value of pixel i in the input image's sparse sampling pixel array at the j-th quantization level, and B_j is the minimum gray value of gray subrange j.
It should be noted that if an 8×8 pixel array is used when describing the sample picture feature points while building the feature sample database, yielding a 64×5 binary description matrix per sample feature point, then the same 8×8 pixel array must be used when describing the input image, yielding a 64×5 binary description matrix per input image feature point (that is, the description modes must be consistent).
The following describes how the registration computation of step S103 is performed:
The essence of the binary description matrix D of a sample picture feature point is to characterize whether each sampled point around the feature point falls into each gray subrange; in a matching input image, most sampled points should fall into the same gray subranges as the sampled points of the sample image. Therefore, during real-time image matching, the dissimilarity between the input image and a sample image is computed from the distribution of the sampled points over the gray subranges; in the matching process, the best-matching feature point among the feature samples is the one with the minimum dissimilarity. The main advantage of this simple dissimilarity counting algorithm is that it uses mainly logical operations and bit counting, so large amounts of data can be processed quickly.
Step S103 further comprises:
Combining the described D_{i,j} and R_{i,j} with an AND operation, to obtain the dissimilarity e between each feature point of the input image and each feature point in the feature sample database, as shown in formula (1-3):
e = Σ_{i,j} (D_{i,j} ⊗ R_{i,j})    (1-3)

Since each row of R contains only a single 1, formula (1-3) can also be written as

e = Σ_i ((D_{i,0} ⊗ R_{i,0}) ⊕ … ⊕ (D_{i,K-1} ⊗ R_{i,K-1}))    (1-4)

If each column of D and R is regarded as an N²-bit integer d_j, r_j, the logical operations can be simplified further, and the dissimilarity becomes the bit count (population count) of a single N²-bit integer:

e = bitcount((d_0 ⊗ r_0) ⊕ … ⊕ (d_{K-1} ⊗ r_{K-1}))    (1-5)
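Formula (1-5) can be sketched in a few lines. This is only one consistent reading of it: it assumes the "reversed assignment" noted in step S202 means the sample columns d_j store the complement (a bit is set when the pixel does not fall in subrange j), and it reads ⊗ as bitwise AND and ⊕ as bitwise OR; the function name is the author's.

```python
def dissimilarity(d_cols, r_cols):
    """e = bitcount((d_0 & r_0) | ... | (d_{K-1} & r_{K-1})), cf. (1-5).
    d_cols: columns of the sample descriptor with the reversed assignment
    (bit i set when sampled pixel i does NOT fall in that subrange);
    r_cols: columns of the input descriptor; each an N^2-bit integer.
    The result counts the pixels whose gray subrange disagrees."""
    acc = 0
    for d, r in zip(d_cols, r_cols):
        acc |= d & r
    return bin(acc).count("1")
```

Because the whole comparison is K word-level AND/OR operations plus one population count, it scales well to the large feature sample databases discussed below.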
When the dissimilarity is less than a set threshold (e.g. 2 to 200, preferably 10% of the total number of pixels in the sampling model), the feature points are judged to match successfully;
When the number of successfully matched feature points between the input image and a certain sample picture in the feature sample database is greater than a set threshold (e.g. 50 to 100), the input image is judged to be successfully registered to that sample picture.
The first stage of real-time matching performs FAST-9 feature point detection on the input image. Since the most stable FAST feature points of each viewpoint subset were already selected when the feature sample database was built, there is no need to extract very many feature points from the input image; experiments show that randomly selecting about 200 feature points gives tracking registration good robustness. Therefore, preferably, when feature points are detected and extracted in step S101, if more than M feature points are detected, M feature points are chosen at random, where M may be preset to an integer greater than or equal to 100 and less than or equal to 700 (M is preferably 200 to 400).
In addition, although the random distortion added when building the feature sample database increases robustness to image distortion to some extent, it cannot solve the problem that feature points may fail to be extracted from a distorted input image. To improve the accuracy of feature point detection on distorted images, and because reducing the image scale effectively weakens distortion, an image pyramid with a scale factor of 2 to 6 may be built for an input image from which fewer than a certain number of feature points were detected (in general practice a scale factor of 2 or 4 is chosen): feature points are first extracted from the original image, and then from the next level of the pyramid, until enough feature points have been extracted.
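The pyramid fallback can be sketched like this (hypothetical helper names; `detect` stands in for FAST corner detection, and block averaging is only one possible way to build a pyramid level):

```python
def downsample(img, factor):
    """Shrink a 2-D gray image (list of rows) by an integer scale factor,
    averaging each factor x factor block into one pixel."""
    h, w = len(img) // factor, len(img[0]) // factor
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) // (factor * factor)
             for x in range(w)] for y in range(h)]

def pyramid_detect(img, detect, M, factor=2, levels=3):
    """Run a detector over an image pyramid until at least M points are found.
    Points found at pyramid level L are scaled back to original coordinates."""
    pts = []
    for level in range(levels):
        scale = factor ** level
        pts += [(x * scale, y * scale) for (x, y) in detect(img)]
        if len(pts) >= M:
            break
        img = downsample(img, factor)
    return pts[:M]
```

Scaling detected coordinates back by factor^level keeps all points in the coordinate frame of the original input image.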
According to formula (1-5), computing the dissimilarity between an input image feature and a sample feature is highly efficient; in practical applications, however, the number of samples is very large, and the matching time rises linearly with it. A method is therefore needed to reduce the number of dissimilarity computations, that is, to avoid excessive useless computation.
Preferably, an embodiment of the present invention adopts an indexing method to solve this problem. Specifically: a feature index is built for each feature point in the feature sample database, forming an index tree (preferably, the index tree is a B+ tree or a variant of the B+ tree), each feature index corresponding to a feature index value; and a feature index is likewise built for each feature point of the input image.
Before the image registration of step S103 is performed, the method further comprises:
Searching the index tree, according to the feature index value of each feature point of the input image, for an identical feature index value;
If an identical feature index value is found in the index tree, matching the binary description matrix of that feature point of the input image against the binary description matrix of the corresponding feature point in the feature sample database.
The feature index is built in the same way for the feature sample database and for the input image, and may be built as follows:
Randomly choosing 5 to 21 pixels, including the feature point itself, from the feature point's sparse sampling model as index points; if the gray value of an index point is greater than the average gray value of all pixels in the sparse sampling model, the value of that index point is recorded as 1, otherwise as 0; the sequence of 5 to 21 index points is then quantized into a feature index value of 5 to 21 bits.
For example: 12 points are selected from the sampled points around the feature point and, together with the feature point itself, form 13 index points used to compute the index value, as shown in Fig. 8. These 12 sampled points keep a reasonable distance from the feature point, which gives good stability under rotation and scale change, while their spatial separation guarantees their mutual independence.
The index value algorithm is: if the gray value of an index point is greater than the average gray value of all sampled points, the value of that index point is 1, otherwise 0. The sequence of 13 index points is thus quantized into a 13-bit binary number (0 to 8191 in decimal). In essence, this characterizes to some extent the gray distribution of the sampled points around the feature point; since a roughly identical gray distribution is a necessary condition for two features to match, this condition narrows the search range of feature matching and improves the efficiency of the algorithm.
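The index-value computation can be sketched as follows (the function name is the author's, and the positions of the index points within the sampling model are passed in as a parameter rather than fixed to the layout of Fig. 8):

```python
def index_value(samples, index_ids):
    """Quantize the chosen index points into a compact feature index value.
    samples: gray values of all points in the sparse sampling model (the
    feature point included); index_ids: positions of the index points.
    A bit is 1 when that index point is brighter than the mean gray of all
    sampled points; 13 index points give a value in 0..8191."""
    mean = sum(samples) / len(samples)
    v = 0
    for bit, i in enumerate(index_ids):
        if samples[i] > mean:
            v |= 1 << bit
    return v
```

The same function indexes both database features and input image features, so equal index values can be used as the pre-filter before any dissimilarity is computed.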
Since an index has been built for every feature, a B+ tree structure can easily be built over all features of each feature sample subset according to their index values. The descriptors of all feature points are kept in the leaf nodes, which are stored in the external memory of the mobile device; the root node and intermediate nodes store only index values (each holding the mean of the index values of its leaf nodes) and are kept in main memory. This effectively copes with the limited-memory situation and reduces the waste of system resources.
A B+ tree is a multiway search tree in which a non-leaf node has as many subtree pointers as keys: the subtree pointer P[i] of a non-leaf node points to the subtree whose key values lie in [K[i], K[i+1]). All keys (that is, feature index values) are stored, in order, in the linked list formed by the leaf nodes; the non-leaf nodes are an index over the leaf nodes, and the leaf nodes form the data layer that stores the data.
Assuming 27 feature points, their third-order B+ tree structure is as shown in Fig. 9 (small values are used here for ease of calculation; the actual index values range from 0 to 8191).
The B+ tree feature search process is a keyword indexing process. To search for a key k in a third-order B+ tree T, the top-level call takes the form B+TREE-SEARCH(root[T], k). If k is in T, B+TREE-SEARCH returns an ordered pair (y, i) consisting of a node y and the subscript i for which key_i[y] = k; otherwise it returns NIL. Its pseudocode is as follows:
[B+TREE-SEARCH pseudocode, given as a figure in the original]
A third-order B+ tree of depth h holds at least 2·3^(h-1) keys; searching among 2·3^(h-1) feature index values with the B+ tree takes h lookups on average (the fewer the average lookups, the higher the search efficiency).
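In the spirit of B+TREE-SEARCH, a much-simplified in-memory sketch might look as follows (assumed class and function names; insertion, balancing and the external-memory leaf storage described above are omitted):

```python
import bisect

class BPlusNode:
    def __init__(self, leaf=False):
        self.leaf = leaf
        self.keys = []       # feature index values
        self.children = []   # subtrees, or feature descriptors at a leaf
        self.next = None     # leaf-level linked list

def bptree_search(node, k):
    """Analogue of B+TREE-SEARCH(root[T], k): descend to the leaf whose
    key range contains k, then return (leaf, i) if keys[i] == k, else None."""
    while not node.leaf:
        # child i covers keys in [keys[i], keys[i+1]); keys below keys[0]
        # fall through to the leftmost child
        i = bisect.bisect_right(node.keys, k) - 1
        node = node.children[max(i, 0)]
    if k in node.keys:
        return node, node.keys.index(k)
    return None
```

Only the small `keys` arrays of root and intermediate nodes need to stay resident; descriptors are touched only when a leaf hit occurs, matching the main-memory/external-memory split described above.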
Correspondingly, the present invention also provides an image registration device. Referring to Fig. 10, a first structural schematic diagram of the image registration device of an embodiment of the present invention, the image registration device comprises:
A feature point extraction module 1, for performing feature point detection and extraction on an input image;
A matrix description module 2, for describing each extracted feature point as a matrix according to a sparse sampling model, to obtain the binary description matrix of each feature point of the input image, the sparse sampling model being an N×N pixel array, where N is an integer greater than or equal to 2 and less than or equal to 64;
A matching computation module 3, for matching the binary description matrix of each feature point of the input image against the binary description matrices of the feature points in a feature sample database, to obtain the image registration result. The feature sample database is stored in a storage module 4 and stores the binary description matrix of each feature point of the sample pictures; the storage module 4 may be a module internal to the image registration device or an external storage device.
Referring to Fig. 11, a second structural schematic diagram of the image registration device of an embodiment of the present invention, it can be seen from Fig. 11 that the matrix description module 2 may further comprise:
A sparse sampling unit 21, for sparsely sampling the pixels in the neighborhood of each feature point extracted by the feature point extraction module 1, to obtain an N×N pixel array;
A gray value extraction unit 22, for extracting gray values from the N×N pixel array of each feature point obtained by the sparse sampling unit 21, to obtain an N×N gray matrix;
A quantization unit 23, for quantizing the gray matrix of each feature point obtained by the gray value extraction unit 22 at K different quantization levels, and describing the quantized matrix of each level with an N²-dimensional vector, where K is an integer greater than or equal to 4 and less than or equal to 10;
A description matrix generation unit 24, which divides the whole gray range from white to black into K subranges and, according to whether each pixel of the feature point's N×N pixel array falls into each gray subrange at the various quantization levels, describes each feature point of the input image with an N²×K matrix, to obtain the binary description matrix of each feature point of the input image.
The matching computation module 3 may further comprise:
A feature point dissimilarity computation unit 31, for performing an AND operation between the elements of the binary description matrix of each feature point of the input image and the elements of the binary description matrices of the feature points in the feature sample database, to obtain the dissimilarity between each feature point of the input image and each feature point in the feature sample database;
A feature point matching unit 32, for judging whether feature points match successfully according to the dissimilarity computed by the feature point dissimilarity computation unit 31;
An image matching unit 33, for judging whether the input image is successfully registered to a sample picture according to the number of successfully matched feature points between the input image and that sample picture in the feature sample database.
The feature point dissimilarity computation unit 31 can compute dissimilarities with high efficiency when the amount of sample data is small; in practical applications, however, the number of samples is very large, and the matching time rises linearly with it. A method is therefore needed to reduce the number of dissimilarity computations, that is, to avoid excessive useless computation.
An embodiment of the present invention therefore solves this problem with an indexing method. Preferably, the feature indexes of all feature points in the feature sample database are stored, each feature index corresponding to a feature index value; all feature index values are stored in the storage module 4 in the form of an index tree, the index tree being a B+ tree or a variant of the B+ tree.
In the embodiments that use an index, the image registration device further comprises:
An index generation unit 5, for building a feature index for each feature point of the input image extracted by the feature point extraction module 1;
A search unit 6, for searching the index tree stored in the storage module 4 according to the feature index value of each feature point of the input image, finding the sample picture feature points with the same feature index value, and sending the binary description matrix of that input image feature point together with the binary description matrix of the corresponding sample feature point to the matching computation module 3 for matching.
Since the most stable FAST feature points of each viewpoint subset were already selected when the feature sample database stored in the storage module 4 was built, there is no need to extract very many feature points from the input image. Experiments show that randomly selecting about 200 feature points gives tracking registration good robustness.
Therefore, preferably, the feature point extraction module 1 may further comprise a detection unit 11, a judgment unit 12, an extraction unit 13 and a pyramid building unit 14, wherein:
The detection unit 11 is for performing feature point detection on the input image;
The judgment unit 12 is for judging whether the number of feature points detected by the detection unit 11 is greater than M. When it judges that more than M feature points have been detected, it instructs the extraction unit 13 to choose M feature points at random from all the feature points detected by the detection unit 11, where M is an integer greater than or equal to 100 and less than or equal to 700. When the judgment unit 12 judges that fewer than M feature points have been detected, it instructs the extraction unit 13 to extract the feature points already detected, instructs the pyramid building unit 14 to build an image pyramid with a scale factor of 2 to 6 for the input image, and instructs the extraction unit 13 to extract feature points from the next level of the image pyramid, until M feature points have been extracted.
Correspondingly, the present invention also provides an augmented reality system. Referring to Fig. 12, a structural schematic diagram of the augmented reality system of an embodiment of the present invention, the system comprises a camera component 71, an image format conversion component 72, an image registration component 73 and a virtual-real fusion component 74, wherein:
The camera component 71 is for capturing the scene image shot by the camera;
The image format conversion component 72 is for converting the image captured by the camera component 71 into an RGB image and a grayscale image, the grayscale image being sent to the image registration component 73 for image registration;
The image registration component 73 is the image registration device shown in Fig. 10 or Fig. 11, and is for registering the image captured by the camera component against the sample pictures in the sample database, to obtain a homography matrix;
The virtual-real fusion component 74 is for fusing the RGB image converted by the image format conversion component 72 with the virtual information mapped from the sample picture to which the image registration component registered the input, completing the rendering and presentation of graphics. For a clearer explanation of the augmented reality system of the present invention, refer to Fig. 13, a schematic workflow diagram of the augmented reality system of an embodiment of the present invention.
Virtual-real fusion overlays virtual information, including three-dimensional models, text and pictures, onto the input image, completing graphics rendering and output.
In essence, the homography matrix is the mapping between the image coordinate system and the world coordinate system, where the image coordinate system is the two-dimensional coordinate system of the image output on the display, and the world coordinate system is the three-dimensional coordinate system whose origin is the center of the input image.
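Applying the homography is a generic operation rather than anything specific to this patent; a minimal sketch of projecting a point of the world (target-plane) coordinate system into image coordinates:

```python
def apply_homography(H, x, y):
    """Project a target-plane point (x, y) into image (display) coordinates
    through a 3x3 homography H, dividing out the homogeneous scale w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```

The virtual-real fusion component can warp the virtual content anchored to the registered sample picture through H in exactly this way before compositing it over the RGB frame.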
Correspondingly, the present invention also provides a mobile terminal, the mobile terminal comprising the above augmented reality system.
The image registration method and device disclosed by the present invention solve the problem that conventional image registration techniques cannot achieve accurate, real-time image matching on mobile terminals, and provide an image registration method and device with a small memory footprint and high execution efficiency.
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any appended claims, abstract and drawings) may, unless specifically stated otherwise, be replaced by an equivalent or alternative feature serving a similar purpose; that is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.
The present invention is not limited to the foregoing embodiments. The present invention extends to any new feature or any new combination disclosed in this specification, and to any new method or process step, or any new combination thereof, disclosed herein.

Claims (19)

1. An image registration method, characterized by comprising:
Performing feature point detection and extraction on an input image;
Describing each extracted feature point as a matrix according to a sparse sampling model, to obtain a binary description matrix of each feature point of the input image, the sparse sampling model being an N×N pixel array, where N is an integer greater than or equal to 2 and less than or equal to 64;
Matching the binary description matrix of each feature point of the input image against the binary description matrices of the feature points in a feature sample database, to obtain an image registration result.
2. The method according to claim 1, characterized in that said describing each extracted feature point as a matrix according to the sparse sampling model, to obtain the binary description matrix of each feature point of the input image, further comprises:
Sparsely sampling the pixels in the neighborhood of each feature point extracted from the input image, to obtain an N×N pixel array;
Extracting gray values from the N×N pixel array of each feature point, to obtain an N×N gray matrix;
Quantizing the gray matrix of each feature point at K different quantization levels, and describing the quantized matrix of each level with an N²-dimensional vector, where K is an integer greater than or equal to 4 and less than or equal to 10;
Dividing the whole gray range from white to black into K subranges and, according to whether each pixel of the feature point's N×N pixel array falls into each gray subrange at the various quantization levels, describing each feature point of the input image with an N²×K matrix, to obtain the binary description matrix of each feature point of the input image.
3. The method according to claim 2, characterized in that the binary description matrix of each feature point of the input image is specifically:

R = | R_{0,0}      R_{0,1}      …  R_{0,K-1}    |
    | R_{1,0}      R_{1,1}      …  R_{1,K-1}    |
    | ⋮                                          |
    | R_{N²-1,0}   R_{N²-1,1}   …  R_{N²-1,K-1} |

where each row R_{i,0} R_{i,1} … R_{i,K-1} indicates whether the corresponding pixel i falls into each gray subrange, and

R_{i,j} = 1, if B_j < I_{i,j} < B_{j+1}
R_{i,j} = 0, if I_{i,j} ≤ B_j or I_{i,j} ≥ B_{j+1}

where I_{i,j} is the gray value of pixel i in the input image's sparse sampling pixel array at the j-th quantization level, and B_j is the minimum gray value of gray subrange j.
4. The method according to any one of claims 1 to 3, characterized in that, before said performing feature point detection and extraction on the input image, the method further comprises:
Performing feature point detection and extraction on sample pictures;
Describing each extracted feature point as a matrix according to the sparse sampling model, to obtain the binary description matrix of each feature point of the sample pictures, the sparse sampling model being an N×N pixel array;
Storing the binary description matrix of each feature point of the sample pictures in the feature sample database.
5. The method according to claim 4, characterized in that said describing each extracted feature point as a matrix according to the sparse sampling model, to obtain the binary description matrix of each feature point of the sample pictures, further comprises:
Sparsely sampling the pixels in the neighborhood of each feature point extracted from the sample pictures, to obtain an N×N pixel array;
Extracting gray values from the N×N pixel array of each feature point, to obtain an N×N gray matrix;
Quantizing the gray matrix of each feature point at K different quantization levels, and describing the quantized matrix of each level with an N²-dimensional vector, where K is an integer greater than or equal to 4 and less than or equal to 10;
Dividing the whole gray range from white to black into K subranges and, according to whether each pixel of the feature point's N×N pixel array falls into each gray subrange at the various quantization levels, describing each feature point of the sample pictures with an N²×K matrix, to obtain the binary description matrix of each feature point of the sample pictures.
6. The method according to claim 5, characterized in that the binary description matrix of each feature point of the sample picture is specifically:

    D = [ D_{1,0}    D_{1,1}    ...  D_{1,K-1}
          D_{2,0}    D_{2,1}    ...  D_{2,K-1}
            ...        ...             ...
          D_{N²,0}   D_{N²,1}   ...  D_{N²,K-1} ]

wherein each row D_{i,0} D_{i,1} ... D_{i,K-1} indicates whether the corresponding pixel i falls into each grayscale sub-range, and

    D_{i,j} = 1, if B_j < G_{i,j} < B_{j+1}
    D_{i,j} = 0, if G_{i,j} ≤ B_j or G_{i,j} ≥ B_{j+1}

wherein G_{i,j} denotes the grayscale value of pixel i at the j-th quantization order in the sparse-sampling pixel array of the sample image, and B_j denotes the minimum grayscale value of grayscale sub-range j.
7. The method according to claim 6, characterized in that matching the binary description matrix of each feature point of the input image against the binary description matrices of the feature points in the feature sample database, to obtain an image registration result, further comprises:
performing an AND operation between said D_{i,j} and said R_{i,j}, to obtain the dissimilarity between each feature point of the input image and each feature point in the feature sample database;
when the dissimilarity is below a set threshold, judging the feature point match successful; and
when the number of successfully matched feature points between the input image and a certain sample picture in the feature sample database exceeds a set threshold, judging the registration of the input image with that sample picture successful.
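The matching step above can be sketched as below. The claim names an AND operation between the two binary matrices but does not define how the dissimilarity is normalized, so the formula used here (one minus the normalized overlap of set bits) and both threshold defaults are assumptions, not values from the patent.

```python
import numpy as np

def dissimilarity(d, r):
    """Dissimilarity between two binary description matrices D and R.

    Assumed normalization: 1 - |D AND R| / max(|D|, |R|, 1),
    so identical non-empty matrices have dissimilarity 0.
    """
    both = int(np.sum(d & r))
    return 1.0 - both / max(int(d.sum()), int(r.sum()), 1)

def register(image_descs, sample_descs, point_thresh=0.25, count_thresh=8):
    """Match input-image descriptors against one sample picture.

    A feature pair matches when its dissimilarity is below point_thresh;
    registration succeeds when more than count_thresh pairs match.
    Threshold values are illustrative placeholders.
    """
    matched = 0
    for d in image_descs:
        for r in sample_descs:
            if dissimilarity(d, r) < point_thresh:
                matched += 1
                break                  # count one match per input feature
    return matched > count_thresh
```

Because the matrices are binary, the inner comparison is a bitwise AND plus a popcount, which is what makes this descriptor cheap enough for mobile augmented reality.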
8. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
establishing a feature index for each feature point in the feature sample database, and building an index tree over all feature points in the feature sample database, each feature index corresponding to one feature index value; and
before matching the binary description matrix of each feature point of the input image against the binary description matrices of the feature points in the feature sample database to obtain the image registration result, the method further comprises:
establishing a feature index for each feature point of the input image;
searching the index tree, according to the feature index value of each feature point of the input image, for an identical feature index value; and
if an identical feature index value is found in the index tree, matching the binary description matrix of that feature point of the input image against the binary description matrix of the corresponding feature point in the feature sample database.
9. The method according to claim 8, characterized in that establishing a feature index for each feature point of the feature sample database or of the input image comprises:
randomly choosing, from the sparse sampling model of the feature point, 5 to 21 pixels including the feature point itself as index points;
if the grayscale value of an index point is greater than the mean grayscale value of all pixels in the sparse sampling model, recording the value of that index point as 1, and otherwise as 0; and
quantizing the sequence of 5 to 21 index points into a feature index value of 5 to 21 bits.
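The feature-index construction above (random index points compared against the model's mean gray, packed into a bit string) can be sketched as follows. The bit count of 8 (within the claimed 5–21 range), the fixed random seed that keeps the same pixels chosen for every feature point, and the plain dictionary standing in for the B+ index tree of claim 10 are all illustrative assumptions.

```python
import numpy as np

def feature_index(gray_matrix, n_bits=8, seed=42):
    """Pack n_bits index points into an integer feature index value.

    Each chosen pixel contributes a 1 when its gray value exceeds the
    mean gray of the whole sparse sampling model, else a 0. The fixed
    seed makes index values comparable across images -- an assumed
    detail the claims do not spell out.
    """
    flat = np.asarray(gray_matrix, dtype=float).ravel()
    picks = np.random.default_rng(seed).choice(flat.size, size=n_bits,
                                               replace=False)
    bits = (flat[picks] > flat.mean()).astype(int)
    value = 0
    for b in bits:                 # quantize the 0/1 sequence to n_bits
        value = (value << 1) | int(b)
    return value

def build_index(gray_matrices):
    """Dict stand-in for the claimed B+ index tree: descriptors are only
    compared when their feature index values collide."""
    index = {}
    for i, g in enumerate(gray_matrices):
        index.setdefault(feature_index(g), []).append(i)
    return index
```

Looking up a feature point then costs one index computation and one tree (here: dictionary) probe, rather than a full scan of the database.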
10. The method according to claim 8, characterized in that the index tree is a B+ tree structure or a variant of a B+ tree.
11. The method according to any one of claims 1 to 3, characterized in that, in the step of performing feature point detection and extraction on the input image, if more than M feature points are detected, M feature points are chosen at random for extraction, wherein M is an integer greater than or equal to 100 and less than or equal to 700; and
if fewer than M feature points are detected, the detected feature points are extracted, an image pyramid with a scale factor of 2 to 6 is built for the input image, and feature point extraction is performed on the next layer of the image pyramid until M feature points have been extracted.
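The extraction policy of claim 11 — cap at M randomly chosen points, or descend an image pyramid when a level yields too few — can be sketched as below. The gradient-magnitude detector, the pyramid depth, and plain subsampling as the scale-factor step are stand-ins for illustration only, not the patent's detector.

```python
import numpy as np

def detect(image):
    """Stand-in detector: pixels whose gradient magnitude is more than one
    standard deviation above the mean. A real system would use a corner
    detector such as FAST or Harris."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > mag.mean() + mag.std())
    return list(zip(ys.tolist(), xs.tolist()))

def extract_features(image, m=100, scale=2, max_levels=4, seed=0):
    """Extract up to m feature points, descending an image pyramid
    (scale factor within the claimed 2-6 range) whenever the points
    gathered so far fall short of m."""
    rng = np.random.default_rng(seed)
    features, level_img = [], image
    for level in range(max_levels):
        pts = detect(level_img)
        if len(features) + len(pts) >= m:
            # Enough points: randomly choose just the remainder needed.
            need = m - len(features)
            idx = rng.choice(len(pts), size=need, replace=False)
            features += [(level, pts[i]) for i in idx]
            break
        features += [(level, p) for p in pts]
        # Next pyramid level: subsample by the scale factor.
        level_img = level_img[::scale, ::scale]
    return features
```

Capping the count bounds the cost of the later matching stage, while the pyramid fallback keeps low-texture images from yielding too few points to register.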
12. An image registration device, characterized by comprising:
a feature point extraction module, configured to perform feature point detection and extraction on an input image;
a matrix description module, configured to describe each extracted feature point as a matrix according to a sparse sampling model, to obtain a binary description matrix for each feature point of the input image, the sparse sampling model being an N*N pixel array, wherein N is an integer greater than or equal to 2 and less than or equal to 64; and
a matching module, configured to match the binary description matrix of each feature point of the input image against the binary description matrices of the feature points in a feature sample database, to obtain an image registration result.
13. The device according to claim 12, characterized in that the matrix description module comprises:
a sparse sampling unit, configured to perform sparse sampling on the pixels in the neighborhood of each feature point extracted by the feature point extraction module, to obtain an N*N pixel array;
a grayscale value extraction unit, configured to extract grayscale values from the N*N pixel array of each feature point obtained by the sparse sampling unit, to obtain an N*N gray matrix;
a quantization unit, configured to perform grayscale quantization of K different orders on the gray matrix of each feature point obtained by the grayscale value extraction unit, the quantization matrix of each order being described by one N²-dimensional vector, wherein K is an integer greater than or equal to 4 and less than or equal to 10; and
a description matrix generation unit, configured to divide the whole grayscale range from white to black into K sub-ranges and, according to whether the quantization value of each pixel in the N*N pixel array of a feature point at each order falls into each grayscale sub-range, describe each feature point of the input image with an N²*K matrix, to obtain the binary description matrix of each feature point of the input image.
14. The device according to claim 12 or 13, characterized in that the image registration device further comprises a storage module, configured to store the feature sample database, the feature sample database storing the binary description matrix of each feature point of the sample pictures.
15. The device according to claim 14, characterized in that the matching module comprises:
a feature point dissimilarity calculation unit, configured to perform an AND operation between the elements of the binary description matrix of each feature point of the input image and the elements of the binary description matrices of the feature points in the feature sample database, to obtain the dissimilarity between each feature point of the input image and each feature point in the feature sample database;
a feature point matching unit, configured to judge, from the dissimilarity calculated by the feature point dissimilarity calculation unit, whether a feature point match is successful; and
an image matching unit, configured to judge, from the number of successfully matched feature points between the input image and a sample picture in the feature sample database, whether the registration of the input image with that sample picture is successful.
16. The device according to claim 12 or 13, characterized in that the storage module is further configured to store the feature indexes of all feature points in the feature sample database, each feature index corresponding to one feature index value, all feature index values being stored in the storage module in the form of an index tree; and
the image registration device further comprises:
an index generation unit, configured to establish a feature index for each feature point of the input image extracted by the feature point extraction module; and
a search unit, configured to find, according to the feature index value of each feature point of the input image and by searching the index tree, the sample picture feature point having the same feature index value, and to send the binary description matrix of that input image feature point and the binary description matrix of the corresponding sample feature point to the matching module for matching.
17. The device according to claim 12 or 13, characterized in that the feature point extraction module comprises a detection unit, a judgment unit, an extraction unit and a pyramid building unit, wherein:
the detection unit is configured to perform feature point detection on the input image;
the judgment unit is configured to judge whether the number of feature points detected by the detection unit is greater than M and, when it is, to order the extraction unit to choose M feature points at random from all feature points detected by the detection unit for extraction, wherein M is an integer greater than or equal to 100 and less than or equal to 700; and
when the judgment unit judges that fewer than M feature points have been detected by the detection unit, it orders the extraction unit to extract the detected feature points, orders the pyramid building unit to build an image pyramid with a scale factor of 2 to 6 for the input image, and orders the extraction unit to perform feature point extraction on the next layer of the image pyramid until M feature points have been extracted.
18. An augmented reality system, characterized by comprising a camera assembly, an image format conversion assembly, an image registration assembly and a virtual-real fusion assembly, wherein:
the camera assembly is configured to capture the scene image shot by the camera;
the image format conversion assembly is configured to convert the image captured by the camera assembly into an RGB image and a grayscale image;
the image registration assembly is the image registration device according to any one of claims 12 to 17, configured to register the image captured by the camera assembly against the sample pictures in the sample database; and
the virtual-real fusion assembly is configured to fuse the RGB image converted by the image format conversion assembly with the virtual information mapped from the sample picture registered by the image registration assembly, completing the rendering and presentation of the graphics.
19. A mobile terminal, characterized in that the mobile terminal comprises the augmented reality system according to claim 18.
CN201210247979.3A 2012-07-18 2012-07-18 Image registration method, device and augmented reality system Active CN103578093B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201210247979.3A CN103578093B (en) 2012-07-18 2012-07-18 Image registration method, device and augmented reality system
CN201610443680.3A CN106127748B (en) 2012-07-18 2012-07-18 Image feature sample database and method for building the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210247979.3A CN103578093B (en) 2012-07-18 2012-07-18 Image registration method, device and augmented reality system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201610443680.3A Division CN106127748B (en) 2012-07-18 2012-07-18 Image feature sample database and method for building the same

Publications (2)

Publication Number Publication Date
CN103578093A true CN103578093A (en) 2014-02-12
CN103578093B CN103578093B (en) 2016-08-17

Family

ID=50049819

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201610443680.3A Active CN106127748B (en) 2012-07-18 2012-07-18 Image feature sample database and method for building the same
CN201210247979.3A Active CN103578093B (en) 2012-07-18 2012-07-18 Image registration method, device and augmented reality system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201610443680.3A Active CN106127748B (en) 2012-07-18 2012-07-18 Image feature sample database and method for building the same

Country Status (1)

Country Link
CN (2) CN106127748B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929653A (en) * 2014-04-30 2014-07-16 成都理想境界科技有限公司 Enhanced real video generator and player, generating method of generator and playing method of player
CN106528665A (en) * 2016-10-21 2017-03-22 广州视源电子科技股份有限公司 AOI equipment test file searching method and system
CN108427870A (en) * 2017-02-15 2018-08-21 北京京东尚科信息技术有限公司 Hand gesture unlocking method, device, storage medium and electronic equipment
CN109712121A (en) * 2018-12-14 2019-05-03 复旦大学附属华山医院 A kind of method, equipment and the device of the processing of medical image picture
CN106997366B (en) * 2016-01-26 2020-05-15 视辰信息科技(上海)有限公司 Database construction method, augmented reality fusion tracking method and terminal equipment
CN111340114A (en) * 2020-02-26 2020-06-26 上海明略人工智能(集团)有限公司 Image matching method and device, storage medium and electronic device
CN111444985A (en) * 2020-04-26 2020-07-24 南京大学 Image matching method based on histogram matching
CN111861871A (en) * 2020-07-17 2020-10-30 浙江商汤科技开发有限公司 Image matching method and device, electronic equipment and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664583A (en) * 2018-05-04 2018-10-16 北京物灵智能科技有限公司 A kind of index tree method for building up and image search method
CN109117773B (en) * 2018-08-01 2021-11-02 Oppo广东移动通信有限公司 Image feature point detection method, terminal device and storage medium
CN111080241A (en) * 2019-12-04 2020-04-28 贵州非你莫属人才大数据有限公司 Internet platform-based data-based talent management analysis system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050249434A1 (en) * 2004-04-12 2005-11-10 Chenyang Xu Fast parametric non-rigid image registration based on feature correspondences
CN101339658A (en) * 2008-08-12 2009-01-07 北京航空航天大学 Aerial photography traffic video rapid robust registration method
CN102231191A (en) * 2011-07-17 2011-11-02 西安电子科技大学 Multimodal image feature extraction and matching method based on ASIFT (affine scale invariant feature transform)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714254A (en) * 2009-11-16 2010-05-26 哈尔滨工业大学 Registering control point extracting method combining multi-scale SIFT and area invariant moment features
CN102782708A (en) * 2009-12-02 2012-11-14 高通股份有限公司 Fast subspace projection of descriptor patches for image recognition
CN102096819B (en) * 2011-03-11 2013-03-20 西安电子科技大学 Method for segmenting images by utilizing sparse representation and dictionary learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Daniel Wagner et al.: "Pose Tracking from Natural Features on Mobile Phones", Mixed and Augmented Reality, 2008. ISMAR 2008. 7th IEEE/ACM International Symposium on, 18 September 2008 (2008-09-18), pages 125-134 *
朱英宏 et al.: "Scale-Invariant Feature Description and Matching Algorithm Based on LBP", Journal of Computer-Aided Design & Computer Graphics, vol. 23, no. 10, 15 October 2011 (2011-10-15), pages 1758-1763 *
王颖 et al.: "A Robust Binary Image Feature Point Descriptor", Journal of Southeast University (Natural Science Edition), vol. 42, no. 2, 20 March 2012 (2012-03-20), pages 265-269 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929653A (en) * 2014-04-30 2014-07-16 成都理想境界科技有限公司 Enhanced real video generator and player, generating method of generator and playing method of player
CN106997366B (en) * 2016-01-26 2020-05-15 视辰信息科技(上海)有限公司 Database construction method, augmented reality fusion tracking method and terminal equipment
CN106528665A (en) * 2016-10-21 2017-03-22 广州视源电子科技股份有限公司 AOI equipment test file searching method and system
CN106528665B (en) * 2016-10-21 2019-09-03 广州视源电子科技股份有限公司 AOI equipment test file searching method and system
CN108427870A (en) * 2017-02-15 2018-08-21 北京京东尚科信息技术有限公司 Hand gesture unlocking method, device, storage medium and electronic equipment
CN109712121A (en) * 2018-12-14 2019-05-03 复旦大学附属华山医院 A kind of method, equipment and the device of the processing of medical image picture
CN109712121B (en) * 2018-12-14 2023-05-23 复旦大学附属华山医院 Medical image picture processing method, device and apparatus
CN111340114A (en) * 2020-02-26 2020-06-26 上海明略人工智能(集团)有限公司 Image matching method and device, storage medium and electronic device
CN111444985A (en) * 2020-04-26 2020-07-24 南京大学 Image matching method based on histogram matching
CN111444985B (en) * 2020-04-26 2023-04-07 南京大学 Image matching method based on histogram matching
CN111861871A (en) * 2020-07-17 2020-10-30 浙江商汤科技开发有限公司 Image matching method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN106127748A (en) 2016-11-16
CN103578093B (en) 2016-08-17
CN106127748B (en) 2018-11-30

Similar Documents

Publication Publication Date Title
CN103578093B (en) Image registration method, device and augmented reality system
CN108549891B (en) Multi-scale diffusion salient target detection method based on background and target priors
CN110246163B (en) Image processing method, image processing device, image processing apparatus, and computer storage medium
Tang et al. Geometric correspondence network for camera motion estimation
CN108734210B (en) Object detection method based on cross-modal multi-scale feature fusion
Liu et al. Fg-net: A fast and accurate framework for large-scale lidar point cloud understanding
CN109816769A (en) Scene map generation method, device and equipment based on depth camera
CN104809731B (en) A kind of rotation Scale invariant scene matching method based on gradient binaryzation
CN111126412B (en) Image key point detection method based on characteristic pyramid network
CN107844795A (en) Convolutional neural network feature extraction method based on principal component analysis
CN103745201B (en) A kind of program identification method and device
CN113205520B (en) Method and system for semantic segmentation of image
CN113159232A (en) Three-dimensional target classification and segmentation method
Uchiyama et al. Toward augmenting everything: Detecting and tracking geometrical features on planar objects
CN107944459A (en) A kind of RGB D object identification methods
CN111311702B (en) Image generation and identification module and method based on BlockGAN
CN111860124A (en) Remote sensing image classification method based on space spectrum capsule generation countermeasure network
CN102982561A (en) Method for detecting binary robust scale invariable feature of color of color image
CN115410081A (en) Multi-scale aggregated cloud and cloud shadow identification method, system, equipment and storage medium
CN110163095B (en) Loop detection method, loop detection device and terminal equipment
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
CN109961103A (en) The training method of Feature Selection Model, the extracting method of characteristics of image and device
CN106997366A (en) Database construction method, augmented reality fusion method for tracing and terminal device
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
Yang et al. An effective and lightweight hybrid network for object detection in remote sensing images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant