CN103714548A - Infrared image and visible image registration method based on visual attention - Google Patents
- Publication number: CN103714548A (application CN201310743788.0A)
- Authority: CN (China)
- Legal status: Granted (the listed status is an assumption, not a legal conclusion)
Abstract
The invention discloses an infrared and visible image registration method based on a visual attention mechanism. The method comprises the following steps: input an original infrared image and an original visible image, and obtain the visually salient region of each through a visual attention mechanism model; coarsely match the visually salient regions of the infrared and visible images through the Hu invariant moments; after coarse matching, find the centroid of each matched salient region, take the centroids as visually salient points, and use the points to represent the regions; finely match the visually salient points based on the zero-mean normalized cross-correlation principle; purify the finely matched pairs with the RANSAC algorithm, compute the registration estimation function, and transform the image with this function to achieve registration and output of the images. Because the method simulates the visual attention mechanism of the human eye, uses it as the image feature detection method, and extracts stable features with it, registration between the infrared image and the visible image is achieved accurately after matching.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an infrared and visible image registration method based on a visual attention mechanism: by simulating the "regions of interest" that the human eye attends to in an image, the information in these regions is used to register the infrared and visible images.
Background art
Image registration is the process of spatially aligning images of the same area taken from different viewpoints, at different times, by different sensors, or under different illumination conditions. Images of the same scene are a series of projections of the real three-dimensional world onto imaging planes at different times, so there is considerable correlation and information redundancy between them. Therefore, no matter what kind of variation the processed images exhibit, or by which sensor they were acquired, the invariant, common information in the images can always be used to complete registration.
Research on image registration began abroad in the 1960s, while domestic work entered the field only in the early 1990s. By the end of the 20th century, single-modality registration was largely solved, but because a single-modality image comes from a single imaging source, the information it provides is limited. Multi-modality images, coming from different imaging devices, provide richer and more comprehensive information than single-modality images; fusing this complementary information is significant for tasks such as target recognition. The prerequisite of image fusion is accurate image registration.
Current image registration methods can be divided into region-based and feature-based methods. Region-based methods usually estimate the geometric transformation parameters between images from a certain region or from the entire image; common examples include spatial-domain correlation methods, phase correlation, and probabilistic similarity measures. Feature-based methods extract features that remain invariant to scale, rotation, translation, illumination, and so on, and use this higher-level information to register the images; common examples include methods based on structural features, wavelet coefficients, invariant image descriptors, and local invariant features. The basic steps of feature-based registration are feature detection, feature matching, transformation model parameter estimation, and image resampling and transformation.
Infrared and visible image registration is a common multi-modality registration problem. An infrared image reflects the thermal radiation of the scene, while a visible image reflects the light the scene reflects, so the two outputs have different gray-level characteristics. An infrared image can locate objects with higher temperature in a scene, while the visible image provides background information; fusing the two makes it possible to locate high-temperature objects within their background. However, because the infrared image records radiation and the visible image records reflection, and both the imaging conditions and the scenes are complex, the correlation between infrared and visible images is weak: their gray-level characteristics differ greatly, consistent features are lacking, and registration is difficult.
To solve the registration problem between infrared and visible images, many effective algorithms have been produced in various fields according to their own needs. Research on infrared and visible registration is extensive in the military and remote-sensing domains, where the image content is generally based on whole scenes and obvious landmarks (the corners of a tank, the boundaries of regions, and so on) can provide feature points suitable for matching. In industrial and civil applications, however, such feature points may be hard to obtain in infrared and visible images, because the images often only cover a local part of an object. For this class of images, how to stably extract the features common to the infrared and visible images remains a difficult problem that urgently needs to be solved.
Summary of the invention
In view of the deficiencies of current multi-modality registration, the object of the present invention is to provide an infrared and visible image registration method based on visual attention, which simulates the visual attention mechanism of the human eye as the image feature detection method to extract stable features and, after matching, accurately achieves registration of the infrared and visible images.
To achieve this goal, the present invention adopts the following technical solution:
An infrared and visible image registration method based on a visual attention mechanism comprises the following steps. Input an original infrared image and an original visible image.
Step 1, obtain the visually salient regions of the infrared and visible images through a visual attention mechanism model:
Step 1-1, convert the original infrared image and the original visible image to grayscale, obtaining an infrared grayscale map and a visible grayscale map; take the infrared grayscale map as the infrared luminance map at a single scale, and the visible grayscale map as the visible luminance map at a single scale;
Step 1-2, construct m Gabor filters of size n × n pixels, one for each of m orientations, and filter the infrared grayscale map and the visible grayscale map with them, obtaining m infrared orientation maps and m visible orientation maps at a single scale;
Step 1-3, build the multi-scale space of the images: downsample the single-scale infrared and visible luminance maps and orientation maps step by step, discarding every other row and column, with smoothing, to form a pyramid model of each image, obtaining multi-scale infrared and visible luminance maps and multi-scale infrared and visible orientation maps;
Step 1-4, apply the center-surround difference operation to the multi-scale infrared and visible luminance maps and orientation maps, subtracting between high-level and low-level images of the pyramid model, to obtain the infrared and visible luminance feature maps and orientation feature maps;
Step 1-5, normalize and merge the infrared and visible luminance feature maps and orientation feature maps respectively, obtaining the infrared and visible luminance saliency maps and orientation saliency maps;
Step 1-6, obtain the total saliency map by linear superposition, and determine the visually salient region by global search;
Superpose the infrared luminance saliency map and the infrared orientation saliency map linearly to obtain the infrared total saliency map, and superpose the visible luminance saliency map and the visible orientation saliency map linearly to obtain the visible total saliency map; search each globally for its maximum with the winner-take-all strategy to obtain the visually salient region of each image;
Step 2: coarsely match the visually salient regions of the infrared and visible images through the Hu invariant moments:
Step 2-1, convert the original infrared image and the original visible image to binary images, and obtain the contours of the original images by chain code;
Step 2-2, import the visually salient regions of the infrared and visible images into the contour of the infrared image and the contour of the visible image obtained in step 2-1 respectively, obtaining the salient-region contour map of the infrared image and the salient-region contour map of the visible image;
Step 2-3, coarsely match the visually salient regions of the infrared and visible images, as follows:
a. Compute the central moments u_pq of the visually salient region:

u_pq = Σ_x Σ_y (x - x̄)^p (y - ȳ)^q f(x, y)

where f(x, y) is the gray value of the pixel at (x, y) in the salient-region contour map, (x̄, ȳ) is the center of gravity of the salient-region contour map, N is the height of the salient-region contour map, and M is its width;
b. Compute the normalized central moments: η_pq = u_pq / u_00^ρ, where ρ = (p + q)/2 + 1;
c. Construct the Hu invariant moments:

M1 = η_20 + η_02,
M2 = (η_20 - η_02)^2 + 4·η_11^2,
M3 = (η_30 - 3·η_12)^2 + (3·η_21 - η_03)^2;
d. Compute the first-order moment ratio M1/M1' and the higher-order moment ratios M2/M2' and M3/M3' between the infrared and visible images; if all three ratios fall within the threshold range, the two regions are considered a matching pair;
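Steps a through d can be sketched as follows. This is an illustrative Python/NumPy sketch rather than the patented implementation; the function names are ours, and the 0.8 to 1.2 threshold range is the one given later in the text.

```python
import numpy as np

def hu_first_three(f):
    """First three Hu invariant moments of a gray image f (2-D array)."""
    rows, cols = f.shape
    y, x = np.mgrid[0:rows, 0:cols]
    m00 = f.sum()
    xb, yb = (x * f).sum() / m00, (y * f).sum() / m00   # center of gravity
    def u(p, q):                                        # central moment u_pq
        return ((x - xb) ** p * (y - yb) ** q * f).sum()
    def eta(p, q):                                      # normalized moment
        return u(p, q) / m00 ** ((p + q) / 2 + 1)       # rho = (p+q)/2 + 1
    M1 = eta(2, 0) + eta(0, 2)
    M2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    M3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return M1, M2, M3

def coarse_match(region_ir, region_vis, lo=0.8, hi=1.2):
    """Regions match when all three moment ratios lie in [lo, hi]."""
    a = hu_first_three(region_ir)
    b = hu_first_three(region_vis)
    return all(lo <= ai / bi <= hi for ai, bi in zip(a, b))
```

Because the Hu moments are invariant to translation, rotation, and scale, the ratio test tolerates the geometric differences between the infrared and visible salient regions.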
Step 3, in the coarsely matched visually salient regions of the infrared and visible images, find the centroid of each region, take the centroids as visually salient points, and use these points to represent the visually salient regions;
Compute the centroid (x_0, y_0) of each infrared and visible salient region by:

x_0 = Σ_{(x,y)∈T} x·I(x, y) / Σ_{(x,y)∈T} I(x, y),
y_0 = Σ_{(x,y)∈T} y·I(x, y) / Σ_{(x,y)∈T} I(x, y)

where T denotes the visually salient region and I(x, y) is the brightness value at (x, y) within it;
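The intensity-weighted centroid of step 3 can be sketched in a few lines (the names are illustrative):

```python
import numpy as np

def salient_point(I, mask):
    """Centroid (x0, y0) of region T (boolean mask) weighted by brightness I."""
    ys, xs = np.nonzero(mask)          # pixel coordinates inside T
    w = I[ys, xs].astype(float)        # brightness values I(x, y)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```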
Step 4, finely match the visually salient points based on the zero-mean normalized cross-correlation principle;
Take the l × l region around each visually salient point of the infrared image as the template, and the l × l region around each visually salient point of the visible image as the image to be matched. Compute the correlation coefficient R(u, v) between the template and the image to be matched to finely match the visually salient regions of the infrared and visible images; a pair is considered a match when R(u, v) exceeds 0.6;
R(u, v) = Σ_{x,y} [f(x, y) - f̄]·[t(x - u, y - v) - t̄] / sqrt( Σ_{x,y} [f(x, y) - f̄]^2 · Σ_{x,y} [t(x - u, y - v) - t̄]^2 )

where x, y index the image to be matched, U, V is the size of the template, (u, v) is the visually salient point, f(x, y) is the gray value of a pixel in the image to be matched, t(x - u, y - v) is the gray value of a pixel in the template, t̄ is the gray mean of the template, and f̄ is the gray mean of the image to be matched;
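The correlation coefficient of step 4 is a zero-mean (de-mean) normalized cross-correlation; a minimal sketch, with the 0.6 acceptance threshold taken from the text:

```python
import numpy as np

def zncc(template, candidate):
    """Zero-mean normalized cross-correlation of two equal-size gray patches."""
    t = template.astype(float) - template.mean()
    f = candidate.astype(float) - candidate.mean()
    denom = np.sqrt((t ** 2).sum() * (f ** 2).sum())
    return (t * f).sum() / denom if denom > 0 else 0.0

def fine_match(patch_ir, patch_vis, threshold=0.6):
    """patch_ir and patch_vis are the l-by-l windows around salient points."""
    return zncc(patch_ir, patch_vis) > threshold
```

Subtracting the means makes the score insensitive to the gray-level offset between the infrared and visible modalities, which is why the de-mean form is used here.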
Step 5, purify the finely matched point pairs with the RANSAC algorithm, compute the registration estimation function, and transform the image with it, achieving registration and output of the infrared and visible images.
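Step 5 can be sketched with a minimal RANSAC loop that estimates an affine registration function from the matched point pairs and discards outliers. The iteration count, tolerance, and affine model here are illustrative assumptions, not values stated in the patent:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N, 2) onto dst (N, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])        # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)      # (3, 2) matrix
    return coef

def ransac_affine(src, dst, n_iter=200, tol=2.0, seed=0):
    """Purify matches: keep the largest consensus set, then refit on it."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)    # minimal sample
        T = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(np.hstack([src, ones]) @ T - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best       # registration function
```

The refitted transform on the inlier set plays the role of the registration estimation function used to warp one image onto the other.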
In a specific technical scheme of the present invention, the Gabor filter in step 1-2 is:
g(x, y) = exp(-(x_0^2 + y_0^2)/(2σ^2)) · exp(j·w_0·x_0)

where x_0 = x·cosθ + y·sinθ, y_0 = -x·sinθ + y·cosθ; x, y are the spatial position coordinates of the pixel; w_0 is the center frequency of the filter; θ is the wavelet orientation of the Gabor filter; σ is the standard deviation of the Gaussian function along the two coordinate axes; and exp(j·w_0·x_0) is the oscillating (AC) component.
In a specific technical scheme of the present invention, the image pyramid model in step 1-3 has 6 layers, numbered 0 to 5 from bottom to top, where layer 0 is the original image and each layer is half the length and width of the layer below it; Gaussian kernels of different scales, corresponding to Gaussian functions of different scales, are convolved with each pyramid layer to smooth the images in turn.
In a specific technical scheme of the present invention, the subtraction rule in step 1-4 is 0-2, 0-3, 1-3, 1-4, 2-4, 2-5, yielding 6 groups of images.
In a specific technical scheme of the present invention, when images from different layers of the pyramid model are subtracted, the smaller image is interpolated so that the two images have the same size.
In a specific technical scheme of the present invention, the normalization and merging in step 1-5 proceed as follows: obtain a normalized weight coefficient for each feature map by the GBVS method, multiply each feature map by its normalized weight coefficient, and superpose the weighted feature maps to obtain the saliency map.
In a specific technical scheme of the present invention, the weight coefficient is formed from the local maximum M of each feature map and the mean m̄ of its remaining local maxima; in the classic formulation this weight is (M - m̄)^2.
In a specific technical scheme of the present invention, the contour of the image in step 2-1 is obtained by an 8-connected chain code:
a. Scan the binary image from top to bottom and left to right for a pixel with value 1, take it as a candidate starting point, and proceed to the next step;
b. Trace from the starting point: search the neighbors of the current point within its 3 × 3 neighborhood, walking counterclockwise in chain-code order. Set the value of each traced point to 0 and the value of each crossing point to 3; after all directions of a starting point have been searched, proceed to the next starting point. When the trace reaches a boundary endpoint, a crossing point, or the starting point, the current chain code ends; if it reached a crossing point, make that point the next starting point, and the chain code continued from it belongs to the chain code that first reached the crossing point;
c. After one connected boundary has been searched, continue the forward scan and repeat steps a and b until the whole binary image has been searched and all objects encoded, obtaining the contour map of the image.
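A much simplified 8-connected boundary trace in the spirit of steps a and b (single object, no crossing-point handling; the full procedure in the text also re-labels traced pixels and crossing points):

```python
import numpy as np

# 8-neighbour offsets (dy, dx) in chain-code order
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def trace_contour(binary):
    """Boundary pixels of the first object found by a raster scan."""
    ys, xs = np.nonzero(binary)
    start = (ys[0], xs[0])          # top-to-bottom, left-to-right search
    contour, cur, d = [start], start, 0
    while True:
        for k in range(8):          # walk around the 8-neighbourhood
            dd = (d + k) % 8
            ny, nx = cur[0] + DIRS[dd][0], cur[1] + DIRS[dd][1]
            if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                    and binary[ny, nx]):
                cur, d = (ny, nx), (dd + 6) % 8   # back up two directions
                break
        contour.append(cur)
        if cur == start and len(contour) > 2:     # closed the boundary
            break
    return contour[:-1]
```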
In a specific technical scheme of the present invention, the threshold range of the Hu invariant-moment ratios in step 2-3 is 0.8 to 1.2.
As can be seen from the above technical scheme, the method of the invention models the "regions of interest" the human eye attends to with a visual attention mechanism to obtain the visually salient regions of the images, coarsely matches the salient regions with the Hu invariant moments, and registers the images accurately with zero-mean normalized cross-correlation and the RANSAC algorithm. It can stably extract corresponding features of the infrared and visible images, achieves good registration for two or more images differing in rotation, offset, scale, or field-of-view size, and has high registration accuracy.
Brief description of the drawings
To illustrate the embodiments of the present invention or the technical schemes of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is the flow chart of the method of the invention;
Fig. 2a is the original infrared image of the embodiment of the present invention;
Fig. 2b is the original visible image of the embodiment of the present invention;
Fig. 3a is the infrared luminance feature map at a certain scale of the embodiment of the present invention;
Fig. 3b is the visible luminance feature map at a certain scale of the embodiment of the present invention;
Fig. 4a is the infrared orientation feature map at a certain scale of the embodiment of the present invention;
Fig. 4b is the visible orientation feature map at a certain scale of the embodiment of the present invention;
Fig. 5a is the infrared luminance saliency map of the embodiment of the present invention;
Fig. 5b is the visible luminance saliency map of the embodiment of the present invention;
Fig. 6a is the infrared orientation saliency map of the embodiment of the present invention;
Fig. 6b is the visible orientation saliency map of the embodiment of the present invention;
Fig. 7a is the infrared total saliency map of the embodiment of the present invention;
Fig. 7b is the visible total saliency map of the embodiment of the present invention;
Fig. 8a is a schematic diagram of the visually salient region of the infrared image of the embodiment of the present invention;
Fig. 8b is a schematic diagram of the visually salient region of the visible image of the embodiment of the present invention;
Fig. 9a is a schematic diagram of the salient-region contour of the infrared image of the embodiment of the present invention;
Fig. 9b is a schematic diagram of the salient-region contour of the visible image of the embodiment of the present invention;
Fig. 10a is a schematic diagram of the visually salient region imported into the contour of the infrared image in the embodiment of the present invention;
Fig. 10b is a schematic diagram of the visually salient region imported into the contour of the visible image in the embodiment of the present invention;
Fig. 11 is the coarse matching result of the visually salient regions of the visible and infrared images of the embodiment of the present invention;
Fig. 12 is the result after fine matching in the embodiment of the present invention;
Fig. 13 is the result of the purification operation of the embodiment of the present invention;
Fig. 14 is the final result of the image registration of the embodiment of the present invention.
Embodiment
The visual attention mechanism is the mechanism by which a person's gaze is directed to the objects of attention in a scene. The visual information entering the human field of view is usually massive, yet people can still find the information they want within it. Visual attention is what makes an object that is fully distinguishable from its surroundings "jump out" of the visual environment automatically and attract attention.
The method of the invention comprises two parts, feature detection and feature matching. Feature detection models the "regions of interest" the human eye attends to with the visual attention mechanism, obtaining the visually salient regions of the images; feature matching coarsely matches the salient regions with the Hu invariant moments, then registers the images accurately with zero-mean normalized cross-correlation and the RANSAC algorithm. The method can stably extract corresponding features of the infrared and visible images, achieves good registration for two or more images differing in rotation, offset, scale, or field-of-view size, and has high registration accuracy.
The technical scheme of the embodiments of the present invention is described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
Many details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in other ways different from those described here, and those skilled in the art can make similar generalizations without departing from its spirit; the present invention is therefore not limited by the specific embodiments disclosed below.
With reference to Fig. 1, the flow chart of the method of the invention, the concrete steps are as follows. Input an original infrared image and an original visible image, as shown in Figs. 2a and 2b.
Step 1, obtain the visually salient regions of the infrared and visible images through the visual attention mechanism model; the concrete steps are as follows:
Step 1-1, convert the original infrared image and the original visible image to grayscale, obtaining an infrared grayscale map and a visible grayscale map; take the infrared grayscale map as the infrared luminance map at a single scale, and the visible grayscale map as the visible luminance map at a single scale.
Step 1-2, filter the infrared grayscale map and the visible grayscale map respectively with Gabor filters, obtaining the corresponding infrared and visible orientation maps at a single scale;
Construct m Gabor filters of size n × n pixels, one for each of m orientations, and filter the infrared grayscale map and the visible grayscale map with them, obtaining m infrared orientation maps and m visible orientation maps at a single scale;
The Gabor filter is:
g(x, y) = exp(-(x_0^2 + y_0^2)/(2σ^2)) · exp(j·w_0·x_0)

where x_0 = x·cosθ + y·sinθ, y_0 = -x·sinθ + y·cosθ; x, y are the spatial position coordinates of the pixel; w_0 is the center frequency of the filter; θ is the wavelet orientation of the Gabor filter; σ is the standard deviation of the Gaussian function along the two coordinate axes; and exp(j·w_0·x_0) is the oscillating (AC) component;
In the present embodiment, 8 Gabor filters of size 31 × 31 pixels are constructed, one for each of eight orientations. Filtering the infrared grayscale map with them yields orientation maps in eight directions at a single scale, and filtering the visible grayscale map likewise yields orientation maps in eight directions at a single scale. The Gabor filters bring out the features that stand out only in specific directions.
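Under the standard Gabor form given above, the filter bank of the embodiment can be sketched as follows; the orientation spacing of π/8, the center frequency w0, and σ are illustrative assumptions, since the exact values are not reproduced here:

```python
import numpy as np

def gabor_kernel(size=31, theta=0.0, w0=0.3, sigma=5.0):
    """Complex 2-D Gabor kernel: Gaussian envelope times exp(j*w0*x0)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x0 = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y0 = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x0 ** 2 + y0 ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * w0 * x0)        # oscillating (AC) component

# Eight orientations, as in the embodiment (uniform spacing assumed).
bank = [gabor_kernel(theta=k * np.pi / 8) for k in range(8)]
```

Convolving a grayscale map with each kernel and taking the magnitude of the response gives the orientation map for that direction.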
Step 1-3, build the multi-scale space of the images: downsample and smooth the single-scale infrared and visible luminance maps and orientation maps step by step, obtaining multi-scale infrared and visible luminance maps and multi-scale infrared and visible orientation maps;
Downsample the single-scale luminance maps from step 1-1 and the single-scale orientation maps from step 1-2 by discarding every other row and column, forming the pyramid model of each image. Further, the pyramid model of the present embodiment has 6 layers, numbered 0 to 5 from bottom to top, where layer 0 is the original image and each layer is half the length and width of the layer below it; Gaussian kernels of different scales are convolved with each pyramid layer to smooth the images in turn, so that representations of the luminance and orientation maps at different scales are obtained. Images at different scales show features in different frequency bands: coarse scales reveal low-frequency details and detect contour information, while fine scales reveal high-frequency details and detect edge information; building a scale space for the image makes it possible to find the features that stand out only at particular scales.
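The 6-layer pyramid can be sketched as repeated smoothing followed by discarding every other row and column. The 5-tap binomial kernel below is an illustrative stand-in for the Gaussian kernels of different scales:

```python
import numpy as np

KERNEL = np.array([1, 4, 6, 4, 1], float) / 16.0   # binomial ~ Gaussian

def smooth(img):
    """Separable convolution with the 5-tap kernel, edge-replicated."""
    pad = np.pad(img, 2, mode="edge")
    tmp = sum(KERNEL[k] * pad[k:k + img.shape[0], :] for k in range(5))
    return sum(KERNEL[k] * tmp[:, k:k + img.shape[1]] for k in range(5))

def build_pyramid(img, levels=6):
    """Layer 0 is the original; each layer halves the length and width."""
    pyr = [np.asarray(img, float)]
    for _ in range(levels - 1):
        pyr.append(smooth(pyr[-1])[::2, ::2])      # interlaced downsampling
    return pyr
```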
Step 1-4, apply the center-surround difference operation to the multi-scale infrared and visible luminance maps and orientation maps obtained in step 1-3, obtaining the luminance feature maps and orientation feature maps;
In the center-surround difference operation, the fine-scale feature map represents the central region and the coarse-scale feature map represents the surrounding region. The high-level and low-level pyramid images from step 1-3 are subtracted pairwise, yielding the high-frequency information, i.e. the fine details, at different levels. In the present embodiment the subtraction rule is 0-2, 0-3, 1-3, 1-4, 2-4, 2-5, giving 6 groups of images; when two images are subtracted, the smaller one is interpolated so that the two sizes agree. After the center-surround difference, the multi-scale infrared and visible luminance maps yield 6 infrared luminance feature maps and 6 visible luminance feature maps respectively; Fig. 3a shows the infrared luminance feature map at a certain scale and Fig. 3b the visible one. The multi-scale infrared and visible orientation maps yield 48 infrared orientation feature maps and 48 visible orientation feature maps respectively; Fig. 4a shows the infrared orientation feature map at a certain scale and Fig. 4b the visible one.
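The center-surround differences with the 0-2, 0-3, 1-3, 1-4, 2-4, 2-5 rule can be sketched as follows; nearest-neighbour interpolation of the smaller image is an illustrative choice:

```python
import numpy as np

PAIRS = [(0, 2), (0, 3), (1, 3), (1, 4), (2, 4), (2, 5)]

def upsample_to(img, shape):
    """Nearest-neighbour interpolation of img to the target shape."""
    ry = np.linspace(0, img.shape[0] - 1, shape[0]).round().astype(int)
    rx = np.linspace(0, img.shape[1] - 1, shape[1]).round().astype(int)
    return img[np.ix_(ry, rx)]

def center_surround(pyramid):
    """Six feature maps: |fine level minus interpolated coarse level|."""
    feats = []
    for c, s in PAIRS:
        coarse = upsample_to(pyramid[s], pyramid[c].shape)
        feats.append(np.abs(pyramid[c] - coarse))
    return feats
```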
Step 1-5, normalize and merge the luminance feature maps and orientation feature maps, obtaining the luminance saliency maps and orientation saliency maps;
Obtain a normalized weight coefficient for each feature map by the GBVS (graph-based visual saliency) method, multiply each feature map by its weight coefficient, and superpose the weighted maps to obtain the saliency map. For example, the 6 infrared luminance feature maps are normalized as follows: find the local maximum M of each infrared luminance feature map and the mean m̄ of its other local maxima, form the weight coefficient from M and m̄, normalize it, and multiply it into each feature map; then superpose the 6 weighted luminance feature maps to merge them into the single infrared luminance saliency map shown in Fig. 5a. The visible luminance feature maps are treated in the same way, and the resulting visible luminance saliency map is shown in Fig. 5b.
For the orientation feature maps: for each of the 48 infrared orientation feature maps, find its local maximum and the mean value of the other local maxima, multiply the map by the normalized weight coefficient, and superpose and fuse the 48 weighted maps into the infrared orientation saliency map shown in Fig. 6a. Likewise, for each of the 48 visible orientation feature maps, find its local maximum and the mean value of the other local maxima, multiply the map by the normalized weight coefficient, and superpose and fuse the 48 weighted maps into the visible orientation saliency map shown in Fig. 6b.
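The per-map weighting and fusion can be sketched as follows. The concrete weight formula (M − m̄)², with M the scaled map's global maximum and m̄ the mean of its other local maxima, is the classic Itti-style normalization and is an assumption here, since the patent's weight equation is not reproduced in this text:

```python
import numpy as np

def norm_weight(fmap):
    """Assumed Itti-style weight (M - mbar)^2: M is the global maximum of
    the map scaled to [0, 1], mbar the mean of its other local maxima."""
    span = np.ptp(fmap)
    f = (fmap - fmap.min()) / span if span else np.zeros(fmap.shape)
    M = f.max()
    core = f[1:-1, 1:-1]
    # local maxima: interior points not below any 4-connected neighbour
    is_max = ((core >= f[:-2, 1:-1]) & (core >= f[2:, 1:-1]) &
              (core >= f[1:-1, :-2]) & (core >= f[1:-1, 2:]))
    peaks = core[is_max]
    peaks = peaks[peaks < M]          # exclude the global maximum itself
    mbar = peaks.mean() if peaks.size else 0.0
    return (M - mbar) ** 2

def fuse(feature_maps):
    """Multiply each feature map by its weight and superpose into one
    saliency map (e.g. 6 luminance maps -> one luminance saliency map)."""
    total = None
    for fmap in feature_maps:
        span = np.ptp(fmap)
        f = (fmap - fmap.min()) / span if span else np.zeros(fmap.shape)
        weighted = norm_weight(fmap) * f
        total = weighted if total is None else total + weighted
    return total
```

A map with one strong isolated peak receives a large weight; a map with many comparable peaks receives a small one, which is the intent of the normalization step.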
Step 1-6: obtain the total saliency maps by linear superposition and determine the visual salient regions by global search;
The infrared luminance saliency map and the infrared orientation saliency map are linearly superposed into the infrared total saliency map, i.e. Fig. 5a and Fig. 6a are superposed into the map shown in Fig. 7a; the visible luminance saliency map and the visible orientation saliency map are linearly superposed into the visible total saliency map, i.e. Fig. 5b and Fig. 6b give the map shown in Fig. 7b. The maxima of a total saliency map correspond to the most salient parts of the image; a global search for the maxima under the winner-take-all strategy yields the visual salient regions of the images, as shown in Fig. 8a and Fig. 8b.
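The winner-take-all global search can be sketched as a simple argmax over the total saliency map; the window half-size used to cut out the salient region is an illustrative parameter, not taken from the patent:

```python
import numpy as np

def winner_take_all(total_sal, region=8):
    """Global search for the maximum of the total saliency map; returns the
    winning location and the window around it taken as the salient region."""
    y, x = np.unravel_index(np.argmax(total_sal), total_sal.shape)
    y0, x0 = max(0, y - region), max(0, x - region)
    return (int(y), int(x)), total_sal[y0:y + region + 1, x0:x + region + 1]
```

Repeated application with suppression of the winning region would yield further salient regions, in keeping with the winner-take-all scheme.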
Step 2: coarsely match the visual salient regions of the infrared image and the visible image obtained in step 1 by computing Hu invariant moments. Because the imaging mechanisms of infrared and visible light differ, the two output images have different gray-level characteristics, and the visual salient regions produced by the visual attention mechanism model differ as well; the visual salient regions of the infrared and visible images are therefore coarsely matched through the computation of Hu invariant moments;
Step 2-1: convert the original images (the original infrared image and the original visible image) separately into binary images, and extract the contours of the original images with 8-connected chain codes. The operations on the infrared image and the visible image are identical, as follows:
a. Scan the binary image from top to bottom and left to right for a pixel of value 1; take it as the pending start point and go to the next step;
b. Track from the start point: search the neighborhood of the current point counterclockwise along the chain-code directions for the next boundary point; set the pixel value of tracked points to 0 and of crossing points to 3; once every direction of a start point has been searched, move on to the next start point. When tracking reaches a boundary end point, a crossing point, or the start point, the current chain code ends; a crossing point becomes the next start point, and the chain code encoded from it belongs to the chain that reached the crossing point first;
When one connected boundary is complete, the forward scan continues and steps a and b are repeated until the whole binary image has been scanned and every object encoded, which yields the contour of the image. Fig. 9a shows the contour extracted from the infrared image by the 8-connected chain code, and Fig. 9b the contour extracted from the visible image.
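A minimal sketch of the contour-extraction result: it marks every foreground pixel that touches background somewhere in its 8-neighbourhood, which reproduces the outline the chain-code pass produces; the chain-code bookkeeping itself (tracked points cleared to 0, crossing points set to 3, per-object codes) is omitted for brevity:

```python
import numpy as np

def contour_pixels(binary):
    """Binary map of contour pixels: foreground pixels (value 1) that have
    at least one background pixel among their 8 neighbours."""
    p = np.pad(np.asarray(binary, dtype=bool), 1, constant_values=False)
    core = p[1:-1, 1:-1]
    H, W = core.shape
    touches_bg = np.zeros((H, W), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # neighbour plane shifted by (dy, dx)
            nb = p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            touches_bg |= ~nb
    return (core & touches_bg).astype(np.uint8)
```

Interior pixels of a filled object are suppressed, so only the object outline remains.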
Step 2-2: import the visual salient region of the infrared image and the visual salient region of the visible image obtained in step 1 respectively into the infrared contour and the visible contour obtained in step 2-1, obtaining the visual salient-region contour map of the infrared image and that of the visible image, as shown in Fig. 10a and Fig. 10b.
Step 2-3: after removing fine edges and isolated points, coarsely match the visual salient regions of the infrared and visible images by computing Hu invariant moments, as follows:
a. Compute the central moments u_pq of the visual salient region:

u_pq = Σ_{y=1}^{N} Σ_{x=1}^{M} (x − x̄)^p (y − ȳ)^q f(x, y)

where f(x, y) is the gray value of pixel (x, y) in the visual salient-region contour map, (x̄, ȳ) is the center of gravity of the contour map, N is the height of the contour map, and M is its width;
b. Compute the normalized central moments: η_pq = u_pq / u_00^ρ, where ρ = (p + q)/2 + 1;
c. Construct the Hu invariant moments:

M1 = η_20 + η_02,
M2 = (η_20 − η_02)² + 4η_11²,
M3 = (η_30 − 3η_12)² + (3η_21 − η_03)²;
d. After obtaining the invariant moments of the infrared image and the visible image by the above steps, compute the moment ratios M1/M1', M2/M2' and M3/M3', where M1 is the first-order moment of the infrared image, M2 and M3 are its second-order moments, M1' is the first-order moment of the visible image, and M2' and M3' are its second-order moments. If all three ratios fall within the set threshold range, the regions are considered a matching pair; further, in this embodiment the threshold range is 0.8 to 1.2.
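Steps a–d can be sketched directly from the formulas above (NumPy, illustrative function names; the near-zero guard on a moment ratio is an added assumption so that symmetric shapes, whose higher moments vanish, do not divide by zero):

```python
import numpy as np

def hu_first_three(region):
    """First three Hu invariant moments M1..M3 of a salient-region
    contour map f(x, y) (steps a-c above)."""
    f = np.asarray(region, dtype=float)
    N, M = f.shape                        # height N, width M
    y, x = np.mgrid[0:N, 0:M]
    u00 = f.sum()
    xbar, ybar = (x * f).sum() / u00, (y * f).sum() / u00  # center of gravity

    def eta(p, q):
        # normalized central moment: eta_pq = u_pq / u_00^rho
        u = ((x - xbar) ** p * (y - ybar) ** q * f).sum()
        return u / u00 ** ((p + q) / 2 + 1)

    M1 = eta(2, 0) + eta(0, 2)
    M2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    M3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return M1, M2, M3

def coarse_match(region_ir, region_vis, lo=0.8, hi=1.2):
    """Step d: the regions match if every moment ratio lies in [lo, hi].
    Ratios of two near-zero moments are treated as consistent."""
    def ratio_ok(a, b):
        if abs(b) < 1e-12:
            return abs(a) < 1e-12
        return lo <= a / b <= hi
    return all(ratio_ok(a, b) for a, b in
               zip(hu_first_three(region_ir), hu_first_three(region_vis)))
```

Because the η_pq are scale-normalized, the same shape at a different scale gives moment ratios near 1, which is what makes the coarse matching robust to resolution differences between the infrared and visible images.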
The coarse matching result of the visual salient regions of the visible image and the infrared image is shown in Fig. 11.
Step 3: search for the centroid within each coarsely matched visual salient region of the infrared and visible images obtained in step 2, take the centroid as the visual salient point, and represent the visual salient region by its salient point;
Gather the contour information of the coarsely matched visual salient regions and compute the centroids of the infrared and visible salient regions; each centroid is taken as a visual salient point. The centroid (x0, y0) is computed as:

x0 = Σ_{(x,y)∈T} x·I(x, y) / Σ_{(x,y)∈T} I(x, y),  y0 = Σ_{(x,y)∈T} y·I(x, y) / Σ_{(x,y)∈T} I(x, y)

where T is the visual salient region and I(x, y) is the brightness of pixel (x, y) in the region;
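The brightness-weighted centroid of step 3 can be sketched as (illustrative function name):

```python
import numpy as np

def salient_point(region):
    """Centroid (x0, y0) of a salient region T weighted by the pixel
    brightness I(x, y); the centroid serves as the visual salient point."""
    I = np.asarray(region, dtype=float)
    y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    total = I.sum()
    return (x * I).sum() / total, (y * I).sum() / total
```

Pixels outside the salient region should carry brightness 0 so that only T contributes to the sums.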
Step 4: finely match the visual salient points based on the zero-mean normalized cross-correlation principle;
Select the l × l region around a visual salient point of the infrared image obtained in step 3 as the template, take the l × l region around a visual salient point of the visible image as the image to be matched, and compute the correlation coefficient R(u, v) between the template and the image to be matched in order to finely match the visual salient regions of the infrared and visible images; the points are considered matched when R(u, v) exceeds 0.6:

R(u, v) = Σ_x Σ_y [f(x, y) − f̄][t(x − u, y − v) − t̄] / sqrt( Σ_x Σ_y [f(x, y) − f̄]² · Σ_x Σ_y [t(x − u, y − v) − t̄]² )

where x, y range over the image to be matched, U, V give the size of the template, (u, v) denotes the visual salient point, f(x, y) is the gray value of a pixel in the image to be matched, t(x − u, y − v) is the gray value of a pixel in the template, t̄ is the gray mean of the template, and f̄ is the gray mean of the image to be matched;
The result after fine matching is shown in Fig. 12.
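A minimal sketch of the zero-mean normalized cross-correlation for one pair of l × l patches (patch extraction around the salient points is omitted; the 0.6 threshold is the patent's value):

```python
import numpy as np

def zncc(window, template):
    """Zero-mean normalized cross-correlation R between an l x l image
    window and an l x l template; R lies in [-1, 1]."""
    w = np.asarray(window, dtype=float)
    t = np.asarray(template, dtype=float)
    w = w - w.mean()
    t = t - t.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return (w * t).sum() / denom if denom else 0.0

def fine_match(window, template, threshold=0.6):
    """Salient points are considered matched when R(u, v) exceeds 0.6."""
    return zncc(window, template) > threshold
```

Subtracting the means makes R invariant to gain and offset differences between patches, which is why this criterion tolerates the gray-level disparity between infrared and visible imagery.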
Step 5: purify the finely matched points with the RANSAC algorithm, compute the registration estimation function, and transform the image through the registration estimation function;
The RANSAC purification and the computation of the registration estimation function in the present invention are conventional means of the prior art: the initial parameters of the registration estimation function are estimated by repeatedly drawing minimal point sets; with these initial parameters, all visual salient points are divided into inliers that satisfy the function and outliers that do not (Fig. 13 shows a schematic of the inliers); the parameters of the registration estimation function are then recomputed from all inliers and the function is updated; finally the image is transformed by the registration estimation function, realizing the registration and output of the infrared image and the visible image, as shown in Fig. 14.
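The purification loop can be sketched as follows on matched salient-point pairs, assuming an affine registration estimation function (the patent leaves the function's exact form to the prior art; the iteration count, tolerance and seed are illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine map: [x y 1] @ P = [x' y'] (P is 3x2)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return P

def ransac_affine(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC purification: repeatedly estimate initial parameters from a
    minimal 3-point set, split all points into inliers (consistent with
    the function) and outliers, keep the largest inlier set, then refit
    the registration estimation function on all inliers."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        P = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(A @ P - dst, axis=1)   # reprojection error
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best
```

The returned matrix is the updated registration estimation function; applying it to the image coordinates realizes the transformation of step 5.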
Compared with traditional image registration methods, the method of the invention has good stability, adaptability and robustness, requires no manual intervention, and achieves high registration accuracy; it adapts well to images that vary across sensors and over time, and realizes accurate registration of two or more images that differ in rotation, offset, scale and field-of-view size.
The above is only a preferred embodiment of the present invention and does not limit the invention in any form. Although the invention is disclosed above by way of a preferred embodiment, it is not thereby limited; any person skilled in the art may, without departing from the scope of the technical solution of the invention, use the disclosed technical content to make minor changes or modifications into equivalent embodiments of equivalent variation. Any simple modification, equivalent variation or modification of the above embodiment made according to the technical essence of the invention, without departing from the content of its technical solution, still falls within the scope of the technical solution of the invention.
Claims (9)
1. An infrared image and visible image registration method based on a visual attention mechanism, characterized in that it comprises the following steps: input an original infrared image and an original visible image;
Step 1: obtain the visual salient regions of the infrared image and the visible image through a visual attention mechanism model:
Step 1-1: perform gray-scale processing on the original infrared image and the original visible image respectively to obtain an infrared gray map and a visible gray map; take the infrared gray map as the infrared luminance map at a single scale, and the visible gray map as the visible luminance map at a single scale;
Step 1-2: construct m Gabor filters of size n × n pixels with orientation θ, and filter the infrared gray map and the visible gray map respectively, correspondingly obtaining m single-scale infrared orientation maps and m single-scale visible orientation maps;
Step 1-3: build the multi-scale space of the images: down-sample the single-scale infrared and visible luminance maps and the single-scale infrared and visible orientation maps step by step, discarding alternate rows and columns with smoothing, to form a pyramid model of each image, obtaining multi-scale infrared and visible luminance maps and multi-scale infrared and visible orientation maps;
Step 1-4: perform the center-surround difference operation on the multi-scale infrared and visible luminance maps and the multi-scale infrared and visible orientation maps, subtracting lower-level from higher-level images of the pyramid model, correspondingly obtaining infrared and visible luminance feature maps and infrared and visible orientation feature maps;
Step 1-5: normalize and fuse the infrared and visible luminance feature maps and the infrared and visible orientation feature maps respectively, correspondingly obtaining infrared and visible luminance saliency maps and infrared and visible orientation saliency maps;
Step 1-6: obtain the total saliency maps by linear superposition and determine the visual salient regions by global search;
linearly superpose the infrared luminance saliency map and the infrared orientation saliency map into the infrared total saliency map, and the visible luminance saliency map and the visible orientation saliency map into the visible total saliency map; search each total saliency map globally for its maxima under the winner-take-all strategy to obtain the visual salient regions of the images;
Step 2: coarsely match the visual salient regions of the infrared image and the visible image by computing Hu invariant moments:
Step 2-1: convert the original infrared image and the original visible image respectively into binary images, and obtain the contours of the original images by chain codes;
Step 2-2: import the visual salient region of the infrared image and the visual salient region of the visible image respectively into the infrared contour and the visible contour obtained in step 2-1, obtaining the visual salient-region contour map of the infrared image and that of the visible image;
Step 2-3: coarsely match the visual salient regions of the infrared image and the visible image, as follows:
a. Compute the central moments u_pq of the visual salient region:

u_pq = Σ_{y=1}^{N} Σ_{x=1}^{M} (x − x̄)^p (y − ȳ)^q f(x, y)

where f(x, y) is the gray value of pixel (x, y) in the visual salient-region contour map, (x̄, ȳ) is the center of gravity of the contour map, N is the height of the contour map, and M is its width;
b. Compute the normalized central moments: η_pq = u_pq / u_00^ρ, where ρ = (p + q)/2 + 1;
c. Construct the Hu invariant moments:

M1 = η_20 + η_02,
M2 = (η_20 − η_02)² + 4η_11²,
M3 = (η_30 − 3η_12)² + (3η_21 − η_03)²;
d. Compute the first-order moment ratio M1/M1' and the second-order moment ratios M2/M2' and M3/M3' of the infrared image and the visible image; if all three ratios lie within the threshold range, the regions are considered a matching pair;
Step 3: search for the centroid within each coarsely matched visual salient region of the infrared and visible images, take the centroid as the visual salient point, and represent the visual salient region by its salient point;
Compute the centroids of the infrared and visible visual salient regions; each centroid is taken as a visual salient point. The centroid (x0, y0) is computed as:

x0 = Σ_{(x,y)∈T} x·I(x, y) / Σ_{(x,y)∈T} I(x, y),  y0 = Σ_{(x,y)∈T} y·I(x, y) / Σ_{(x,y)∈T} I(x, y)

where T is the visual salient region and I(x, y) is the brightness of pixel (x, y) in the region;
Step 4: finely match the visual salient points based on the zero-mean normalized cross-correlation principle;
select the l × l region around a visual salient point of the infrared image as the template, take the l × l region around a visual salient point of the visible image as the image to be matched, and compute the correlation coefficient R(u, v) between the template and the image to be matched to finely match the visual salient regions of the infrared and visible images; the points are considered matched when R(u, v) exceeds 0.6:

R(u, v) = Σ_x Σ_y [f(x, y) − f̄][t(x − u, y − v) − t̄] / sqrt( Σ_x Σ_y [f(x, y) − f̄]² · Σ_x Σ_y [t(x − u, y − v) − t̄]² )

where x, y range over the image to be matched, U, V give the size of the template, (u, v) denotes the visual salient point, f(x, y) is the gray value of a pixel in the image to be matched, t(x − u, y − v) is the gray value of a pixel in the template, t̄ is the gray mean of the template, and f̄ is the gray mean of the image to be matched;
Step 5: purify the finely matched points with the RANSAC algorithm, compute the registration estimation function, and transform the image by the registration estimation function, realizing the registration and output of the infrared image and the visible image.
2. The infrared image and visible image registration method based on visual attention according to claim 1, characterized in that the Gabor filter in step 1-2 is:

G(x, y) = (1 / (2πσ²)) · exp(−(x_0² + y_0²) / (2σ²)) · exp(jw_0 x_0)

where x_0 = x cos θ + y sin θ and y_0 = −x sin θ + y cos θ; x, y are the spatial position coordinates of the pixel; w_0 is the center frequency of the filter; θ is the wavelet direction of the Gabor filter; σ is the standard deviation of the Gaussian function along the two coordinate axes; and exp(jw_0 x_0) is the alternating-current component.
3. The infrared image and visible image registration method based on visual attention according to claim 1 or 2, characterized in that the image pyramid model in step 1-3 has 6 levels, numbered 0 to 5 from bottom to top, level 0 being the original image; each level is half the length and width of the level below it, and every pyramid level is smoothed in turn by convolution with Gaussian functions of different scales, i.e. with different Gaussian kernels.
4. The infrared image and visible image registration method based on visual attention according to claim 3, characterized in that the subtraction rule in step 1-4 is 0-2, 0-3, 1-3, 1-4, 2-4, 2-5, giving 6 groups of images.
5. The infrared image and visible image registration method based on visual attention according to claim 3, characterized in that when images of different levels of the pyramid model are subtracted, the smaller image is interpolated to ensure that the two images have the same size.
6. The infrared image and visible image registration method based on visual attention according to claim 1, characterized in that the normalization and fusion in step 1-5 comprises: obtaining a normalized weight coefficient for every feature map by the GBVS method, multiplying every feature map by its normalized weight coefficient, and superposing the weighted feature maps into a saliency map.
8. The infrared image and visible image registration method based on visual attention according to claim 1, characterized in that the contour of the image is obtained in step 2-1 by an 8-connected chain code:
a. Scan the binary image from top to bottom and left to right for a pixel of value 1; take it as the pending start point and go to the next step;
b. Track from the start point: search the neighborhood of the current point counterclockwise along the chain-code directions for the next boundary point; set the pixel value of tracked points to 0 and of crossing points to 3; once every direction of a start point has been searched, move on to the next start point. When tracking reaches a boundary end point, a crossing point, or the start point, the current chain code ends; a crossing point becomes the next start point, and the chain code encoded from it belongs to the chain that reached the crossing point first;
When one connected boundary is complete, the forward scan continues and steps a and b are repeated until the whole binary image has been scanned and every object encoded, which yields the contour of the image.
9. The infrared image and visible image registration method based on visual attention according to claim 1, characterized in that in step 2-3 the threshold range of the Hu invariant moment ratios is 0.8 to 1.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310743788.0A CN103714548B (en) | 2013-12-27 | 2013-12-27 | Infrared image and visible image registration method based on visual attention |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103714548A true CN103714548A (en) | 2014-04-09 |
CN103714548B CN103714548B (en) | 2017-01-11 |
Family
ID=50407491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310743788.0A Expired - Fee Related CN103714548B (en) | 2013-12-27 | 2013-12-27 | Infrared image and visible image registration method based on visual attention |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103714548B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8396249B1 (en) * | 2008-12-23 | 2013-03-12 | Hrl Laboratories, Llc | Robot control based on saliency and invariant spatial representations using hierarchical spatial working memory |
US8396282B1 (en) * | 2008-10-31 | 2013-03-12 | Hrl Labortories, Llc | Method and system for computing fused saliency maps from multi-modal sensory inputs |
Non-Patent Citations (1)
Title |
---|
SONGQI YANG ET AL: "Infrared decoys recognition method based on geometrical features", 《INTERNATIONAL SYMPOSIUM ON PHOTOELECTRONIC DETECTION AND IMAGING 2013》, vol. 8907, 25 June 2013 (2013-06-25) * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971116A (en) * | 2014-04-24 | 2014-08-06 | 西北工业大学 | Area-of-interest detection method based on Kinect |
CN104299223A (en) * | 2014-08-21 | 2015-01-21 | 中国人民解放军63620部队 | Different-source image matching method based on Gabor coding |
CN104504723A (en) * | 2015-01-14 | 2015-04-08 | 西安电子科技大学 | Image registration method based on remarkable visual features |
CN104504723B (en) * | 2015-01-14 | 2017-05-17 | 西安电子科技大学 | Image registration method based on remarkable visual features |
CN104835129A (en) * | 2015-04-07 | 2015-08-12 | 杭州电子科技大学 | Two-band image fusion method by using local window visual attention extraction |
CN104835129B (en) * | 2015-04-07 | 2017-10-31 | 杭州电子科技大学 | A kind of two-hand infrared image fusion method that use local window vision attention is extracted |
CN105844235B (en) * | 2016-03-22 | 2018-12-14 | 南京工程学院 | The complex environment method for detecting human face of view-based access control model conspicuousness |
CN105844235A (en) * | 2016-03-22 | 2016-08-10 | 南京工程学院 | Visual saliency-based complex environment face detection method |
CN107240094A (en) * | 2017-05-19 | 2017-10-10 | 同济大学 | A kind of visible ray and infrared image reconstructing method for electrical equipment on-line checking |
CN107464252A (en) * | 2017-06-30 | 2017-12-12 | 南京航空航天大学 | A kind of visible ray based on composite character and infrared heterologous image-recognizing method |
CN109247910A (en) * | 2017-07-12 | 2019-01-22 | 京东方科技集团股份有限公司 | Blood tube display apparatus and blood vessel display method |
CN107545552A (en) * | 2017-09-08 | 2018-01-05 | 四川理工学院 | A kind of image rendering method |
CN109196551A (en) * | 2017-10-31 | 2019-01-11 | 深圳市大疆创新科技有限公司 | Image processing method, equipment and unmanned plane |
CN109196551B (en) * | 2017-10-31 | 2021-08-27 | 深圳市大疆创新科技有限公司 | Image processing method and device and unmanned aerial vehicle |
WO2019084825A1 (en) * | 2017-10-31 | 2019-05-09 | 深圳市大疆创新科技有限公司 | Image processing method and device, and unmanned aerial vehicle |
CN108520529A (en) * | 2018-03-30 | 2018-09-11 | 上海交通大学 | Visible light based on convolutional neural networks and infrared video method for tracking target |
CN108537833A (en) * | 2018-04-18 | 2018-09-14 | 昆明物理研究所 | A kind of quick joining method of infrared image |
CN108961154A (en) * | 2018-07-13 | 2018-12-07 | 福州大学 | Based on the solar cell hot spot detection method for improving non-down sampling contourlet transform |
CN108961154B (en) * | 2018-07-13 | 2022-12-23 | 福州大学 | Solar cell hot spot detection method based on improved non-subsampled contourlet transform |
CN109285183A (en) * | 2018-08-25 | 2019-01-29 | 南京理工大学 | A kind of multimode video image method for registering based on moving region image definition |
CN109285183B (en) * | 2018-08-25 | 2022-03-18 | 南京理工大学 | Multimode video image registration method based on motion region image definition |
CN109146930A (en) * | 2018-09-20 | 2019-01-04 | 河海大学常州校区 | A kind of electric power calculator room equipment is infrared and visible light image registration method |
CN109146930B (en) * | 2018-09-20 | 2021-10-08 | 河海大学常州校区 | Infrared and visible light image registration method for electric power machine room equipment |
CN113228104A (en) * | 2018-11-06 | 2021-08-06 | 菲力尔商业***公司 | Automatic co-registration of thermal and visible image pairs |
CN109858490A (en) * | 2018-12-21 | 2019-06-07 | 广东电网有限责任公司 | A kind of electrical equipment Infrared Image Features vector extracting method |
CN110021029A (en) * | 2019-03-22 | 2019-07-16 | 南京华捷艾米软件科技有限公司 | A kind of real-time dynamic registration method and storage medium suitable for RGBD-SLAM |
CN111462225A (en) * | 2020-03-31 | 2020-07-28 | 电子科技大学 | Centroid identification and positioning method of infrared light spot image |
CN111462225B (en) * | 2020-03-31 | 2022-03-25 | 电子科技大学 | Centroid identification and positioning method of infrared light spot image |
CN111681198A (en) * | 2020-08-11 | 2020-09-18 | 湖南大学 | Morphological attribute filtering multimode fusion imaging method, system and medium |
CN114066950A (en) * | 2021-10-27 | 2022-02-18 | 北京的卢深视科技有限公司 | Monocular speckle structure optical image matching method, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103714548B (en) | 2017-01-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170111 |
CF01 | Termination of patent right due to non-payment of annual fee |