CN104966085A - Remote sensing image region-of-interest detection method based on multi-significant-feature fusion - Google Patents

Info

Publication number: CN104966085A (application CN201510331174.0A; granted as CN104966085B)
Authority: CN (China)
Prior art keywords: color, remote sensing, color channel, image, information
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 张立保, 吕欣然, 王士一
Original and current assignee: Beijing Normal University
Application filed by Beijing Normal University; priority to CN201510331174.0A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]


Abstract

The present invention discloses a remote sensing image region-of-interest (ROI) detection method based on multi-salient-feature fusion, belonging to the technical fields of remote sensing image processing and image recognition. The method comprises the following steps: 1) obtain the color channels of a group of input remote sensing images and compute the color histogram of each channel; 2) compute the standardized salient weight of each color channel from its histogram; 3) compute the information-content salient feature map; 4) convert the image group from RGB color space to CIE Lab color space; 5) obtain clusters with a clustering algorithm; 6) compute the saliency value of each cluster to obtain the common salient feature map; 7) fuse the information-content salient feature map with the common salient feature map to obtain the final saliency map; 8) apply threshold segmentation with Otsu's method to extract the ROI. Compared with traditional methods, the present method detects remote sensing image ROIs accurately without requiring a prior knowledge base, and can therefore be widely applied in fields such as environmental monitoring, land use and agricultural survey.

Description

A remote sensing image region-of-interest detection method based on multi-salient-feature fusion
Technical field
The invention belongs to the technical fields of remote sensing image processing and image recognition, and specifically relates to a remote sensing image region-of-interest detection method based on multi-salient-feature fusion.
Background technology
With the rapid development of remote sensing technology, the volume of remote sensing data is growing quickly. Extracting regions of interest (ROIs) from remote sensing images reduces the complexity of subsequent analysis, so ROI extraction has become a recent research focus, and fast, accurate ROI detection is one of the problems urgently awaiting a solution. An effective solution would ease the contradiction between high-speed acquisition and low-speed interpretation of remote sensing data, and has practical value in related fields such as land use, disaster assessment, urban planning and environmental monitoring.
Traditional ROI detection methods for remote sensing images are mostly global and require prior knowledge. Building a prior knowledge base is itself a complex problem, involving expert knowledge, target-region features, background-region features and other information. Some methods require training on psychophysical data of color presentation and eye movements; others rely on a digital map of the same region to detect and classify ROIs. All of these algorithms need a prior knowledge base and have high computational complexity.
Visual attention models offer a brand-new perspective for remote sensing ROI detection. Unlike traditional methods, they are entirely data-driven, involve no external factors such as knowledge bases, and are both fast and accurate; they have therefore attracted increasing attention, and introducing them into remote sensing ROI detection is of real significance.
Among models based on low-level visual features, Itti et al. proposed the Itti visual attention method in "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis"; the model approximates the human visual system and produces a saliency map from multiple visual features. Among mathematically based models, Harel et al. proposed graph-based visual saliency (GBVS) in "Graph-Based Visual Saliency": it performs feature extraction with the traditional Itti attention mechanism, represents the relations between image pixels with a graph structure, and computes the saliency map with Markov chains. Among frequency-domain models, Achanta et al. proposed the frequency-tuned (FT) salient-region detection method in "Frequency-tuned Salient Region Detection": the input RGB image is converted to CIE Lab color space and Gaussian-smoothed, the arithmetic mean of the image feature vector is subtracted, and the per-pixel magnitude yields a uniform saliency map with sharp boundaries.
Models based on low-level visual features simulate the attention pattern of human vision but do not fully consider the frequency-domain characteristics of the image; they are also slow and inefficient, falling short of real-time requirements. Frequency-domain models are concise and easy to explain and implement, but when the salient region occupies too large a fraction of the image, or the background is too complex, parts of the background are mistakenly marked as salient, and their biological plausibility is not entirely clear. Domestic and foreign researchers have also proposed new algorithms applying visual saliency to remote sensing ROI detection; for example, Zhang et al., in "Fast Detection of Visual Saliency Regions in Remote Sensing Image based on Region Growing", use a wavelet transform to reduce image resolution and introduce a two-dimensional discrete moment transform into the visual features to generate a saliency map. These algorithms share a common shortcoming: they can only extract salient regions, not distinguish among them. Yet for a group of remote sensing images with similar ROIs, exploiting their similarity could exclude regions that interfere with ROI detection.
For the ROI mask, traditional methods often describe the ROI with a fixed-radius circle, which brings much redundant information when delineating irregular regions; using a single threshold is fast, but the resulting ROI is fragmented and the region description inaccurate. The maximum between-class variance method (Otsu's method) is a non-parametric, unsupervised automatic threshold-selection technique; it adaptively computes a single threshold, is simple to compute and is strongly adaptive.
Summary of the invention
The object of the present invention is to provide a remote sensing image ROI detection method based on multi-salient-feature fusion that detects ROIs accurately. Existing detection methods are mainly global and require prior knowledge, yet building a prior knowledge base is itself a complex problem involving expert knowledge, target-region features and background-region features. The present invention therefore focuses on two aspects:
1) no global search and no prior knowledge base are required;
2) improved ROI detection accuracy, yielding more precise ROI information.
The technical scheme of the present invention comprises five main processes: generation of the information-content salient feature map, generation of the common salient feature map, generation of the final saliency map, generation of the ROI template, and generation of the ROI. It specifically comprises the following steps:
Step 1: compute the color histograms, i.e. input a group of remote sensing images of size M × N, extract each color channel of every image, let f_c(x, y) denote the color intensity at position (x, y) of channel c, and build the intensity histogram H_c(i) of every image in each color channel, where M and N are the image height and width, x and y are the row and column coordinates, x = 1, 2, …, M, y = 1, 2, …, N, c = 1, 2, 3 indexes the color channels, and i = 0, 1, …, 255 is the pixel intensity value;
Step 2: compute the standardized salient weight of color channel c, i.e. from the histogram H_c(i) of channel c compute the information content In_c(i) of each pixel intensity value i and assign it to every pixel with that intensity; once all computation and assignment are complete, the information map LOG_c(x, y) of channel c is obtained; from this map obtain the saliency h_c of channel c, and from the saliencies of the channels compute the standardized salient weight w_c of each channel of every image;
Step 3: compute the information-content salient feature map, i.e. use the standardized weights w_c of the channels to compute a weighted preliminary information-content salient feature map for every image, then apply Gaussian smoothing to the preliminary map to filter noise and obtain the final information-content salient feature map of every image;
Step 4: convert the image group from RGB color space to CIE Lab color space, i.e. extract the R, G, B channel values of every pixel of every image and convert them to CIE Lab space, obtaining the L, a, b components; in RGB space, R, G and B are the red, green and blue channels; in CIE Lab space, L is the lightness (L = 0 denotes black, L = 100 denotes white), a is the position of the color between green and red (negative a green, positive a red), and b is the position of the color between blue and yellow (negative b blue, positive b yellow);
Step 5: cluster the pixels in CIE Lab space with the k-means algorithm, i.e. cluster the CIE Lab values of all pixels of the original image group into k clusters;
Step 6: compute the common salient feature map, i.e. divide the pixel count of the j-th cluster by the total pixel count to obtain the weight of the j-th cluster, j = 1, 2, …, k; after the weights of all k clusters are obtained, use the cluster weights and inter-cluster distances to compute each cluster's saliency value, and assign each cluster's saliency to all pixels belonging to it, obtaining a group of common salient feature maps;
Step 7: compute the final saliency map, i.e. multiply the information-content salient feature map obtained from the color-channel histograms by the common salient feature map obtained by k-means clustering in CIE Lab space, obtaining the final saliency map after multi-salient-feature fusion;
Step 8: extract the region of interest, i.e. obtain the segmentation threshold of the final saliency map with the maximum between-class variance (Otsu) method, binarize the map with this threshold into a template in which "1" marks the ROI and "0" marks the non-ROI, and finally multiply the template by the original image to obtain the final ROI extraction result.
Accompanying drawing explanation
Fig. 1 is the flow chart of the present invention.
Fig. 2 shows the group of four remote sensing sample images used in the present invention.
Fig. 3 shows the feature maps and final saliency maps of the present invention: (a) information-content salient feature maps of the sample images; (b) common salient feature maps of the sample images; (c) final saliency maps of the sample images.
Fig. 4 compares the saliency maps generated for the sample images by the present method and by other methods: (a) Itti method; (b) GBVS method; (c) FT method; (d) the present method.
Fig. 5 compares the ROIs detected for the sample images by the present method and by other methods: (a) Itti method; (b) GBVS method; (c) FT method; (d) the present method.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings. Its overall framework is shown in Fig. 1; the implementation of each step is now described.
Step 1: compute the color histograms.
Input a group of remote sensing images of size M × N, as shown in Fig. 2. The group contains Q images, denoted I_p, p = 1, 2, …, Q. Extract each color channel of every image I_p, let f_c(x, y) denote the color intensity at position (x, y) of channel c of image I_p, and build the intensity histogram H_c(i) of each image in each color channel, where M and N are the image height and width, x = 1, 2, …, M, y = 1, 2, …, N, c = 1, 2, 3 indexes the color channels, and i = 0, 1, …, 255 is the pixel intensity value.
The histogram of each color channel of every image in the group is obtained by:

    H_c(i) = \frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} \delta_c(x, y)

where δ_c(x, y) is a binary indicator image for channel c:

    \delta_c(x, y) = \begin{cases} 1, & f_c(x, y) = i \\ 0, & \text{otherwise} \end{cases}
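The normalized per-channel histogram defined above can be sketched in a few lines of Python (an illustrative sketch only, not the patent's implementation; the channel is a plain nested list of 8-bit intensities):

```python
def channel_histogram(channel, bins=256):
    """Normalized intensity histogram H_c(i) of one color channel.

    `channel` is a 2-D list of integer intensities in [0, bins-1];
    each bin count is divided by M*N, so the bins sum to 1.
    """
    m = len(channel)
    n = len(channel[0])
    hist = [0.0] * bins
    for row in channel:
        for v in row:
            hist[v] += 1.0
    return [h / (m * n) for h in hist]
```

For a real M × N remote sensing channel the same loop applies unchanged with bins = 256.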
Step 2: compute the standardized salient weight of color channel c.
From the color histogram H_c(i) of channel c of image I_p, compute the information content In_c(i) of each pixel intensity value i, then use it in the computation and assignment below to finally obtain the standardized salient weight w_c of each channel of image I_p. This is realized in four sub-steps.
(1) From the histogram H_c(i) of channel c of image I_p, compute the information content of each intensity value:

    In_c(i) = -\ln(H_c(i))

(2) Assign this information content to every pixel of channel c whose intensity equals i, obtaining the information map LOG_c(x, y) of channel c:

    LOG_c(x, y) = In_c(i), \quad i = f_c(x, y)

(3) From the information map LOG_c(x, y), compute the saliency h_c of channel c:

    h_c = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} LOG_c(x, y)}{\sum_{c=1}^{3} \sum_{x=1}^{M} \sum_{y=1}^{N} LOG_c(x, y)}

With three color channels, h_1, h_2 and h_3 denote the saliencies of channels 1, 2 and 3.
(4) Divide the saliency of channel c by the sum of the saliencies of the three channels and take the negative logarithm, obtaining the standardized salient weight w_c:

    w_1 = -\log\left(\frac{h_1}{h_1 + h_2 + h_3}\right), \quad
    w_2 = -\log\left(\frac{h_2}{h_1 + h_2 + h_3}\right), \quad
    w_3 = -\log\left(\frac{h_3}{h_1 + h_2 + h_3}\right)

where w_1, w_2 and w_3 are the standardized salient weights of channels 1, 2 and 3.
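Sub-steps (1)–(4) can be sketched as follows (a hedged illustration: the function names `info_map` and `channel_weights` are chosen here and do not appear in the patent, and the natural logarithm is assumed where the patent writes log without a base):

```python
import math

def info_map(channel, hist):
    """LOG_c(x, y): assign In_c(i) = -ln(H_c(i)) to every pixel of intensity i."""
    return [[-math.log(hist[v]) for v in row] for row in channel]

def channel_weights(info_maps):
    """Standardized salient weights w_c = -log(h_c / (h_1 + h_2 + h_3)),
    where h_c is channel c's share of the total information content.
    Natural log is used here; the patent does not fix the base."""
    sums = [sum(sum(row) for row in m) for m in info_maps]
    total = sum(sums)
    return [-math.log(s / total) for s in sums]
```

A channel whose information map carries a small share of the total gets a large weight, so rare (informative) channels dominate the weighted fusion in step 3.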
Step 3: compute the information-content salient feature map.
Using the standardized salient weights w_c of the channels of image I_p, compute the preliminary information-content salient feature map Smap(x, y) as a weighted sum, then apply Gaussian smoothing to filter noise and obtain the final information-content salient feature map SS(x, y):

    Smap(x, y) = \sum_{c=1}^{3} w_c f_c(x, y)

    SS(x, y) = g(x, y) * Smap(x, y)

where g(x, y) denotes the Gaussian filter and * denotes convolution.
Through the above steps, the information-content salient feature map of each image in the group is obtained.
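A minimal sketch of the weighted sum and the smoothing follows (assumptions: images are nested lists, and a fixed 3 × 3 kernel with replicated borders stands in for the patent's unspecified Gaussian filter g(x, y)):

```python
def information_saliency(channels, weights):
    """Preliminary map Smap(x, y) = sum_c w_c * f_c(x, y)."""
    rows, cols = len(channels[0]), len(channels[0][0])
    return [[sum(w * ch[x][y] for w, ch in zip(weights, channels))
             for y in range(cols)] for x in range(rows)]

def gaussian_smooth(img):
    """3x3 Gaussian blur (kernel [1 2 1; 2 4 2; 1 2 1] / 16) with
    replicated borders, standing in for the patent's Gaussian filter."""
    kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            acc = 0.0
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    xx = min(max(x + dx, 0), rows - 1)
                    yy = min(max(y + dy, 0), cols - 1)
                    acc += kernel[dx + 1][dy + 1] * img[xx][yy]
            out[x][y] = acc / 16.0
    return out
```

A production implementation would use a larger separable kernel whose sigma matches the noise level, but the structure is the same.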
Step 4: convert the remote sensing images from RGB color space to CIE Lab color space.
Because the color channels of CIE Lab largely remove luminance information, reflect content closer to the essence of color perception, and therefore express smooth color variation better, and because CIE Lab space has a clear advantage in color uniformity, clustering is performed in CIE Lab space; the color-space conversion is carried out first.
Extract the R, G, B channel values of every pixel of every image in the group and convert them to CIE Lab space, obtaining the L, a, b components of each pixel; the converted image group is denoted ILab_p. In RGB space, R, G and B are the red, green and blue channels. The three channels of CIE Lab space are the lightness L (L = 0 denotes black, L = 100 denotes white), the position a of the color between green and red (negative a green, positive a red), and the position b of the color between blue and yellow (negative b blue, positive b yellow).
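The per-pixel conversion can be sketched as below (a standard sRGB-to-CIE-Lab conversion under a D65 white point is assumed; the patent does not fix the exact conversion constants):

```python
def rgb_to_lab(r, g, b):
    """Convert one sRGB pixel (components 0-255) to CIE Lab, D65 white point."""
    def lin(u):  # undo sRGB gamma
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear sRGB -> CIE XYZ (D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0          # lightness, 0 (black) .. 100 (white)
    a = 500.0 * (fx - fy)          # green (-) .. red (+)
    b_ = 200.0 * (fy - fz)         # blue (-) .. yellow (+)
    return L, a, b_
```

Libraries such as scikit-image or OpenCV provide the same conversion for whole arrays; the explicit form shows where the L, a, b interpretations in the text come from.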
Step 5: cluster the color features.
Use the k-means clustering algorithm to cluster the pixels in CIE Lab space: the Lab values of all pixels of the image group are clustered into k clusters. The implementation is as follows:
(1) Extract the L, a, b channels of every image of the group in CIE Lab space and rescale the pixel values of the three channels so that they share the same range.
(2) Cluster the pixel values of the three channels of all images of the group simultaneously, so that the sum of the squared distances from each pixel value to its nearest cluster center is minimized; all pixels sharing the same nearest center form one cluster. The sum of squared distances W is:

    W = \min\left(\sum_{r=1}^{n} \lvert pi_r - a_j \rvert^2\right)

where pi_r (r = 1, 2, …, n) are the pixel values, n is the total number of pixels, and a_j (j = 1, 2, …, k) is the cluster center nearest to pi_r.
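The clustering of sub-step (2) can be sketched with a plain Lloyd's k-means (an illustration; the patent does not specify the initialization, so the seed centers are passed in explicitly):

```python
def kmeans(points, centers, iters=20):
    """Lloyd's k-means on Lab triples.

    `points` is a list of [L, a, b] values; `centers` holds the k initial
    seeds and is updated in place. Returns (labels, centers), where
    labels[r] is the index of the cluster containing points[r].
    """
    def d2(p, q):  # squared Euclidean distance
        return sum((u - v) ** 2 for u, v in zip(p, q))
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center
        labels = [min(range(len(centers)), key=lambda j: d2(p, centers[j]))
                  for p in points]
        # update step: move each center to the mean of its members
        for j in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels, centers
```

In practice a library routine (e.g. scikit-learn's KMeans) with k-means++ seeding would be used on the flattened pixel array of the whole image group.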
Step 6: compute the common salient feature map.
After the weights of all k clusters have been computed, use the cluster weights and the distances between clusters to compute the saliency value of each cluster, then assign each cluster's saliency to all pixels belonging to it, obtaining a group of common salient feature maps. This requires three sub-steps:
(1) Divide the number of pixels contained in the j-th cluster l_j by the total pixel count of the image group; the quotient is the weight ω(l_j) of cluster l_j, j = 1, 2, …, k.
(2) Let D(l_t, l_j) be the color distance between clusters l_t and l_j. The saliency value CL(l_j) of each cluster is:

    CL(l_j) = \sum_{t \neq j} \omega(l_t) \, D(l_t, l_j)

where

    D(l_t, l_j) = -\ln\left(1 - \frac{1}{2} \sum_{s=1}^{m} \frac{(q_{ts} - q_{js})^2}{q_{ts} + q_{js}}\right)

Here j, t = 1, 2, …, k, and q_{ts} is the probability of the s-th color among the m colors occurring in cluster t, s = 1, 2, …, m.
(3) After clustering, set the saliency of each pixel to the saliency of its cluster, obtaining the common salient feature map SM(x, y): when ILab_p(x, y) ∈ l_j (j = 1, 2, …, k, p = 1, 2, …, Q),

    SM(x, y) = CL(l_j)

Through the above steps, the common salient feature map of each image in the group is obtained.
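Sub-steps (1)–(2) can be sketched as follows (a hedged illustration that reads the cluster-saliency formula as CL(l_j) = Σ_{t≠j} ω(l_t) D(l_t, l_j); the name `cluster_saliency` is chosen here, and a small floor keeps the logarithm finite when two clusters' color distributions are disjoint):

```python
import math

def cluster_saliency(weights, color_probs):
    """Per-cluster saliency from cluster weights and color distributions.

    weights[j]     = omega(l_j), the fraction of pixels in cluster j.
    color_probs[j] = [q_j1, ..., q_jm], the color distribution of cluster j.
    D(l_t, l_j) = -ln(1 - 0.5 * sum_s (q_ts - q_js)^2 / (q_ts + q_js)),
    a logarithmic form of the chi-squared distance between distributions.
    """
    k = len(weights)
    def dist(t, j):
        chi2 = 0.5 * sum((qt - qj) ** 2 / (qt + qj)
                         for qt, qj in zip(color_probs[t], color_probs[j])
                         if qt + qj > 0)
        return -math.log(max(1.0 - chi2, 1e-12))  # floor avoids log(0)
    return [sum(weights[t] * dist(t, j) for t in range(k) if t != j)
            for j in range(k)]
```

Clusters whose color distribution differs strongly from the heavily weighted clusters receive large saliency, which is then broadcast to their member pixels in sub-step (3).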
Step 7: compute the final saliency map.
Multiply the information-content salient feature map obtained from the color channels element-wise by the common salient feature map obtained by k-means clustering in CIE Lab space, obtaining the final saliency map S(x, y) of every image in the group after multi-salient-feature fusion:
S(x,y)=SS(x,y)×SM(x,y)
Step 8: extract the region of interest.
Obtain the segmentation threshold of the final saliency map with the maximum between-class variance (Otsu) method, binarize the saliency map with this threshold into a template in which "1" marks the ROI and "0" marks the non-ROI, and finally multiply the template by the original image to obtain the final ROI extraction result.
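Otsu's single-threshold selection used in this step can be sketched in pure Python (an illustration over a flat list of 8-bit saliency values; a real implementation would first rescale the saliency map to the 0–255 range):

```python
def otsu_threshold(values):
    """Otsu's method: the threshold maximizing between-class variance.

    `values` is a flat list of integers in [0, 255]. Pixels with
    value > threshold are foreground (ROI), the rest background.
    """
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0   # running sum of background intensities
    w_b = 0       # running background pixel count
    best, best_t = -1.0, 0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        between = w_b * w_f * (m_b - m_f) ** 2
        if between > best:
            best, best_t = between, t
    return best_t
```

The binary template is then simply `[1 if v > t else 0 for v in values]`, reshaped to the image size and multiplied by the original image.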
The effect of the present invention is further illustrated by the following experimental results and analysis:
1. Experimental data
A group of visible-light remote sensing images of a Beijing suburb was selected from SPOT5 satellite source images, and 1024 × 1024 crops were generated from them as the test images, shown in Fig. 2.
2. Comparative experiments
To evaluate the performance of the present method, the following comparison was designed: representative existing visual attention methods, namely the Itti, GBVS and FT methods, were compared with the present method. The saliency maps and ROI maps generated by the different methods are compared subjectively in Fig. 4 and Fig. 5: in Fig. 4, (a)–(d) are the saliency maps generated by the Itti, GBVS, FT and present methods respectively; in Fig. 5, (a)–(d) are the corresponding ROI maps.
The comparison shows that the saliency map obtained with the Itti model has very low resolution, only 1/256 of the original size, so it must be upsampled before the ROI is extracted. The GBVS model is based on the Itti model and merely introduces Markov chains when generating the saliency map. The ROIs obtained with these two models are larger than the regions that actually need to be extracted, i.e. unwanted parts are included. The FT model extracts good results when the background frequency varies little, but large background-frequency variation disturbs its extraction results, whereas the present algorithm still obtains good detection results.

Claims (2)

1. A remote sensing image region-of-interest detection method based on multi-salient-feature fusion, the method processing a group of remote sensing images as follows: first, using the color information of the images, an information-content salient feature map is obtained by building the color histograms of the different color channels and weighting them; next, the image group is clustered in CIE Lab color space with the k-means algorithm and saliency values are computed, giving a group of common salient feature maps in CIE Lab space; the two groups of maps are then fused into the final saliency maps; finally, threshold segmentation with the maximum between-class variance method extracts the regions of interest; characterized by comprising the following steps:
Step 1: compute the color histograms, i.e. input a group of remote sensing images of size M × N, extract each color channel of every image, let f_c(x, y) denote the color intensity at position (x, y) of channel c, and build the intensity histogram H_c(i) of every image in each color channel, where M and N are the image height and width, x and y are the row and column coordinates, x = 1, 2, …, M, y = 1, 2, …, N, c = 1, 2, 3 indexes the color channels, and i = 0, 1, …, 255 is the pixel intensity value;
Step 2: compute the standardized salient weight of color channel c, i.e. from the histogram H_c(i) of channel c compute the information content In_c(i) of each pixel intensity value i and assign it to every pixel with that intensity; once all computation and assignment are complete, the information map LOG_c(x, y) of channel c is obtained; from this map obtain the saliency h_c of channel c, and from the saliencies of the channels compute the standardized salient weight w_c of each channel of every image;
Step 3: compute the information-content salient feature map, i.e. use the standardized weights w_c of the channels to compute a weighted preliminary information-content salient feature map for every image, then apply Gaussian smoothing to the preliminary map to filter noise and obtain the final information-content salient feature map of every image;
Step 4: convert the image group from RGB color space to CIE Lab color space, i.e. extract the R, G, B channel values of every pixel of every image and convert them to CIE Lab space, obtaining the L, a, b components; in RGB space, R, G and B are the red, green and blue channels; in CIE Lab space, L is the lightness (L = 0 denotes black, L = 100 denotes white), a is the position of the color between green and red (negative a green, positive a red), and b is the position of the color between blue and yellow (negative b blue, positive b yellow);
Step 5: cluster the pixels in CIE Lab space with the k-means algorithm, i.e. cluster the CIE Lab values of all pixels of the original image group into k clusters;
Step 6: compute the common salient feature map, i.e. divide the pixel count of the j-th cluster by the total pixel count to obtain the weight of the j-th cluster, j = 1, 2, …, k; after the weights of all k clusters are obtained, use the cluster weights and inter-cluster distances to compute each cluster's saliency value, and assign each cluster's saliency to all pixels belonging to it, obtaining a group of common salient feature maps;
Step 7: compute the final saliency map, i.e. multiply the information-content salient feature map obtained from the color-channel histograms by the common salient feature map obtained by k-means clustering in CIE Lab space, obtaining the final saliency map after multi-salient-feature fusion;
Step 8: extract the region of interest, i.e. obtain the segmentation threshold of the final saliency map with the maximum between-class variance (Otsu) method, binarize the map with this threshold into a template in which "1" marks the ROI and "0" marks the non-ROI, and finally multiply the template by the original image to obtain the final ROI extraction result.
2. The remote sensing image region-of-interest extraction method based on salient-feature clustering according to claim 1, characterized in that the detailed procedure of step 2 is:
1) from the color histogram H_c(i) of color channel c, compute the information quantity In_c(i) of each pixel intensity value i:
In_c(i) = -ln(H_c(i))
2) assign this information quantity to every pixel whose intensity value equals i, obtaining the information map LOG_c(x, y) of color channel c, that is:
LOG_c(x, y) = In_c(i), where i = f_c(x, y) is the intensity value of the pixel at (x, y) in channel c;
3) use the information map LOG_c(x, y) of color channel c to compute its significance h_c; since the image contains three color channels, h_1, h_2 and h_3 denote the significance of color channels 1, 2 and 3 respectively:
h_c = Σ_{x=1}^{M} Σ_{y=1}^{M} LOG_c(x, y) / ( Σ_{c=1}^{3} Σ_{x=1}^{M} Σ_{y=1}^{M} LOG_c(x, y) )
4) divide the significance of color channel c by the sum of the significances of the three color channels and take the negative logarithm of the quotient to obtain the normalized saliency weight w_c of the channel:
w_1 = -log( h_1 / (h_1 + h_2 + h_3) )
w_2 = -log( h_2 / (h_1 + h_2 + h_3) )
w_3 = -log( h_3 / (h_1 + h_2 + h_3) )
Since the image contains three color channels, w_1, w_2 and w_3 denote the normalized saliency weights of color channels 1, 2 and 3 respectively.
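Steps 1) through 4) of claim 2 can be sketched in one routine: per-channel histogram → information map LOG_c → significance h_c → normalized weight w_c. A small epsilon guarding ln(0) for empty histogram bins, and the use of the natural logarithm throughout (the claim mixes "ln" and "log"), are implementation assumptions:

```python
import numpy as np

def channel_saliency_weights(image, bins=256):
    """Steps 1)-4) of claim 2: per-channel histogram -> information map
    LOG_c -> significance h_c -> normalized saliency weight w_c.
    `image` is an M x N x 3 integer array with values in [0, bins).
    Returns (log_maps, h, w)."""
    maps, totals = [], []
    for c in range(3):
        chan = image[..., c]
        # H_c(i): normalized histogram of channel c.
        hist = np.bincount(chan.ravel(), minlength=bins) / chan.size
        # In_c(i) = -ln(H_c(i)); epsilon avoids ln(0) for empty bins.
        info = -np.log(hist + 1e-12)
        # LOG_c(x, y) = In_c(f_c(x, y)): spread info back onto the pixels.
        log_map = info[chan]
        maps.append(log_map)
        totals.append(log_map.sum())
    h = np.array(totals) / sum(totals)    # significance h_c (sums to 1)
    w = -np.log(h / h.sum())              # normalized weight w_c
    return maps, h, w
```

Note that by this formula a channel whose histogram is more concentrated (less informative) ends up with a smaller h_c and hence a larger weight w_c, exactly as the negative logarithm in step 4) dictates.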
CN201510331174.0A 2015-06-16 2015-06-16 Remote sensing image region-of-interest detection method based on multi-salient-feature fusion Active CN104966085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510331174.0A CN104966085B (en) 2015-06-16 2015-06-16 Remote sensing image region-of-interest detection method based on multi-salient-feature fusion

Publications (2)

Publication Number Publication Date
CN104966085A true CN104966085A (en) 2015-10-07
CN104966085B CN104966085B (en) 2018-04-03

Family

ID=54220120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510331174.0A Active CN104966085B (en) 2015-06-16 2015-06-16 Remote sensing image region-of-interest detection method based on multi-salient-feature fusion

Country Status (1)

Country Link
CN (1) CN104966085B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100239170A1 (en) * 2009-03-18 2010-09-23 Asnis Gary I System and method for target separation of closely spaced targets in automatic target recognition
US20120051606A1 (en) * 2010-08-24 2012-03-01 Siemens Information Systems Ltd. Automated System for Anatomical Vessel Characteristic Determination
CN103810710A (en) * 2014-02-26 2014-05-21 西安电子科技大学 Multispectral image change detection method based on semi-supervised dimensionality reduction and saliency map
CN104463224A (en) * 2014-12-24 2015-03-25 武汉大学 Hyperspectral image demixing method and system based on abundance significance analysis


Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106780422A (en) * 2016-12-28 2017-05-31 深圳市美好幸福生活安全***有限公司 A kind of notable figure fusion method based on Choquet integrations
CN106951841A (en) * 2017-03-09 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of multi-object tracking method based on color and apart from cluster
CN106951841B (en) * 2017-03-09 2020-05-12 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multi-target tracking method based on color and distance clustering
CN107239760A (en) * 2017-06-05 2017-10-10 中国人民解放军军事医学科学院基础医学研究所 A kind of video data handling procedure and system
CN107239760B (en) * 2017-06-05 2020-07-17 中国人民解放军军事医学科学院基础医学研究所 Video data processing method and system
CN110612534A (en) * 2017-06-07 2019-12-24 赫尔实验室有限公司 System for detecting salient objects in images
CN110612534B (en) * 2017-06-07 2023-02-21 赫尔实验室有限公司 System, computer-readable medium, and method for detecting salient objects in an image
CN108364288B (en) * 2018-03-01 2022-04-05 北京航空航天大学 Segmentation method and device for breast cancer pathological image
CN108364288A (en) * 2018-03-01 2018-08-03 北京航空航天大学 Dividing method and device for breast cancer pathological image
CN108335307A (en) * 2018-04-19 2018-07-27 云南佳叶现代农业发展有限公司 Adaptive tobacco leaf picture segmentation method and system based on dark primary
CN108596920A (en) * 2018-05-02 2018-09-28 北京环境特性研究所 A kind of Target Segmentation method and device based on coloured image
CN108764106A (en) * 2018-05-22 2018-11-06 中国计量大学 Multiple dimensioned colour image human face comparison method based on cascade structure
CN108764106B (en) * 2018-05-22 2021-12-21 中国计量大学 Multi-scale color image face comparison method based on cascade structure
CN109035254A (en) * 2018-09-11 2018-12-18 中国水产科学研究院渔业机械仪器研究所 Based on the movement fish body shadow removal and image partition method for improving K-means cluster
CN109858394A (en) * 2019-01-11 2019-06-07 西安电子科技大学 A kind of remote sensing images water area extracting method based on conspicuousness detection
CN109949906A (en) * 2019-03-22 2019-06-28 上海鹰瞳医疗科技有限公司 Pathological section image procossing and model training method and equipment
CN110268442B (en) * 2019-05-09 2023-08-29 京东方科技集团股份有限公司 Computer-implemented method of detecting a foreign object on a background object in an image, device for detecting a foreign object on a background object in an image, and computer program product
CN110268442A (en) * 2019-05-09 2019-09-20 京东方科技集团股份有限公司 In the picture detect background objects on exotic computer implemented method, in the picture detect background objects on exotic equipment and computer program product
CN110232378A (en) * 2019-05-30 2019-09-13 苏宁易购集团股份有限公司 A kind of image interest point detecting method, system and readable storage medium storing program for executing
CN110232378B (en) * 2019-05-30 2023-01-20 苏宁易购集团股份有限公司 Image interest point detection method and system and readable storage medium
CN111339953A (en) * 2020-02-27 2020-06-26 广西大学 Clustering analysis-based mikania micrantha monitoring method
CN111400557A (en) * 2020-03-06 2020-07-10 北京市环境保护监测中心 Method and device for automatically identifying atmospheric pollution key area
CN111400557B (en) * 2020-03-06 2023-08-08 北京市环境保护监测中心 Method and device for automatically identifying important areas of atmospheric pollution
CN113139934A (en) * 2021-03-26 2021-07-20 上海师范大学 Rice grain counting method
CN113139934B (en) * 2021-03-26 2024-04-30 上海师范大学 Rice grain counting method
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN115131327A (en) * 2022-07-14 2022-09-30 电子科技大学 Color feature fused display screen color line defect detection method
CN115131327B (en) * 2022-07-14 2024-04-30 电子科技大学 Color line defect detection method for display screen with fused color features

Also Published As

Publication number Publication date
CN104966085B (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN104966085A (en) Remote sensing image region-of-interest detection method based on multi-significant-feature fusion
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
CN103177458B (en) A kind of visible remote sensing image region of interest area detecting method based on frequency-domain analysis
CN106384117B (en) A kind of vehicle color identification method and device
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN103020985B (en) A kind of video image conspicuousness detection method based on field-quantity analysis
CN103020992B (en) A kind of video image conspicuousness detection method based on motion color-associations
CN106933816A (en) Across camera lens object retrieval system and method based on global characteristics and local feature
CN107301376B (en) Pedestrian detection method based on deep learning multi-layer stimulation
Yuan et al. Learning to count buildings in diverse aerial scenes
CN112906550B (en) Static gesture recognition method based on watershed transformation
CN109886146B (en) Flood information remote sensing intelligent acquisition method and device based on machine vision detection
CN104217440B (en) A kind of method extracting built-up areas from remote sensing images
CN108197650A (en) The high spectrum image extreme learning machine clustering method that local similarity is kept
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN109635634A (en) A kind of pedestrian based on stochastic linear interpolation identifies data enhancement methods again
CN106951863B (en) Method for detecting change of infrared image of substation equipment based on random forest
CN104392459A (en) Infrared image segmentation method based on improved FCM (fuzzy C-means) and mean drift
Ding et al. FCM image segmentation algorithm based on color space and spatial information
CN112949738A (en) Multi-class unbalanced hyperspectral image classification method based on EECNN algorithm
CN111639589A (en) Video false face detection method based on counterstudy and similar color space
CN110569764B (en) Mobile phone model identification method based on convolutional neural network
CN106971402B (en) SAR image change detection method based on optical assistance
CN104392209B (en) A kind of image complexity evaluation method of target and background

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant