CN109741293A - Saliency detection method and device - Google Patents
Saliency detection method and device
- Publication number
- CN109741293A (application CN201811386757.3A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- segmented image
- superpixel
- feature
- low level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
An embodiment of the present invention provides a saliency detection method and device. The method includes: performing superpixel segmentation on an original image at different scales to obtain segmented images of different scales; determining a certain number of low-level features, computing a first feature map of the segmented image of each scale under each low-level feature, and fusing the first feature maps of all scales under the same low-level feature to obtain a second feature map for each low-level feature; for the segmented image of any one scale, optimizing the segmented image using dark-channel and center-prior strategies and combining the second feature maps of the low-level features to obtain a saliency map of that segmented image; and integrating the saliency maps of the segmented images of all scales to form a final saliency map. The embodiment of the present invention computes saliency more reasonably and accurately.
Description
Technical field
The embodiments of the present invention relate to the technical field of image processing, and in particular to a saliency detection method and device.
Background technique
The human visual system (HVS) can rapidly pick out the regions of greatest interest in a scene. Inspired by a large body of past research on human visual attention mechanisms, researchers have applied the attention mechanism to the field of computer vision as visual saliency detection. Its main research goal is to quickly locate the most interesting, i.e. most attractive, region in an image, thereby greatly improving the performance of visual processing mechanisms. As a refined topic within the field of object detection, saliency detection plays a very important role in a variety of computer vision tasks and applications. As a preprocessing step, accurate and efficient saliency detection is widely used across computer vision, for example in image classification, object detection, image segmentation, image retrieval and pedestrian re-identification.
In general, visual attention is driven by low-level visual stimuli. Over the past decade or more, a large number of saliency detection models based on visual attention mechanisms have been proposed. Most of these models are only suited to visible-light scenes: although they achieve great success on high-contrast scenes, their accuracy drops sharply on real-world scenes such as rain, haze and other natural weather conditions, or night scenes with very poor lighting. Most saliency computation models focus on bottom-up methods, which use low-level image features to measure the contrast between an image region and its surroundings; since low-contrast images are easily disturbed by various kinds of noise, scene changes and texture variations, the performance of traditional saliency detection methods declines greatly in these conditions.
Summary of the invention
Embodiments of the present invention provide a saliency detection method and device that overcome, or at least partially solve, the above problems.
In a first aspect, an embodiment of the present invention provides a saliency detection method, comprising:
performing superpixel segmentation on an original image at different scales to obtain segmented images of different scales;
determining a certain number of low-level features, computing a first feature map of the segmented image of each scale under each low-level feature, and fusing the first feature maps of all scales under the same low-level feature to obtain a second feature map for each low-level feature;
for the segmented image of any one scale, optimizing the segmented image using dark-channel and center-prior strategies, and combining the second feature maps of the low-level features to obtain a saliency map of the segmented image;
integrating the saliency maps of the segmented images of all scales to form a final saliency map.
In a second aspect, an embodiment of the present invention provides a saliency detection device, comprising:
a superpixel segmentation module, configured to perform superpixel segmentation on an original image at different scales to obtain segmented images of different scales;
a feature extraction module, configured to determine a certain number of low-level features, compute a first feature map of the segmented image of each scale under each low-level feature, and fuse the first feature maps of all scales under the same low-level feature to obtain a second feature map for each low-level feature;
an optimization module, configured to, for the segmented image of any one scale, optimize the segmented image using dark-channel and center-prior strategies and combine the second feature maps of the low-level features to obtain a saliency map of the segmented image;
an integration module, configured to integrate the saliency maps of the segmented images of all scales to form a final saliency map.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method provided in the first aspect.
With the saliency detection method and device provided by the embodiments of the present invention, performing superpixel segmentation on the original image at different scales makes it possible to integrate the saliency segmentation results of segmented images of different precisions. Determining a certain number of low-level features and computing the first feature map of the segmented image of each scale under each low-level feature captures the local differences within a segmented image; fusing the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature captures the global differences within the segmented images. Optimizing the segmented image with the dark-channel and center-prior strategies makes it possible to effectively identify low-intensity regions and better predict salient objects. Finally, the saliency maps of the segmented images of all scales are integrated to form the final saliency map, so that saliency is computed more reasonably and accurately.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the saliency detection method provided by an embodiment of the present invention;
Fig. 2 is a structural diagram of the saliency detection device provided by an embodiment of the present invention;
Fig. 3 is a diagram of the physical structure of the electronic device provided by an embodiment of the present invention.
Specific embodiment
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow diagram of the saliency detection method provided by an embodiment of the present invention. As shown in Fig. 1, the method comprises:
S100: performing superpixel segmentation on the original image at different scales to obtain segmented images of different scales.
It should be noted that the accuracy of the saliency maps generated by different superpixel segmentation algorithms is dominated by the number of superpixels: background regions may have similar superpixels across different scales, whereas salient regions may have similar superpixels at only a few scales. Therefore, the embodiment of the present invention performs superpixel segmentation on the original image at several different scales to obtain segmented images of different scales. It should be understood that a segmented image consists of several superpixel blocks, and each superpixel block contains several pixels.
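Step S100 can be sketched as follows. The patent does not name a particular superpixel algorithm (SLIC and similar methods are common choices), so the grid partition below is only a minimal stand-in showing how the same image is segmented at several scales:

```python
import numpy as np

def grid_superpixels(image, n_side):
    """Partition an H x W image into roughly n_side x n_side blocks.

    A simplified stand-in for a real superpixel algorithm such as SLIC:
    each pixel receives the label of the grid cell it falls in, so the
    number of "superpixels" (i.e. the scale) is controlled by n_side.
    """
    h, w = image.shape[:2]
    rows = np.minimum(np.arange(h) * n_side // h, n_side - 1)
    cols = np.minimum(np.arange(w) * n_side // w, n_side - 1)
    return rows[:, None] * n_side + cols[None, :]

# Segment the same image at several scales, as in step S100.
image = np.random.rand(60, 80, 3)
scales = [5, 10, 20]                    # coarse -> fine
segmentations = [grid_superpixels(image, n) for n in scales]
```

Each label image in `segmentations` plays the role of one "segmented image of a different scale" in the steps that follow.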
S101: determining a certain number of low-level features, computing the first feature map of the segmented image of each scale under each low-level feature, and fusing the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature.
It should be noted that, to extract useful visual feature information from a low-contrast image, the correlated noise in the image background must be removed so that it does not interfere with the detection of foreground salient objects. The embodiment of the present invention therefore determines several low-level features. Low-level features are basic features that can be extracted from an image automatically without any shape or spatial-relationship information; for example, the common threshold method is one kind of low-level feature extraction. All low-level methods can also be applied to high-level feature extraction, so as to find shapes in the image.
The embodiment of the present invention first computes the first feature map of the segmented image of each scale under each low-level feature. In a first feature map, the feature value of each superpixel characterizes the difference between that superpixel and the other superpixels under the given low-level feature. Since a superpixel block is essentially a set of pixels in a region, the comparison between two superpixel blocks amounts to the comparison between a pixel region and its neighbouring pixel sets; this process is equivalent to a local contrast. By fusing the first feature maps of all scales under the same low-level feature, the second feature map of each low-level feature is obtained. Because the second feature map fuses the superpixel feature values of all scales, there are no longer superpixels in it: the feature value of each pixel in the second feature map characterizes the difference between that pixel and the other pixels under the low-level feature.
S102: for the segmented image of any one scale, optimizing the segmented image using dark-channel and center-prior strategies, and combining the second feature maps of the low-level features to obtain the saliency map of the segmented image.
It should be noted that the role of the dark channel prior is to remove haze from the input image, preventing this noise from causing unnecessary interference with feature extraction. According to observations of outdoor images, some pixels or regions typically have at least one colour channel with a very low intensity. This means that the dark channel of image pixels is mainly produced by dark or characteristic regions, which usually occur within salient objects. Therefore, the dark channel prior of the image can be used to estimate the saliency of superpixels. In addition, when people look at a picture they tend to concentrate on the objects near the centre of the picture, so the saliency values of superpixels close to the image centre should be given higher weight. Thus, the embodiment of the present invention can efficiently identify low-intensity regions through the dark-channel computation and better predict salient objects through the center prior.
S103: integrating the saliency maps of the segmented images of all scales to form the final saliency map.
Specifically, the embodiment of the present invention can average the saliency maps of the segmented images of all scales to form the final saliency map. Integrating the saliency maps of all scales eliminates noise in the generated saliency map.
In the embodiment of the present invention, performing superpixel segmentation on the original image at different scales makes it possible to integrate the saliency segmentation results of segmented images of different precisions. Determining a certain number of low-level features and computing the first feature maps of the segmented images of different scales under each low-level feature captures the local differences within a segmented image; fusing the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature captures the global differences. Optimizing the segmented image with the dark-channel and center-prior strategies effectively identifies low-intensity regions and better predicts salient objects. Finally, the saliency maps of the segmented images of all scales are integrated into the final saliency map, so that saliency is computed more reasonably and accurately.
On the basis of the above embodiments, as an optional embodiment, the low-level features include a lightness feature, a colour feature and a gradient feature.
Correspondingly, computing the first feature map of the segmented image of each scale under each low-level feature comprises:
for the segmented image of any one scale, converting the segmented image to the CIELAB colour space, and computing the first feature map of the segmented image under the lightness feature from the Euclidean distances between the lightness values of the superpixels of the segmented image in the L channel.
Specifically, the segmented image in RGB format is converted to the CIELAB colour space, and the component of the L channel is used to compute the lightness difference. If a superpixel differs from all other superpixels, it can be considered salient. Let d_lightness(SP(i), SP(j)) denote the Euclidean distance between the average values of superpixel SP(i) and superpixel SP(j) in the L channel. Then for each superpixel SP(i), i = 1, ..., N, the global contrast is obtained by computing the local lightness differences with the remaining N-1 superpixels SP(j). The difference between two superpixels can be defined by the following formula:
The lightness saliency value of SP(i) at the n-th scale can be defined as:
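The lightness contrast can be illustrated as follows. The exact formulas are not reproduced in the text, so this sketch assumes that the saliency of SP(i) is the sum of its L-channel distances d_lightness to all other superpixels, normalised to [0, 1]:

```python
import numpy as np

def lightness_saliency(mean_L):
    """First-feature-map values for the lightness feature (a sketch).

    mean_L[i] is the average L-channel value of superpixel SP(i).
    The global contrast of SP(i) is assumed here to be the sum of
    absolute distances d_lightness(SP(i), SP(j)) over all j, then
    normalised to [0, 1]; the patent's exact formula is not shown.
    """
    mean_L = np.asarray(mean_L, dtype=float)
    d = np.abs(mean_L[:, None] - mean_L[None, :])  # d_lightness(SP(i), SP(j))
    s = d.sum(axis=1)                              # contrast against all others
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# A superpixel whose lightness stands out from the rest scores highest.
sal = lightness_saliency([10.0, 12.0, 11.0, 90.0])
```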
For the segmented image of any one scale, the segmented image is converted to the CIELAB colour space, and the first feature map of the segmented image under the colour feature is computed from the Euclidean distances between the average colours of the superpixels in the A and B channels of the CIELAB colour space.
Specifically, the embodiment of the present invention extracts the colour features of the corresponding channels by converting the colour space of the input image: the segmented image is converted to the CIELAB colour space, and the difference between the average colours of the superpixels is computed in the A and B channels:
where d_a(SP(m), SP(l)) denotes the difference between the colour values of superpixel SP(m) and superpixel SP(l) in the A channel, d_b(SP(m), SP(l)) denotes the difference between their colour values in the B channel, and d_position(SP(m), SP(l)) denotes the Euclidean distance between the centre of superpixel m and the centre of superpixel l.
The colour saliency value of SP(m) at the n-th scale can be defined as:
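A similar sketch applies to the colour feature. The text combines d_a, d_b and d_position but does not reproduce the formula, so a common spatially weighted contrast, with weight exp(-d_position / sigma), is assumed here:

```python
import numpy as np

def color_saliency(mean_a, mean_b, centers, sigma=0.25):
    """First-feature-map values for the colour feature (a sketch).

    mean_a / mean_b hold the average A- and B-channel values of each
    superpixel; centers holds their (x, y) centres in [0, 1]^2.
    The weighting exp(-d_position / sigma) is an assumption, chosen so
    that contrast with nearby superpixels counts more.
    """
    a = np.asarray(mean_a, float)[:, None]
    b = np.asarray(mean_b, float)[:, None]
    c = np.asarray(centers, float)
    d_color = np.abs(a - a.T) + np.abs(b - b.T)            # d_a + d_b
    d_pos = np.linalg.norm(c[:, None] - c[None, :], axis=2)  # d_position
    s = (d_color * np.exp(-d_pos / sigma)).sum(axis=1)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# The superpixel with a distinct A/B colour stands out.
sal = color_saliency([0, 0, 5, 0], [0, 0, 5, 0],
                     [(0.1, 0.1), (0.2, 0.1), (0.5, 0.5), (0.9, 0.9)])
```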
For the segmented image of any one scale, the first feature map of the segmented image under the gradient feature is computed from the Euclidean distances between the average gradient values of the superpixels in the horizontal and vertical directions.
Specifically, gradient features play an important role in salient-object detection in complex scenes. They measure the magnitude of local grey-level change in an image. To compute saliency more reasonably and accurately, let g(x, y) denote the pixel of the segmented image at coordinate (x, y); the gradients of each pixel in the horizontal and vertical directions are computed respectively and can be defined as:
Gx(x, y) = g(x+1, y) - g(x-1, y)
Gy(x, y) = g(x, y+1) - g(x, y-1)
In the embodiment of the present invention, the difference between the average gradient values of superpixel SP(i) and superpixel SP(j) is computed:
where dGx(SP(i), SP(j)) denotes the difference between the horizontal gradients of superpixel SP(i) and superpixel SP(j), dGy(SP(i), SP(j)) denotes the difference between their vertical gradients, and d_position(SP(i), SP(j)) denotes the Euclidean distance between the centre of superpixel i and the centre of superpixel j.
The gradient saliency value of SP(i) at the n-th scale can be defined as:
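The two gradient formulas above translate directly into code. The only assumptions are that the first array axis plays the role of x and that the border gradients, which the formulas leave undefined, are set to 0:

```python
import numpy as np

def gradient_maps(g):
    """Central-difference gradients of a grayscale image g(x, y):
        Gx(x, y) = g(x+1, y) - g(x-1, y)
        Gy(x, y) = g(x, y+1) - g(x, y-1)
    Border pixels, undefined in the formulas, are set to 0 here.
    """
    g = np.asarray(g, float)
    Gx = np.zeros_like(g)
    Gy = np.zeros_like(g)
    Gx[1:-1, :] = g[2:, :] - g[:-2, :]   # difference along the first axis
    Gy[:, 1:-1] = g[:, 2:] - g[:, :-2]   # difference along the second axis
    return Gx, Gy

g = np.array([[0., 1., 2.],
              [3., 4., 5.],
              [6., 7., 8.]])
Gx, Gy = gradient_maps(g)
```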
On the basis of the above embodiments, as an optional embodiment, the first feature maps of all scales under the same low-level feature can be fused by averaging them. Taking the lightness feature as an example:
where the left-hand side denotes the second feature map of the lightness feature and N denotes the total number of scales.
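The cross-scale fusion by averaging can be sketched as follows, assuming (as the preceding paragraphs suggest) that each scale's per-superpixel values are first spread back onto the pixel grid, so that the fused map is per-pixel with no superpixels left:

```python
import numpy as np

def fuse_scales(feature_values, segmentations):
    """Fuse first feature maps of all scales into a second feature map.

    feature_values[n][k] is the feature value of superpixel k at scale n,
    and segmentations[n] is the corresponding label image. Each scale's
    values are spread onto the pixel grid, then averaged across scales.
    """
    per_pixel = [np.asarray(v)[seg] for v, seg in zip(feature_values, segmentations)]
    return np.mean(per_pixel, axis=0)

# Two toy scales on a 2 x 2 image: one coarse (2 superpixels), one fine (4).
seg_coarse = np.array([[0, 0], [1, 1]])
seg_fine = np.array([[0, 1], [2, 3]])
fused = fuse_scales([[0.0, 1.0], [0.0, 0.4, 0.6, 1.0]],
                    [seg_coarse, seg_fine])
```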
On the basis of the above embodiments, as an optional embodiment, optimizing the segmented image using the dark channel is specifically:
computing the dark channel prior value I_dark(x, y) of pixel I(x, y) according to the following formula:
where c denotes one of the three colour channels R, G, B, I^c denotes the segmented image under colour channel c, I(x, y) denotes a pixel of the segmented image I^c, and p(x, y) denotes a local block centred on I(x, y).
The dark channel prior value of superpixel SP(i) in the segmented image is computed according to the following formula:
where num(SP(i)) denotes the total number of pixels in superpixel SP(i).
It should be noted that the embodiment of the present invention can efficiently identify low-intensity regions through the dark-channel computation. Dark regions, coloured surfaces and special objects can therefore be picked out from the input image. At the same time, these factors are also part of the salient object, where the dark channel is especially dark. Thus, the dark-channel attribute can estimate the region of interest (ROI) well.
On the basis of the above embodiments, as an optional embodiment, optimizing the segmented image using the center-prior strategy is specifically:
assigning a weight SP_weight(i) to each superpixel in the segmented image according to the following formula:
SP_weight(i) = d_center(SP_center(i), I_center)
where d_center denotes the Euclidean distance, SP_center(i) denotes the centre of superpixel SP(i), and I_center denotes the centre of the segmented image.
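Taken literally, the formula above grows with the distance from the centre, while the surrounding text states that superpixels near the image centre should receive higher weight; the sketch below therefore assumes a decreasing Gaussian of d_center:

```python
import numpy as np

def center_prior_weight(centers, image_center, sigma=0.5):
    """Centre-prior weight of each superpixel (a sketch).

    The source defines SP_weight(i) via the Euclidean distance
    d_center(SP_center(i), I_center), but also states that superpixels
    near the centre should get HIGHER weight; a decreasing Gaussian of
    that distance is therefore assumed here.
    """
    c = np.asarray(centers, float)
    d = np.linalg.norm(c - np.asarray(image_center, float), axis=1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# A superpixel at the image centre gets the maximum weight.
w = center_prior_weight([(0.5, 0.5), (0.0, 0.0)], (0.5, 0.5))
```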
On the basis of the above embodiments, as an optional embodiment, the saliency map of the segmented image is obtained by the following formula:
where SP_saliency(i) denotes the saliency value of superpixel SP(i) in the segmented image, SP_weight(i) denotes the weight of superpixel SP(i), I_dark(SP(i)) denotes the dark channel prior value of superpixel SP(i), and the remaining three terms denote the lightness feature value, colour feature value and gradient feature value of pixel i in the segmented image.
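The combination formula itself is not reproduced in the text. Purely for illustration, the sketch below assumes a product of the centre weight and dark-channel value with the averaged feature cues:

```python
def final_saliency(weight, dark, lightness, color, gradient):
    """Saliency of one superpixel, combining the centre-prior weight,
    the dark-channel value and the three second-feature-map cues.
    The patent's exact combination is not shown; a simple weighted
    product of the cues is assumed here for illustration only.
    """
    return weight * dark * (lightness + color + gradient) / 3.0

s = final_saliency(weight=0.9, dark=0.8, lightness=0.6, color=0.9, gradient=0.3)
```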
Fig. 2 is a structural diagram of the saliency detection device provided by an embodiment of the present invention. As shown in Fig. 2, the saliency detection device comprises: a superpixel segmentation module 201, a feature extraction module 202, an optimization module 203 and an integration module 204, in which:
the superpixel segmentation module 201 is configured to perform superpixel segmentation on the original image at different scales to obtain segmented images of different scales.
It should be noted that the accuracy of the saliency maps generated by different superpixel segmentation algorithms is dominated by the number of superpixels: background regions may have similar superpixels across different scales, whereas salient regions may have similar superpixels at only a few scales. The superpixel segmentation module 201 of the embodiment of the present invention therefore performs superpixel segmentation on the original image at several different scales to obtain segmented images of different scales. It should be understood that a segmented image consists of several superpixel blocks, and each superpixel block contains several pixels.
The feature extraction module 202 is configured to determine a certain number of low-level features, compute the first feature map of the segmented image of each scale under each low-level feature, and fuse the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature.
It should be noted that, to extract useful visual feature information from a low-contrast image, the correlated noise in the image background must be removed so that it does not interfere with the detection of foreground salient objects. The feature extraction module 202 of the embodiment of the present invention therefore determines several low-level features. Low-level features are basic features that can be extracted from an image automatically without any shape or spatial-relationship information; for example, the common threshold method is one kind of low-level feature extraction. All low-level methods can also be applied to high-level feature extraction, so as to find shapes in the image.
The feature extraction module 202 of the embodiment of the present invention first computes the first feature map of the segmented image of each scale under each low-level feature. In a first feature map, the feature value of each superpixel characterizes the difference between that superpixel and the other superpixels under the given low-level feature. Since a superpixel block is essentially a set of pixels in a region, the comparison between two superpixel blocks amounts to the comparison between a pixel region and its neighbouring pixel sets; this process is equivalent to a local contrast. By fusing the first feature maps of all scales under the same low-level feature, the second feature map of each low-level feature is obtained. Because the second feature map fuses the superpixel feature values of all scales, there are no longer superpixels in it: the feature value of each pixel in the second feature map characterizes the difference between that pixel and the other pixels under the low-level feature.
The optimization module 203 is configured to, for the segmented image of any one scale, optimize the segmented image using dark-channel and center-prior strategies, and combine the second feature maps of the low-level features to obtain the saliency map of the segmented image.
It should be noted that the role of the dark channel prior is to remove haze from the input image, preventing this noise from causing unnecessary interference with feature extraction. According to observations of outdoor images, some pixels or regions typically have at least one colour channel with a very low intensity. This means that the dark channel of image pixels is mainly produced by dark or characteristic regions, which usually occur within salient objects. Therefore, the dark channel prior of the image can be used to estimate the saliency of superpixels. In addition, when people look at a picture they tend to concentrate on the objects near the centre of the picture, so the saliency values of superpixels close to the image centre should be given higher weight. Thus, the optimization module 203 of the embodiment of the present invention can efficiently identify low-intensity regions through the dark-channel computation and better predict salient objects through the center prior.
The integration module 204 is configured to integrate the saliency maps of the segmented images of all scales to form the final saliency map.
Specifically, the integration module 204 of the embodiment of the present invention can average the saliency maps of the segmented images of all scales to form the final saliency map. Integrating the saliency maps of all scales eliminates noise in the generated saliency map.
The saliency detection device provided by the embodiment of the present invention specifically executes the flows of the above saliency detection method embodiments; for details, please refer to the content of those method embodiments, which is not repeated here. The saliency detection device provided by the embodiment of the present invention, by performing superpixel segmentation on the original image at different scales, can integrate the saliency segmentation results of segmented images of different precisions; by determining a certain number of low-level features and computing the first feature maps of the segmented images of different scales under each low-level feature, it captures the local differences within a segmented image; by fusing the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature, it captures the global differences; by optimizing the segmented image with the dark-channel and center-prior strategies, it effectively identifies low-intensity regions and better predicts salient objects; and finally, by integrating the saliency maps of the segmented images of all scales into the final saliency map, it computes saliency more reasonably and accurately.
Fig. 3 is a diagram of the physical structure of the electronic device provided by an embodiment of the present invention. As shown in Fig. 3, the electronic device may include: a processor 310, a communication interface (Communications Interface) 320, a memory 330 and a communication bus 340, wherein the processor 310, the communication interface 320 and the memory 330 communicate with one another through the communication bus 340. The processor 310 can call a computer program stored in the memory 330 and executable on the processor 310 to execute the saliency detection method provided by the above embodiments, for example: performing superpixel segmentation on the original image at different scales to obtain segmented images of different scales; determining a certain number of low-level features, computing the first feature map of the segmented image of each scale under each low-level feature, and fusing the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature; for the segmented image of any one scale, optimizing the segmented image using dark-channel and center-prior strategies and combining the second feature maps of the low-level features to obtain the saliency map of the segmented image; and integrating the saliency maps of the segmented images of all scales to form the final saliency map.
In addition, the logical instructions in the above memory 330 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program carries out the saliency detection method provided by the above embodiments, for example including: performing superpixel segmentation on the original image at different scales to obtain segmented images of different scales; determining a certain number of low-level features, computing the first feature map of the segmented image of each scale under each low-level feature, and fusing the first feature maps of all scales under the same low-level feature to obtain the second feature map of each low-level feature; for the segmented image of any one scale, optimizing the segmented image using dark-channel and center-prior strategies and combining the second feature maps of the low-level features to obtain the saliency map of the segmented image; and integrating the saliency maps of the segmented images of all scales to form the final saliency map.
The embodiment of the present invention and existing image saliency detection methods were tested on the MSRA, SED, CSSD and DUT-OMRON data sets. The existing saliency detection methods used for comparison are: the NP (Non-Parametric) method based on non-parametric low-level features, the IS (Image Signature) method based on image signatures, the CA (Context-Aware) method based on context awareness, the LR (Low Rank) method based on low-rank matrix recovery, the PD (Patch Distinction) method based on image-patch distinctness, the SO (Saliency Optimization) method based on saliency optimization, the multi-scale MS (Multi-Scale) method, and the BL (Bootstrap Learning) method based on bootstrap learning.
The AUC value (area under the curve) is the percentage of area under the ROC curve and indicates the ability of a saliency map to predict the true salient object. The larger the area under the curve, the higher the accuracy of saliency detection. The AUC performance of each method on the different data sets is shown in Table 1; it can be seen that the result of the embodiment of the present invention is better than the results of the other 8 methods.
Table 1. AUC performance comparison of the embodiment of the present invention and 8 saliency detection methods on 4 data sets
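The AUC metric described above can be computed by sweeping a threshold over the saliency map against a binary ground-truth mask. A minimal sketch (the patent does not give its evaluation code; the trapezoidal integration and threshold sweep are the standard construction):

```python
import numpy as np

def auc_score(sal, gt):
    # Area under the ROC curve: rank pixels by saliency, then accumulate
    # true/false positive rates against the binary ground-truth mask.
    sal, gt = sal.ravel(), gt.ravel().astype(bool)
    order = np.argsort(-sal)                        # most salient pixels first
    tpr = np.concatenate(([0.0], np.cumsum(gt[order]) / max(gt.sum(), 1)))
    fpr = np.concatenate(([0.0], np.cumsum(~gt[order]) / max((~gt).sum(), 1)))
    # Trapezoidal integration of TPR over FPR.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

A saliency map identical to the mask scores 1, and a fully inverted map scores 0.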
The MAE (mean absolute error) indicates the similarity between a saliency map and the ground-truth saliency map. The MAE performance of each method on the different data sets is shown in Table 2; it can be seen that the result of the embodiment of the present invention achieves good performance.
Table 2. MAE performance comparison of the embodiment of the present invention and 8 saliency detection methods on 4 data sets
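The MAE metric is the simpler of the two: the mean of the per-pixel absolute differences between the predicted map and the ground truth, both scaled to [0, 1]. A sketch:

```python
import numpy as np

def mae(sal, gt):
    # Mean absolute error between a saliency map and the ground-truth map,
    # both assumed to be scaled to [0, 1]; lower is better.
    return float(np.abs(sal.astype(float) - gt.astype(float)).mean())
```

For example, against a 4x4 mask with a 2x2 object, a uniform prediction of 0.25 errs by 0.75 on the 4 object pixels and 0.25 on the 12 background pixels, giving an MAE of 0.375.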
The tests of the embodiment of the present invention were completed in MATLAB on a computer with an Intel Pentium G2020 2.9 GHz CPU and 12 GB of memory.
Table 3 compares the execution time of the present invention with that of the other 8 comparison methods. Comparatively, the present invention is relatively efficient.
Table 3. Running time (unit: seconds) comparison of the present invention and 8 saliency detection methods on 4 data sets
In conclusion, the embodiment of the present invention proposes a superpixel-based framework that detects salient targets by aggregating local contrast and global contrast over multiple scales, constructs a feature-contrast-based framework to represent saliency, and optimizes the saliency result using the dark channel prior and center prior strategies. Through comparison with 8 state-of-the-art saliency models (methods) on 4 common data sets, the saliency model (method) of the embodiment of the present invention has been extensively evaluated. The experimental results demonstrate the superiority of the model and the improved robustness of saliency detection.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative labor.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment may be implemented by means of software plus a necessary general hardware platform, and may of course also be implemented by hardware. Based on this understanding, the above technical solutions, or the part thereof that contributes over the prior art, may essentially be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention, rather than limiting them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A saliency detection method, characterized by comprising:
performing superpixel segmentation on an original image at different scales, respectively, to obtain segmented images of different scales;
determining a certain number of low-level features, computing a first feature map of the segmented image of each scale under each low-level feature, and fusing the first feature maps of all scales under the same low-level feature to obtain a second feature map of each low-level feature;
for the segmented image of any one scale, optimizing the segmented image using a dark channel prior and a center prior strategy, and obtaining a saliency map of the segmented image in combination with the second feature maps of the low-level features;
integrating the saliency maps of the segmented images of all scales to form a final saliency map.
2. The saliency detection method according to claim 1, characterized in that the low-level features include a luminance feature, a color feature, and a gradient feature;
correspondingly, the computing a first feature map of the segmented image of each scale under each low-level feature comprises:
for the segmented image of any one scale, converting the segmented image to the CIELAB color space, and computing the first feature map of the segmented image under the luminance feature from the Euclidean distances between the luminance values of the superpixels of the segmented image in the L channel;
for the segmented image of any one scale, converting the segmented image to the CIELAB color space, and computing the first feature map of the segmented image under the color feature from the Euclidean distances between the average colors of the superpixels in the A channel and the B channel of the CIELAB color space;
for the segmented image of any one scale, computing the first feature map of the segmented image under the gradient feature from the Euclidean distances between the average gradient values of the superpixels in the horizontal and vertical directions.
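The three per-superpixel contrasts of claim 2 can be sketched as follows. Assumptions are flagged: a random array stands in for an image already converted to CIELAB (a real pipeline would use e.g. `skimage.color.rgb2lab`), the label grid stands in for a superpixel segmentation, and the contrast of a superpixel is taken as its summed pairwise distance to all others.

```python
import numpy as np

def superpixel_means(channels, labels):
    # Mean value of each named channel within each superpixel.
    ks = range(labels.max() + 1)
    return {name: np.array([c[labels == k].mean() for k in ks])
            for name, c in channels.items()}

def contrast(vecs):
    # Per-superpixel contrast: summed pairwise Euclidean distance between
    # the per-superpixel feature vectors (one row per superpixel).
    d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
    return d.sum(axis=1)

# A 2x2 "superpixel" label grid over a toy 16x16 image.
labels = (np.arange(16)[:, None] // 8) * 2 + (np.arange(16)[None, :] // 8)
lab = np.random.rand(16, 16, 3)            # stand-in for a CIELAB image
gy, gx = np.gradient(lab[..., 0])          # vertical/horizontal gradients of L
m = superpixel_means({"L": lab[..., 0], "A": lab[..., 1], "B": lab[..., 2],
                      "gx": gx, "gy": gy}, labels)
lum_contrast = contrast(m["L"][:, None])                       # luminance
col_contrast = contrast(np.stack([m["A"], m["B"]], axis=1))    # color (A, B)
grad_contrast = contrast(np.stack([m["gx"], m["gy"]], axis=1)) # gradient
```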
3. The saliency detection method according to claim 2, characterized in that the converting the segmented image to the CIELAB color space and computing the first feature map of the segmented image under the luminance feature from the Euclidean distances between the luminance values of the superpixels of the segmented image in the L channel is specifically:
computing, using the following formula, the Euclidean distance between the luminance of superpixel SP(i) of the segmented image and the luminance of superpixel SP(j) in the L channel:
computing, using the following formula, the luminance saliency value of SP(i) at the n-th scale:
4. The saliency detection method according to claim 2, characterized in that the computing the first feature map of the segmented image under the color feature from the Euclidean distances between the average colors of the superpixels in the A component and the B component is specifically:
computing, using the following formula, the dissimilarity between the average colors of the superpixels in the A channel and the B channel of the CIELAB color space:
wherein d_a(SP(m), SP(l)) denotes the difference between the color values of superpixels SP(m) and SP(l) in the A channel, d_b(SP(m), SP(l)) denotes the difference between their color values in the B channel, and d_position(SP(m), SP(l)) denotes the Euclidean distance between the center of superpixel SP(m) and the center of superpixel SP(l);
computing, using the formula, the color saliency value of SP(m) at the n-th scale.
5. The saliency detection method according to claim 1, characterized in that the optimizing the segmented image using the dark channel prior is specifically:
computing the dark channel prior value I_dark(x, y) of pixel I(x, y) according to the following formula:
wherein c denotes one of the three color channels R, G and B, I^c denotes the segmented image under color channel c, I(x, y) denotes a pixel in the segmented image I^c, and p(x, y) denotes a local patch centered on I(x, y);
computing the dark channel prior value of superpixel SP(i) in the segmented image according to the following formula:
wherein num(SP(i)) denotes the total number of pixels in superpixel SP(i).
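The two steps of claim 5 can be sketched directly from the definitions given: a per-pixel dark channel (minimum over the three color channels within a local patch), then a per-superpixel average over its num(SP(i)) pixels. The patch size of 3 is an assumption; the claim does not fix it.

```python
import numpy as np

def dark_channel(img, patch=3):
    # Dark channel prior: for each pixel, the minimum intensity over the
    # R, G, B channels within a local patch centered on that pixel.
    h, w, _ = img.shape
    chan_min = img.min(axis=2)                 # min over the 3 color channels
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()  # min over patch
    return out

def superpixel_dark(dark, labels):
    # Claim 5: the dark channel prior of superpixel SP(i) is the mean of
    # the per-pixel dark channel values over its num(SP(i)) pixels.
    return np.array([dark[labels == k].mean() for k in range(labels.max() + 1)])
```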
6. The saliency detection method according to claim 1, characterized in that the optimizing the segmented image using the center prior strategy is specifically:
assigning a weight SP_weight(i) to each superpixel in the segmented image according to the following formula:
SP_weight(i) = d_center(SP_center(i), I_center),
wherein d_center denotes the Euclidean distance, SP_center(i) denotes the center of superpixel SP(i), and I_center denotes the center of the segmented image.
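Claim 6 is fully specified, so it can be sketched verbatim: each superpixel's weight is the Euclidean distance between its centroid and the image center.

```python
import numpy as np

def center_prior_weights(labels, shape):
    # Claim 6 as written: SP_weight(i) = Euclidean distance between the
    # center of superpixel SP(i) and the center of the segmented image.
    h, w = shape
    image_center = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    ys, xs = np.indices(shape)
    weights = []
    for k in range(labels.max() + 1):
        m = labels == k
        sp_center = np.array([ys[m].mean(), xs[m].mean()])  # superpixel centroid
        weights.append(float(np.linalg.norm(sp_center - image_center)))
    return np.array(weights)
```

Note that, as literally stated, the weight grows with distance from the center; center-prior implementations commonly pass this distance through a decreasing function (e.g. a Gaussian) so that central superpixels are favored, and how the claimed weight enters the final score is left to claim 7.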
7. The saliency detection method according to claim 1, characterized in that the saliency map of the segmented image is obtained by the following formula:
wherein SP_saliency(i) denotes the saliency value of superpixel SP(i) in the segmented image, SP_weight(i) denotes the weight of superpixel SP(i), I_dark(SP(i)) denotes the dark channel prior value of superpixel SP(i), and the remaining terms denote the luminance feature value, the color feature value and the gradient feature value of superpixel i in the segmented image, respectively.
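The publication omits the exact combination formula of claim 7, so the following is a purely illustrative sketch of one way the five named per-superpixel terms could be combined; the multiplicative form, the `exp(-weight)` center damping, and the final rescaling are all assumptions, not the patent's formula.

```python
import numpy as np

def combine_saliency(weight, dark, lum, col, grad):
    # Illustrative combination of the terms named in claim 7: the three
    # feature contrasts are summed, modulated by the dark-channel term and a
    # decreasing function of the center distance, then rescaled to [0, 1].
    sal = (lum + col + grad) * dark * np.exp(-weight)
    span = sal.max() - sal.min()
    return (sal - sal.min()) / (span + 1e-12)
```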
8. A saliency detection apparatus, characterized by comprising:
a superpixel segmentation module, configured to perform superpixel segmentation on an original image at different scales, respectively, to obtain segmented images of different scales;
a feature extraction module, configured to determine a certain number of low-level features, compute a first feature map of the segmented image of each scale under each low-level feature, and fuse the first feature maps of all scales under the same low-level feature to obtain a second feature map of each low-level feature;
an optimization module, configured to, for the segmented image of any one scale, optimize the segmented image using a dark channel prior and a center prior strategy, and obtain a saliency map of the segmented image in combination with the second feature maps of the low-level features;
an integration module, configured to integrate the saliency maps of the segmented images of all scales to form a final saliency map.
9. An electronic device, characterized by comprising:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor, by calling the program instructions, is able to execute the saliency detection method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the saliency detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811386757.3A CN109741293A (en) | 2018-11-20 | 2018-11-20 | Conspicuousness detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811386757.3A CN109741293A (en) | 2018-11-20 | 2018-11-20 | Conspicuousness detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109741293A true CN109741293A (en) | 2019-05-10 |
Family
ID=66356991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811386757.3A Pending CN109741293A (en) | 2018-11-20 | 2018-11-20 | Conspicuousness detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109741293A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414377A (en) * | 2019-07-09 | 2019-11-05 | 武汉科技大学 | A kind of remote sensing images scene classification method based on scale attention network |
CN110929735A (en) * | 2019-10-17 | 2020-03-27 | 杭州电子科技大学 | Rapid significance detection method based on multi-scale feature attention mechanism |
CN111429463A (en) * | 2020-03-04 | 2020-07-17 | 北京三快在线科技有限公司 | Instance splitting method, instance splitting device, electronic equipment and storage medium |
CN112101376A (en) * | 2020-08-14 | 2020-12-18 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN112184745A (en) * | 2019-07-01 | 2021-01-05 | *** Communications Group Zhejiang Co., Ltd. | Image segmentation method, segmentation device and terminal equipment |
CN112446417A (en) * | 2020-10-16 | 2021-03-05 | 山东大学 | Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation |
CN112529896A (en) * | 2020-12-24 | 2021-03-19 | 山东师范大学 | Infrared small target detection method and system based on dark channel prior |
CN114638822A (en) * | 2022-03-31 | 2022-06-17 | 扬州市恒邦机械制造有限公司 | Method and system for detecting surface quality of automobile cover plate by using optical means |
CN114998320A (en) * | 2022-07-18 | 2022-09-02 | 银江技术股份有限公司 | Method, system, electronic device and storage medium for visual saliency detection |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107103326A (en) * | 2017-04-26 | 2017-08-29 | 苏州大学 | The collaboration conspicuousness detection method clustered based on super-pixel |
CN108345892A (en) * | 2018-01-03 | 2018-07-31 | 深圳大学 | A kind of detection method, device, equipment and the storage medium of stereo-picture conspicuousness |
- 2018-11-20: CN application CN201811386757.3A, patent CN109741293A (en), status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107103326A (en) * | 2017-04-26 | 2017-08-29 | 苏州大学 | The collaboration conspicuousness detection method clustered based on super-pixel |
CN108345892A (en) * | 2018-01-03 | 2018-07-31 | 深圳大学 | A kind of detection method, device, equipment and the storage medium of stereo-picture conspicuousness |
Non-Patent Citations (2)
Title |
---|
MU NAN ET AL.: "A Multiscale Superpixel-Level Salient Object Detection Model Using Local-Global Contrast Cue", 《SHANGHAI JIAO TONG UNIV.(SCI.)》 * |
XIN XU ET AL.: "SALIENT OBJECT DETECTION FROM DISTINCTIVE FEATURES IN LOW CONTRAST IMAGES", 《IEEE》 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112184745A (en) * | 2019-07-01 | 2021-01-05 | *** Communications Group Zhejiang Co., Ltd. | Image segmentation method, segmentation device and terminal equipment |
CN110414377A (en) * | 2019-07-09 | 2019-11-05 | 武汉科技大学 | A kind of remote sensing images scene classification method based on scale attention network |
CN110929735A (en) * | 2019-10-17 | 2020-03-27 | 杭州电子科技大学 | Rapid significance detection method based on multi-scale feature attention mechanism |
CN110929735B (en) * | 2019-10-17 | 2022-04-01 | 杭州电子科技大学 | Rapid significance detection method based on multi-scale feature attention mechanism |
CN111429463A (en) * | 2020-03-04 | 2020-07-17 | 北京三快在线科技有限公司 | Instance splitting method, instance splitting device, electronic equipment and storage medium |
CN112101376A (en) * | 2020-08-14 | 2020-12-18 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN112446417A (en) * | 2020-10-16 | 2021-03-05 | 山东大学 | Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation |
CN112529896A (en) * | 2020-12-24 | 2021-03-19 | 山东师范大学 | Infrared small target detection method and system based on dark channel prior |
CN114638822A (en) * | 2022-03-31 | 2022-06-17 | 扬州市恒邦机械制造有限公司 | Method and system for detecting surface quality of automobile cover plate by using optical means |
CN114638822B (en) * | 2022-03-31 | 2022-12-13 | 扬州市恒邦机械制造有限公司 | Method and system for detecting surface quality of automobile cover plate by using optical means |
CN114998320A (en) * | 2022-07-18 | 2022-09-02 | 银江技术股份有限公司 | Method, system, electronic device and storage medium for visual saliency detection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109741293A (en) | Conspicuousness detection method and device | |
Berman et al. | Single image dehazing using haze-lines | |
CN111553406B (en) | Target detection system, method and terminal based on improved YOLO-V3 | |
CN113902897B (en) | Training of target detection model, target detection method, device, equipment and medium | |
US10248854B2 (en) | Hand motion identification method and apparatus | |
CN111325271B (en) | Image classification method and device | |
CN105205453B (en) | Human eye detection and localization method based on depth self-encoding encoder | |
CN109919159A (en) | A kind of semantic segmentation optimization method and device for edge image | |
CN108446694A (en) | A kind of object detection method and device | |
CN108090435A (en) | One kind can parking area recognition methods, system and medium | |
CN109948593A (en) | Based on the MCNN people counting method for combining global density feature | |
CN110288602A (en) | Come down extracting method, landslide extraction system and terminal | |
CN107506792B (en) | Semi-supervised salient object detection method | |
WO2020258077A1 (en) | Pedestrian detection method and device | |
CN110222607A (en) | The method, apparatus and system of face critical point detection | |
CN109558790B (en) | Pedestrian target detection method, device and system | |
CN109670517A (en) | Object detection method, device, electronic equipment and target detection model | |
CN116311083B (en) | Crowd counting model training method and system | |
CN109685806A (en) | Image significance detection method and device | |
CN113850136A (en) | Yolov5 and BCNN-based vehicle orientation identification method and system | |
CN114519819B (en) | Remote sensing image target detection method based on global context awareness | |
CN108320281A (en) | A kind of image significance detection method and terminal based on multiple features diffusion | |
CN113052923A (en) | Tone mapping method, tone mapping apparatus, electronic device, and storage medium | |
CN113160117A (en) | Three-dimensional point cloud target detection method under automatic driving scene | |
CN112329550A (en) | Weak supervision learning-based disaster-stricken building rapid positioning evaluation method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-05-10 |