CN108846404B - Image saliency detection method and device based on correlation-constrained graph ranking - Google Patents


Info

Publication number
CN108846404B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810658629.3A
Other languages
Chinese (zh)
Other versions
CN108846404A (en)
Inventor
江波
关媛媛
汤进
罗斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201810658629.3A priority Critical patent/CN108846404B/en
Publication of CN108846404A publication Critical patent/CN108846404A/en
Application granted granted Critical
Publication of CN108846404B publication Critical patent/CN108846404B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image saliency detection method and device based on correlation-constrained graph ranking. The method comprises the following steps: performing superpixel segmentation on an image to be detected, establishing a closed-loop graph model, and computing the prior information of each superpixel node; extracting the color, texture and position information of the input image; obtaining a foreground probability value for each superpixel node; taking the set of nodes whose foreground probability value is larger than a first preset threshold as the foreground seed point set ind_form, and the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back, the first preset threshold being larger than the second preset threshold; and computing the foreground probability S_f of each superpixel node with the correlation-constrained graph ranking model, using S_f as the final saliency estimate S_final. By applying the embodiments of the invention, the saliency detection result is made more accurate.

Description

Image saliency detection method and device based on correlation-constrained graph ranking
Technical Field
The invention relates to a saliency detection method and device, in particular to an image saliency detection method and device based on correlation-constrained graph ranking.
Background
With the rapid development of computer and network communication technologies, image data is growing rapidly. This massive volume of multimedia image data poses a great challenge for information processing, and how to efficiently store, analyze and process image information has been a main research focus in recent years. Saliency detection serves as an important preprocessing step for reducing computational complexity in computer vision, and salient object detection locates and segments the most salient foreground object in a scene. The technique has particularly wide application, for example in object detection and recognition, content-based image retrieval, context-aware image resizing, and video object detection. How to quickly and accurately find the salient region of an image has not yet formed a complete theoretical system, is closely tied to specific applications, and remains a challenging subject for researchers.
Currently, visual information processing is generally performed in a bottom-up manner. Bottom-up methods are typically based on low-level visual information, so they can effectively detect detailed information of the image rather than global shape information, and the detected salient region may contain only part of the object or be easily confused with the background. Many bottom-up saliency detection models have emerged in recent years. Itti et al. proposed a neural-network-based saliency detection model that combines three feature channels over multiple scales for rapid scene analysis; although the model can identify some salient pixels, the results also include a large number of false positives. Harel et al. proposed a graph-based saliency detection method, a bottom-up model whose final saliency result is obtained by computing dissimilarity. Chang et al. constructed a graphical model combining objectness and regional saliency to obtain a better saliency estimate. Wang et al. proposed a saliency detection model combining local graph structure and background priors with an optimization framework; the final experimental results perform well in most scenarios. Jiang et al. proposed using an absorbing Markov chain model for image saliency detection. Tu et al. proposed using a minimum spanning tree model for image saliency detection. Li et al. proposed estimating saliency values with a regularized random walk ranking model. Yang et al. proposed a saliency detection algorithm based on graph-based manifold ranking (hereinafter the MR algorithm); this algorithm screens out some foreground and background seed points, then uses a manifold ranking model to compute the correlation between these seed points and the other nodes, thereby obtaining the final saliency.
However, the MR algorithm is divided into two stages: the correlations between the remaining nodes and the obtained background seed points are computed first, and a preliminary saliency result is obtained after inversion; foreground seed points are then selected on the basis of the first stage, and the correlations between the remaining nodes and the foreground seed points are computed to obtain the final result. The two ranking processes are performed independently, which causes the technical problem of low accuracy in image saliency detection.
Disclosure of Invention
The invention aims to provide an image saliency detection method and device based on correlation-constrained graph ranking, so as to overcome the defects of the traditional graph-based manifold ranking model.
The invention solves the technical problem through the following technical scheme:
An embodiment of the invention provides an image saliency detection method based on correlation-constrained graph ranking, comprising the following steps:
A: for each image to be detected, performing superpixel segmentation with the simple linear iterative clustering (SLIC) algorithm to obtain non-overlapping superpixel blocks, then establishing a closed-loop graph model with each non-overlapping superpixel block as a node, and computing the central prior information of each node;
B: extracting the color, texture and position information of the input image;
C: obtaining the foreground probability value of each node with the MR algorithm;
D: taking the set of nodes whose foreground probability value is larger than a first preset threshold as the foreground seed point set ind_form, and the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back, the first preset threshold being larger than the second preset threshold;
E: computing the foreground probability S_f of each superpixel node with the correlation-constrained graph ranking model, and using S_f as the final saliency estimate S_final.
Optionally, step A comprises:
A1: for each image to be detected, segmenting the image into N superpixel blocks with the SLIC algorithm, each superpixel serving as a node in a set V; then obtaining the undirected edges corresponding to each node, thereby constructing an undirected graph model G_1 = (V, E);
A2: computing the central prior information of each node by the formula

    c_i = exp( −((x_i − x_0)² + (y_i − y_0)²) / (2σ_1²) )

where c_i is the central prior information of the ith node; x_i and y_i are the abscissa and ordinate of the center position of the ith node; (x_0, y_0) are the coordinates of the center position of the whole image; σ_1 is a balance parameter controlling the dispersion of the computed position distances; exp(·) is the exponential function with the natural base e; and i indexes the nodes.
Optionally, the undirected edges are obtained as follows:
for each node i, consider each neighbor l of each two-hop neighbor k of node i, and compute the color Euclidean distance dist(k, l) = ||x_k − x_l||_2 between k and l; if this distance is smaller than a threshold θ, an undirected edge is connected between node l and node i, and the search continues from the newly connected nodes until all nodes have been processed. Here dist(k, l) is the color Euclidean distance between the kth and lth nodes; x_k is the color value of the kth node; x_l is the color value of the lth node; and || · ||_2 is the Euclidean norm.
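A hedged sketch of this edge-extension rule (representing the graph as a dict of neighbor sets and the default θ are assumptions of the sketch, not of the patent):

```python
import numpy as np

def extend_edges(adj, colors, theta=0.1):
    """Add an undirected edge (i, l) for every neighbour l of every two-hop
    neighbour k of node i whose colour distance ||x_k - x_l||_2 is below theta.

    adj:    dict node -> set of directly adjacent nodes (two-hop neighbours
            are derived as neighbours-of-neighbours).
    colors: (N, 3) array of per-node colour values x_i.
    """
    colors = np.asarray(colors, dtype=float)
    out = {i: set(nbrs) for i, nbrs in adj.items()}
    for i, nbrs in adj.items():
        two_hop = set().union(*(adj[j] for j in nbrs)) - {i} - nbrs
        for k in two_hop:
            for l in adj[k]:
                if l != i and np.linalg.norm(colors[k] - colors[l]) < theta:
                    out[i].add(l)
                    out[l].add(i)
    return out
```

The rule only bridges node i to nodes l that are tightly colour-coupled to its two-hop neighbour k, which enlarges the local smoothing range without connecting dissimilar regions.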
Optionally, step B comprises: from the extracted color, texture and position information of the image, computing the weight of each undirected edge by the formula

    w_ij^(1) = exp( −||v_i − v_j|| / σ² )

so as to construct a first affinity matrix W_1 = [w_ij^(1)]_{N×N}, where w_ij^(1) is the weight of the undirected edge between the ith and jth nodes; i and j are node indices with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the ith node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) are the center position coordinates of each superpixel node; (L_i, a_i, b_i) is the mean color, in the CIE LAB color space, of all pixels contained in the superpixel node; c_i is the central prior information of the ith node; ω_i is the LBP value of the ith node; v_j is the feature descriptor of the jth node; σ is a preset constant controlling the weight balance; and N is the number of superpixel blocks.
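A sketch of building W_1 from the descriptors of step B (restricting the weights to connected node pairs follows the graph model; treating absent edges as zero weight and the default σ are assumptions of the sketch):

```python
import numpy as np

def first_affinity(V, edges, sigma=0.2):
    """W1[i, j] = exp(-||v_i - v_j|| / sigma^2) for each undirected edge (i, j);
    unconnected pairs keep weight 0.

    V:     (N, d) feature descriptors v_i (d = 65 in the patent's setting).
    edges: iterable of undirected edges (i, j).
    """
    V = np.asarray(V, dtype=float)
    n = V.shape[0]
    W1 = np.zeros((n, n))
    for i, j in edges:
        w = np.exp(-np.linalg.norm(V[i] - V[j]) / sigma ** 2)
        W1[i, j] = W1[j, i] = w   # undirected edge: symmetric weight
    return W1
```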
Optionally, step C comprises:
C1: obtaining the weight of each undirected edge of each node in the MR algorithm;
C2: constructing the second affinity matrix W_2 = [w_ij^(2)]_{n×n} of the MR algorithm from the weights of the undirected edges, by the formula

    w_ij^(2) = exp( −||c_i − c_j|| / σ² )

where w_ij^(2) is the weight of the edge between the ith and jth superpixels; W_2 is the second affinity matrix; i, j ∈ V, with i the index of the ith node and j the index of the jth node; c_i is the mean color of all pixels of the ith node in the CIE LAB color space; c_j is the mean color of all pixels of the jth node in the CIE LAB color space; and σ is a constant controlling the weight balance;
C3: computing the degree matrix by the formula D = diag{d_11, …, d_nn}, where D is the degree matrix; diag{} constructs a diagonal matrix; and d_ii = Σ_j w_ij^(2) is a degree matrix element, with w_ij^(2) the undirected-edge weight from the second affinity matrix;
C4: for each node on the boundary, marking the label value of the node according to the boundary prior;
C5: computing the ranking weights for the image to be detected with the ranking function f: X → R^m, where f is the ranking function, f = [f_1, …, f_n]^T; f_1 is the ranking value of the 1st node; f_n is the ranking value of the nth node; n is the number of nodes; y = [y_1, y_2, …, y_n]^T is the label vector, in which the label value of a seed point is 1 and the label values of the remaining nodes are 0; X is the feature matrix of the input image; R is the space of real numbers and R^m the m-dimensional real space, m being the spatial dimension;
C6: computing a closed-form solution of the ranking function by the formula

    f* = argmin_f (1/2) [ Σ_{i,j} w_ij^(2) || f_i/√(d_ii) − f_j/√(d_jj) ||² + μ Σ_i || f_i − y_i ||² ]

where f is the ranking function; argmin returns the argument minimizing the objective; Σ is the summation; f_i and f_j are the ranking values of the ith and jth nodes; y_i is the label value of the ith node; w_ij^(2) is the undirected-edge weight; d_ii and d_jj are the ith and jth diagonal elements of the degree matrix; and μ is a balance parameter;
C7: from the closed-form solution, obtaining the unnormalized solution by the formula

    f* = (D − λ W_2)^{−1} y

where D is the degree matrix; W_2 is the second affinity matrix; S = D^{−1/2} W_2 D^{−1/2} is the normalized form of W_2; and λ is a preset parameter;
C8: using this unnormalized solution with the seed points on each of the four boundaries as queries in turn, computing the correlation between each node and the background seed points on the four boundaries, so as to obtain a background probability value f for each node in the four cases, λ being a preset parameter;
C9: normalizing the correlation values between each node and the background seed points on the four boundaries to obtain the normalized values f̄, then inverting them (1 − f̄) to obtain a saliency value for each node; the saliency values obtained in the four cases are multiplied element-wise to give the initial result S_MR, used as the foreground probability value of each node.
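Steps C8–C9 can then be sketched as follows (the boundary index lists and λ are assumptions of the sketch; the min–max normalization to [0, 1] follows the description in C9):

```python
import numpy as np

def mr_first_stage(W, boundary_ids, lam=0.99):
    """Query each boundary in turn as background seeds, normalise each ranking
    result to [0, 1], invert it (1 - f) to get a saliency map, and multiply the
    maps element-wise to obtain the initial result S_MR."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))
    s_mr = np.ones(n)
    for ids in boundary_ids:                 # e.g. [top, bottom, left, right]
        y = np.zeros(n)
        y[list(ids)] = 1.0                   # boundary nodes as labelled seeds
        f = np.linalg.solve(D - lam * W, y)
        f = (f - f.min()) / (f.max() - f.min() + 1e-12)
        s_mr *= 1.0 - f                      # low background relevance = salient
    return s_mr
```

On a small chain graph with the two end nodes taken as "boundaries", the middle node ends up the most salient, since it is the least correlated with either background query.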
Optionally, step D comprises:
D1: obtaining the first preset threshold h_1 and the second preset threshold h_2 from the mean and maximum of the initial result (the exact formula is given as an image in the source and is not reproduced here), where h_1 is the first preset threshold; h_2 is the second preset threshold; mean(·) is the averaging function; and max(·) is the maximum function;
D2: obtaining the foreground seed point set ind_form and the background seed point set ind_back (the exact formula is given as an image in the source and is not reproduced here), where ind_form is the foreground seed point set; ind_back is the background seed point set; and θ is a preset parameter.
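Because the threshold formulas of steps D1–D2 survive only as images in the source, the sketch below simply takes h_1 and h_2 as given inputs; the set construction itself follows step D (foreground seeds above the first threshold, background seeds below the second):

```python
import numpy as np

def select_seeds(s_mr, h1, h2):
    """Split nodes into the foreground seed set ind_form (S_MR > h1) and the
    background seed set ind_back (S_MR < h2), with h1 > h2."""
    s_mr = np.asarray(s_mr, dtype=float)
    if h1 <= h2:
        raise ValueError("the first threshold must exceed the second")
    ind_form = np.flatnonzero(s_mr > h1)   # foreground seed point set
    ind_back = np.flatnonzero(s_mr < h2)   # background seed point set
    return ind_form, ind_back
```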
Optionally, step E comprises:
E1: computing the ranking weights for the image to be detected with the ranking function F: X → R^n, where F is the ranking function; F_i denotes the ranking value of the ith node, and F = [f, g]; f is the probability that each node belongs to the foreground and g the probability that each node belongs to the background;
E2: obtaining the label value of each node (the labeling formula is given as an image in the source and is not reproduced here); obtaining the label vector of each node by the formula Y = (y_1, y_2) ∈ R^{m×2}, where Y is the label vector of each node; y_1 is the label value for a node belonging to the foreground; y_2 is the label value for a node belonging to the background; and R^{m×2} is the space of m×2 real matrices;
E3: constructing the correlation-constrained graph ranking model (the model formula is given as an image in the source and is not reproduced here), where F* is the closed-form solution; W_ij is the first-affinity-matrix weight of the undirected edge between the ith and jth nodes; F_i and F_j are the ranking values of the ith and jth nodes; D_i is a degree matrix element; f_i is the foreground probability of the ith node; g_i is the background probability of the ith node; w_i is the feature weight of the ith node; x_i is the feature of the ith node; b_i is a bias parameter; β_1 is the linear constraint coefficient on the foreground probability; and β_2 is the linear constraint coefficient on the background probability;
E4: taking the partial derivative of the ranking model of step E3 with respect to the foreground probability to obtain the saliency value.
An embodiment of the invention also provides an image saliency detection device based on correlation-constrained graph ranking, comprising:
a first calculation module for performing, for each image to be detected, superpixel segmentation with the simple linear iterative clustering (SLIC) algorithm to obtain non-overlapping superpixel blocks, then establishing a closed-loop graph model with each non-overlapping superpixel block as a node, and computing the central prior information of each node;
an input module for extracting the color, texture and position information of the input image;
a second calculation module for obtaining the foreground probability value of each node with the MR algorithm;
a first setting module for taking the set of nodes whose foreground probability value is larger than a first preset threshold as the foreground seed point set ind_form, and the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back, the first preset threshold being larger than the second preset threshold;
and a second setting module for computing the foreground probability S_f and the background probability S_g of each superpixel node with the correlation-constrained graph ranking model, and using S_f as the final saliency estimate S_final.
Optionally, the first calculation module is further configured to:
A1: for each image to be detected, segment the image into N superpixel blocks with the SLIC algorithm, each superpixel serving as a node in a set V; then obtain the undirected edges corresponding to each node, thereby constructing an undirected graph model G_1 = (V, E);
A2: compute the central prior information of each node by the formula

    c_i = exp( −((x_i − x_0)² + (y_i − y_0)²) / (2σ_1²) )

where c_i is the central prior information of the ith node; x_i and y_i are the abscissa and ordinate of the center position of the ith node; (x_0, y_0) are the coordinates of the center position of the whole image; σ_1 is a balance parameter controlling the dispersion of the computed position distances; exp(·) is the exponential function with the natural base e; and i indexes the nodes.
Optionally, the second calculation module is further configured to:
C1: obtain the weight of each undirected edge of each node in the MR algorithm;
C2: construct the second affinity matrix W_2 = [w_ij^(2)]_{n×n} of the MR algorithm from the weights of the undirected edges, by the formula

    w_ij^(2) = exp( −||c_i − c_j|| / σ² )

where w_ij^(2) is the weight of the edge between the ith and jth superpixels; W_2 is the second affinity matrix; i, j ∈ V, with i the index of the ith node and j the index of the jth node; c_i is the mean color of all pixels of the ith node in the CIE LAB color space; c_j is the mean color of all pixels of the jth node in the CIE LAB color space; and σ is a constant controlling the weight balance;
C3: compute the degree matrix by the formula D = diag{d_11, …, d_nn}, where D is the degree matrix; diag{} constructs a diagonal matrix; and d_ii = Σ_j w_ij^(2) is a degree matrix element, with w_ij^(2) the undirected-edge weight from the affinity matrix;
C4: for each node on the boundary, mark the label value of the node according to the boundary prior;
C5: compute the ranking weights for the image to be detected with the ranking function f: X → R^m, where f is the ranking function, f = [f_1, …, f_n]^T; f_1 is the ranking value of the 1st node; f_n is the ranking value of the nth node; n is the number of nodes; y = [y_1, y_2, …, y_n]^T is the label vector, in which the label value of a seed point is 1 and the label values of the remaining nodes are 0; X is the feature matrix of the input image; R is the space of real numbers and R^m the m-dimensional real space, m being the spatial dimension;
C6: compute a closed-form solution of the ranking function by the formula

    f* = argmin_f (1/2) [ Σ_{i,j} w_ij^(2) || f_i/√(d_ii) − f_j/√(d_jj) ||² + μ Σ_i || f_i − y_i ||² ]

where f is the ranking function; argmin returns the argument minimizing the objective; Σ is the summation; f_i and f_j are the ranking values of the ith and jth nodes; y_i is the label value of the ith node; w_ij^(2) is the undirected-edge weight; d_ii and d_jj are the ith and jth diagonal elements of the degree matrix; and μ is a balance parameter;
C7: from the closed-form solution, obtain the unnormalized solution by the formula

    f* = (D − λ W_2)^{−1} y

where D is the degree matrix; W_2 is the second affinity matrix; S = D^{−1/2} W_2 D^{−1/2} is the normalized form of W_2; and λ is a preset parameter;
C8: using this unnormalized solution with the seed points on each of the four boundaries as queries in turn, compute the correlation between each node and the background seed points on the four boundaries, so as to obtain a background probability value f for each node in the four cases, λ being a preset parameter;
C9: normalize the correlation values between each node and the background seed points on the four boundaries to obtain the normalized values f̄, then invert them (1 − f̄) to obtain a saliency value for each node; the saliency values obtained in the four cases are multiplied element-wise to give the initial result S_MR, used as the foreground probability value of each node.
Optionally, the second calculation module is further configured to:
from the extracted color, texture and position information of the image, compute the weight of each undirected edge by the formula

    w_ij^(1) = exp( −||v_i − v_j|| / σ² )

so as to construct a first affinity matrix W_1 = [w_ij^(1)]_{N×N}, where w_ij^(1) is the weight of the undirected edge between the ith and jth nodes; i and j are node indices with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the ith node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) are the center position coordinates of each superpixel node; (L_i, a_i, b_i) is the mean color, in the CIE LAB color space, of all pixels contained in the superpixel node; c_i is the central prior information of the ith node; ω_i is the LBP value of the ith node; v_j is the feature descriptor of the jth node; and σ is a preset constant controlling the weight balance.
Compared with the prior art, the invention has the following advantages:
By applying the embodiments of the invention, correlation parameters between foreground and background cues are introduced when constructing the graph ranking function. Compared with the prior art, in which the correlation between foreground and background cues is not considered when constructing the graph ranking function, more influencing factors are taken into account and the saliency detection result is more accurate. Meanwhile, because traditional graph-based methods ignore the effect of image features when computing saliency, the final saliency value is constrained by linear learning on the image features, so that both the graph information and the feature information are fully utilized and the final detection result is further improved.
Drawings
Fig. 1 is a schematic flowchart of an image saliency detection method based on correlation-constrained graph ranking according to an embodiment of the present invention;
Fig. 2 is a schematic diagram illustrating the principle of an image saliency detection method based on correlation-constrained graph ranking according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the constructed closed-loop graph model according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an image saliency detection apparatus based on correlation-constrained graph ranking according to an embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
In order to solve the problems of the prior art, embodiments of the present invention provide a method and an apparatus for image saliency detection based on correlation-constrained graph ranking.
Fig. 1 is a schematic flowchart of an image saliency detection method based on correlation-constrained graph ranking according to an embodiment of the present invention, and Fig. 2 is a schematic diagram illustrating its principle. As shown in Fig. 1 and Fig. 2, the method comprises:
S101: for each image to be detected, performing superpixel segmentation with the simple linear iterative clustering (SLIC) algorithm to obtain non-overlapping superpixel blocks, then establishing a closed-loop graph model with each non-overlapping superpixel block as a node, and computing the prior information of each node.
Specifically, step S101 may comprise: A1: for each image to be detected, segmenting the image into N superpixel blocks with the SLIC algorithm, each superpixel serving as a node in a set V; then obtaining the undirected edges corresponding to each node, where each node has an edge with its directly adjacent nodes and with its two-hop neighbors, further edges are computed by formula for the two-hop neighbors of each node, and the nodes on the four boundaries are connected, thereby constructing the undirected graph model G_1 = (V, E); A2: computing the central prior information of each node by the formula

    c_i = exp( −((x_i − x_0)² + (y_i − y_0)²) / (2σ_1²) )

where c_i is the central prior information of the ith node; x_i and y_i are the abscissa and ordinate of the center position of the ith node; (x_0, y_0) are the coordinates of the center position of the whole image; σ_1 is a balance parameter controlling the dispersion of the computed position distances; exp(·) is the exponential function with the natural base e; and i indexes the nodes.
Fig. 3 is a schematic structural diagram of the constructed closed-loop graph model according to an embodiment of the present invention. As shown in Fig. 3, the construction process is: the image to be detected is segmented into N superpixel blocks, each superpixel block is taken as a node, each superpixel is connected with the adjacent superpixels in its local region to construct the edges, and the closed-loop graph model G_1 is finally established.
In practical applications, the edge connections between nodes fall into the following four cases:
1. Each node i has an edge with each of its directly adjacent nodes j.
2. Each node i has an edge with each of its two-hop neighbor nodes k.
3. For each neighbor l of each two-hop neighbor k of node i, the color Euclidean distance dist(k, l) = ||x_k − x_l||_2 is computed; if this distance is smaller than a threshold θ, an edge is considered to connect node l and node i, and the search continues from the newly connected nodes until all nodes have been processed, where dist(k, l) is the color Euclidean distance between the kth and lth nodes; x_k is the color value of the kth node; x_l is the color value of the lth node; and || · ||_2 is the Euclidean norm.
4. All nodes located on the four boundaries are connected with each other, forming a closed loop around the image.
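Cases 1 and 4 above can be sketched directly from a superpixel label map (a minimal sketch; the label-map input format is an assumption, and cases 2–3 would be layered on top of this edge set):

```python
import numpy as np

def closed_loop_edges(labels):
    """Edge set of the closed-loop graph: two superpixels are connected when
    their regions touch horizontally or vertically (case 1), and all superpixels
    on the four image boundaries are connected to each other (case 4)."""
    labels = np.asarray(labels)
    edges = set()
    pairs = np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
    ])
    for a, b in pairs:
        if a != b:
            edges.add((min(a, b), max(a, b)))
    border = set(labels[0]) | set(labels[-1]) | set(labels[:, 0]) | set(labels[:, -1])
    for a in border:                         # close the loop around the image
        for b in border:
            if a < b:
                edges.add((min(a, b), max(a, b)))
    return edges
```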
By applying this embodiment of the invention, compared with closed-loop graph models in the prior art, the 3rd type of connection is added to the constructed closed-loop graph model, which enlarges the local smoothing range of each superpixel region, better captures the consistent characteristics among superpixel regions within certain neighborhoods, and further improves the accuracy of image saliency detection.
S102: the color, texture and position information of the input image is extracted.
Specifically, from the extracted color, texture and position information of the image, the weight of each undirected edge may be computed by the formula

    w_ij^(1) = exp( −||v_i − v_j|| / σ² )

so as to construct a first affinity matrix W_1 = [w_ij^(1)]_{N×N}, where w_ij^(1) is the weight of the undirected edge between the ith and jth nodes; W_1 is the first affinity matrix; i and j are node indices with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the ith node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) are the center position coordinates of each superpixel node; (L_i, a_i, b_i) is the mean color, in the CIE LAB color space, of all pixels contained in the superpixel node; c_i is the central prior information of the ith node; ω_i is the LBP value of the ith node; v_j is the feature descriptor of the jth node; σ is a preset constant controlling the weight balance; and N is the number of superpixel blocks.
As shown in Fig. 2, in practical applications, the color feature of the image is the mean CIE LAB (the LAB color space specified by the Commission Internationale de l'Eclairage) color of the pixels contained in each superpixel region, and the texture feature is the Local Binary Pattern (LBP) feature. The weight W_1 of the edge between two connected superpixel nodes is computed from the difference of their feature combinations.
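A simplified sketch of assembling the node descriptors v_i (here the LBP component is a single precomputed value per node rather than a histogram, and the centroid normalization is an assumption of the sketch):

```python
import numpy as np

def node_descriptors(lab_img, labels, prior, lbp):
    """Build v_i = [x_i, y_i, L_i, a_i, b_i, c_i, omega_i] for every superpixel:
    normalised centroid, mean CIE LAB colour, central prior and LBP value."""
    h, w = labels.shape
    n = int(labels.max()) + 1
    ys, xs = np.mgrid[0:h, 0:w]
    V = np.zeros((n, 7))
    for i in range(n):
        m = labels == i
        V[i, 0] = xs[m].mean() / w            # x_i, normalised centroid
        V[i, 1] = ys[m].mean() / h            # y_i
        V[i, 2:5] = lab_img[m].mean(axis=0)   # (L_i, a_i, b_i) mean LAB colour
        V[i, 5] = prior[i]                    # c_i, central prior
        V[i, 6] = lbp[i]                      # omega_i, LBP value
    return V
```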
S103: acquiring the foreground probability value of each node by using the MR algorithm.
Specifically, the step S103 may include:
c1: performing super-pixel segmentation on the image to be detected by using the SLIC algorithm to obtain n non-overlapping super-pixel blocks X = {x₁, …, x_q, x_{q+1}, …, x_n}, wherein the first q super-pixel blocks are labeled query seed points and the rest are unlabeled super-pixel nodes. Then a closed-loop graph model G₂ = (V, E) is constructed, where V denotes the set of all nodes and E denotes the set of all undirected edges. Each node is connected by edges to its directly adjacent nodes and to the neighbors of those adjacent nodes, and the nodes on the four borders are connected to each other.
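The adjacency part of this step — connecting super-pixel regions that share a border — can be sketched from a segmentation label map. This assumes a label map of the kind SLIC produces, and for brevity omits the neighbor-of-neighbor and border-to-border connections the closed-loop model adds.

```python
import numpy as np

def superpixel_adjacency(labels):
    """Adjacency of super-pixel regions from a segmentation label map:
    regions that share a pixel border become connected nodes.
    A minimal numpy sketch; a real pipeline would feed in SLIC labels."""
    n = labels.max() + 1
    A = np.zeros((n, n), dtype=bool)
    # horizontally and vertically adjacent pixel pairs with different labels
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        mask = a != b
        A[a[mask], b[mask]] = True
        A[b[mask], a[mask]] = True  # keep the graph undirected
    return A

# toy 2x2-block "super-pixels" on a 4x4 image
lab = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
A = superpixel_adjacency(lab)
```

Regions 0 and 3 touch only diagonally, so they stay unconnected here; the closed-loop extension would additionally link second-order neighbors and all border regions.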
C2: constructing a second incidence matrix W₂ = [w²ᵢⱼ]. Wherein, w²ᵢⱼ, the weight of the edge between the ith super-pixel and the jth super-pixel, is given by

w²ᵢⱼ = exp(−‖cᵢ − cⱼ‖ / σ²);

W₂ is the second incidence matrix; i, j ∈ V, i is the serial number of the ith node; j is the serial number of the jth node; cᵢ is the color mean of all pixel points of the ith node in the CIE LAB color space; cⱼ is the color mean of all pixel points of the jth node in the CIE LAB color space; σ is a constant that controls the weight balance and is usually positively correlated with the Euclidean distance of the color means between two nodes.
C3: calculating a degree matrix according to the formula D = diag{d₁₁, …, d_nn}, wherein D is the degree matrix; diag{} is the diagonal-matrix construction function; dᵢᵢ = Σⱼ w²ᵢⱼ is a degree-matrix element, w²ᵢⱼ being the weight of the corresponding undirected edge in the incidence matrix.
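The degree matrix D = diag{d₁₁, …, d_nn} with dᵢᵢ = Σⱼ wᵢⱼ is one line in numpy; the affinity values below are toy numbers, not taken from the patent.

```python
import numpy as np

# Degree matrix built from any symmetric affinity matrix W:
# d_ii is the sum of edge weights incident to node i.
W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
D = np.diag(W.sum(axis=1))  # D = diag(d_11, ..., d_nn)
```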
C4: for each node on the boundary, assigning its label value according to the boundary prior.
C5: using the formula f: X → Rᵐ, calculating the ranking weight corresponding to the image to be detected, wherein f is the ranking function and f = [f₁, …, f_n]ᵀ; f₁ is the ranking value of the 1st node; f_n is the ranking value of the nth node; n is the number of nodes. Let y = [y₁, y₂, …, y_n]ᵀ denote the label vector, wherein the label value of a seed point is 1 and the label values of the remaining nodes are 0; X is the feature matrix corresponding to the input image; R is the real number space; Rᵐ is the m-dimensional real space; m is the spatial dimension; y is the vector formed by the label values of all seed nodes;
c6: using the ranking function formula,

f* = argmin_f (1/2) ( Σᵢ,ⱼ wᵢⱼ ‖fᵢ/√dᵢᵢ − fⱼ/√dⱼⱼ‖² + μ Σᵢ ‖fᵢ − yᵢ‖² ),

computing a closed solution, wherein f* is the optimal ranking function; argmin() returns the argument minimizing the function; Σ is the summation function; fᵢ is the ranking value of the ith node; fⱼ is the ranking value of the jth node; yᵢ is the label value of the ith node; wᵢⱼ is the weight of the undirected edge; dᵢᵢ is the element of the ith row and ith column of the degree matrix; dⱼⱼ is the element of the jth row and jth column of the degree matrix; μ is the balance parameter.
C7: according to the closed solution, using the formula,

f* = (D − λW₂)⁻¹ y,

obtaining an unnormalized solution, wherein D is the degree matrix; W₂ is the second incidence matrix; S is the normalization matrix of W₂.
The closed solution calculated in the step C6 may be:

f* = (I − λS)⁻¹ y,

wherein λ is a preset parameter; f* is the closed solution; I is the identity matrix. From the closed solution and the degree matrix, the unnormalized solution f* = (D − λW₂)⁻¹ y can be obtained.
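The unnormalized closed-form manifold-ranking solution f* = (D − λW₂)⁻¹y can be sketched as follows; the graph weights are toy values and `alpha` stands in for the preset parameter λ.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Unnormalized closed-form manifold-ranking solution
    f* = (D - alpha * W)^{-1} y, the form used by the MR saliency
    method this patent builds on (a sketch, not the patent's code)."""
    D = np.diag(W.sum(axis=1))        # degree matrix
    return np.linalg.solve(D - alpha * W, y)

W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
y = np.array([1.0, 0.0, 0.0])         # node 0 is the labelled query
f = manifold_ranking(W, y)
```

Nodes strongly connected to the query receive higher ranking values: here node 1 (weight 0.9 to the query) ranks above node 2 (weight 0.1).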
C8: by means of the formula,

f = (D − λW₂)⁻¹ y,

respectively calculating the correlation between each node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node in the four cases, wherein λ is a preset parameter.
In practical application, in image saliency detection, a degree matrix is usually used to replace an identity matrix in a closed solution, so as to obtain the above formula. For example, the correlation value between each node and the background seed points on the four boundaries, i.e., the background probability value of each node in the four cases, is calculated as f.
C9: normalizing the correlation values between each node and the background seed points on the four boundaries to obtain the normalized correlation f̄, then taking the complement to obtain the saliency value of each node; and multiplying element-wise the saliency values obtained in the four cases to obtain the initial result S_MR as the foreground probability value.
In practical application, the value obtained by inverting the normalized background probability value is as follows:
Figure BDA0001706125590000151
The point multiplication of the saliency values obtained in the four cases to obtain the initial result may be:

S_bq(i) = S_t(i) × S_b(i) × S_l(i) × S_r(i).
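The normalize-then-invert step and the four-boundary product can be sketched as below; the random vectors merely stand in for the four per-boundary background-relevance vectors f computed in step C8.

```python
import numpy as np

def boundary_saliency(f):
    """Normalize one boundary's background-relevance vector to [0, 1]
    and invert it, giving per-node saliency for that boundary
    (a sketch of step C9; names are illustrative)."""
    f = (f - f.min()) / (f.max() - f.min())
    return 1.0 - f

rng = np.random.default_rng(1)
# stand-ins for the top / bottom / left / right relevance vectors
maps = [boundary_saliency(rng.random(6)) for _ in range(4)]
S_bq = maps[0] * maps[1] * maps[2] * maps[3]   # element-wise product
```

The product keeps a node salient only if all four boundary cues agree, which is why the patent multiplies rather than averages.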
It is emphasized that in step S103 the foreground probability value of each node is obtained by using the MR algorithm, i.e., the S_MR (foreground probability value) of the image to be detected produced by the classical graph-based manifold ranking algorithm.
Taking each super-pixel as a node, connecting each super-pixel to the super-pixels adjacent to it in the local area to construct edges, a closed-loop graph model G₂ is established. Then the LAB color feature of each super-pixel region is extracted, and the weight W₂ of each edge is calculated from the difference in color features between the two connected super-pixel nodes. The algorithm is divided into two stages. In the first stage, the super-pixels on the four boundaries of the image are selected as background query points according to the prior information calculated in step S101; the correlation between each super-pixel node and the background query points is then calculated with the manifold ranking algorithm to obtain the probability that each super-pixel belongs to the background, which is normalized and then inverted to obtain an initial saliency result. In the second stage, the initial result of the first stage is binarized to screen out foreground query points, and the correlation between each super-pixel node and the foreground query points is calculated with the manifold ranking algorithm, yielding the S_MR (foreground probability value) of the image to be detected.
S104: taking the set of nodes whose foreground probability value is larger than a first preset threshold as the foreground seed point set ind_form; taking the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back; the first preset threshold is greater than the second preset threshold.
Specifically, the step S104 may include:
d1: by means of the formula (I) and (II),
Figure BDA0001706125590000161
acquiring a first preset threshold and a second preset threshold, wherein h₁ is the first preset threshold; h₂ is the second preset threshold; mean is the averaging function; max is the maximum-value function;
d2: by means of the formula (I) and (II),
Figure BDA0001706125590000162
acquiring the foreground seed point set ind_form and the background seed point set ind_back, wherein ind_form is the foreground seed point set; ind_back is the background seed point set; θ is a preset parameter.
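The seed-set selection of step D can be sketched as below. Since the patent's exact threshold formulas are only given as equation images, the mean-based thresholds here are purely illustrative, and the names `ind_fore`/`ind_back` merely mirror the patent's ind_form/ind_back sets.

```python
import numpy as np

# Split nodes into foreground / background seed sets by thresholding the
# initial foreground probability S_MR with thresholds h1 > h2.
# The threshold formulas below are illustrative assumptions, not the patent's.
S_MR = np.array([0.9, 0.7, 0.4, 0.2, 0.05])
h1 = S_MR.mean()            # first (higher) preset threshold, illustrative
h2 = 0.5 * S_MR.mean()      # second (lower) preset threshold, illustrative
ind_fore = np.where(S_MR > h1)[0]   # confident foreground seeds
ind_back = np.where(S_MR < h2)[0]   # confident background seeds
```

Nodes between the two thresholds are deliberately left unlabeled, which keeps only confident seeds on each side.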
S105: calculating the foreground probability S_f of each super-pixel node by using the model of correlation-constrained graph ranking, and taking the foreground probability value S_f as the final saliency estimate S_final.
Specifically, the step S105 may include: E1: using the formula F: X → Rⁿ, calculating the ranking weight corresponding to the image to be detected, wherein F is the ranking function; Fᵢ denotes the ranking value of the ith node, and F = (f, g); f is the probability that each node belongs to the foreground, and g is the probability that each node belongs to the background; E2: by means of the formula,
Figure BDA0001706125590000163
acquiring the label value of each node; using the formula Y = (y₁, y₂) ∈ R^{m×2}, obtaining the label vector of each node, wherein Y is the label vector of each node; y₁ is the label value for a node belonging to the foreground; y₂ is the label value for a node belonging to the background; R^{m×2} is the m×2-dimensional real space.
E3: the model formula of the constructed correlation-constrained graph ranking is,

F* = argmin_F (1/2) Σᵢ,ⱼ Wᵢⱼ ‖Fᵢ/√dᵢ − Fⱼ/√dⱼ‖² + μ Σᵢ ‖Fᵢ − Yᵢ‖² + λ Σᵢ fᵢgᵢ + β₁ Σᵢ ‖wᵢᵀxᵢ + bᵢ − fᵢ‖² + β₂ Σᵢ ‖wᵢᵀxᵢ + bᵢ − gᵢ‖²,

wherein F* is the closed solution; Wᵢⱼ is the element of the first incidence matrix corresponding to the weight of the undirected edge between the ith node and the jth node; Fᵢ is the ranking value of the ith node; Fⱼ is the ranking value of the jth node; dᵢ is a degree-matrix element; fᵢ is the foreground probability of the ith node; gᵢ is the background probability of the ith node; wᵢ is the feature weight of the ith node; xᵢ is the feature of the ith node; bᵢ is a bias parameter; β₁ is the linear constraint coefficient for the foreground probability; β₂ is the linear constraint coefficient for the background probability; E4: taking the partial derivative with respect to the foreground probability in the ranking model of step E3 to obtain the saliency value.
In the formula of step E3, the first term is a smoothing term: since the area surrounding a region with a given characteristic usually shares similar features with it, the ranking scores of nodes in adjacent local areas should be as similar as possible, so a smoothing term is added. The second term is a fitting term, making the difference between the finally calculated ranking value and the given initial label value as small as possible. The third term is a constraint on f and g, making the calculated correlation between f and g as small as possible. The fourth and fifth terms are linear constraint terms on f and g respectively, constraining the final saliency value through linear learning of the image features.
In practical applications, the step of E4 may include:
1) simplifying the model formula of the constructed related constraint graph sequencing to obtain an optimized formula,
Figure BDA0001706125590000171
2) fixing f, the optimal solutions of b and W can be obtained, so there are:
Figure BDA0001706125590000172
and
Figure BDA0001706125590000173
wherein,
Figure BDA0001706125590000174
1 is the all-ones vector and I is the identity matrix, so one can obtain
Figure BDA0001706125590000175
Wherein,
Figure BDA0001706125590000181
3) writing the solving problem of the correlation-constrained graph ranking model as the following formula:

J = Tr[FᵀA*F − μFᵀY] + λfᵀg + β₁‖XᵀW_f + b_f·1 − f‖² + β₂‖XᵀW_g + b_g·1 − g‖², wherein

A* = (1 + μ)D − W.
4) substituting F = (f, g) and Y = (y₁, y₂) into the formula and simplifying yields the formula,
Figure BDA0001706125590000182
5) taking the derivative with respect to f in step 4) gives:
Figure BDA0001706125590000183
6) taking the derivative with respect to g in step 4) gives:
Figure BDA0001706125590000184
7) from 5) and 6) one can obtain:
Figure BDA0001706125590000185
8) calculated according to the formula of 7):

f* = μ(λ²I − 4(A*)² − 2A*β₁B − 2β₂BA* − 2β₁B²)⁻¹(λy₂ − 2A*y₁ − 2By₁);
Figure BDA0001706125590000191
The resulting f* is taken as the final saliency estimate S_final.
By applying the embodiment shown in fig. 1 of the invention, when the graph ranking function is constructed, an association parameter between the foreground cues and the background cues is introduced; when the correlation between each super-pixel node and the given foreground and background query points is calculated simultaneously, a correlation constraint condition is added, so that the correlation between the obtained foreground probability value and background probability value is reduced. Meanwhile, because the traditional composition-based approach ignores the role of image features in the saliency calculation, the final saliency value is constrained by linear learning of the image features, making full use of both the composition information and the feature information and further improving the final detection result.
Corresponding to the image significance detection method based on the related constraint graph sorting provided by the embodiment of the invention, the embodiment of the invention also provides an image significance detection device based on the related constraint graph sorting.
Fig. 4 is a schematic structural diagram of an image saliency detection apparatus based on a related constraint map sorting according to an embodiment of the present invention, as shown in fig. 4, the apparatus includes:
the first calculation module 401 is configured to perform superpixel segmentation on each image to be detected by using a simple linear iterative clustering SLIC algorithm to obtain non-overlapping superpixel blocks, then establish a closed-loop graph model by using each non-overlapping superpixel block as a node, and further calculate prior information of each node;
an input module 402, configured to extract information such as color, texture, and position of an input image;
a second calculating module 403, configured to obtain a foreground probability value of each node by using an MR algorithm;
a first setting module 404, configured to take the set of nodes whose foreground probability value is larger than a first preset threshold as the foreground seed point set ind_form; take the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back; the first preset threshold is greater than the second preset threshold;
and a second setting module 405, configured to calculate the foreground probability S_f and the background probability S_g of each super-pixel node by using the model of correlation-constrained graph ranking, and take the foreground probability value S_f as the final saliency estimate S_final.
By applying the embodiment shown in fig. 4 of the invention, when the graph ranking function is constructed, an association parameter between the foreground cues and the background cues is introduced; when the correlation between each super-pixel node and the given foreground and background query points is calculated simultaneously, a correlation constraint condition is added, so that the correlation between the obtained foreground probability value and background probability value is reduced. Meanwhile, because the traditional composition-based approach ignores the role of image features in the saliency calculation, the final saliency value is constrained by linear learning of the image features, making full use of both the composition information and the feature information and further improving the final detection result.
In a specific implementation manner of the embodiment of the present invention, the first calculation module 401 is further configured to:
a1: for each image to be detected, performing super-pixel segmentation of the image into N super-pixel blocks by using the SLIC algorithm, each super-pixel serving as a node in the set V; then obtaining the undirected edge corresponding to each node, and further constructing an undirected graph model G₁ = (V, E);
A2: by means of the formula,

cᵢ = exp(−((xᵢ − x₀)² + (yᵢ − y₀)²) / (2σ₁²)),

central prior information is calculated for each node, wherein cᵢ is the central prior information of the ith node; xᵢ is the abscissa of the center position of the ith node; yᵢ is the ordinate of the center position of the ith node; (x₀, y₀) are the coordinates of the center position of the whole image; σ₁ is the balance parameter controlling the dispersion of the calculated position distance; exp() is the exponential function with the natural base; i is the serial number of the node.
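A Gaussian center prior of the kind step A2 describes can be sketched as follows. The exact formula in the patent is only given as an equation image, so the distance scaling below is an assumption; function and parameter names are illustrative.

```python
import numpy as np

def center_prior(centers, img_w, img_h, sigma1=0.25):
    """Gaussian centre prior: super-pixel nodes near the image centre
    score near 1, nodes near the border score near 0 (assumed form;
    the patent's exact formula is an equation image)."""
    x0, y0 = img_w / 2.0, img_h / 2.0          # image centre
    d2 = (centers[:, 0] - x0) ** 2 + (centers[:, 1] - y0) ** 2
    scale = (img_w ** 2 + img_h ** 2) * sigma1 ** 2  # assumed dispersion
    return np.exp(-d2 / scale)

# node centres in a 100x100 image: one central, one near a corner
centers = np.array([[50.0, 50.0], [5.0, 5.0]])
c = center_prior(centers, 100, 100)
```

A node exactly at the centre gets prior 1, and the prior decays monotonically with distance from the centre.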
In a specific implementation manner of the embodiment of the present invention, the second calculating module 403 is further configured to:
c1: acquiring the weight of each undirected edge of each node in the MR algorithm;
c2: constructing the second incidence matrix W₂ = [w²ᵢⱼ] of the MR algorithm according to the weight of each undirected edge. Wherein, w²ᵢⱼ, the weight of the edge between the ith super-pixel and the jth super-pixel, is given by

w²ᵢⱼ = exp(−‖cᵢ − cⱼ‖ / σ²);

W₂ is the second incidence matrix; i, j ∈ V, i is the serial number of the ith node; j is the serial number of the jth node; cᵢ is the color mean of all pixel points of the ith node in the CIE LAB color space; cⱼ is the color mean of all pixel points of the jth node in the CIE LAB color space; σ is a constant controlling the weight balance;
c3: calculating the degree matrix according to the formula D = diag{d₁₁, …, d_nn}, wherein D is the degree matrix; diag{} is the diagonal-matrix construction function; dᵢᵢ = Σⱼ w²ᵢⱼ is a degree-matrix element, w²ᵢⱼ being the weight of the corresponding undirected edge in the incidence matrix;
c4: for each node on the boundary, marking the marking value of the node according to boundary prior;
c5: using the formula f: X → Rᵐ, calculating the ranking weight corresponding to the image to be detected, wherein f is the ranking function and f = [f₁, …, f_n]ᵀ; f₁ is the ranking value of the 1st node; f_n is the ranking value of the nth node; n is the number of nodes. Let y = [y₁, y₂, …, y_n]ᵀ denote the label vector, wherein the label value of a seed point is 1 and the label values of the remaining nodes are 0; X is the feature matrix corresponding to the input image; R is the real number space; Rᵐ is the m-dimensional real space; m is the spatial dimension; y is the vector formed by the label values of all seed nodes;
c6: by using the ranking function formula,

f* = argmin_f (1/2) ( Σᵢ,ⱼ wᵢⱼ ‖fᵢ/√dᵢᵢ − fⱼ/√dⱼⱼ‖² + μ Σᵢ ‖fᵢ − yᵢ‖² ),

a closed solution is calculated, wherein f* is the optimal ranking function; argmin() returns the argument minimizing the function; Σ is the summation function; fᵢ is the ranking value of the ith node; fⱼ is the ranking value of the jth node; yᵢ is the label value of the ith node; wᵢⱼ is the weight of the undirected edge; dᵢᵢ is the element of the ith row and ith column of the degree matrix; dⱼⱼ is the element of the jth row and jth column of the degree matrix; μ is the balance parameter;
c7: according to the closed solution, using the formula,

f* = (D − λW₂)⁻¹ y,

obtaining an unnormalized solution, wherein D is the degree matrix; W₂ is the second incidence matrix; S is the normalization matrix of W₂;
c8: by means of the formula,

f = (D − λW₂)⁻¹ y,

respectively calculating the correlation between each node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node in the four cases, wherein λ is a preset parameter;
C9: normalizing the correlation values between each node and the background seed points on the four boundaries to obtain the normalized correlation f̄, then taking the complement to obtain the saliency value of each node; and multiplying element-wise the saliency values obtained in the four cases to obtain the initial result S_MR as the foreground probability value of the node.
In a specific implementation manner of the embodiment of the present invention, the second calculating module 403 is further configured to:
by means of the formula,

w¹ᵢⱼ = exp(−‖vᵢ − vⱼ‖ / σ²),

calculating the weight of each undirected edge to construct a first incidence matrix W₁ = [w¹ᵢⱼ], wherein w¹ᵢⱼ is the weight of the undirected edge between the ith node and the jth node; i and j are serial numbers of nodes, with 0 ≤ i, j ≤ N; vᵢ is the feature descriptor of the ith node, with vᵢ ∈ R⁶⁵ and vᵢ = [xᵢ, yᵢ, Lᵢ, aᵢ, bᵢ, cᵢ, ωᵢ]; (xᵢ, yᵢ) are the center position coordinates of each super-pixel node; (Lᵢ, aᵢ, bᵢ) is the color mean of all pixel points contained in each super-pixel node in the CIE LAB color space; cᵢ is the central prior information of the ith node; ωᵢ is the LBP value of the ith node; vⱼ is the feature descriptor of the jth node; σ is a preset constant controlling the weight balance.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. An image saliency detection method based on relevance constraint graph sorting is characterized by comprising the following steps:
a: aiming at each image to be detected, performing superpixel segmentation on the image to be detected by using a simple linear iterative clustering SLIC algorithm to obtain non-overlapping superpixel blocks, then establishing a closed-loop graph model by taking each non-overlapping superpixel block as a node, and further calculating the central priori information of each node;
b: extracting color, texture and position information of an input image;
c: obtaining a foreground probability value of each node by using a significance detection algorithm based on manifold sorting of a graph;
d: taking a set of nodes with foreground probability values larger than a first preset threshold value as a foreground seed point set ind _ form; taking a set of nodes with the foreground probability value smaller than a second preset threshold value as a background seed point set ind _ back; the first preset threshold is greater than the second preset threshold;
e: calculating by using a model of related constraint graph sequencing to obtain a foreground probability S _ f of each super pixel node, and using the foreground probability value S _ f as a final significant estimation value S _ final;
and step C, comprising:
c1: acquiring the weight of each undirected edge of each node in a significance detection algorithm based on manifold sorting of a graph;
c2: constructing a second incidence matrix W₂ = [w²ᵢⱼ] of the significance detection algorithm based on manifold ranking of the graph according to the weight of each undirected edge. Wherein, w²ᵢⱼ, the weight of the edge between the ith super-pixel and the jth super-pixel, is given by

w²ᵢⱼ = exp(−‖cᵢ − cⱼ‖ / σ²);

W₂ is the second incidence matrix; i, j ∈ V, i is the serial number of the ith node; j is the serial number of the jth node; cᵢ is the color mean of all pixel points of the ith node in the CIE LAB color space; cⱼ is the color mean of all pixel points of the jth node in the CIE LAB color space; σ is a constant controlling the weight balance;
c3: calculating a degree matrix according to the formula D = diag{d₁₁, …, d_nn}, wherein D is the degree matrix; diag{} is the diagonal-matrix construction function; dᵢᵢ = Σⱼ w²ᵢⱼ is a degree-matrix element, w²ᵢⱼ being the weight of the corresponding undirected edge in the incidence matrix;
c4: for each node on the boundary, marking the marking value of the node according to boundary prior;
c5: using the formula f: X → Rᵐ, calculating the ranking weight corresponding to the image to be detected, wherein f is the ranking function and f = [f₁, …, f_n]ᵀ; f₁ is the ranking value of the 1st node; f_n is the ranking value of the nth node; n is the number of nodes. Let y = [y₁, y₂, …, y_n]ᵀ denote the label vector, wherein the label value of a seed point is 1 and the label values of the remaining nodes are 0; X is the feature matrix corresponding to the input image; R is the real number space; Rᵐ is the m-dimensional real space; m is the spatial dimension; y is the vector formed by the label values of all seed nodes;
c6: by using the ranking function formula,

f* = argmin_f (1/2) ( Σᵢ,ⱼ wᵢⱼ ‖fᵢ/√dᵢᵢ − fⱼ/√dⱼⱼ‖² + μ Σᵢ ‖fᵢ − yᵢ‖² ),

a closed solution is calculated, wherein f* is the optimal ranking function; argmin() returns the argument minimizing the function; Σ is the summation function; fᵢ is the ranking value of the ith node; fⱼ is the ranking value of the jth node; yᵢ is the label value of the ith node; wᵢⱼ is the weight of the undirected edge; dᵢᵢ is the element of the ith row and ith column of the degree matrix; dⱼⱼ is the element of the jth row and jth column of the degree matrix; μ is the balance parameter;
c7: according to the closed solution, using the formula,

f* = (D − λW₂)⁻¹ y,

obtaining an unnormalized solution, wherein D is the degree matrix; W₂ is the second incidence matrix; S is the normalization matrix of W₂;
c8: by means of the formula,

f = (D − λW₂)⁻¹ y,

respectively calculating the correlation between each node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node in the four cases, wherein λ is a preset parameter;
c9: normalizing the correlation values between each node and the background seed points on the four boundaries to obtain the normalized correlation f̄, then taking the complement to obtain the saliency value of each node; and multiplying element-wise the saliency values obtained in the four cases to obtain the initial result S_MR as the foreground probability value of the node.
2. The method for detecting the significance of the image based on the related constraint map sorting as claimed in claim 1, wherein the step A comprises:
a1: for each image to be detected, performing super-pixel segmentation of the image into N super-pixel blocks by using the SLIC algorithm, each super-pixel serving as a node in the set V; then obtaining the undirected edge corresponding to each node, and further constructing an undirected graph model G₁ = (V, E);
A2: by means of the formula,

cᵢ = exp(−((xᵢ − x₀)² + (yᵢ − y₀)²) / (2σ₁²)),

central prior information is calculated for each node, wherein cᵢ is the central prior information of the ith node; xᵢ is the abscissa of the center position of the ith node; yᵢ is the ordinate of the center position of the ith node; (x₀, y₀) are the coordinates of the center position of the whole image; σ₁ is the balance parameter controlling the dispersion of the calculated position distance; exp() is the exponential function with the natural base; i is the serial number of the node.
3. The method for detecting the saliency of images based on the ordering of related constraint maps according to claim 2, characterized in that the undirected edges are obtained by:
For each node i, for each neighboring node l of its neighbor k, the Euclidean color distance is computed with the formula dist(k, l) = ‖x_k − x_l‖²; if the color distance is smaller than the threshold θ, an undirected edge is connected between node l and node i, and the search continues from the connected nodes until all nodes are connected, wherein dist(k, l) is the Euclidean color distance between the kth node and the lth node; x_k is the color value of the kth node; x_l is the color value of the lth node; ‖·‖ is the norm function.
4. The method for detecting the saliency of images based on the ordering of correlation constraint maps according to claim 1, wherein the step B comprises:
according to the extracted color, texture, and position information of the image; by means of the formula,

w¹ᵢⱼ = exp(−‖vᵢ − vⱼ‖ / σ²),

calculating the weight of each undirected edge to construct a first incidence matrix W₁ = [w¹ᵢⱼ], wherein w¹ᵢⱼ is the weight of the undirected edge between the ith node and the jth node; W₁ is the first incidence matrix; i and j are serial numbers of nodes, with 0 ≤ i, j ≤ N; vᵢ is the feature descriptor of the ith node, with vᵢ ∈ R⁶⁵ and vᵢ = [xᵢ, yᵢ, Lᵢ, aᵢ, bᵢ, cᵢ, ωᵢ]; (xᵢ, yᵢ) are the center position coordinates of each super-pixel node; (Lᵢ, aᵢ, bᵢ) is the color mean of all pixel points contained in each super-pixel node in the CIE LAB color space; cᵢ is the central prior information of the ith node; ωᵢ is the LBP value of the ith node; vⱼ is the feature descriptor of the jth node; σ is a preset constant controlling the weight balance; N is the number of super-pixel blocks.
5. The method for detecting the significance of the image based on the related constraint map sorting as claimed in claim 1, wherein the step D comprises:
d1: by means of the formula (I) and (II),
Figure FDA0003098967730000043
acquiring a first preset threshold and a second preset threshold, wherein h₁ is the first preset threshold; h₂ is the second preset threshold; mean is the averaging function; max is the maximum-value function;
d2: by means of the formula (I) and (II),
Figure FDA0003098967730000044
obtaining the foreground seed point set ind_form and the background seed point set ind_back, wherein ind_form is the foreground seed point set; ind_back is the background seed point set; θ is a preset parameter.
6. The method for detecting the significance of the image based on the related constraint map ordering according to claim 1, wherein the step E comprises the following steps:
e1: using the formula F: X → Rⁿ, calculating the ranking weight corresponding to the image to be detected, wherein F is the ranking function; Fᵢ denotes the ranking value of the ith node, and F = (f, g); f is the probability that each node belongs to the foreground, and g is the probability that each node belongs to the background;
e2: by means of the formula (I) and (II),
Figure FDA0003098967730000051
acquiring the label value of each node; using the formula Y = (y₁, y₂) ∈ R^{m×2}, obtaining the label vector of each node, wherein Y is the label vector of each node; y₁ is the label value for a node belonging to the foreground; y₂ is the label value for a node belonging to the background; R^{m×2} is the m×2-dimensional real space;
e3: the model formula of the constructed correlation-constrained graph ranking is,

F* = argmin_F (1/2) Σᵢ,ⱼ Wᵢⱼ ‖Fᵢ/√dᵢ − Fⱼ/√dⱼ‖² + μ Σᵢ ‖Fᵢ − Yᵢ‖² + λ Σᵢ fᵢgᵢ + β₁ Σᵢ ‖wᵢᵀxᵢ + bᵢ − fᵢ‖² + β₂ Σᵢ ‖wᵢᵀxᵢ + bᵢ − gᵢ‖²,

wherein F* is the closed solution; Wᵢⱼ is the element of the first incidence matrix corresponding to the weight of the undirected edge between the ith node and the jth node; Fᵢ is the ranking value of the ith node; Fⱼ is the ranking value of the jth node; dᵢ is a degree-matrix element; fᵢ is the foreground probability of the ith node; gᵢ is the background probability of the ith node; wᵢ is the feature weight of the ith node; xᵢ is the feature of the ith node; bᵢ is a bias parameter; β₁ is the linear constraint coefficient for the foreground probability; β₂ is the linear constraint coefficient for the background probability;
e4: taking the partial derivative with respect to the foreground probability in the ranking model of step e3 to obtain the saliency value.
7. An apparatus for detecting image saliency based on relevance constraint map ordering, the apparatus comprising:
a first calculation module, used for performing superpixel segmentation on each image to be detected with the simple linear iterative clustering (SLIC) algorithm to obtain non-overlapping superpixel blocks, establishing a closed-loop graph model with each non-overlapping superpixel block as a node, and then calculating the center prior information of each node;
the input module is used for extracting color, texture and position information of an input image;
a second calculation module, used for acquiring the foreground probability value of each node with a graph-based manifold-ranking saliency detection algorithm;
a first setting module, used for taking the set of nodes whose foreground probability value is larger than a first preset threshold as the foreground seed point set ind_fore, and the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back, the first preset threshold being greater than the second preset threshold;
a second setting module, used for calculating the foreground probability S_f and the background probability S_g of each superpixel node with the correlation-constrained graph ranking model, and taking the foreground probability S_f as the final saliency estimate S_final;
the second calculation module is further configured to:
C1: acquiring the weight of each undirected edge of each node in the graph-based manifold-ranking saliency detection algorithm;
C2: constructing, from the weight of each undirected edge, the second affinity matrix W2 of the graph-based manifold-ranking saliency detection algorithm, whose elements

    w2_ij = exp( −‖c_i − c_j‖ / σ² ) if nodes i and j are connected, and w2_ij = 0 otherwise,

are the weights of the edges between the ith and jth superpixels, wherein
W2 is the second affinity matrix; i, j ∈ V, with i the index of the ith node and j the index of the jth node; c_i is the mean color of all pixels of the ith node in the CIE LAB color space; c_j is the mean color of all pixels of the jth node in the CIE LAB color space; σ is a constant controlling the weight balance;
C3: according to the formula D = diag{d_11, …, d_nn}, calculate the degree matrix, wherein
D is the degree matrix; diag{} constructs a diagonal matrix; d_ii is a degree-matrix element, with

    d_ii = Σ_j w2_ij,

where w2_ij is the weight of the undirected edge in the second affinity matrix;
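A minimal numpy sketch of steps C2–C3, assuming a precomputed adjacency relation and mean LAB color per superpixel (the function name, `adjacency` input, and σ value are illustrative, not from the patent):

```python
import numpy as np

def affinity_and_degree(colors, adjacency, sigma=10.0):
    """Steps C2-C3 (sketch): edge weights w2_ij = exp(-||c_i - c_j|| / sigma^2)
    between adjacent superpixels, plus the diagonal degree matrix D.

    colors    : (n, 3) mean CIE-LAB color of each superpixel node
    adjacency : (n, n) boolean, True where nodes i and j share an undirected edge
    """
    # pairwise color distances ||c_i - c_j||
    dist = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    W2 = np.where(adjacency, np.exp(-dist / sigma**2), 0.0)  # second affinity matrix
    np.fill_diagonal(W2, 0.0)                                # no self-loops
    D = np.diag(W2.sum(axis=1))                              # d_ii = sum_j w2_ij
    return W2, D

# three nodes in a chain; node 1's color is closer to node 0 than to node 2
colors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
adj = np.array([[False, True, False],
                [True, False, True],
                [False, True, False]])
W2, D = affinity_and_degree(colors, adj)
```

Similar colors on connected nodes give weights near 1; unconnected pairs stay at 0.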
C4: for each node on the boundary, mark the node's label value according to the boundary prior;
C5: using the formula f : X → R^m, calculate the corresponding ranking weights of the image to be detected, wherein
f is the ranking function, and f = [f_1, …, f_n]^T; f_1 is the ranking value of the 1st node; f_n is the ranking value of the nth node; n is the number of nodes; let y = [y_1, y_2, …, y_n]^T denote the label vector, where the label value of a seed point is 1 and the label values of the remaining nodes are 0; X is the feature matrix corresponding to the input image; R is the real-number space; R^m is the m-dimensional real space; m is the spatial dimension; y is the vector formed by the label values of all nodes;
C6: using the ranking-function objective

    f* = argmin_f (1/2) [ Σ_{i,j=1}^{n} w2_ij ‖ f_i/√(d_ii) − f_j/√(d_jj) ‖² + μ Σ_{i=1}^{n} ‖ f_i − y_i ‖² ],

calculate the closed-form solution, wherein
f* is the optimal ranking function; argmin_f returns the argument minimizing the objective; Σ is the summation; f_i is the ranking value of the ith node; f_j is the ranking value of the jth node; y_i is the label value of the ith node; y_n is the label value of the nth node; w2_ij is the weight of the undirected edge; d_ii is the element in row i, column i of the degree matrix; d_jj is the element in row j, column j of the degree matrix; μ is a balance parameter;
C7: from the closed-form solution, using the formula

    f* = (D − λ W2)^{−1} y,

obtain the unnormalized solution, wherein
D is the degree matrix; W2 is the second affinity matrix; λ is a preset parameter; S = D^{−1/2} W2 D^{−1/2} is the normalization matrix of W2;
C8: using the formula

    f = (D − λ W2)^{−1} y,

with y the indicator vector of the background seed points on one boundary, respectively calculate the correlation between each node and the background seed points on each of the four boundaries, obtaining a background probability value f of each node in each of the four cases, wherein λ is a preset parameter;
C9: normalize the correlation values between each node and the background seed points on the four boundaries to obtain f̄(i) ∈ [0, 1], then take the complement to obtain the saliency value of each node; the element-wise product of the saliency values obtained in the four cases gives the initial result S_MR, which serves as the foreground probability value of each node.
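Putting C7–C9 together, a toy sketch (the boundary seed indices, λ value, and function name are illustrative; a real image would use the superpixels touching each of the four image borders):

```python
import numpy as np

def mr_saliency(W2, boundary_seed_sets, lam=0.99):
    """Steps C7-C9 (sketch): for each boundary, rank all nodes against that
    boundary's background seeds via the unnormalized closed-form solution,
    normalize to [0, 1], invert, and multiply the resulting maps."""
    D = np.diag(W2.sum(axis=1))
    n = W2.shape[0]
    S = np.ones(n)
    for seeds in boundary_seed_sets:
        y = np.zeros(n)
        y[seeds] = 1.0                                       # boundary seeds get label 1 (C4)
        f = np.linalg.solve(D - lam * W2, y)                 # unnormalized solution (C7)
        f_bar = (f - f.min()) / (f.max() - f.min() + 1e-12)  # normalize to [0, 1] (C9)
        S *= 1.0 - f_bar                                     # background-like -> low saliency
    return S                                                 # initial result S_MR

# 4-node chain, treating node 0 and node 3 as two "boundaries"
W2 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
S_MR = mr_saliency(W2, [[0], [3]])
```

Nodes far from every boundary seed keep a high product and thus a high foreground probability.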
8. The apparatus according to claim 7, wherein the first calculation module is further configured to:
A1: for each image to be detected, segment the image into N superpixel blocks with the SLIC algorithm, each superpixel serving as a node in the set V; then acquire the undirected edge corresponding to each node, and construct the undirected graph model G_1 = (V, E);
A2: using the formula

    c_i = exp( −( (x_i − x_0)² + (y_i − y_0)² ) / (2σ_1²) ),

calculate the center prior information of each node, wherein
c_i is the center prior information of the ith node; x_i is the abscissa of the center position of the ith node; y_i is the ordinate of the center position of the ith node; (x_0, y_0) is the coordinate of the center position of the whole image; σ_1 is a balance parameter controlling the spread of the computed position distance; exp() is the exponential function with natural base; i is the node index.
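Step A2 can be sketched as follows (the coordinates and σ_1 value are illustrative):

```python
import numpy as np

def center_prior(centers, img_center, sigma1=50.0):
    """Step A2 (sketch): Gaussian center prior per superpixel node.

    centers    : (n, 2) array of node-center (x_i, y_i) coordinates
    img_center : (x0, y0), the center position of the whole image
    sigma1     : balance parameter controlling the spread
    """
    x0, y0 = img_center
    d2 = (centers[:, 0] - x0) ** 2 + (centers[:, 1] - y0) ** 2  # squared distance to center
    return np.exp(-d2 / (2.0 * sigma1 ** 2))

# node 0 sits exactly at the image center, nodes 1 and 2 progressively farther away
centers = np.array([[100.0, 100.0], [180.0, 100.0], [20.0, 20.0]])
prior = center_prior(centers, (100.0, 100.0))
```

The node at the image center receives prior 1.0, and the prior decays with distance.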
CN201810658629.3A 2018-06-25 2018-06-25 Image significance detection method and device based on related constraint graph sorting Active CN108846404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810658629.3A CN108846404B (en) 2018-06-25 2018-06-25 Image significance detection method and device based on related constraint graph sorting


Publications (2)

Publication Number Publication Date
CN108846404A (en) 2018-11-20
CN108846404B (en) 2021-10-01

Family

ID=64203559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810658629.3A Active CN108846404B (en) 2018-06-25 2018-06-25 Image significance detection method and device based on related constraint graph sorting

Country Status (1)

Country Link
CN (1) CN108846404B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993173B (en) * 2019-03-28 2023-07-21 华南理工大学 Weak supervision image semantic segmentation method based on seed growth and boundary constraint
CN110111353B (en) * 2019-04-29 2020-01-24 河海大学 Image significance detection method based on Markov background and foreground absorption chain
CN110188763B (en) * 2019-05-28 2021-04-30 江南大学 Image significance detection method based on improved graph model
CN110287802B (en) * 2019-05-29 2022-08-12 南京邮电大学 Human eye gaze point prediction method based on optimized image foreground and background seeds
CN110298842A (en) * 2019-06-10 2019-10-01 上海工程技术大学 A kind of rail clip image position method based on super-pixel node sequencing
CN110533593B (en) * 2019-09-27 2023-04-11 山东工商学院 Method for quickly creating accurate trimap
CN117372431B (en) * 2023-12-07 2024-02-20 青岛天仁微纳科技有限责任公司 Image detection method of nano-imprint mold

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038344A (en) * 1996-07-12 2000-03-14 The United States Of America As Represented By The Secretary Of The Navy Intelligent hypersensor processing system (IHPS)
CN104123734A (en) * 2014-07-22 2014-10-29 西北工业大学 Visible light and infrared detection result integration based moving target detection method
CN104715251A (en) * 2015-02-13 2015-06-17 河南科技大学 Salient object detection method based on histogram linear fitting


Also Published As

Publication number Publication date
CN108846404A (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN108846404B (en) Image significance detection method and device based on related constraint graph sorting
CN107066559B (en) Three-dimensional model retrieval method based on deep learning
CN107291945B (en) High-precision clothing image retrieval method and system based on visual attention model
Li et al. Robust saliency detection via regularized random walks ranking
CN112184752A (en) Video target tracking method based on pyramid convolution
CN110569738B (en) Natural scene text detection method, equipment and medium based on densely connected network
CN111881714A (en) Unsupervised cross-domain pedestrian re-identification method
CN112101150A (en) Multi-feature fusion pedestrian re-identification method based on orientation constraint
CN111625667A (en) Three-dimensional model cross-domain retrieval method and system based on complex background image
CN109086777B (en) Saliency map refining method based on global pixel characteristics
CN102982539B (en) Characteristic self-adaption image common segmentation method based on image complexity
CN112529005B (en) Target detection method based on semantic feature consistency supervision pyramid network
CN111126385A (en) Deep learning intelligent identification method for deformable living body small target
CN112712546A (en) Target tracking method based on twin neural network
CN107862680B (en) Target tracking optimization method based on correlation filter
CN111914642A (en) Pedestrian re-identification method, device, equipment and medium
CN104732534B (en) Well-marked target takes method and system in a kind of image
CN111768415A (en) Image instance segmentation method without quantization pooling
CN110598715A (en) Image recognition method and device, computer equipment and readable storage medium
CN110765882A (en) Video tag determination method, device, server and storage medium
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN112287935B (en) Image semantic segmentation method and system based on significance prior
CN112288761A (en) Abnormal heating power equipment detection method and device and readable storage medium
CN111241924A (en) Face detection and alignment method and device based on scale estimation and storage medium
CN112132145A (en) Image classification method and system based on model extended convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant