CN108710883B - Complete salient object detection method adopting contour detection - Google Patents

Complete salient object detection method adopting contour detection Download PDF

Info

Publication number
CN108710883B
CN108710883B CN201810563281.XA CN201810563281A CN108710883B CN 108710883 B CN108710883 B CN 108710883B CN 201810563281 A CN201810563281 A CN 201810563281A CN 108710883 B CN108710883 B CN 108710883B
Authority
CN
China
Prior art keywords
contour
map
image
complete
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810563281.XA
Other languages
Chinese (zh)
Other versions
CN108710883A (en
Inventor
刚毅凝
杜红军
赵永彬
李巍
金成明
郝跃冬
刘嘉华
康睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
NARI Group Corp
Nari Information and Communication Technology Co
Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
NARI Group Corp
Nari Information and Communication Technology Co
Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, NARI Group Corp, Nari Information and Communication Technology Co, Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201810563281.XA priority Critical patent/CN108710883B/en
Publication of CN108710883A publication Critical patent/CN108710883A/en
Application granted granted Critical
Publication of CN108710883B publication Critical patent/CN108710883B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a complete salient object detection method adopting contour detection, which comprises the steps of segmenting an image by a super-pixel segmentation method and constructing the image into a graph mode; acquiring a saliency map based on contour extraction; acquiring a two-value segmentation graph based on a background template; and obtaining a final saliency map based on the saliency map and the two-value segmentation map. The method can not only further highlight the salient region of the image, but also well inhibit the background region, and can be applied to scenes such as image retrieval, image segmentation, image classification, target recognition and the like.

Description

Complete salient object detection method adopting contour detection
Technical Field
The invention relates to a complete salient object detection method adopting contour detection, and belongs to the technical field of image recognition.
Background
With the development of intelligent devices and social networks, we are immersed in a large amount of digital media data. How to extract useful information from a large amount of data with limited time and effort becomes an important issue in preparation for subsequent processing. Just as the human visual system can only focus on the most compelling small amount of information in the scene seen, often only partially salient objects in an image attract our attention. Saliency detection refers to the identification of the most interesting regions in images, which has become a popular research area in recent years due to its wide application in image processing and computer vision fields, such as image retrieval, image segmentation, image classification and object recognition.
At present, the research methods for detecting the saliency are mainly divided into two types, one is viewpoint prediction, namely, the substantial area of the saliency of the image is judged according to the fixation and movement condition of human eyes on an eye tracker; the other is salient object detection, i.e. detecting the most salient objects in the image. The detection of the salient objects can be divided into a bottom-up model and a top-down model, wherein the former model is mainly based on the bottom-layer features (such as color, brightness, direction and the like) and the prior information (such as compactness, uniqueness, background and the like) of some images, and the latter model is mainly detected by labeling, training and detecting representative features in the images. The current salient object detection method based in part on background priors cannot completely highlight the salient region of the image.
Disclosure of Invention
In order to solve the technical problem, the invention provides a complete salient object detection method adopting contour detection.
In order to achieve the purpose, the invention adopts the technical scheme that:
a complete salient object detection method adopting contour detection comprises the following steps,
segmenting the image by a super-pixel segmentation method, and constructing the image into a graph mode;
acquiring a saliency map based on contour extraction;
acquiring a two-value segmentation graph based on a background template;
and obtaining a final saliency map based on the saliency map and the two-value segmentation map.
The process of constructing the graphical schema is that,
reading image data information, adaptively setting a threshold value to eliminate a noise contour, and constructing a graph mode;
the threshold formula for removing the noise profile is,
Figure BDA0001683844040000021
where xi is the threshold for eliminating the noise profile, xiiIs the gradient value of the ith contour line in the image, and N is the number of contour lines in the image.
And acquiring a saliency map based on contour extraction, specifically,
extracting initial contour features of an image in an image mode by using a contour detection algorithm based on a global probability boundary, and preprocessing the initial contour features by using a self-adaptive threshold method to obtain a self-adaptive contour map;
acquiring a contour map based on virtual connection by using a contour processing scheme based on virtual connection;
acquiring a complete contour map by using a closed-loop searching scheme based on the shortest path, and dividing the complete contour map into a plurality of regions with complete boundaries;
a saliency map based on contour detection is acquired.
A virtual join-based contour processing scheme, specifically,
if the end point of a certain contour line is only close to the end point of another contour line or the other end point of the contour line, a virtual end point is created by using the virtual connecting structure, and the end point of the contour line is connected with the virtual end point to form a new contour line;
if the end point of a contour line is close to a certain pixel point on another contour line, the pixel point is taken as a boundary point, the contour line is divided into two independent contour lines, and the end point is connected with the newly formed boundary point;
if two end points of a certain contour line can not establish a virtual connection point with other contour lines in the self-adaptive contour map, the contour line is regarded as an isolated contour line and is removed from the self-adaptive contour map;
if there are multiple close contours in a certain direction, these contours are fused into a new contour.
A shortest path based closed loop search scheme, specifically,
suppose there is N in a virtual join-based profileeA non-closed end point;
computing any two non-closed end points ej1And ej2Length of path between L (e)j1,ej2),
Figure BDA0001683844040000031
Wherein ξj2Is a non-closed end point ej2Gradient value of the contour line;
by continuously calculating the path lengths of any two non-closed end points and connecting the non-closed end points with the shortest path length, a plurality of closed annular contour lines can be formed, and a complete contour map can be obtained.
The process of obtaining a saliency map based on contour detection is,
suppose a complete profile is segmentedRegion with complete N1 boundaries I1,I2,...,IN1};
Region Ii′The intensity, color and direction characteristic values are respectively the mean values of the corresponding characteristic values of all the pixel points in the area, namely the intensity characteristic value
Figure BDA0001683844040000032
Color characteristic value
Figure BDA0001683844040000033
Direction characteristic value
Figure BDA0001683844040000034
The gradient values of the boundary lines belonging to the background template are set to 0, the others to 1, one saliency value is set for each region,
Figure BDA0001683844040000041
wherein, Pi′Is region Ii′The significance of (a) of (b),
Figure BDA0001683844040000043
is region Ii′R is a coefficient variable, Ai′As the area of the region of significance to be extracted, AkIs region IkIs k ∈ [1, N1 ]]And k ≠ i';
all the significant values are normalized, and then the cumulative sum of the significant values is calculated and is assigned to a corresponding target, so that a significant map based on contour detection is obtained.
The process of obtaining the two-value segmentation map based on the background template is that,
acquiring a saliency map based on a saliency detection algorithm of background template suppression;
and acquiring a corresponding significant pixel point and binary segmentation map by using a self-adaptive threshold segmentation method.
Computing corresponding adaptive thresholds from the saliency map sTa;
Figure BDA0001683844040000042
wherein S (x, y) is the significant value of pixel point I (x, y), IxAnd IyRespectively refer to the width and height of a saliency image;
and if the significance value of a certain pixel point is smaller than the adaptive threshold sTa, the significance value of the pixel point is assigned to be 0, otherwise, the significance value of the pixel point is set to be 1 and is regarded as a significant pixel point, and the obtained adaptive threshold sTa is utilized to perform adaptive threshold segmentation on the significant map to obtain a two-value segmentation map.
The process of obtaining the final saliency map is,
reserving a region in which the proportion of the salient pixel points in the salient image based on the contour detection is higher than the reference proportion, and obtaining an optimized salient image based on the contour detection;
and carrying out linear fusion on the binary segmentation image and the optimized salient image based on contour detection to obtain a final complete salient image.
The reference proportion formula is as follows,
Figure BDA0001683844040000051
wherein, B kappa is a reference proportion, and n is the number of significant pixel points;
the linear fusion formula is as follows,
Figure BDA0001683844040000052
wherein, IFTo complete the saliency map, IssAs a binary segmentation map, IfsFor the optimized outline-based detection saliency map, alpha and beta are respectively Iss,IfsThe coefficient of (a). .
The invention achieves the following beneficial effects: the method can not only further highlight the salient region of the image, but also well inhibit the background region, and can be applied to scenes such as image retrieval, image segmentation, image classification, target recognition and the like.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a diagram of a complete salient object detection architecture based on contour detection.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in fig. 1, a complete salient object detection method using contour detection includes the following steps:
step 1, segmenting an image by a super-pixel segmentation method and constructing the image into a graph mode.
The subsequent steps are carried out based on the image of the graph mode, and the process of constructing the graph mode comprises the following steps: reading image data information, adaptively setting a threshold value to eliminate a noise contour, and constructing a graph mode.
The threshold formula for eliminating the noise profile is:
Figure BDA0001683844040000061
where xi is the threshold for eliminating the noise profile, xiiIs the gradient value of the ith contour line in the image, and N is the number of contour lines in the image.
Step 2, obtaining a saliency map extracted based on the contour; and acquiring a two-value segmentation graph based on the background template.
The specific process of obtaining the saliency map is as follows:
201) and extracting initial contour features of the image in the image mode by using a contour detection algorithm based on a global probability boundary, and preprocessing the initial contour features by using a self-adaptive threshold method to obtain a self-adaptive contour map.
202) And acquiring a virtual connection-based contour map by using a virtual connection-based contour processing scheme.
A contour processing scheme that performs the following operations:
if the end point of a certain contour line is only close to the end point of another contour line or the other end point of the contour line itself (when the distance between the two end points is smaller than the set threshold value, the two end points are considered to be close), a virtual end point is created by using the virtual connection structure, and the end point of the contour line is connected with the virtual end point to form a new contour line;
if the end point of a contour line is close to a certain pixel point on the other contour line (when the distance between the end point and the pixel point is smaller than a set threshold value, the end point is considered to be close to the pixel point), the pixel point is taken as a boundary point, the contour line is divided into two independent contour lines, and the end point is connected with the newly formed boundary point;
if two end points of a certain contour line can not establish a virtual connection point with other contour lines in the self-adaptive contour map, the contour line is regarded as an isolated contour line and is removed from the self-adaptive contour map;
if a plurality of contour lines close to each other exist in a certain direction (when the distance between two contour lines is smaller than a set threshold value, the two contour lines are considered to be close to each other), the contour lines are fused into a new contour line, wherein the plurality of contour lines comprise mutually parallel contour lines and contour lines positioned on a straight line.
203) And acquiring a complete contour map by using a closed-loop searching scheme based on the shortest path, and dividing the complete contour map into a plurality of regions with complete boundaries.
A closed-loop search scheme, performing the following operations:
a) suppose there is N in a virtual join-based profileeA non-closed end point.
b) Any two non-closed end points ej1And ej2Length of path between L (e)j1,ej2) Has positive correlation with Euclidean distance between two points and has same non-closed end point ej2The gradient values of the contour lines are in a negative correlation relationship, so that L (e)j1,ej2) The formula for calculating (a) is as follows,
Figure BDA0001683844040000071
wherein ξj2Is a non-closed end point ej2The gradient value of the contour line.
c) By continuously calculating the path lengths of any two non-closed end points and connecting the non-closed end points with the shortest path length, a plurality of closed annular contour lines can be formed, and a complete contour map can be obtained.
204) A saliency map based on contour detection is acquired.
The process of obtaining the saliency map is as follows:
a1) suppose the complete contour map is divided into N1 regions with complete boundaries I1,I2,...,IN1And each image consists of three color channels of red (R), green (G) and blue (B), and the intensity characteristic value omega of each pixel pointin,i′In order to realize the purpose,
ωin,i′=(R+G+B)/3
aiming at the four color pairs of (R, G), (G, R), (B, Y) and (Y, B), the following method is adopted to extract four wide tuning color channels,
Figure BDA0001683844040000081
any two pixel points pi″And pj″Color characteristic value ω therebetweenRG,i″j″、ωBY,i″j″Respectively, are as follows,
Figure BDA0001683844040000082
wherein RR (i '), GG (i'), BB (i ') and Y (i') are pixel points pi"four widely tuned color channels, RR (j"), GG (j "), BB (j"), and Y (j ") are pixel points pj"of four widely tuned color channels,
for a pixel point pi″Its color characteristic value omegaRG,i′、ωBY,i′Can be regarded as belonging to the same complete region Ii′The cumulative sum of the color characteristic values between all pixel points in the interior, i.e.
Figure BDA0001683844040000083
Wherein N isiTo remove a pixel point pi″Outer region Ii′The number of other pixel points in the pixel array,
by carrying out Gabor kernel convolution on the obtained intensity image, respectively adopting theta epsilon {0 degrees, 45 degrees, 90 degrees and 135 degrees } as the direction of the Gabor kernel, extracting direction characteristics, and aiming at any two pixel points pi″And pj″Inter directional characteristic value omegao,i″j″Where O () represents a computational kernel convolution, like color features, pixel point pi″Strength characteristics of
Figure BDA0001683844040000084
b1) The gradient values of the boundary lines belonging to the background template are set to 0, the others to 1, one saliency value is set for each region,
Figure BDA0001683844040000085
wherein, Pi′Is region Ii′The significance of (a) of (b),
Figure BDA0001683844040000091
is region Ii′R is a coefficient variable, Ai′As the area of the region of significance to be extracted, AkIs region IkIs k ∈ [1, N1 ]]And k ≠ i'.
c1) All the significant values are normalized, and then the cumulative sum of the significant values is calculated and is assigned to a corresponding target, so that a significant map based on contour detection is obtained.
The specific process of obtaining the two-value segmentation graph is as follows:
211) and acquiring a saliency map based on a saliency detection algorithm of background template suppression.
The corresponding adaptive threshold sTa is calculated from the saliency map,
Figure BDA0001683844040000092
wherein S (x, y) is the significant value of pixel point I (x, y), IxAnd IyRespectively refer to the width and height of a saliency image;
and if the significance value of a certain pixel point is smaller than the adaptive threshold sTa, the significance value of the pixel point is assigned to be 0, otherwise, the significance value of the pixel point is set to be 1 and is regarded as a significant pixel point, and the obtained adaptive threshold sTa is utilized to perform adaptive threshold segmentation on the significant map to obtain a two-value segmentation map.
212) And acquiring a corresponding significant pixel point and binary segmentation map by using a self-adaptive threshold segmentation method.
And 3, obtaining a final saliency map based on the saliency map and the binary segmentation map.
The specific process of obtaining the final saliency map is as follows:
301) and reserving the area in which the proportion of the salient pixel points in the salient image based on the contour detection is higher than the reference proportion, and obtaining the optimized salient image based on the contour detection.
Screening out a region larger than the reference proportion, namely a region belonging to a significant target, and removing a background region smaller than the reference proportion;
the reference proportion formula is as follows:
Figure BDA0001683844040000101
wherein, B kappa is a reference proportion, and n is the number of significant pixel points.
302) And carrying out linear fusion on the binary segmentation image and the optimized salient image based on contour detection to obtain a final complete salient image.
The linear fusion formula is:
Figure BDA0001683844040000102
wherein, IFTo complete the saliency map, IssAs a binary segmentation map, IfsFor the optimized outline-based detection saliency map, alpha and beta are respectively Iss,IfsThe coefficient of (a).
The principle of the invention is shown in fig. 2, the method uses a superpixel segmentation method to segment an image, constructs the image into a graph mode, extracts and obtains a saliency map based on contour extraction, obtains a binary segmentation map based on a background template, and linearly fuses the saliency map and the binary segmentation map to obtain a final saliency map, thereby not only further highlighting the saliency region of the image, but also well inhibiting the background region, and being applicable to scenes such as image retrieval, image segmentation, image classification, target identification and the like.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (9)

1. A complete salient object detection method adopting contour detection is characterized in that: comprises the following steps of (a) carrying out,
segmenting the image by a super-pixel segmentation method, and constructing the image into a graph mode;
and acquiring a saliency map based on contour extraction, specifically,
extracting initial contour features of the image in the image mode by using a contour detection algorithm based on a global probability boundary, preprocessing the initial contour features by using a self-adaptive threshold method to obtain a self-adaptive contour map,
acquiring a virtual connection-based contour map by using a virtual connection-based contour processing scheme,
obtaining a complete contour map by using a closed-loop searching scheme based on the shortest path, dividing the complete contour map into a plurality of regions with complete boundaries,
acquiring a saliency map based on contour detection;
acquiring a two-value segmentation graph based on a background template;
and obtaining a final saliency map based on the saliency map and the two-value segmentation map.
2. The method for detecting complete salient objects by using contour detection according to claim 1, is characterized in that: the process of constructing the graphical schema is that,
reading image data information, adaptively setting a threshold value to eliminate a noise contour, and constructing a graph mode;
the threshold formula for removing the noise profile is,
Figure FDA0003091721390000011
where xi is the threshold for eliminating the noise profile, xiiIs the gradient value of the ith contour line in the image, and N is the number of contour lines in the image.
3. The method for detecting complete salient objects by using contour detection according to claim 1, is characterized in that: a virtual join-based contour processing scheme, specifically,
if the end point of a certain contour line is only close to the end point of another contour line or the other end point of the contour line, a virtual end point is created by using the virtual connecting structure, and the end point of the contour line is connected with the virtual end point to form a new contour line;
if the end point of a contour line is close to a certain pixel point on another contour line, the pixel point is taken as a boundary point, the contour line is divided into two independent contour lines, and the end point is connected with the newly formed boundary point;
if two end points of a certain contour line can not establish a virtual connection point with other contour lines in the self-adaptive contour map, the contour line is regarded as an isolated contour line and is removed from the self-adaptive contour map;
if there are multiple close contours in a certain direction, these contours are fused into a new contour.
4. The method for detecting complete salient objects by using contour detection according to claim 1, is characterized in that: a shortest path based closed loop search scheme, specifically,
suppose there is N in a virtual join-based profileeA non-closed end point;
computing any two non-closed end points ej1And ej2Length of path between L (e)j1,ej2),
Figure FDA0003091721390000021
Wherein ξj2Is a non-closed end point ej2Gradient value of the contour line;
by continuously calculating the path lengths of any two non-closed end points and connecting the non-closed end points with the shortest path length, a plurality of closed annular contour lines can be formed, and a complete contour map can be obtained.
5. The method for detecting complete salient objects by using contour detection according to claim 1, is characterized in that: the process of obtaining a saliency map based on contour detection is,
suppose the complete contour map is divided into N1 regions with complete boundaries I1,I2,...,IN1};
Region Ii′The intensity, color and direction characteristic values are respectively the mean values of the corresponding characteristic values of all the pixel points in the area, namely the intensity characteristic value
Figure FDA0003091721390000031
Color characteristic value
Figure FDA0003091721390000032
Direction characteristic value
Figure FDA0003091721390000033
The gradient values of the boundary lines belonging to the background template are set to 0, the others to 1, one saliency value is set for each region,
Figure FDA0003091721390000034
wherein, Pi′Is region Ii′The significance of (a) of (b),
Figure FDA0003091721390000035
is region Ii′R is a coefficient variable, Ai′As the area of the region of significance to be extracted, AkIs region IkIs k ∈ [1, N1 ]]And k ≠ i';
all the significant values are normalized, and then the cumulative sum of the significant values is calculated and is assigned to a corresponding target, so that a significant map based on contour detection is obtained.
6. The method for detecting complete salient objects by using contour detection according to claim 1, is characterized in that: the process of obtaining the two-value segmentation map based on the background template is that,
acquiring a saliency map based on a saliency detection algorithm of background template suppression;
and acquiring a corresponding significant pixel point and binary segmentation map by using a self-adaptive threshold segmentation method.
7. The method for detecting complete salient objects by using contour detection according to claim 6, is characterized in that: computing corresponding adaptive thresholds from the saliency map sTa;
Figure FDA0003091721390000036
wherein S (x, y) is the significant value of pixel point I (x, y), IxAnd IyRespectively refer to the width and height of a saliency image;
and if the significance value of a certain pixel point is smaller than the adaptive threshold sTa, the significance value of the pixel point is assigned to be 0, otherwise, the significance value of the pixel point is set to be 1 and is regarded as a significant pixel point, and the obtained adaptive threshold sTa is utilized to perform adaptive threshold segmentation on the significant map to obtain a two-value segmentation map.
8. The method for detecting complete salient objects by using contour detection according to claim 1, is characterized in that: the process of obtaining the final saliency map is,
reserving a region in which the proportion of the salient pixel points in the salient image based on the contour detection is higher than the reference proportion, and obtaining an optimized salient image based on the contour detection;
and carrying out linear fusion on the binary segmentation image and the optimized salient image based on contour detection to obtain a final complete salient image.
9. The method for detecting complete salient objects by using contour detection according to claim 8, is characterized in that: the reference proportion formula is as follows,
Figure FDA0003091721390000041
wherein B kappa is a reference proportion, n is the number of significant pixel points, IxAnd IyRespectively refer to the width and height of a saliency image;
the linear fusion formula is as follows,
Figure FDA0003091721390000042
wherein, IFTo complete the saliency map, IssAs a binary segmentation map, IfsFor the optimized outline-based detection saliency map, alpha and beta are respectively Iss,IfsThe coefficient of (a).
CN201810563281.XA 2018-06-04 2018-06-04 Complete salient object detection method adopting contour detection Active CN108710883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810563281.XA CN108710883B (en) 2018-06-04 2018-06-04 Complete salient object detection method adopting contour detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810563281.XA CN108710883B (en) 2018-06-04 2018-06-04 Complete salient object detection method adopting contour detection

Publications (2)

Publication Number Publication Date
CN108710883A CN108710883A (en) 2018-10-26
CN108710883B true CN108710883B (en) 2021-08-24

Family

ID=63870346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810563281.XA Active CN108710883B (en) 2018-06-04 2018-06-04 Complete salient object detection method adopting contour detection

Country Status (1)

Country Link
CN (1) CN108710883B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657669B (en) * 2018-12-13 2023-02-14 江西金格科技有限公司 Intelligent electronic seal extraction method based on image processing
CN111369491B (en) * 2018-12-25 2023-06-30 宁波舜宇光电信息有限公司 Image stain detection method, device, system and storage medium
CN111242240B (en) * 2020-02-13 2023-04-07 深圳市联合视觉创新科技有限公司 Material detection method and device and terminal equipment
CN111353974B (en) * 2020-02-20 2023-08-18 苏州凌云光工业智能技术有限公司 Method and device for detecting image boundary defects

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535512B1 (en) * 1996-03-07 2003-03-18 Lsi Logic Corporation ATM communication system interconnect/termination unit
CN103020941A (en) * 2012-12-28 2013-04-03 昆山市工业技术研究院有限责任公司 Panoramic stitching based rotary camera background establishment method and panoramic stitching based moving object detection method
CN104392231A (en) * 2014-11-07 2015-03-04 南京航空航天大学 Block and sparse principal feature extraction-based rapid collaborative saliency detection method
CN105354405A (en) * 2014-08-20 2016-02-24 中国科学院上海高等研究院 Machine learning based immunohistochemical image automatic interpretation system
CN105957077A (en) * 2015-04-29 2016-09-21 国网河南省电力公司电力科学研究院 Detection method for foreign body in transmission lines based on visual saliency analysis
CN106228138A (en) * 2016-07-26 2016-12-14 国网重庆市电力公司电力科学研究院 A kind of Road Detection algorithm of integration region and marginal information
WO2017147086A1 (en) * 2016-02-22 2017-08-31 Harmonic, Inc. Virtual converged cable access platform (ccap) core
CN107220982A (en) * 2017-04-02 2017-09-29 南京大学 It is a kind of to suppress the ship conspicuousness video detecting method that stern drags line
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A kind of layering dividing method for recovering image Ouluding boundary

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535512B1 (en) * 1996-03-07 2003-03-18 Lsi Logic Corporation ATM communication system interconnect/termination unit
CN103020941A (en) * 2012-12-28 2013-04-03 昆山市工业技术研究院有限责任公司 Panoramic stitching based rotary camera background establishment method and panoramic stitching based moving object detection method
CN105354405A (en) * 2014-08-20 2016-02-24 中国科学院上海高等研究院 Machine learning based immunohistochemical image automatic interpretation system
CN104392231A (en) * 2014-11-07 2015-03-04 南京航空航天大学 Block and sparse principal feature extraction-based rapid collaborative saliency detection method
CN105957077A (en) * 2015-04-29 2016-09-21 国网河南省电力公司电力科学研究院 Detection method for foreign body in transmission lines based on visual saliency analysis
WO2017147086A1 (en) * 2016-02-22 2017-08-31 Harmonic, Inc. Virtual converged cable access platform (ccap) core
CN106228138A (en) * 2016-07-26 2016-12-14 国网重庆市电力公司电力科学研究院 A kind of Road Detection algorithm of integration region and marginal information
CN107220982A (en) * 2017-04-02 2017-09-29 南京大学 It is a kind of to suppress the ship conspicuousness video detecting method that stern drags line
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A kind of layering dividing method for recovering image Ouluding boundary

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于图像显著轮廓的目标检测;毕威 等;《电子学报》;20170815;第45卷(第8期);1902-1910 *
基于引导 Boosting 算法的显著性检测;叶子童 等;《计算机应用》;20170910;第37卷(第9期);2652-2658 *

Also Published As

Publication number Publication date
CN108710883A (en) 2018-10-26

Similar Documents

Publication Publication Date Title
CN107316031B (en) Image feature extraction method for pedestrian re-identification
CN108710883B (en) Complete salient object detection method adopting contour detection
CN108520219B (en) Multi-scale rapid face detection method based on convolutional neural network feature fusion
Deng et al. Color image segmentation
CN107330397B (en) Pedestrian re-identification method based on large-interval relative distance measurement learning
CN111144376B (en) Video target detection feature extraction method
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
US20180174301A1 (en) Iterative method for salient foreground detection and multi-object segmentation
CN109086777B (en) Saliency map refining method based on global pixel characteristics
Gui et al. A new method for soybean leaf disease detection based on modified salient regions
CN111583279A (en) Super-pixel image segmentation method based on PCBA
CN111914762A (en) Gait information-based identity recognition method and device
CN109448024B (en) Visual tracking method and system for constructing constraint correlation filter by using depth data
Lu et al. Clustering based road detection method
CN111079757A (en) Clothing attribute identification method and device and electronic equipment
Xia et al. Lazy texture selection based on active learning
Lu et al. Unstructured road detection from a single image
Karimi et al. Spatio-temporal saliency detection using abstracted fully-connected graphical models
CN110599517A (en) Target feature description method based on local feature and global HSV feature combination
Prinosil Blind face indexing in video
Nguyen et al. Enhanced pixel-wise voting for image vanishing point detection in road scenes
Nguyen et al. Efficient vanishing point estimation for unstructured road scenes
Ni et al. Newton optimization based Congealing for facial image alignment
KA A Review on Example-based Colourization
Shaik et al. Unsupervised segmentation of image using novel curve evolution method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190610

Address after: 11 006 No. 18 Ningbo Road, Heping District, Shenyang City, Liaoning Province

Applicant after: Guo Wang Information Communication Branch Company of Liaoning Electric Power Co., Ltd.

Applicant after: NARI Group Co. Ltd.

Applicant after: NANJING NARI INFORMATION COMMUNICATION SCIENCE & TECHNOLOGY CO., LTD.

Applicant after: State Grid Corporation of China

Address before: 11 006 No. 18 Ningbo Road, Heping District, Shenyang City, Liaoning Province

Applicant before: Guo Wang Information Communication Branch Company of Liaoning Electric Power Co., Ltd.

Applicant before: NARI Group Co. Ltd.

Applicant before: NANJING NARI INFORMATION COMMUNICATION SCIENCE & TECHNOLOGY CO., LTD.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant