CN115131566A - Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering - Google Patents

Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering Download PDF

Info

Publication number
CN115131566A
CN115131566A CN202210877458.XA CN202210877458A CN115131566A CN 115131566 A CN115131566 A CN 115131566A CN 202210877458 A CN202210877458 A CN 202210877458A CN 115131566 A CN115131566 A CN 115131566A
Authority
CN
China
Prior art keywords
pixel
super
neighborhood
pixel point
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210877458.XA
Other languages
Chinese (zh)
Inventor
何亚茹
贾慧彤
李方方
鄂超
杨溯
杨杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Digsur Science And Technology Co ltd
Original Assignee
Beijing Digsur Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Digsur Science And Technology Co ltd filed Critical Beijing Digsur Science And Technology Co ltd
Priority to CN202210877458.XA priority Critical patent/CN115131566A/en
Publication of CN115131566A publication Critical patent/CN115131566A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide an automatic image segmentation method based on superpixels and improved fuzzy C-means clustering. The method comprises the following steps: selecting any neighborhood from the target image, calculating local gray change information of pixels in the neighborhood and spatial distance information of the pixels in the neighborhood and a central pixel, and counting the influence weight of the local gray change information and the influence weight of the spatial distance information to obtain an optimized weighted image; pre-dividing the optimized weighted image by using an SLIC algorithm to obtain super pixels; performing density peak value clustering on the super pixels to obtain the number of clustering clusters; and performing secondary segmentation on the superpixels by adopting the optimized membership degree improved fuzzy C-means clustering algorithm. In this way, the number of the clustering clusters can be automatically determined, manual participation is not needed, the influence of noise points on image segmentation is effectively reduced, and the image segmentation efficiency and the image segmentation precision are improved.

Description

Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering
Technical Field
The present invention relates generally to the field of image processing technology, and more particularly, to an automatic image segmentation method based on superpixels and improved fuzzy C-means clustering.
Background
The clustering algorithm is a very hot research topic in the field of data mining, and is also an important data analysis technology. Fuzzy clustering is an important branch of a clustering algorithm, wherein the most classical method is a Fuzzy C-means (FCM) clustering algorithm, the algorithm takes a target function as a reference, the Fuzzy clustering problem is regarded as a nonlinear programming problem, the target function is subjected to iterative minimization, and the membership value of each sample data to a clustering center is obtained, so that the Fuzzy division of a data set is realized. The fuzzy C-means clustering algorithm is simple in principle and easy to implement, and can well depict the uncertainty of sample data classification, so that the fuzzy C-means clustering algorithm is suitable for processing objects with fuzzy characteristics such as images.
However, when an image is segmented by using the fuzzy C-means clustering algorithm, two obvious disadvantages exist, the number of clustering clusters needs to be manually set and the clustering clusters are very sensitive to noise, and although the number of clusters can be found by introducing the density peak value clustering algorithm, because one image often contains a large number of pixel points, the segmentation speed is slow, and the image segmentation efficiency is reduced.
Disclosure of Invention
According to an embodiment of the invention, an automatic image segmentation scheme based on superpixel and improved fuzzy C-means clustering is provided. According to the scheme, the influence of local neighborhood pixel points on the central point can be fully considered, the idea of the super-pixel block is introduced into the density peak value clustering algorithm, the calculation efficiency can be effectively improved, the number of clustering clusters is automatically determined, the optimized membership degree is used for improving the fuzzy C-means clustering algorithm, and finally image segmentation is realized without manual intervention.
In a first aspect of the invention, an automatic image segmentation method based on superpixels and improved fuzzy C-means clustering is provided. The method comprises the following steps:
selecting any neighborhood from a target image, calculating local gray scale change information of pixels in the neighborhood and spatial distance information of the pixels in the neighborhood and a central pixel, and counting the influence weight of the pixels in the neighborhood on the local gray scale change information of the central pixel and the influence weight of the spatial distance information to obtain an optimized weighted image;
pre-dividing the optimized weighted image by using an SLIC algorithm to obtain super pixels;
performing density peak value clustering on the super pixels to obtain the number of clustering clusters;
and performing secondary segmentation on the superpixel by adopting an optimized membership degree improved fuzzy C-means clustering algorithm to obtain an image after secondary segmentation.
Further, calculating local gray scale change information of the pixel points in the neighborhood, and calculating the influence weight of the pixel points in the neighborhood on the local gray scale change information of the central pixel point, including:
comparing the gray value of each pixel point in the neighborhood with the average gray value of all pixel points in the neighborhood to obtain local gray change information; the local gray scale change information is:
Figure 100002_DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 455778DEST_PATH_IMAGE002
is local gray scale change information;
Figure 697404DEST_PATH_IMAGE003
the gray value of the pixel point j in the neighborhood is obtained;
Figure 972528DEST_PATH_IMAGE004
a local window centered on a pixel point i;
Figure 805879DEST_PATH_IMAGE005
is the average gray of all pixel points in the neighborhoodDegree of and
Figure 141045DEST_PATH_IMAGE006
wherein, in the step (A),
Figure 869967DEST_PATH_IMAGE007
as a partial window
Figure 886464DEST_PATH_IMAGE004
The number of internal pixel points;
the influence weight of the pixel points in the neighborhood on the local gray level change information of the central pixel point is as follows:
Figure 384442DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure 100002_DEST_PATH_IMAGE009
influence weight on local gray scale change information of the central pixel point for pixel points in the neighborhood;
Figure 343039DEST_PATH_IMAGE002
is the local gray scale variation information.
Further, calculating spatial distance information between the pixel point in the neighborhood and the central pixel point, and controlling the weight of the influence of the pixel point in the neighborhood on the spatial distance information of the central pixel point, including:
defining spatial distance information between the pixel points in the neighborhood and the central pixel point by using a Gaussian kernel function;
controlling the influence weight of the pixel points in the neighborhood on the spatial distance information of the central pixel point by using an exponential function; the spatial distance information influence weight is as follows:
Figure 293678DEST_PATH_IMAGE010
wherein the content of the first and second substances,
Figure 176183DEST_PATH_IMAGE011
influence weight on spatial distance information of a central pixel point for pixels in a neighborhood;
Figure 528667DEST_PATH_IMAGE012
and the spatial distance information of the pixel point in the neighborhood and the central pixel point is obtained.
Further, the optimized weighted image is:
Figure 408898DEST_PATH_IMAGE013
wherein the content of the first and second substances,
Figure 846833DEST_PATH_IMAGE014
optimizing the gray value of a pixel point i in the weighted image;
Figure 267450DEST_PATH_IMAGE004
a local window centered on a pixel point i;
Figure 926970DEST_PATH_IMAGE009
influence weight on local gray scale change information of the central pixel point for pixel points in the neighborhood;
Figure 40420DEST_PATH_IMAGE011
influence the weight for the spatial distance information of the pixel points in the neighborhood to the central pixel point;
Figure 965651DEST_PATH_IMAGE003
and the gray value of the pixel point j in the neighborhood is obtained.
Further, the pre-dividing the optimized weighted image by using the SLIC algorithm to obtain the superpixels includes:
according to the set number of the super pixels, uniformly distributing seed points in the optimized weighted image;
according to
Figure 127642DEST_PATH_IMAGE015
Calculating the distance between two adjacent super pixel centers
Figure 189139DEST_PATH_IMAGE016
Wherein, in the step (A),
Figure 473489DEST_PATH_IMAGE017
in order to optimize the total number of pixels of the weighted image,
Figure 886016DEST_PATH_IMAGE018
the number of the pre-divided super pixels;
at a distance of the centers of the two adjacent super pixels
Figure 100966DEST_PATH_IMAGE016
As step length, uniformly selecting a plurality of pixel points in the optimized weighted image as an initial super-pixel clustering center;
assigning a class label to each pixel point in the neighborhood around each seed point, the search range being limited to
Figure 282548DEST_PATH_IMAGE019
According to
Figure 472221DEST_PATH_IMAGE020
A distance measure is calculated, wherein,
Figure 309727DEST_PATH_IMAGE021
is a distance measure;
Figure 875838DEST_PATH_IMAGE022
the gray value difference between the pixel point i and the seed pixel point b is obtained;
Figure 177506DEST_PATH_IMAGE014
is the gray value at the pixel point i,
Figure 538080DEST_PATH_IMAGE023
is the gray value at pixel point b, and
Figure 126798DEST_PATH_IMAGE024
Figure 496600DEST_PATH_IMAGE025
is a pixel point
Figure 387196DEST_PATH_IMAGE026
And seed pixel point
Figure 121933DEST_PATH_IMAGE027
A spatial distance of (a) and
Figure 730769DEST_PATH_IMAGE028
wherein
Figure 904262DEST_PATH_IMAGE029
And
Figure 914943DEST_PATH_IMAGE030
respectively are the horizontal coordinates of the two pixel points;
Figure 69850DEST_PATH_IMAGE031
and
Figure 165982DEST_PATH_IMAGE032
respectively are the vertical coordinates of two pixel points;
Figure 877586DEST_PATH_IMAGE033
to measure the compactness index of the superpixel, an
Figure 680457DEST_PATH_IMAGE033
The larger the value of (a), the higher the compactness;
Figure 819314DEST_PATH_IMAGE016
the distance between the centers of two adjacent super pixels; taking a seed point with the minimum distance measure with the pixel point as a clustering center of the pixel point;
and iteratively updating the super-pixel clustering center until convergence or the preset maximum iteration number of the super-pixel clustering is reached, and finishing super-pixel segmentation to obtain a super-pixel segmentation image.
Further, performing density peak clustering on the super pixels to obtain the number of clustered clusters, including:
randomly selecting two superpixels, and calculating a first distance between the two superpixels; the first distance is:
Figure 402742DEST_PATH_IMAGE034
wherein the content of the first and second substances,
Figure 839408DEST_PATH_IMAGE035
is the p-th super pixel
Figure 824682DEST_PATH_IMAGE036
And the qth super pixel
Figure 134441DEST_PATH_IMAGE037
A first distance therebetween;
Figure 205165DEST_PATH_IMAGE038
is the p-th super pixel
Figure 196254DEST_PATH_IMAGE036
The number of pixel points contained in the image;
Figure 36035DEST_PATH_IMAGE039
is the qth super pixel
Figure 516694DEST_PATH_IMAGE037
The number of pixel points contained in the image;
Figure 996086DEST_PATH_IMAGE040
is the p-th super pixel
Figure 853184DEST_PATH_IMAGE036
The number of the pixel points in (1),
Figure 547470DEST_PATH_IMAGE041
is composed of
Figure 199031DEST_PATH_IMAGE042
q super pixels
Figure 182031DEST_PATH_IMAGE043
The pixel point in (2);
Figure 577240DEST_PATH_IMAGE044
is a pixel point
Figure 126033DEST_PATH_IMAGE045
Is determined by the gray-scale value of (a),
Figure 403955DEST_PATH_IMAGE046
is a pixel point
Figure 670989DEST_PATH_IMAGE047
The gray value of (a);
calculating the local density of any one super pixel and a second distance from the current super pixel to the super pixel with larger local density and closest distance; the local density is:
Figure 869889DEST_PATH_IMAGE048
wherein the content of the first and second substances,
Figure 210871DEST_PATH_IMAGE049
is the p-th super pixel
Figure 469814DEST_PATH_IMAGE036
The local density of (a);
Figure 224144DEST_PATH_IMAGE035
is the p-th super pixel
Figure 961156DEST_PATH_IMAGE036
And the qth super pixel
Figure 671492DEST_PATH_IMAGE037
A first distance therebetween;
Figure 101336DEST_PATH_IMAGE039
is the qth super pixel
Figure 77382DEST_PATH_IMAGE037
The number of pixel points contained in the image;
Figure 555768DEST_PATH_IMAGE050
is the total number of super-pixels,
Figure 199239DEST_PATH_IMAGE051
Figure 534405DEST_PATH_IMAGE052
Figure 100002_DEST_PATH_IMAGE053
is a truncation distance;
the second distance is:
Figure 450278DEST_PATH_IMAGE054
wherein the content of the first and second substances,
Figure 529092DEST_PATH_IMAGE055
is the p-th super pixel
Figure 27070DEST_PATH_IMAGE036
A second distance to a super-pixel with a greater local density and closest distance;
Figure 470820DEST_PATH_IMAGE049
is the p-th super pixel
Figure 687038DEST_PATH_IMAGE036
The local density of (a);
Figure 569543DEST_PATH_IMAGE056
is the qth super pixel
Figure 108978DEST_PATH_IMAGE037
The local density of (a);
Figure 51526DEST_PATH_IMAGE035
is the p-th super pixel
Figure 489461DEST_PATH_IMAGE036
And the qth super pixel
Figure 910078DEST_PATH_IMAGE037
A first distance therebetween;
at local density
Figure 320331DEST_PATH_IMAGE049
Is a horizontal axis and is a second distance
Figure 168201DEST_PATH_IMAGE055
For the vertical axis, a first decision graph is generated
Figure 93432DEST_PATH_IMAGE057
And the first decision graph is used
Figure 507620DEST_PATH_IMAGE058
Normalizing to obtain a second decision chart
Figure 834696DEST_PATH_IMAGE059
Wherein
Figure 119047DEST_PATH_IMAGE060
Represents the minimum value of the total number of the unit,
Figure 531574DEST_PATH_IMAGE061
represents the maximum value;
the second decision graph is used
Figure 231676DEST_PATH_IMAGE062
Mapping to a third decision graph
Figure 413259DEST_PATH_IMAGE063
And said first stepNormalizing the three decision graphs to obtain a fourth decision graph
Figure 868511DEST_PATH_IMAGE064
And arranging the element values contained in the fourth decision diagram in the descending order, sequentially calculating the absolute value of the difference between the current element and the next element, and taking the index value corresponding to the maximum absolute value of the difference as the cluster number.
Further, the second decision graph is used for carrying out the decision making
Figure 955285DEST_PATH_IMAGE062
Mapping to a third decision graph
Figure 521395DEST_PATH_IMAGE063
The method comprises the following steps:
Figure 557484DEST_PATH_IMAGE065
Figure 121321DEST_PATH_IMAGE066
Figure 508440DEST_PATH_IMAGE067
wherein the content of the first and second substances,
Figure 612662DEST_PATH_IMAGE050
is the total number of superpixels;
Figure 768837DEST_PATH_IMAGE068
is a threshold value, and
Figure 752842DEST_PATH_IMAGE069
Figure 361678DEST_PATH_IMAGE070
as a constraint condition
Figure 535171DEST_PATH_IMAGE071
Second decision diagram
Figure 483535DEST_PATH_IMAGE062
The number of (2);
Figure 451491DEST_PATH_IMAGE072
is an indicator function;
Figure 547623DEST_PATH_IMAGE073
the first term is 0, the last term is 1, and the tolerance is
Figure 993648DEST_PATH_IMAGE074
The series of arithmetic difference numbers of (1),
Figure 311366DEST_PATH_IMAGE075
Figure 450223DEST_PATH_IMAGE076
is a constant number of times, and is,
Figure 33651DEST_PATH_IMAGE077
Figure 221050DEST_PATH_IMAGE078
further, the performing secondary segmentation on the superpixel by using the optimized membership improved fuzzy C-means clustering algorithm to obtain an image after secondary segmentation includes:
initializing a membership matrix to obtain an initial membership matrix; the initial membership matrix is:
Figure 206323DEST_PATH_IMAGE079
wherein, the first and the second end of the pipe are connected with each other,
Figure 516082DEST_PATH_IMAGE080
is an initial membership matrix;
Figure 776686DEST_PATH_IMAGE081
is a super pixel
Figure 830093DEST_PATH_IMAGE036
Initial membership to the kth cluster;
Figure 669873DEST_PATH_IMAGE082
the number of the clustering clusters is obtained;
Figure 150533DEST_PATH_IMAGE050
the total number of the super pixels;
according to
Figure 380657DEST_PATH_IMAGE083
Calculating the cluster center of each cluster to obtain a cluster center set
Figure 237755DEST_PATH_IMAGE084
(ii) a Wherein, the first and the second end of the pipe are connected with each other,
Figure 932041DEST_PATH_IMAGE085
is the cluster center of the kth cluster;
Figure 770553DEST_PATH_IMAGE086
is a super pixel
Figure 815870DEST_PATH_IMAGE036
Membership to the kth cluster;
Figure 211079DEST_PATH_IMAGE087
is a super pixel
Figure 759872DEST_PATH_IMAGE036
Corresponding gray values;
Figure 785597DEST_PATH_IMAGE088
is a fuzzy index;
Figure 52630DEST_PATH_IMAGE050
is the total number of superpixels;
according to
Figure 100002_DEST_PATH_IMAGE089
Calculating each superpixel
Figure 985951DEST_PATH_IMAGE036
Obtaining a membership matrix for the membership of the kth cluster; wherein the content of the first and second substances,
Figure 841780DEST_PATH_IMAGE090
is a first
Figure 100723DEST_PATH_IMAGE091
The center of each cluster is provided with a plurality of clusters,
Figure 589474DEST_PATH_IMAGE085
is the kth cluster center;
Figure 529748DEST_PATH_IMAGE088
is a fuzzy index;
Figure 53133DEST_PATH_IMAGE087
is a super pixel
Figure 217398DEST_PATH_IMAGE036
Corresponding gray values;
Figure 645974DEST_PATH_IMAGE082
the number of clustering clusters is obtained;
if the iteration termination condition is reached, outputting a membership matrix calculated for the last time; otherwise, continuing the iterative computation;
after the membership matrix calculated at the last time is obtained, distributing the membership to each pixel point in a target image, selecting a local window with a preset size, and counting the optimized membership of a central pixel point i in the local window to a kth cluster
Figure 186677DEST_PATH_IMAGE092
Wherein, in the step (A),
Figure 564569DEST_PATH_IMAGE004
is a local window centered on the pixel i,
Figure 165314DEST_PATH_IMAGE093
the membership degree of a pixel point j in the local window to the kth cluster;
Figure 566340DEST_PATH_IMAGE094
is a Gaussian kernel function
Figure 910733DEST_PATH_IMAGE094
The method is used for reflecting the influence degree of the membership degree of the pixel point j to the kth cluster in the local window on the membership degree of the central pixel point to the kth cluster; re-counting each super pixel
Figure 408711DEST_PATH_IMAGE036
Optimizing membership degree of kth cluster
Figure 92940DEST_PATH_IMAGE095
Wherein
Figure 309158DEST_PATH_IMAGE096
Is a super pixel
Figure 191663DEST_PATH_IMAGE036
The membership degree of the kth cluster after optimization;
Figure 544147DEST_PATH_IMAGE097
optimizing the membership degree of the kth cluster for the pixel point i;
Figure 158799DEST_PATH_IMAGE038
is the p-th super pixel
Figure 596734DEST_PATH_IMAGE036
The number of pixel points contained in the image;
according to the principle of maximum membership, from
Figure 282930DEST_PATH_IMAGE098
Determining superpixelsAnd obtaining the secondary segmented image by the affiliated cluster, wherein,
Figure 676871DEST_PATH_IMAGE099
is a super pixel
Figure 790321DEST_PATH_IMAGE036
The cluster to which it belongs;
Figure 715551DEST_PATH_IMAGE096
is a super pixel
Figure 877542DEST_PATH_IMAGE036
And (4) optimizing the membership degree of the kth cluster.
In a second aspect of the invention, an automatic image segmentation apparatus based on superpixel and improved fuzzy C-means clustering is provided. The device comprises:
the calculation and statistics module is used for selecting any neighborhood from the target image, calculating the local gray change information of the pixel points in the neighborhood and the spatial distance information between the pixel points in the neighborhood and the central pixel point, and calculating the influence weight of the pixel points in the neighborhood on the local gray change information and the spatial distance information of the central pixel point to obtain an optimized weighted image;
the pre-segmentation module is used for pre-segmenting the optimized weighted image by utilizing an SLIC algorithm to obtain superpixels;
the clustering module is used for carrying out density peak value clustering on the super pixels to obtain the number of clustering clusters;
and the segmentation module is used for carrying out secondary segmentation on the superpixel by adopting the optimized membership degree improved fuzzy C-means clustering algorithm to obtain an image subjected to secondary segmentation.
In a third aspect of the invention, an electronic device is provided. The electronic device at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the invention.
It should be understood that the statements made in this summary are not intended to limit the key or critical features of the embodiments of the present invention, or to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present invention will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 shows a flow diagram of a method for automatic image segmentation based on superpixel and improved fuzzy C-means clustering, according to an embodiment of the present invention;
FIG. 2 shows a flow diagram of a pre-segmentation process according to an embodiment of the invention;
FIG. 3 shows a flow diagram for density peak clustering of superpixels in accordance with an embodiment of the present invention;
FIG. 4 shows a flow diagram for bi-segmenting a super-pixel according to an embodiment of the invention;
FIG. 5 is a diagram illustrating segmentation results obtained on a composite image containing noise according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating segmentation results obtained on a natural image according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating segmentation results obtained on a remote sensing image according to an embodiment of the present invention;
FIG. 8 illustrates a schematic structural diagram of an automatic image segmentation system based on superpixel and improved fuzzy C-means clustering, according to an embodiment of the present invention;
FIG. 9 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present invention;
among them, 900 is an electronic device, 901 is a CPU, 902 is a ROM, 903 is a RAM, 904 is a bus, 905 is an I/O interface, 906 is an input unit, 907 is an output unit, 908 is a storage unit, and 909 is a communication unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The method can fully consider the influence of local neighborhood pixel points on the central point, and introduces the idea of the super-pixel block into the density peak value clustering algorithm, thereby effectively improving the calculation efficiency, automatically determining the number of clustering clusters, and finally realizing image segmentation without manual intervention.
FIG. 1 shows a flow chart of a method for automatic image segmentation based on superpixels and improved fuzzy C-means clustering in accordance with an embodiment of the present invention.
The method comprises the following steps:
s101, selecting any neighborhood from a target image, calculating local gray change information of pixels in the neighborhood and spatial distance information of the pixels in the neighborhood and a central pixel, and counting the influence weight of the pixels in the neighborhood on the local gray change information of the central pixel and the influence weight of the spatial distance information to obtain an optimized weighted image.
As an embodiment of the present invention, a local window may be established, and a region in the local window corresponding to the target image may be used as a neighborhood.
As an embodiment of the present invention, for any neighborhood in the target image, the gray value of each pixel point j in the neighborhood is determined
Figure 204619DEST_PATH_IMAGE003
And the average gray scale of all pixel points in the neighborhood
Figure 488969DEST_PATH_IMAGE005
And comparing to obtain an average gray value difference value as local gray change information.
In this embodiment, the local gray scale change information is:
Figure 635917DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 850867DEST_PATH_IMAGE002
is local gray scale change information;
Figure 32449DEST_PATH_IMAGE003
the gray value of the pixel point j in the neighborhood is obtained;
Figure 487701DEST_PATH_IMAGE004
a local window centered on the pixel point i, the size of the local window being
Figure 325207DEST_PATH_IMAGE100
Figure 891318DEST_PATH_IMAGE005
Is the average gray value of all pixel points in the neighborhood, and
Figure 927407DEST_PATH_IMAGE006
wherein, in the step (A),
Figure 553560DEST_PATH_IMAGE007
as a partial window
Figure 862051DEST_PATH_IMAGE004
The number of pixels in the column.
For the region with gentle gray scale change, local grayDegree change information
Figure 231852DEST_PATH_IMAGE002
The value of (d) is small; and local gray scale change information for the region with abrupt gray scale change (such as edge, noise, etc.)
Figure 388027DEST_PATH_IMAGE002
The value of (a) is large. Visible, local gray scale variation information
Figure 122765DEST_PATH_IMAGE002
The uniformity degree of the pixel gray distribution in the local neighborhood can be reflected. When calculating the gray value of the central pixel point of the local window, for
Figure 731601DEST_PATH_IMAGE002
Points with larger values should have a reduced effect on the center pixel intensity calculation and conversely should have an increased effect.
Since the exponential function has a faster decay rate, the influence weight of the neighboring pixels is further controlled by the exponential function. The influence weight of the pixel points in the neighborhood on the local gray level change information of the central pixel point is as follows:
Figure 639514DEST_PATH_IMAGE008
wherein, the first and the second end of the pipe are connected with each other,
Figure 650195DEST_PATH_IMAGE009
influence weight on local gray scale change information of the central pixel point for pixel points in the neighborhood;
Figure 808032DEST_PATH_IMAGE002
is the local gray scale variation information.
As an embodiment of the present invention, calculating spatial distance information between a pixel point in a neighborhood and a center pixel point, and controlling a weight of influence of the pixel point in the neighborhood on the spatial distance information of the center pixel point includes:
height of utilizationThe spatial distance information of the pixel points in the neighborhood and the central pixel point is defined by the Gaussian kernel function; specifically, a standard deviation of 2 and a size of
Figure 904164DEST_PATH_IMAGE100
Gaussian kernel function of
Figure 615768DEST_PATH_IMAGE094
To define a local window
Figure 684218DEST_PATH_IMAGE004
And the spatial distance between the pixel point of the inner neighborhood and the central pixel point.
Figure 823075DEST_PATH_IMAGE094
The smaller the value of (A) is, the larger the distance between the neighborhood pixel point and the central pixel point is,
Figure 140924DEST_PATH_IMAGE094
the larger the value of (A), the smaller the distance between the neighborhood pixel point and the central pixel point is.
And controlling the influence weight of the pixel points in the neighborhood on the spatial distance information of the central pixel point by utilizing the exponential function in view of the fact that the exponential function has faster attenuation speed.
The spatial distance information influence weight is as follows:
Figure 843170DEST_PATH_IMAGE101
wherein the content of the first and second substances,
Figure 828443DEST_PATH_IMAGE011
influence weight on spatial distance information of a central pixel point for pixels in a neighborhood;
Figure 138202DEST_PATH_IMAGE012
calculating the space distance information of the pixel point and the central pixel point in the neighborhood through a Gaussian kernel function
Figure 943347DEST_PATH_IMAGE012
And is and
Figure 934437DEST_PATH_IMAGE102
further, the weight of the local neighborhood pixel point to the center pixel point is fully measured by combining the local gray scale change information and the spatial change information, and an optimized weighted image is obtained. The optimized weighted image is:
Figure 774217DEST_PATH_IMAGE103
wherein the content of the first and second substances,
Figure 254876DEST_PATH_IMAGE014
optimizing the value of a pixel point i in the weighted image;
Figure 999847DEST_PATH_IMAGE004
a local window centered on a pixel point i;
Figure 591366DEST_PATH_IMAGE009
influence weight on local gray scale change information of the central pixel point for pixel points in the neighborhood;
Figure 551232DEST_PATH_IMAGE011
influence weight on spatial distance information of a central pixel point for pixels in a neighborhood;
Figure 202793DEST_PATH_IMAGE003
and the gray value of the pixel point j in the neighborhood is obtained.
According to the embodiment of the invention, the influence weight of neighborhood pixel points in a local window on a central point is calculated by adopting local gray scale change and spatial information, so that an optimized weighted image is obtained; as a preprocessing step for the original image, the influence of local neighborhood pixel points on the central point is fully considered, the influence of noise points on the image segmentation process is effectively reduced, and the image segmentation precision is improved.
And S102, pre-dividing the optimized weighted image by using an SLIC algorithm to obtain the superpixel.
As an embodiment of the present invention, as shown in fig. 2, the pre-segmentation process includes:
s201, presetting the number of super pixels, and uniformly distributing seed points in the optimized weighted image according to the set number of the super pixels.
S202, according to
Figure 920213DEST_PATH_IMAGE015
Calculating the distance between two adjacent super pixel centers
Figure 315422DEST_PATH_IMAGE016
(ii) a Wherein the content of the first and second substances,
Figure 129795DEST_PATH_IMAGE017
in order to optimize the total number of pixels of the weighted image,
Figure 139208DEST_PATH_IMAGE018
is a pre-divided number of super-pixels, and
Figure 406241DEST_PATH_IMAGE104
s203, the distance between the centers of the two adjacent super pixels
Figure 605141DEST_PATH_IMAGE016
And as the step length, uniformly selecting a plurality of pixel points in the optimized weighted image as an initial super-pixel clustering center.
S204, distributing the belonging label to each pixel point in the neighborhood around each seed point, and limiting the search range to
Figure 211703DEST_PATH_IMAGE019
S205, according to
Figure 205067DEST_PATH_IMAGE020
A distance measure is calculated, wherein,
Figure 959396DEST_PATH_IMAGE021
is a distance measure;
Figure 961987DEST_PATH_IMAGE022
the gray value difference between the pixel point i and the seed pixel point b is obtained;
Figure 675253DEST_PATH_IMAGE105
is the gray value at the pixel point i,
Figure 839518DEST_PATH_IMAGE106
is the gray value at pixel point b, and
Figure 81143DEST_PATH_IMAGE024
Figure 559529DEST_PATH_IMAGE025
is a pixel point
Figure 937421DEST_PATH_IMAGE026
And seed pixel point
Figure 538167DEST_PATH_IMAGE027
A spatial distance of, and
Figure 188460DEST_PATH_IMAGE028
wherein
Figure 532853DEST_PATH_IMAGE029
And
Figure 765252DEST_PATH_IMAGE030
respectively are the horizontal coordinates of the two pixel points;
Figure 536898DEST_PATH_IMAGE031
and
Figure 425220DEST_PATH_IMAGE032
respectively are the vertical coordinates of two pixel points;
Figure 307725DEST_PATH_IMAGE033
to measure the compactness index of the super-pixel, an
Figure 660209DEST_PATH_IMAGE107
Figure 789708DEST_PATH_IMAGE033
The larger the value of (a), the higher the compactness;
Figure 227643DEST_PATH_IMAGE016
the distance between the centers of two adjacent super pixels; and taking the seed point with the minimum distance measure with the pixel point as the clustering center of the pixel point.
And S206, iterating the steps S202-S205, and updating the super-pixel clustering center until convergence or the preset maximum iteration number of the super-pixel clustering is reached. The maximum number of iterations of the superpixel clustering is, for example, 10.
S103, performing density peak value clustering on the super pixels to obtain the number of clustering clusters.
In this embodiment, as shown in fig. 3, performing density peak clustering on the super pixels to obtain the number of clusters includes:
s301, two superpixels are selected randomly, and a first distance between the two superpixels is calculated.
The first distance is:
Figure 913839DEST_PATH_IMAGE108
wherein the content of the first and second substances,
Figure 120830DEST_PATH_IMAGE035
is the p-th super pixel
Figure 171962DEST_PATH_IMAGE036
And the qth super pixel
Figure 97193DEST_PATH_IMAGE037
A first distance therebetween;
Figure 321501DEST_PATH_IMAGE038
is the p-th super pixel
Figure 835528DEST_PATH_IMAGE036
The number of pixel points contained in the image;
Figure 854299DEST_PATH_IMAGE039
is the qth super pixel
Figure 266826DEST_PATH_IMAGE037
The number of pixel points contained in the image;
Figure 232508DEST_PATH_IMAGE040
is the p-th super pixel
Figure 414091DEST_PATH_IMAGE036
The number of the pixel points in (1),
Figure 869343DEST_PATH_IMAGE041
is composed of
Figure 503586DEST_PATH_IMAGE042
q super pixels
Figure 259577DEST_PATH_IMAGE043
The pixel point in (1);
Figure 561246DEST_PATH_IMAGE044
is a pixel point
Figure 187399DEST_PATH_IMAGE045
Is measured in a predetermined time period, and the gray value of (b),
Figure 246622DEST_PATH_IMAGE046
is a pixel point
Figure 616423DEST_PATH_IMAGE047
The gray value of (a).
S302, calculating the local density of any super pixel and a second distance from the current super pixel to the super pixel with larger local density and closest distance.
Specifically, the local density is:
Figure 772598DEST_PATH_IMAGE109
wherein the content of the first and second substances,
Figure 569653DEST_PATH_IMAGE049
is the p-th super pixel
Figure 365439DEST_PATH_IMAGE036
The local density of (a);
Figure 273353DEST_PATH_IMAGE035
is the p-th super pixel
Figure 284034DEST_PATH_IMAGE036
And the qth super pixel
Figure 189673DEST_PATH_IMAGE037
A first distance therebetween;
Figure 285805DEST_PATH_IMAGE039
is the qth super pixel
Figure 997409DEST_PATH_IMAGE037
The number of pixel points contained in the image;
Figure 862597DEST_PATH_IMAGE050
is the total number of super-pixels,
Figure 188405DEST_PATH_IMAGE051
Figure 771833DEST_PATH_IMAGE052
Figure 287128DEST_PATH_IMAGE053
is a truncation distance;
the second distance is:
Figure 944505DEST_PATH_IMAGE110
wherein the content of the first and second substances,
Figure 254264DEST_PATH_IMAGE055
is the p-th super pixel
Figure 324988DEST_PATH_IMAGE036
A second distance to a super-pixel with a greater local density and closest distance;
Figure 565346DEST_PATH_IMAGE035
is the p-th super pixel
Figure 405126DEST_PATH_IMAGE036
And the qth super pixel
Figure 885785DEST_PATH_IMAGE037
A first distance therebetween;
Figure 443806DEST_PATH_IMAGE049
is the p-th super pixel
Figure 973007DEST_PATH_IMAGE036
The local density of (a);
Figure 667294DEST_PATH_IMAGE056
is the qth super pixel
Figure 584434DEST_PATH_IMAGE037
The local density of (a).
S303, using local density
Figure 565770DEST_PATH_IMAGE049
Is a horizontal axis and is a second distance
Figure 960980DEST_PATH_IMAGE055
For the vertical axis, a first block is generatedMaking a graph; mapping the first decision graph
Figure 775352DEST_PATH_IMAGE058
Normalizing to obtain a second decision chart
Figure 535498DEST_PATH_IMAGE111
The first decision graph
Figure 802531DEST_PATH_IMAGE057
The second decision graph
Figure 1431DEST_PATH_IMAGE112
Wherein the content of the first and second substances,
Figure 670310DEST_PATH_IMAGE060
represents the minimum value of the number of the optical fibers,
Figure 850624DEST_PATH_IMAGE061
represents the maximum value.
S304, the second decision chart is used
Figure 604954DEST_PATH_IMAGE111
Mapping to a third decision graph
Figure 341965DEST_PATH_IMAGE063
And normalizing the third decision diagram to obtain a fourth decision diagram.
Specifically, the second decision graph is used
Figure 803034DEST_PATH_IMAGE111
Mapping to a third decision graph
Figure 232878DEST_PATH_IMAGE063
The method comprises the following steps:
Figure 474504DEST_PATH_IMAGE065
Figure 749627DEST_PATH_IMAGE113
Figure 580049DEST_PATH_IMAGE114
wherein the content of the first and second substances,
Figure 915215DEST_PATH_IMAGE050
is the total number of super pixels;
Figure 644137DEST_PATH_IMAGE068
is a threshold value, and
Figure 660634DEST_PATH_IMAGE069
Figure 158612DEST_PATH_IMAGE070
as a constraint condition
Figure 930259DEST_PATH_IMAGE071
Second decision diagram
Figure 67848DEST_PATH_IMAGE111
The number of (2);
Figure 950353DEST_PATH_IMAGE072
is an indicator function;
Figure 302837DEST_PATH_IMAGE073
the first term is 0, the last term is 1, and the tolerance is
Figure 245385DEST_PATH_IMAGE074
The series of arithmetic difference numbers of (1),
Figure 621003DEST_PATH_IMAGE075
Figure 41620DEST_PATH_IMAGE076
is a constant number of times, and is,
Figure 514190DEST_PATH_IMAGE077
Figure 817520DEST_PATH_IMAGE078
the fourth decision graph
Figure 742750DEST_PATH_IMAGE115
S305, a fourth decision diagram
Figure 967058DEST_PATH_IMAGE116
The element values contained in the cluster are arranged according to the descending order, the absolute value of the difference between the current element and the next element is calculated in sequence, the index value corresponding to the maximum absolute value of the difference is used as the cluster number, and the cluster number is used
Figure 966238DEST_PATH_IMAGE082
To indicate.
By introducing the idea of the super-pixel block into the density peak value clustering algorithm, compared with the pixel points, the calculation efficiency of the density peak value clustering algorithm can be effectively improved by using the super-pixel block, the prior information of the image is obtained, the number of the clustering clusters is automatically determined, and the clustering clusters are used as the input of the subsequent clustering.
And S104, performing secondary segmentation on the superpixel by adopting the optimized membership improved fuzzy C-means clustering algorithm to obtain an image subjected to secondary segmentation.
As an embodiment of the present invention, as shown in fig. 4, the performing the secondary division on the super pixel specifically includes:
s401, initializing a membership matrix to obtain an initial membership matrix.
The initial membership matrix is:
Figure 250589DEST_PATH_IMAGE117
wherein the content of the first and second substances,
Figure 663116DEST_PATH_IMAGE080
is an initial membership matrix;
Figure 691115DEST_PATH_IMAGE081
is a super pixel
Figure 59648DEST_PATH_IMAGE118
Initial membership to the kth cluster;
Figure 249321DEST_PATH_IMAGE082
the number of clustering clusters is obtained;
Figure 149144DEST_PATH_IMAGE050
is the total number of super pixels.
S402, calculating the cluster center of each cluster to obtain a cluster center set
Figure 652937DEST_PATH_IMAGE084
The calculating of the cluster center of each cluster comprises:
Figure 954606DEST_PATH_IMAGE119
wherein the content of the first and second substances,
Figure 315180DEST_PATH_IMAGE085
is the cluster center of the kth cluster;
Figure 702299DEST_PATH_IMAGE086
is a super pixel
Figure 259051DEST_PATH_IMAGE036
Membership to the kth cluster;
Figure 149647DEST_PATH_IMAGE087
is a super pixel
Figure 946701DEST_PATH_IMAGE036
Corresponding toGray value;
Figure 493220DEST_PATH_IMAGE088
is a fuzzy index, and
Figure 666713DEST_PATH_IMAGE120
Figure DEST_PATH_IMAGE121
is the total number of superpixels.
S403, calculating each super pixel
Figure 864345DEST_PATH_IMAGE036
And obtaining a membership matrix for the membership of the kth cluster.
Said computing each superpixel
Figure 832301DEST_PATH_IMAGE036
The degree of membership to the kth cluster includes:
Figure 928433DEST_PATH_IMAGE122
wherein
Figure 640037DEST_PATH_IMAGE123
Is a super pixel
Figure 442908DEST_PATH_IMAGE036
Membership to the kth cluster;
Figure 581765DEST_PATH_IMAGE090
is as follows
Figure 165193DEST_PATH_IMAGE091
The center of each cluster is provided with a plurality of clusters,
Figure 604789DEST_PATH_IMAGE085
is the kth cluster center;
Figure 590063DEST_PATH_IMAGE087
is a super imageVegetable oil
Figure 899821DEST_PATH_IMAGE036
Corresponding gray values;
Figure 970546DEST_PATH_IMAGE088
is a fuzzy index;
Figure 961635DEST_PATH_IMAGE082
is the number of clusters.
Circularly iterating the steps S401-S403, and if the iteration termination condition is reached, outputting a membership matrix calculated for the last time; otherwise, the iterative computation is continued.
In this embodiment, the condition for terminating the iteration is
Figure 801415DEST_PATH_IMAGE124
Or the current number of iterations
Figure 282075DEST_PATH_IMAGE125
Wherein, in the step (A),
Figure 761467DEST_PATH_IMAGE126
threshold for iteration termination, an
Figure 618565DEST_PATH_IMAGE127
Figure 312851DEST_PATH_IMAGE128
Is the maximum number of iterations, and
Figure 902095DEST_PATH_IMAGE129
s404, after the membership matrix calculated at the last time is obtained, distributing the membership to each pixel point in a target image; and optimizing the membership degree of each pixel point to the kth cluster by using a Gaussian kernel function, and further optimizing the membership degree of each super pixel to the kth cluster. Wherein, the membership degree of the pixel points belonging to the same super pixel to the kth cluster is also the same.
Presets are selectedAnd counting the membership degree of a central pixel point i in the local window to the kth cluster. The size of the preset local window is
Figure 947412DEST_PATH_IMAGE100
(ii) a The optimized membership degree of the central pixel point i in the local window to the kth cluster is as follows:
Figure 342621DEST_PATH_IMAGE130
wherein the content of the first and second substances,
Figure 78365DEST_PATH_IMAGE004
is a pixel point
Figure 166407DEST_PATH_IMAGE014
A local window that is a center of the window,
Figure 433440DEST_PATH_IMAGE093
the membership degree of a pixel point j in the local window to the kth cluster;
Figure 570023DEST_PATH_IMAGE094
is a standard deviation of 2 and a size of
Figure 973323DEST_PATH_IMAGE100
The Gaussian kernel function of
Figure 232266DEST_PATH_IMAGE094
And the method is used for reflecting the influence degree of the membership degree of the pixel point j to the kth cluster in the local window on the membership degree of the central pixel point to the kth cluster. When the distance between the neighborhood pixel point and the central pixel point in the local window is larger, the Gaussian kernel function
Figure 986595DEST_PATH_IMAGE094
The smaller the value of (A), the smaller the value of (A) indicates that the neighborhood pixel point is aligned with the center pixel point
Figure DEST_PATH_IMAGE131
The smaller the influence on the value of the degree of membership of the kth cluster is; when the distance between the neighborhood pixel point and the central pixel point in the local window is smaller, the Gaussian kernel function
Figure 910558DEST_PATH_IMAGE094
The larger the value of (a) is, the larger the influence of the membership degree of the neighborhood pixel point to the kth cluster on the membership degree of the central pixel point to the kth cluster is.
Based on this, each super pixel is re-counted
Figure 433943DEST_PATH_IMAGE036
The optimal membership of the kth cluster is as follows:
Figure 801470DEST_PATH_IMAGE132
wherein the content of the first and second substances,
Figure 777516DEST_PATH_IMAGE096
is a super pixel
Figure 318219DEST_PATH_IMAGE036
The membership degree of the kth cluster after optimization;
Figure 961690DEST_PATH_IMAGE097
optimizing the membership degree of the kth cluster for the pixel point i;
Figure 486737DEST_PATH_IMAGE038
is the p-th super pixel
Figure 215658DEST_PATH_IMAGE036
The number of pixels contained therein.
S405, determining clusters to which the super pixels belong according to a maximum membership principle to obtain an image subjected to secondary segmentation.
The determining the cluster to which each super pixel belongs includes:
Figure 294473DEST_PATH_IMAGE133
wherein the content of the first and second substances,
Figure 730133DEST_PATH_IMAGE099
is a super pixel
Figure 236201DEST_PATH_IMAGE036
The cluster to which it belongs;
Figure 452419DEST_PATH_IMAGE096
is a super pixel
Figure 334924DEST_PATH_IMAGE036
And (4) optimizing the membership degree of the kth cluster.
According to the method, each super pixel is subjected to post-segmentation by adopting a membership degree optimized fuzzy C-means clustering algorithm, so that image segmentation is realized, and the method has strong application flexibility. The method can more accurately and effectively identify the noise points in the image segmentation process, improve the segmentation precision, and automatically determine the cluster number without manual input.
In some alternative implementation scenarios of the present embodiment, fig. 5 shows a segmentation result obtained by the present invention on a synthetic image containing noise, where fig. 5 (a) is a synthetic image containing noise, and fig. 5 (b) is a segmentation image obtained on a synthetic image containing noise. It can be seen that the invention can effectively identify noise points when segmenting the synthetic image containing noise, thereby ensuring the definition of the boundary and obtaining good segmentation result.
In some optional implementation scenarios of this embodiment, fig. 6 shows a segmentation result obtained by the present invention on a natural image. Fig. 6 (a) shows a natural image, and fig. 6 (b) shows a divided image obtained on the natural image. It can be seen that when the method is used for segmenting the natural image containing a large number of noise points, the target contour can be effectively extracted, and a good segmentation result is obtained.
In some optional implementation scenarios of this embodiment, fig. 7 shows a segmentation result obtained on a remote sensing image according to the present invention. Fig. 7 (a) is a remote sensing image, and fig. 7 (b) is a divided image obtained on the remote sensing image. It can be seen that when the remote sensing image containing noise is segmented, a smooth segmentation result is obtained on the basis of keeping the boundary clear.
According to the embodiment of the invention, the local gray scale change and the spatial information of the image are fully considered, the influence degree of the local neighborhood point on the central point can be more fully measured, and the method can be used as a preprocessing step on the original image, effectively reduce the influence of the noise point on the image segmentation process and improve the image segmentation precision.
According to the embodiment of the invention, the super-pixel idea is introduced to improve the density peak value clustering algorithm, so that the calculation efficiency is effectively improved, the image prior information is obtained, and the clustering cluster number is set according to the image prior information without manual participation.
According to the embodiment of the invention, the membership degree in the fuzzy C-means clustering algorithm is optimized, so that the noise point can be more accurately identified, and the segmentation precision is improved.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required to practice the invention.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
As shown in fig. 8, the apparatus 800 includes:
the calculation and statistics module 810 is configured to select any neighborhood from the target image, calculate local gray scale change information of a pixel point in the neighborhood and spatial distance information between the pixel point in the neighborhood and the center pixel point, and calculate an influence weight of the pixel point in the neighborhood on the local gray scale change information and the spatial distance information of the center pixel point, so as to obtain an optimized weighted image;
a pre-segmentation module 820, configured to pre-segment the optimized weighted image by using an SLIC algorithm to obtain a superpixel;
a clustering module 830, configured to perform density peak clustering on the superpixels to obtain a cluster number;
and the segmentation module 840 is used for performing secondary segmentation on the superpixel by adopting the optimized membership degree improved fuzzy C-means clustering algorithm to obtain an image after the secondary segmentation.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In the technical scheme of the invention, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations without violating the good customs of the public order.
According to an embodiment of the invention, the invention further provides an electronic device.
FIG. 9 shows a schematic block diagram of an electronic device 900 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
The device 900 comprises a computing unit 901 which may perform various suitable actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The calculation unit 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 901 performs the respective methods and processes described above, such as the methods S101 to S104. For example, in some embodiments, methods S101-S104 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM 902 and/or communications unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the methods S101-S104 described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the methods S101-S104 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An automatic image segmentation method based on superpixels and improved fuzzy C-means clustering is characterized by comprising the following steps:
selecting any neighborhood from a target image, calculating local gray scale change information of pixels in the neighborhood and spatial distance information of the pixels in the neighborhood and a central pixel, and counting the influence weight of the pixels in the neighborhood on the local gray scale change information of the central pixel and the influence weight of the spatial distance information to obtain an optimized weighted image;
pre-dividing the optimized weighted image by using an SLIC algorithm to obtain super pixels;
performing density peak value clustering on the super pixels to obtain the number of clustering clusters;
and performing secondary segmentation on the superpixel by adopting an optimized membership degree improved fuzzy C-means clustering algorithm to obtain an image after secondary segmentation.
2. The method of claim 1, wherein calculating the local gray scale change information of the pixels in the neighborhood, and calculating the influence weight of the pixels in the neighborhood on the local gray scale change information of the central pixel comprises:
comparing the gray value of each pixel point in the neighborhood with the average gray value of all pixel points in the neighborhood to obtain local gray change information; the local gray scale change information is:
Figure DEST_PATH_IMAGE001
wherein, the first and the second end of the pipe are connected with each other,
Figure 875881DEST_PATH_IMAGE002
is local gray scale change information;
Figure 861154DEST_PATH_IMAGE003
the gray value of the pixel point j in the neighborhood is obtained;
Figure 360793DEST_PATH_IMAGE004
a local window centered on a pixel point i;
Figure 431517DEST_PATH_IMAGE005
is the average gray scale of all pixel points in the neighborhood, and
Figure 484924DEST_PATH_IMAGE006
wherein, in the step (A),
Figure 324704DEST_PATH_IMAGE007
as a partial window
Figure 743047DEST_PATH_IMAGE004
The number of internal pixel points;
the influence weight of the pixel points in the neighborhood on the local gray level change information of the central pixel point is as follows:
Figure 35488DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE009
as an intra-neighborhood imageThe weight of the pixel point on the local gray scale change information of the central pixel point is influenced;
Figure 79536DEST_PATH_IMAGE002
is the local gray scale variation information.
3. The method of claim 1, wherein calculating spatial distance information between the pixel point in the neighborhood and the center pixel point, and controlling the weight of the influence of the pixel point in the neighborhood on the spatial distance information of the center pixel point comprises:
defining spatial distance information between the pixel point in the neighborhood and the central pixel point by utilizing a Gaussian kernel function;
controlling the influence weight of the pixel points in the neighborhood on the spatial distance information of the central pixel point by using an exponential function; the spatial distance information influence weight is as follows:
Figure 773823DEST_PATH_IMAGE010
wherein the content of the first and second substances,
Figure 425384DEST_PATH_IMAGE011
influence weight on spatial distance information of a central pixel point for pixels in a neighborhood;
Figure 408384DEST_PATH_IMAGE012
and the spatial distance information of the pixel point in the neighborhood and the central pixel point is obtained.
4. A method according to claim 2 or 3, wherein the optimized weighted image is:
Figure 803593DEST_PATH_IMAGE013
wherein, the first and the second end of the pipe are connected with each other,
Figure 352386DEST_PATH_IMAGE014
optimizing the gray value of a pixel point i in the weighted image;
Figure 627378DEST_PATH_IMAGE004
a local window centered on a pixel point i;
Figure 894412DEST_PATH_IMAGE009
influence weight on local gray scale change information of the central pixel point for pixel points in the neighborhood;
Figure 827733DEST_PATH_IMAGE011
influence weight on spatial distance information of a central pixel point for pixels in a neighborhood;
Figure 496611DEST_PATH_IMAGE003
and the gray value of the pixel point j in the neighborhood is obtained.
5. The method of claim 1, wherein said pre-segmenting said optimized weighted image using SLIC algorithm to obtain superpixels comprises:
according to the set number of the super pixels, uniformly distributing seed points in the optimized weighted image;
according to
Figure 693237DEST_PATH_IMAGE015
Calculating the distance between two adjacent super pixel centers
Figure 181988DEST_PATH_IMAGE016
Wherein, in the step (A),
Figure 184579DEST_PATH_IMAGE017
in order to optimize the total number of pixels of the weighted image,
Figure 894915DEST_PATH_IMAGE018
number of superpixels for pre-segmentation;
At a distance of the centers of the two adjacent super pixels
Figure 59180DEST_PATH_IMAGE016
As step length, uniformly selecting a plurality of pixel points in the optimized weighted image as an initial super-pixel clustering center;
assigning a class label to each pixel point in the neighborhood around each seed point, the search range being limited to
Figure 300805DEST_PATH_IMAGE019
According to
Figure 779191DEST_PATH_IMAGE020
A distance measure is calculated, wherein,
Figure 157083DEST_PATH_IMAGE021
is a distance measure;
Figure 757828DEST_PATH_IMAGE022
is a pixel point
Figure 221171DEST_PATH_IMAGE023
And seed pixel point
Figure 755445DEST_PATH_IMAGE024
The gray value difference of (a);
Figure 253422DEST_PATH_IMAGE014
is the gray value at the pixel point i,
Figure 759490DEST_PATH_IMAGE025
is the gray value at pixel point b, and
Figure 913391DEST_PATH_IMAGE026
Figure 795896DEST_PATH_IMAGE027
is a pixel point
Figure 148380DEST_PATH_IMAGE023
And seed pixel point
Figure 12300DEST_PATH_IMAGE024
A spatial distance of (a) and
Figure 450234DEST_PATH_IMAGE028
wherein
Figure 136431DEST_PATH_IMAGE029
And
Figure 281104DEST_PATH_IMAGE030
respectively the abscissa of two pixel points;
Figure 394554DEST_PATH_IMAGE031
and
Figure 319784DEST_PATH_IMAGE032
respectively are the vertical coordinates of two pixel points;
Figure 544092DEST_PATH_IMAGE033
to measure the compactness index of the super-pixel, an
Figure 58119DEST_PATH_IMAGE033
The larger the value of (a), the higher the compactness;
Figure 342470DEST_PATH_IMAGE016
the distance between the centers of two adjacent super pixels; taking a seed point with the minimum distance measure with the pixel point as a clustering center of the pixel point;
and iteratively updating the super-pixel clustering center until convergence or the preset maximum iteration number of the super-pixel clustering is reached, and finishing super-pixel segmentation to obtain a super-pixel segmentation image.
6. The method of claim 1, wherein performing density peak clustering on the superpixels to obtain cluster numbers comprises:
randomly selecting two superpixels, and calculating a first distance between the two superpixels; the first distance is:
Figure 489417DEST_PATH_IMAGE034
wherein the content of the first and second substances,
Figure 455099DEST_PATH_IMAGE035
is the p-th super pixel
Figure 636682DEST_PATH_IMAGE036
And the qth super pixel
Figure 91934DEST_PATH_IMAGE037
A first distance therebetween;
Figure 991757DEST_PATH_IMAGE038
is the p-th super pixel
Figure 744818DEST_PATH_IMAGE036
The number of the pixel points contained in the image,
Figure 780907DEST_PATH_IMAGE039
is the qth super pixel
Figure 407061DEST_PATH_IMAGE037
The number of pixel points contained in the image;
Figure 466284DEST_PATH_IMAGE040
is the p-th super pixel
Figure 836085DEST_PATH_IMAGE036
The number of the pixel points in (1),
Figure 992260DEST_PATH_IMAGE041
is composed of
Figure 967476DEST_PATH_IMAGE042
q super pixels
Figure 576312DEST_PATH_IMAGE043
The pixel point in (1);
Figure 484225DEST_PATH_IMAGE044
is a pixel point
Figure 494907DEST_PATH_IMAGE045
Is determined by the gray-scale value of (a),
Figure 400546DEST_PATH_IMAGE046
is a pixel point
Figure 496678DEST_PATH_IMAGE047
The gray value of (a);
calculating the local density of any one super pixel and a second distance from the current super pixel to the super pixel with larger local density and closest distance; the local density is:
Figure 208282DEST_PATH_IMAGE048
wherein the content of the first and second substances,
Figure 525999DEST_PATH_IMAGE049
is the p-th super pixel
Figure 664857DEST_PATH_IMAGE036
The local density of (a);
Figure 982706DEST_PATH_IMAGE035
is the p-th super pixel
Figure 498001DEST_PATH_IMAGE036
And the qth super pixel
Figure 420957DEST_PATH_IMAGE037
A first distance therebetween;
Figure 730716DEST_PATH_IMAGE039
is the qth super pixel
Figure 535861DEST_PATH_IMAGE037
The number of pixel points contained in the image;
Figure 776218DEST_PATH_IMAGE050
is the total number of super-pixels,
Figure 615998DEST_PATH_IMAGE051
Figure 96658DEST_PATH_IMAGE052
Figure DEST_PATH_IMAGE053
is a truncation distance;
the second distance is:
Figure 592362DEST_PATH_IMAGE054
wherein the content of the first and second substances,
Figure 183880DEST_PATH_IMAGE055
is the p-th super pixel
Figure 330696DEST_PATH_IMAGE036
A second distance to a super-pixel with a higher local density and closest distance;
Figure 982257DEST_PATH_IMAGE049
is the p-th super pixel
Figure 761995DEST_PATH_IMAGE036
The local density of (a);
Figure 157204DEST_PATH_IMAGE056
is the qth super pixel
Figure 909259DEST_PATH_IMAGE037
The local density of (a);
Figure 731722DEST_PATH_IMAGE035
is the p-th super pixel
Figure 998755DEST_PATH_IMAGE036
And the qth super pixel
Figure 387536DEST_PATH_IMAGE037
A first distance therebetween;
at local density
Figure 56414DEST_PATH_IMAGE049
Is a horizontal axis and is a second distance
Figure 49778DEST_PATH_IMAGE055
For the vertical axis, a first decision graph is generated
Figure 741791DEST_PATH_IMAGE057
And the first decision graph is used
Figure 744382DEST_PATH_IMAGE058
Normalizing to obtain a second decision chart
Figure 267767DEST_PATH_IMAGE059
Wherein
Figure 618983DEST_PATH_IMAGE060
Represents the minimum value of the total number of the unit,
Figure 860608DEST_PATH_IMAGE061
represents the maximum value;
the second decision graph is processed
Figure 401311DEST_PATH_IMAGE062
Mapping to a third decision graph
Figure 716886DEST_PATH_IMAGE063
And normalizing the third decision diagram to obtain a fourth decision diagram
Figure 317631DEST_PATH_IMAGE064
And arranging the element values contained in the fourth decision diagram in the descending order, sequentially calculating the absolute value of the difference between the current element and the next element, and taking the index value corresponding to the maximum absolute value of the difference as the cluster number.
7. The method of claim 6, wherein the second decision graph is generated
Figure 780974DEST_PATH_IMAGE062
Mapping to a third decision graph
Figure 125367DEST_PATH_IMAGE063
The method comprises the following steps:
Figure 544716DEST_PATH_IMAGE065
Figure 316363DEST_PATH_IMAGE066
Figure 267002DEST_PATH_IMAGE067
wherein, the first and the second end of the pipe are connected with each other,
Figure 87190DEST_PATH_IMAGE050
is the total number of superpixels;
Figure 439674DEST_PATH_IMAGE068
is a threshold value, and
Figure 382222DEST_PATH_IMAGE069
Figure 7108DEST_PATH_IMAGE070
as a constraint condition
Figure 693304DEST_PATH_IMAGE071
Second decision diagram
Figure 900294DEST_PATH_IMAGE062
The number of (2);
Figure 13744DEST_PATH_IMAGE072
is an indicator function;
Figure 876658DEST_PATH_IMAGE073
the first term is 0, the last term is 1, and the tolerance is
Figure 100965DEST_PATH_IMAGE074
The series of arithmetic difference numbers of (1),
Figure 428042DEST_PATH_IMAGE075
Figure 636694DEST_PATH_IMAGE076
is a constant number of times, and is,
Figure 49220DEST_PATH_IMAGE077
Figure 77219DEST_PATH_IMAGE078
8. the method of claim 1, wherein the performing a quadratic segmentation on the superpixel using the optimized membership improved fuzzy C-means clustering algorithm to obtain a quadratic segmented image comprises:
initializing a membership matrix to obtain an initial membership matrix; the initial membership matrix is:
Figure 196485DEST_PATH_IMAGE079
wherein the content of the first and second substances,
Figure 651737DEST_PATH_IMAGE080
is an initial membership matrix;
Figure 285981DEST_PATH_IMAGE081
is a super pixel
Figure 852091DEST_PATH_IMAGE036
Initial membership to the kth cluster;
Figure 340710DEST_PATH_IMAGE082
the number of clustering clusters is obtained;
Figure 966864DEST_PATH_IMAGE050
the total number of the super pixels;
according to
Figure 88404DEST_PATH_IMAGE083
Calculating the cluster center of each cluster to obtain a cluster center set
Figure 395888DEST_PATH_IMAGE084
(ii) a Wherein the content of the first and second substances,
Figure 552063DEST_PATH_IMAGE085
cluster center for the kth cluster;
Figure 349118DEST_PATH_IMAGE086
is a super pixel
Figure 144904DEST_PATH_IMAGE036
Membership to the kth cluster;
Figure 52817DEST_PATH_IMAGE087
is a super pixel
Figure 63499DEST_PATH_IMAGE036
Corresponding gray values;
Figure 31455DEST_PATH_IMAGE088
is a fuzzy index;
Figure 65270DEST_PATH_IMAGE050
is the total number of superpixels;
according to
Figure DEST_PATH_IMAGE089
Calculating each superpixel
Figure 776874DEST_PATH_IMAGE036
Obtaining a membership matrix for the membership of the kth cluster; wherein the content of the first and second substances,
Figure 829012DEST_PATH_IMAGE090
is as follows
Figure 967870DEST_PATH_IMAGE091
The center of each cluster is provided with a plurality of clusters,
Figure 551298DEST_PATH_IMAGE085
is the kth cluster center;
Figure 4276DEST_PATH_IMAGE087
is a super pixel
Figure 723970DEST_PATH_IMAGE036
Corresponding gray values;
Figure 33729DEST_PATH_IMAGE082
the number of clustering clusters is obtained;
Figure 294333DEST_PATH_IMAGE088
is a blur index;
if the iteration termination condition is reached, outputting a membership matrix calculated for the last time; otherwise, continuing the iterative computation;
after the membership matrix calculated at the last time is obtained, distributing the membership to each pixel point in a target image, selecting a local window with a preset size, and counting the optimized membership of a central pixel point i in the local window to a kth cluster:
Figure 347740DEST_PATH_IMAGE092
wherein, the first and the second end of the pipe are connected with each other,
Figure 187520DEST_PATH_IMAGE004
is a local window centered on the pixel point i,
Figure 668180DEST_PATH_IMAGE093
the membership degree of a pixel point j in the local window to the kth cluster;
Figure 163883DEST_PATH_IMAGE094
is a Gaussian kernel function
Figure 755402DEST_PATH_IMAGE094
The method is used for reflecting the influence degree of the membership degree of the pixel point j to the kth cluster in the local window on the membership degree of the central pixel point to the kth cluster;
re-counting each super pixel
Figure 449688DEST_PATH_IMAGE036
Optimizing the membership degree of the kth cluster:
Figure 553779DEST_PATH_IMAGE095
wherein
Figure 333516DEST_PATH_IMAGE096
Is a super pixel
Figure 728726DEST_PATH_IMAGE036
The membership degree of the kth cluster after optimization;
Figure 480781DEST_PATH_IMAGE097
is a pixel point
Figure 303244DEST_PATH_IMAGE023
The membership degree of the kth cluster after optimization;
Figure 570277DEST_PATH_IMAGE038
is the p-th super pixel
Figure 769177DEST_PATH_IMAGE036
The number of pixel points contained in the image;
according to the principle of maximum membership, from
Figure 625006DEST_PATH_IMAGE098
Determining the cluster to which each super pixel belongs to obtain the image after secondary segmentation, wherein,
Figure 618370DEST_PATH_IMAGE099
is a super pixel
Figure 372700DEST_PATH_IMAGE036
The cluster to which it belongs;
Figure 47395DEST_PATH_IMAGE096
is a super pixel
Figure 570780DEST_PATH_IMAGE036
And (4) optimizing the membership degree of the kth cluster.
9. An automatic image segmentation device based on improved fuzzy C-means clustering, comprising:
the calculation and statistics module is used for selecting any neighborhood from the target image, calculating local gray scale change information of pixels in the neighborhood and spatial distance information of the pixels in the neighborhood and a central pixel, and calculating the influence weight of the pixels in the neighborhood on the local gray scale change information of the central pixel and the influence weight of the spatial distance information to obtain an optimized weighted image;
the pre-segmentation module is used for pre-segmenting the optimized weighted image by using an SLIC algorithm to obtain superpixels;
the clustering module is used for carrying out density peak value clustering on the super pixels to obtain the number of clustering clusters;
and the segmentation module is used for carrying out secondary segmentation on the superpixel by adopting the optimized membership degree improved fuzzy C-means clustering algorithm to obtain an image subjected to secondary segmentation.
10. An electronic device comprising at least one processor; and
a memory communicatively coupled to the at least one processor; it is characterized in that the preparation method is characterized in that,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
CN202210877458.XA 2022-07-25 2022-07-25 Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering Pending CN115131566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210877458.XA CN115131566A (en) 2022-07-25 2022-07-25 Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210877458.XA CN115131566A (en) 2022-07-25 2022-07-25 Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering

Publications (1)

Publication Number Publication Date
CN115131566A true CN115131566A (en) 2022-09-30

Family

ID=83384970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210877458.XA Pending CN115131566A (en) 2022-07-25 2022-07-25 Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering

Country Status (1)

Country Link
CN (1) CN115131566A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314714A (en) * 2022-10-12 2022-11-08 南通虎神金属制品有限公司 Data compression method for weld image storage
CN115393737A (en) * 2022-10-27 2022-11-25 南通有来信息技术有限公司 Method for determining remote sensing object
CN115601362A (en) * 2022-12-14 2023-01-13 临沂农业科技职业学院(筹)(Cn) Welding quality evaluation method based on image processing
CN115841600A (en) * 2023-02-23 2023-03-24 山东金诺种业有限公司 Deep learning-based sweet potato appearance quality classification method
CN116188496A (en) * 2023-04-25 2023-05-30 牧马人(山东)勘察测绘集团有限公司 Remote sensing image self-adaptive segmentation method based on land utilization type
CN116563312A (en) * 2023-07-11 2023-08-08 山东古天电子科技有限公司 Method for dividing display image of double-screen machine
CN116681701A (en) * 2023-08-02 2023-09-01 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) Children lung ultrasonic image processing method
CN116823857A (en) * 2023-07-25 2023-09-29 查维斯机械制造(北京)有限公司 Slaughter line pig carcass shearing intelligent positioning method and system
CN117058393A (en) * 2023-08-30 2023-11-14 南通大学 Super-pixel three-evidence DPC method for fundus hard exudation image segmentation
CN117237646A (en) * 2023-11-15 2023-12-15 深圳市润海电子有限公司 PET high-temperature flame-retardant adhesive tape flaw extraction method and system based on image segmentation
CN117437247A (en) * 2023-12-18 2024-01-23 津泰(天津)医疗器械有限公司 Lesion region extraction and segmentation method based on natural cavity image
CN117690030A (en) * 2024-02-02 2024-03-12 陕西仙喜辣木茯茶有限公司 Multi-face flower identification method and system based on image processing

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314714A (en) * 2022-10-12 2022-11-08 南通虎神金属制品有限公司 Data compression method for weld image storage
CN115393737A (en) * 2022-10-27 2022-11-25 南通有来信息技术有限公司 Method for determining remote sensing object
CN115601362A (en) * 2022-12-14 2023-01-13 临沂农业科技职业学院(筹)(Cn) Welding quality evaluation method based on image processing
CN115841600A (en) * 2023-02-23 2023-03-24 山东金诺种业有限公司 Deep learning-based sweet potato appearance quality classification method
CN116188496A (en) * 2023-04-25 2023-05-30 牧马人(山东)勘察测绘集团有限公司 Remote sensing image self-adaptive segmentation method based on land utilization type
CN116188496B (en) * 2023-04-25 2023-07-07 牧马人(山东)勘察测绘集团有限公司 Remote sensing image self-adaptive segmentation method based on land utilization type
CN116563312A (en) * 2023-07-11 2023-08-08 山东古天电子科技有限公司 Method for dividing display image of double-screen machine
CN116563312B (en) * 2023-07-11 2023-09-12 山东古天电子科技有限公司 Method for dividing display image of double-screen machine
CN116823857B (en) * 2023-07-25 2024-03-19 查维斯机械制造(北京)有限公司 Slaughter line pig carcass shearing intelligent positioning method and system
CN116823857A (en) * 2023-07-25 2023-09-29 查维斯机械制造(北京)有限公司 Slaughter line pig carcass shearing intelligent positioning method and system
CN116681701B (en) * 2023-08-02 2023-11-03 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) Children lung ultrasonic image processing method
CN116681701A (en) * 2023-08-02 2023-09-01 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) Children lung ultrasonic image processing method
CN117058393A (en) * 2023-08-30 2023-11-14 南通大学 Super-pixel three-evidence DPC method for fundus hard exudation image segmentation
CN117058393B (en) * 2023-08-30 2024-06-25 南通大学 Super-pixel three-evidence DPC method for fundus hard exudation image segmentation
CN117237646A (en) * 2023-11-15 2023-12-15 深圳市润海电子有限公司 PET high-temperature flame-retardant adhesive tape flaw extraction method and system based on image segmentation
CN117237646B (en) * 2023-11-15 2024-01-30 深圳市润海电子有限公司 PET high-temperature flame-retardant adhesive tape flaw extraction method and system based on image segmentation
CN117437247A (en) * 2023-12-18 2024-01-23 津泰(天津)医疗器械有限公司 Lesion region extraction and segmentation method based on natural cavity image
CN117437247B (en) * 2023-12-18 2024-03-05 津泰(天津)医疗器械有限公司 Lesion region extraction and segmentation method based on natural cavity image
CN117690030A (en) * 2024-02-02 2024-03-12 陕西仙喜辣木茯茶有限公司 Multi-face flower identification method and system based on image processing
CN117690030B (en) * 2024-02-02 2024-04-26 陕西仙喜辣木茯茶有限公司 Multi-face flower identification method and system based on image processing

Similar Documents

Publication Publication Date Title
CN115131566A (en) Automatic image segmentation method based on super-pixels and improved fuzzy C-means clustering
CN112801164A (en) Training method, device and equipment of target detection model and storage medium
CN113065614B (en) Training method of classification model and method for classifying target object
Cong et al. Image segmentation algorithm based on superpixel clustering
CN113870334B (en) Depth detection method, device, equipment and storage medium
CN112800915A (en) Building change detection method, building change detection device, electronic device, and storage medium
CN114677565B (en) Training method and image processing method and device for feature extraction network
CN108805174A (en) clustering method and device
CN110633717A (en) Training method and device for target detection model
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
CN113902696A (en) Image processing method, image processing apparatus, electronic device, and medium
CN113902010A (en) Training method of classification model, image classification method, device, equipment and medium
CN114781650B (en) Data processing method, device, equipment and storage medium
CN108734718B (en) Processing method, device, storage medium and equipment for image segmentation
CN116310356B (en) Training method, target detection method, device and equipment of deep learning model
US20210166129A1 (en) Multi-scale object detection with a trained neural network
CN115409856A (en) Lung medical image processing method, device, equipment and storage medium
CN115359322A (en) Target detection model training method, device, equipment and storage medium
CN114549838A (en) Method, device, equipment and computer readable medium for segmenting point cloud data
CN114511862A (en) Form identification method and device and electronic equipment
CN114491416B (en) Processing method and device of characteristic information, electronic equipment and storage medium
CN114092739A (en) Image processing method, apparatus, device, storage medium, and program product
CN114693950B (en) Training method and device of image feature extraction network and electronic equipment
CN114724090B (en) Training method of pedestrian re-identification model, and pedestrian re-identification method and device
CN114491416A (en) Characteristic information processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination