CN113160252B - Hierarchical segmentation method for cultural pattern image - Google Patents


Info

Publication number
CN113160252B
CN113160252B (application CN202110563186.1A)
Authority
CN
China
Prior art keywords
pixel
representing
calculating
formula
adjacent
Prior art date
Legal status
Active
Application number
CN202110563186.1A
Other languages
Chinese (zh)
Other versions
CN113160252A (en)
Inventor
梁昊光
侯小刚
赵海英
Current Assignee
BEIJING INTERNATIONAL STUDIES UNIVERSITY
Beijing University of Posts and Telecommunications
Original Assignee
BEIJING INTERNATIONAL STUDIES UNIVERSITY
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by BEIJING INTERNATIONAL STUDIES UNIVERSITY, Beijing University of Posts and Telecommunications filed Critical BEIJING INTERNATIONAL STUDIES UNIVERSITY
Priority to CN202110563186.1A
Publication of CN113160252A
Application granted granted Critical
Publication of CN113160252B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a hierarchical segmentation method for cultural pattern images, proposed on the basis of compression-and-merging theory: the image is first divided into a number of initial regions by computing color-similarity features between pixels, and the regions are then merged according to their color, texture, size, and degree of interleaving to obtain hierarchical segmentation results. Compared with existing segmentation methods, the method is faster, preserves more image detail, and simultaneously yields cultural patterns at different levels of detail.

Description

Hierarchical segmentation method for cultural pattern image
Technical Field
The invention belongs to the field of image processing and computer vision, and particularly relates to a hierarchical segmentation method for cultural pattern images.
Background
As China's traditional ethnic culture gains popularity, research on traditional culture is receiving growing attention, and interpreting ethnic history and folk culture has become an emerging research direction. To obtain cultural patterns, cultural relics, frescoes, and other artifacts containing such patterns are stored as image data by photographing, scanning, and similar means, and the images are then segmented. Compared with natural images, cultural pattern images have complex textures, non-smooth color regions, and many noise points; conventional image segmentation methods fail to extract patterns with complete edges from them. By contrast, the present method is optimized for these characteristics of cultural pattern images and obtains segmentation results with complete edges.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing a hierarchical segmentation method for cultural pattern images, which segments one image into multiple results with a hierarchical relationship, where each layer of the segmentation satisfies the causality and inclusion principles. The segmentation of the cultural pattern image is performed automatically, so that all elements in the image are obtained.
The invention provides a hierarchical segmentation method for cultural pattern images, which is characterized by comprising the following steps of:
step 1, converting a cultural pattern image into a Lab color space, and compressing pixels with similar color characteristics together, so that the image is divided into a plurality of areas;
step 2, calculating likelihood ratios between adjacent regions based on color features, and merging adjacent regions whose likelihood ratio is larger than a preset threshold to obtain an over-segmentation result, where in the likelihood-ratio calculation i and j denote region indices, R_i and R_j the adjacent regions, C_i and C_j the mean color values of R_i and R_j, C_ij the mean color value of the boundary between the adjacent regions, and S the covariance matrix of the pixel color values of the region formed by R_i and R_j;
step 3, calculating the differences between adjacent regions
step 3.1, calculating the average color difference between adjacent regions and the average color difference at their boundaries, where R_i and R_j denote the adjacent regions, C_i and C_j their mean pixel color values, B_ij the boundary between them, p and q the pixels on either side of the boundary, W_p the sliding window centered on pixel p, B_C(R_i, R_j) the number of (p, q) pairs on the boundary, and (L_p, a_p, b_p) and (L_q, a_q, b_q) the lightness, green-red component, and blue-yellow component of pixels p and q in the Lab color space;
step 3.2, calculating the texture difference between adjacent regions by the formula:
D_T(R_i, R_j) = D_AB(R_i, R_j) * D_W(R_i, R_j)
where R_i and R_j denote the adjacent regions, D_AB(R_i, R_j) their chromaticity difference, and D_W(R_i, R_j) their texture feature difference;
step 3.3, calculating the scale difference between adjacent regions, where |R_i| and |R_j| denote the numbers of pixels in regions R_i and R_j;
step 3.4, calculating the interleaving value between adjacent regions, where N_p denotes the most frequent pixel color value within the sliding window centered on pixel p, N_q the most frequent pixel color value within the sliding window centered on pixel q, C_i and C_j the most frequent pixel color values within regions R_i and R_j, and δ(a, b) = 1 when a = b and δ(a, b) = 0 when a ≠ b;
step 3.5, calculating the comprehensive difference between adjacent regions from the color, texture, scale, and interleaving terms, where α denotes the texture metric coefficient and β the boundary metric coefficient;
and step 4, arranging the comprehensive differences from small to large and combining the comprehensive differences in sequence to obtain a segmentation result with a hierarchical relationship.
The invention applies the compression-and-merging theory from the field of image matting to image segmentation and provides a hierarchical segmentation method for cultural pattern images. Compared with natural images, the textures of cultural pattern images are more complex, and existing segmentation methods are slow and produce incomplete regions when processing them; the proposed texture feature extraction operator handles these complex textures and improves the segmentation result. Compared with existing methods, the method segments the cultural elements in a cultural pattern image completely and, through multiple layers, exposes different levels of detail of the pattern, providing technical support for traditional culture research.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a hierarchical segmentation method for cultural pattern images according to an embodiment of the invention.
Detailed Description
As shown in fig. 1, the hierarchical segmentation method for cultural pattern images of the present invention is characterized by comprising the following steps:
and step 1, converting the cultural pattern image into a Lab color space, and compressing pixels with similar color characteristics together, so that the image is divided into a plurality of areas. The image compression method can refer to the theory proposed by Tseng et al in Learning-based hierarchical graph for unsupervised matting and foreground estimation. In order to quickly obtain an over-color image, feature computation time is reduced, and only color features are used for affinity measurement during pixel compression. After pixel compression, the image pixels are gathered closer together, forming a plurality of small cells (regions).
Step 2: calculate likelihood ratios between adjacent regions based on color features, and merge adjacent regions whose likelihood ratio is larger than a preset threshold to obtain an over-segmentation result. In the likelihood-ratio calculation, i and j denote region indices, R_i and R_j the adjacent regions, C_i and C_j the mean color values of R_i and R_j, C_ij the mean color value of the boundary between the adjacent regions, and S the covariance matrix of the pixel color values of the region formed by R_i and R_j.
During the pixel-based compression and merging process, pixels in smooth regions merge quickly. Pixels around boundaries, however, tend to merge with pixels along the boundary direction rather than with pixels of the neighboring smooth region. As a result, many small regions appear around boundaries after pixel-based compression and merging, degrading the region compression and merging result. To solve this problem, this embodiment applies a residual-removal method: merged regions around boundaries are limited to at most [N/5000] pixels, where N is the total number of image pixels, and pixels in residual regions are merged into the adjacent smooth region with the minimum color difference.
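The residual-removal rule can be sketched as follows. The [N/5000] size limit and the "closest adjacent mean color" rule come from the text above; the function name and the use of Euclidean Lab distance for the comparison are illustrative assumptions.

```python
import numpy as np

def remove_residuals(labels, lab, min_frac=1 / 5000):
    """Merge residual regions into the adjacent region with the
    smallest mean-color difference (sketch of the residual-removal step).

    `labels` is an H x W integer label map, `lab` the H x W x 3 Lab
    image. Regions with at most max(1, [H*W*min_frac]) pixels are
    absorbed by their closest-colored neighbor.
    """
    h, w = labels.shape
    min_size = max(1, int(h * w * min_frac))
    ids = np.unique(labels)
    means = {i: lab[labels == i].mean(axis=0) for i in ids}
    sizes = {i: int((labels == i).sum()) for i in ids}
    for i in ids:
        if sizes[i] > min_size:
            continue  # not a residual region
        mask = labels == i
        neigh = set()
        ys, xs = np.nonzero(mask)
        for y, x in zip(ys, xs):  # collect 4-adjacent foreign labels
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != i:
                    neigh.add(int(labels[ny, nx]))
        if not neigh:
            continue
        best = min(neigh, key=lambda j: np.linalg.norm(means[i] - means[j]))
        labels[mask] = best  # absorb the residual region
    return labels
```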
Step 3: calculate the differences between adjacent regions.
Step 3.1: calculate the average color difference between adjacent regions, where R_i and R_j denote the adjacent regions and C_i and C_j their mean pixel color values.
Regions of similar color merge together, but as regions grow during iterative merging, the color variance within each region increases and the importance of the color information decreases. During merging, the region color difference is therefore represented by the color difference along the adjacent region boundaries. For two adjacent regions, the boundary average color difference is computed over a 5×5 sliding window, where B_ij denotes the boundary between the adjacent regions, p and q the pixels on either side of the boundary, W_p the 5×5 sliding window centered on pixel p, B_C(R_i, R_j) the number of (p, q) pairs on the boundary, and (L_p, a_p, b_p) and (L_q, a_q, b_q) the lightness, green-red component, and blue-yellow component of pixels p and q in the Lab color space.
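The boundary term of step 3.1 can be sketched as below, pairing 4-adjacent boundary pixels (p in R_i, q in R_j) as defined above. The exact per-pair expression is not reproduced in the text, so using the Euclidean distance of the (L, a, b) components, averaged over all B_C(R_i, R_j) pairs, is an assumption.

```python
import numpy as np

def boundary_color_diff(labels, lab, i, j):
    """Mean Lab color difference along the boundary between regions i and j.

    Sketch of step 3.1's boundary term: (p, q) pairs are 4-adjacent
    pixels with p in R_i and q in R_j; the per-pair difference is
    assumed to be the Euclidean distance in Lab space, averaged over
    all pairs. Returns 0.0 if the regions are not adjacent.
    """
    h, w = labels.shape
    diffs = []
    for y in range(h):
        for x in range(w):
            if labels[y, x] != i:
                continue
            for ny, nx in ((y + 1, x), (y, x + 1), (y - 1, x), (y, x - 1)):
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == j:
                    diffs.append(np.linalg.norm(lab[y, x] - lab[ny, nx]))
    return float(np.mean(diffs)) if diffs else 0.0
```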
And 3.2, calculating texture differences between adjacent areas.
Compared with natural images, the textures of cultural pattern images are more complex, and general texture-extraction methods are prone to missing edges on them. A new texture feature operator is therefore proposed, consisting of two parts: differential excitation and directional excitation.
The chromaticity difference D_AB(R_i, R_j) of adjacent regions is computed from the mean chromaticity of each region, where a_i and b_i denote the mean green-red and blue-yellow components of the pixels in region R_i, and a_j and b_j the corresponding means for region R_j.
The texture feature difference D_W(R_i, R_j) of adjacent regions is calculated by the following steps:
3.2.1: for a pixel x_c of a single region, compute its differential excitation ξ(x_c), where (x, y) are the coordinates of the current pixel x_c, I is the intensity value of x_c, and σ is a preset constant with value in (0, 1];
3.2.2: for a pixel x_c of a single region, compute its directional excitation θ(x_c), where x_U, x_D, x_L, and x_R denote the intensity values of the pixels above, below, left of, and right of x_c in its eight-neighborhood;
3.2.3: quantize the differential and directional excitations of the region into a T×D two-dimensional histogram and convert it into a TD×1 one-dimensional feature vector W; compute the pixel excitations with three windows of size 3×3, 5×5, and 7×7, obtaining three different feature vectors W_1, W_2, and W_3, and combine them into the final texture feature vector W = [W_1^T, W_2^T, W_3^T];
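The operator of steps 3.2.1-3.2.3 closely resembles the Weber Local Descriptor (WLD, Chen et al.), so the sketch below assumes the standard WLD forms ξ(x_c) = arctan(σ·Σ(x_i − x_c)/x_c) over the eight neighbors and θ(x_c) = arctan((x_U − x_D)/(x_L − x_R)); it builds the T×D histogram at a single neighborhood scale, whereas the patent concatenates three window scales (3×3, 5×5, 7×7).

```python
import numpy as np

def texture_features(gray, t_bins=6, d_bins=8, sigma=1.0, eps=1e-6):
    """WLD-style texture descriptor (hedged reconstruction of 3.2.1-3.2.3).

    `gray` is a 2-D intensity array; `sigma` is the preset constant
    in (0, 1]; T = t_bins, D = d_bins. Returns a normalized TD-length
    feature vector.
    """
    g = gray.astype(float)
    pad = np.pad(g, 1, mode="edge")
    # differential excitation: sum of differences to the 8 neighbors
    diff_sum = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            diff_sum += pad[1 + dy:1 + dy + g.shape[0],
                            1 + dx:1 + dx + g.shape[1]] - g
    xi = np.arctan(sigma * diff_sum / (g + eps))
    # directional excitation: (up - down) over (left - right)
    theta = np.arctan2(pad[0:-2, 1:-1] - pad[2:, 1:-1],
                       pad[1:-1, 0:-2] - pad[1:-1, 2:] + eps)
    # quantize (xi, theta) into a T x D joint histogram, flatten to TD x 1
    hist, _, _ = np.histogram2d(xi.ravel(), theta.ravel(),
                                bins=(t_bins, d_bins),
                                range=[(-np.pi / 2, np.pi / 2),
                                       (-np.pi, np.pi)])
    return hist.ravel() / hist.sum()
```

Running this at the three window scales and concatenating the results would give the final W = [W_1^T, W_2^T, W_3^T] of step 3.2.3.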
3.2.4: calculate the texture feature difference between adjacent regions R_i and R_j, where W_i and W_j denote the texture feature vectors of regions R_i and R_j, each calculated according to steps 3.2.1-3.2.3.
The texture difference between adjacent regions is then calculated as:
D_T(R_i, R_j) = D_AB(R_i, R_j) * D_W(R_i, R_j)
where R_i and R_j denote the adjacent regions, D_AB(R_i, R_j) their chromaticity difference, and D_W(R_i, R_j) their texture feature difference.
Step 3.3: calculate the scale difference between adjacent regions, where |R_i| and |R_j| denote the numbers of pixels in regions R_i and R_j.
Region-scale features can speed up region merging when two small regions merge or a small region merges into a larger one. To prevent large regions from dominating the merging process, the region scale is limited in this step.
and 3.4, calculating an interleaving value between adjacent areas.
Because cultural patterns are produced by processes such as embroidery and weaving, the patterns are not smooth and many colored threads are interwoven. The method therefore measures the degree of spatial interleaving between adjacent regions in order to merge fragmented regions.
In this embodiment, the interleaving value between adjacent regions is calculated as follows: N_p denotes the most frequent pixel color value within the 5×5 sliding window centered on pixel p, N_q the most frequent pixel color value within the 5×5 sliding window centered on pixel q, C_i and C_j the most frequent pixel color values within regions R_i and R_j, and δ(a, b) = 1 when a = b and δ(a, b) = 0 when a ≠ b.
Step 3.5: calculate the comprehensive difference between adjacent regions from the color, texture, scale, and interleaving terms, where α denotes the texture metric coefficient and β the boundary metric coefficient; in this embodiment, α = 3 and β = 1.
Step 4: sort the comprehensive differences from small to large and merge adjacent regions in that order to obtain segmentation results with a hierarchical relationship.
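Step 4 can be sketched with a union-find structure replaying merges over the sorted comprehensive differences; the returned list of merge events is a simple encoding of the hierarchy, not the patent's exact data structure.

```python
class DSU:
    """Union-find over region ids, used to replay merges in order."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
            return True
        return False

def hierarchical_merge(n_regions, diffs):
    """Step 4: sort comprehensive differences ascending, merge in order.

    `diffs` is a list of (difference, i, j) tuples for adjacent region
    pairs. Each successful merge is one level of the hierarchy; pairs
    already in the same component are skipped.
    """
    dsu = DSU(n_regions)
    levels = []
    for d, i, j in sorted(diffs):
        if dsu.union(i, j):
            levels.append((d, i, j))
    return levels
```

Cutting the merge sequence at any prefix yields one layer of the hierarchical segmentation, satisfying the causality and inclusion principles mentioned in the disclosure.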
In addition to the embodiments described above, other embodiments of the invention are possible. All technical schemes formed by equivalent substitution or equivalent transformation fall within the protection scope of the invention.

Claims (7)

1. The hierarchical segmentation method for the cultural pattern image is characterized by comprising the following steps of:
step 1, converting a cultural pattern image into a Lab color space, and compressing pixels with similar color characteristics together, so that the image is divided into a plurality of areas;
step 2, calculating likelihood ratios between adjacent regions based on color features, and merging adjacent regions whose likelihood ratio is larger than a preset threshold to obtain an over-segmentation result, where in the likelihood-ratio calculation i and j denote region indices, R_i and R_j the adjacent regions, C_i and C_j the mean color values of R_i and R_j, C_ij the mean color value of the boundary between the adjacent regions, and S the covariance matrix of the pixel color values of the region formed by R_i and R_j;
step 3, calculating the differences between adjacent regions
step 3.1, calculating the average color difference between adjacent regions and the average color difference at their boundaries, where R_i and R_j denote the adjacent regions, C_i and C_j their mean pixel color values, B_ij the boundary between them, p and q the pixels on either side of the boundary, W_p the sliding window centered on pixel p, B_C(R_i, R_j) the number of (p, q) pairs on the boundary, and (L_p, a_p, b_p) and (L_q, a_q, b_q) the lightness, green-red component, and blue-yellow component of pixels p and q in the Lab color space;
step 3.2, calculating the texture difference between adjacent regions by the formula:
D_T(R_i, R_j) = D_AB(R_i, R_j) * D_W(R_i, R_j)
where R_i and R_j denote the adjacent regions, D_AB(R_i, R_j) their chromaticity difference, and D_W(R_i, R_j) their texture feature difference;
step 3.3, calculating the scale difference between adjacent regions, where |R_i| and |R_j| denote the numbers of pixels in regions R_i and R_j;
step 3.4, calculating the interleaving value between adjacent regions, where N_p denotes the most frequent pixel color value within the sliding window centered on pixel p, N_q the most frequent pixel color value within the sliding window centered on pixel q, C_i and C_j the most frequent pixel color values within regions R_i and R_j, and δ(a, b) = 1 when a = b and δ(a, b) = 0 when a ≠ b;
step 3.5, calculating the comprehensive difference between adjacent regions from the color, texture, scale, and interleaving terms, where α denotes the texture metric coefficient and β the boundary metric coefficient;
step 4, sorting the comprehensive differences from small to large and merging adjacent regions in that order to obtain segmentation results with a hierarchical relationship;
in step 3.2, the texture feature difference D_W(R_i, R_j) of adjacent regions is calculated by the following steps:
3.2.1: for a pixel x_c of a single region, compute its differential excitation ξ(x_c), where (x, y) are the coordinates of the current pixel x_c, I is the intensity value of x_c, and σ is a preset constant;
3.2.2: for a pixel x_c of a single region, compute its directional excitation θ(x_c), where x_U, x_D, x_L, and x_R denote the intensity values of the pixels above, below, left of, and right of x_c in its eight-neighborhood;
3.2.3: quantize the differential and directional excitations of the region into a T×D two-dimensional histogram and convert it into a TD×1 one-dimensional feature vector W; compute the pixel excitations with three windows of size 3×3, 5×5, and 7×7, obtaining three different feature vectors W_1, W_2, and W_3, and combine them into the final texture feature vector W = [W_1^T, W_2^T, W_3^T];
3.2.4: calculate the texture feature difference between adjacent regions R_i and R_j, where W_i and W_j denote the texture feature vectors of regions R_i and R_j, each calculated according to steps 3.2.1-3.2.3.
2. The hierarchical segmentation method for cultural pattern images as defined in claim 1, wherein: in step 2, the over-segmentation result is processed with a residual-removal method: merged regions around boundaries are limited to at most [N/5000] pixels, where N is the total number of image pixels, and pixels in residual regions are merged into the adjacent smooth region with the minimum color difference.
3. The hierarchical segmentation method for cultural pattern images as defined in claim 1, wherein: in step 3.3, the region scale is limited to prevent large regions from dominating the merging process.
4. The hierarchical segmentation method for cultural pattern images as defined in claim 1, wherein: in step 3.2, the chromaticity difference D_AB(R_i, R_j) of adjacent regions is computed from the mean chromaticity of each region, where a_i and b_i denote the mean green-red and blue-yellow components of the pixels in region R_i, and a_j and b_j the corresponding means for region R_j.
5. The hierarchical segmentation method for cultural pattern images as defined in claim 1, wherein: in step 3.2.1, the value of σ is in the range of (0, 1).
6. The hierarchical segmentation method for cultural pattern images as defined in claim 1, wherein: in step 3.5, α=3, β=1 is set.
7. The hierarchical segmentation method for cultural pattern images as defined in claim 1, wherein: in step 3, the size of the sliding window is 5*5.
CN202110563186.1A (priority and filing date 2021-05-24) — Hierarchical segmentation method for cultural pattern image — Active — CN113160252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110563186.1A CN113160252B (en) 2021-05-24 2021-05-24 Hierarchical segmentation method for cultural pattern image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110563186.1A CN113160252B (en) 2021-05-24 2021-05-24 Hierarchical segmentation method for cultural pattern image

Publications (2)

Publication Number — Publication Date
CN113160252A — 2021-07-23
CN113160252B — 2023-04-21

Family

ID=76877535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110563186.1A Active CN113160252B (en) 2021-05-24 2021-05-24 Hierarchical segmentation method for cultural pattern image

Country Status (1)

Country Link
CN (1) CN113160252B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833224A (en) * 2017-10-09 2018-03-23 西南交通大学 A kind of image partition method based on multi-level region synthesis
CN109272467A (en) * 2018-09-25 2019-01-25 南京大学 A kind of stratification image partition method based on multi-scale edge clue

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833224A (en) * 2017-10-09 2018-03-23 西南交通大学 A kind of image partition method based on multi-level region synthesis
CN109272467A (en) * 2018-09-25 2019-01-25 南京大学 A kind of stratification image partition method based on multi-scale edge clue

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fractal Beauty in Xinjiang Folk Art Patterns; Peng Hong et al.; Computer Aided Drafting, Design and Manufacturing; 2014-09-30; full text *
Hierarchical segmentation algorithm for cultural patterns based on edge priors; Kou Xiaobin et al.; Electronic Technology & Software Engineering; 2021-09-01; full text * (in Chinese)
Color fabric image segmentation algorithm based on edge morphological transformation; Zhao Haiying et al.; Chinese Journal of Stereology and Image Analysis; 2011-12-31 (No. 01); full text * (in Chinese)

Also Published As

Publication number Publication date
CN113160252A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN111415363B (en) Image edge identification method
Luan et al. Natural image colorization
US8774503B2 (en) Method for color feature extraction
CN108537239B (en) Method for detecting image saliency target
CN102800094A (en) Fast color image segmentation method
CN110458172A (en) A kind of Weakly supervised image, semantic dividing method based on region contrast detection
CN112101370B (en) Automatic image matting method for pure-color background image, computer-readable storage medium and equipment
CN111583279A (en) Super-pixel image segmentation method based on PCBA
CN110268442B (en) Computer-implemented method of detecting a foreign object on a background object in an image, device for detecting a foreign object on a background object in an image, and computer program product
CN111932601B (en) Dense depth reconstruction method based on YCbCr color space light field data
CN108038458B (en) Method for automatically acquiring outdoor scene text in video based on characteristic abstract diagram
CN114511567B (en) Tongue body and tongue coating image identification and separation method
CN114708165A (en) Edge perception texture filtering method combining super pixels
CN113989299A (en) Open-pit mine rock stratum image segmentation method based on k-means clustering
CN105608683B (en) A kind of single image to the fog method
Tan et al. Image haze removal based on superpixels and Markov random field
CN115100226A (en) Contour extraction method based on monocular digital image
CN111868783B (en) Region merging image segmentation algorithm based on boundary extraction
CN113160252B (en) Hierarchical segmentation method for cultural pattern image
JP3923243B2 (en) Character extraction method from color document image
Shan et al. Image highlight removal based on double edge-preserving filter
CN112560740A (en) PCA-Kmeans-based visible light remote sensing image change detection method
CN111915500A (en) Foggy day image enhancement method based on improved Retinex algorithm
CN115601358B (en) Tongue picture image segmentation method under natural light environment
CN108154485A (en) A kind of ancient painting restorative procedure based on layering and stroke direction parsing

Legal Events

Date Code Title Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant