CN108154188A - FCM-based artificial text extraction method under complex background - Google Patents
FCM-based artificial text extraction method under complex background
- Publication number
- CN108154188A (application CN201810017727.9A)
- Authority
- CN
- China
- Prior art keywords
- text
- fcm
- cluster
- image
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
- Character Input (AREA)
Abstract
An artificial text extraction method under complex background based on the FCM algorithm, comprising: extracting, for each pixel in an image region containing artificial text, a 5-dimensional feature of position and color; determining the number of clusters and the cluster centers of the FCM algorithm, performing FCM clustering, and obtaining a text layer; converting the resulting text-layer image into a gray-scale map and computing the gray variance of the gray-scale map; and judging whether clustering is complete — if so, taking the text-layer image as the finally extracted artificial text, otherwise feeding the text-layer image back as the input image for another round of pixel feature extraction. The present invention makes fuller use of the information contained in each pixel, while the stronger data-expression capability of the fuzzy partition makes the classification of text characters more accurate, finally achieving accurate extraction of artificial text under complex backgrounds, with the advantages of good separation of artificial text from background, clear characters, and a high degree of character retention.
Description
Technical field
The present invention relates to artificial text extraction methods, and more particularly to an FCM-based method for extracting artificial text under complex backgrounds.
Background technology
With the rapid development of Internet technology, digital multimedia brings more and more convenience to daily life. Media such as pictures and video carry large amounts of information, are comfortable to consume and easy to transmit, and are therefore increasingly favored. Artificial text, as the name suggests, is text artificially added to a video or picture; it does not belong to the scene content itself, and is used to supplement and explain the video or picture content. Common artificial text includes news headlines and speaker subtitles. Mining the artificial text information in pictures and videos helps in understanding their content and obtaining the corresponding knowledge. However, such artificial text usually sits on complex background textures, which makes direct extraction difficult. A series of algorithms has been proposed precisely for this problem of extracting artificial characters under complex backgrounds.
These include the OTSU and Niblack binarization segmentation methods based on gray-value information. The OTSU method, also known as the maximum between-class variance method, was proposed in 1979 and is an adaptive thresholding method. According to the gray-level characteristics of the image, it divides the image into background and target; the larger the variance between background and target, the greater the difference between the two parts. Segmenting artificial text from the background on this basis works well for simple backgrounds and high-contrast text, but performs poorly when extracting artificial text in low-contrast regions. The Niblack binarization method is a common and relatively effective local-threshold algorithm. Its basic idea is, for each point in the image, to compute the mean and standard deviation of the pixels in its R*R neighborhood, and then compute a binarization threshold with the formula: T(x, y) = m(x, y) + k*s(x, y), where for pixel coordinates (x, y), T(x, y) is the threshold at that point, m(x, y) is the mean of the pixels in the R*R neighborhood of the point, s(x, y) is the standard deviation of the pixels in that neighborhood, and k is a correction factor (usually taken as -0.2). In the Niblack method, the choice of window size is extremely important: it should be small enough to preserve local detail, yet large enough to suppress noise. The Niblack method preserves image detail well and provides good binarization segmentation results for clear image text, but retains unnecessary detail in degraded image text.
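As a concrete illustration, the Niblack rule described above can be sketched in a few lines of numpy. The window size r and the correction factor k = -0.2 follow the description; the function name and the naive sliding-window loop are our own choices, not part of the patent:

```python
import numpy as np

def niblack_binarize(gray, r=15, k=-0.2):
    """Niblack local thresholding: T(x,y) = m(x,y) + k*s(x,y) over an r*r window."""
    gray = np.asarray(gray, dtype=np.float64)
    pad = r // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros(gray.shape, dtype=np.uint8)
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1]):
            win = padded[y:y + r, x:x + r]   # r*r neighborhood centered on (x, y)
            t = win.mean() + k * win.std()   # T(x, y) = m(x, y) + k*s(x, y)
            out[y, x] = 255 if gray[y, x] > t else 0
    return out
```

A faster implementation would use integral images for the local mean and variance; the double loop here only keeps the sketch close to the formula.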
Other approaches include K-means clustering based on gray levels and K-means clustering based on RGB three-channel information. K-means is a hard clustering method that classifies the data set according to rigid rules. The advantage of hard clustering is its small computational cost, which suits the real-time requirements of engineering applications. But it also has obvious defects: because the membership degree in hard clustering takes only the two values 0 and 1, the forced partition often makes the clustering result deviate considerably from reality. The K-means algorithm is simple to compute and suitable for processing larger data sets, but the hard partition does not necessarily reach the global optimum, and the algorithm is rather sensitive to noise points.
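The 0/1 membership that makes hard clustering brittle can be seen in a minimal K-means assignment step (a sketch with illustrative names): each point belongs entirely to its nearest centre, with no graded degrees in between.

```python
import numpy as np

def kmeans_assign(X, centers):
    """Hard assignment: each point gets membership 1 for its nearest centre, 0 elsewhere."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    U = np.zeros_like(d)
    U[np.arange(len(X)), d.argmin(1)] = 1.0  # membership is only ever 0 or 1
    return U
```

A point lying halfway between two centres still gets membership 1 in exactly one of them, which is the forced-partition problem the passage describes; FCM replaces this one-hot matrix with graded memberships in [0, 1].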
FCM (the Fuzzy C-Means clustering algorithm) is built on fuzzy partitioning within the scope of fuzzy mathematics: the value range of the membership degree is extended to [0, 1] to represent the possibility that a data point belongs to each group, so that every sample is related to every cluster. The final membership matrix U is obtained by optimizing an objective function; following the maximum-membership criterion of fuzzy sets, each sample can then be assigned to one class. The resulting algorithm has good data-expression capability and classification performance. FCM also has shortcomings: the choice of the number of clusters requires prior knowledge, and an improper choice of the class number easily undermines the adaptivity of the algorithm.
Summary of the invention
The technical problem to be solved by the invention is to provide an FCM-based artificial text extraction method under complex background with good separation of artificial text from background and a high degree of character retention.
The technical solution adopted by the present invention is: an artificial text extraction method under complex background based on the FCM algorithm, comprising the following steps:
1) extracting pixel features from the image region containing artificial text, that is, extracting for each pixel in that region a 5-dimensional feature of position and color;
2) determining the number of clusters and the cluster centers of the FCM algorithm, performing FCM clustering, and obtaining a text layer;
3) converting the resulting text-layer image into a gray-scale map and computing the gray variance of the gray-scale map;
4) judging whether clustering is complete: if so, taking the text-layer image as the finally extracted artificial text; otherwise taking the text-layer image as the input image and returning to step 1).
The pixel feature in step 1) is expressed as follows:
X = [w*Xx, w*Xy, Xr, Xg, Xb]
where w is a weight-adjustment parameter taking a value between 0 and 1, Xx represents the abscissa of the pixel in the image, Xy its ordinate, Xr the R-channel value of the image, Xg the G-channel value, and Xb the B-channel value.
Step 2) includes:
(1) setting the number of clusters to 3;
(2) setting the cluster centres of the 3 clusters respectively to:
double c1[5] = {0, 0, 25, 25, 25}, double c2[5] = {0, 0, 100, 100, 100} and double c3[5] = {0, 0, 200, 200, 200};
(3) in the FCM algorithm, setting the weighting exponent m to 2;
(4) calling the three layers obtained by clustering the character layer, the background layer and the noise layer, wherein character pixels are concentrated in the character layer, background pixels in the background layer, and noise in the noise layer.
The judgement in step 4) is:
if the gray variance of the gray-scale map is below a set variance threshold, or the change of the gray variance between two successive rounds is below a set variance-change threshold, the text-layer image is taken as the finally extracted artificial text; otherwise the text-layer image is used as the input image and the method returns to step 1).
On the basis of a full analysis of the characteristics of artificial text in images, the FCM-based artificial text extraction method under complex background of the present invention extracts 5-dimensional color-and-position information from the image containing artificial text and establishes a specific FCM model. Compared with the common gray-based binarization segmentation methods and color-based hard clustering methods, it makes fuller use of the information contained in each pixel, while the stronger data-expression capability of the fuzzy partition makes the classification of text characters more accurate, finally achieving accurate extraction of artificial text under complex backgrounds, with the advantages of good separation of artificial text from background, clear characters, and a high degree of character retention.
Description of the drawings
Fig. 1 is the flow chart of the FCM-based artificial text extraction method under complex background of the present invention;
Fig. 2a is an artificial text area under a complex background;
Fig. 2b is the artificial text segmentation test result of the Niblack method;
Fig. 2c is the artificial text segmentation test result of the OTSU method;
Fig. 2d is the artificial text segmentation test result of gray-level K-means clustering;
Fig. 2e is the artificial text segmentation test result of K-means clustering based on RGB color information;
Fig. 2f is the artificial text segmentation test result of aggregation after separate processing of the RGB channels;
Fig. 2g is the artificial text segmentation test result of the first FCM layered clustering;
Fig. 2h is the artificial text segmentation test result of the second FCM layered clustering;
Fig. 3a is the text area of the FCM layered clustering;
Fig. 3b is the background layer of the FCM layered clustering;
Fig. 3c is the noise layer of the FCM layered clustering;
Fig. 3d is the character layer of the FCM layered clustering.
Specific embodiment
The FCM-based artificial text extraction method under complex background of the present invention is described in detail below with reference to embodiments and the accompanying drawings.
As shown in Fig. 1, the FCM-based artificial text extraction method under complex background of the present invention comprises the following steps:
1) Pixel features are extracted from the image region containing artificial text, that is, for each pixel in that region a 5-dimensional feature of position and color is extracted. The 5-dimensional position-and-color feature comprises the abscissa, the ordinate, and the RGB color information. The pixel feature is expressed as follows:
X = [w*Xx, w*Xy, Xr, Xg, Xb]
where w is a weight-adjustment parameter taking a value between 0 and 1, Xx represents the abscissa of the pixel in the image, Xy its ordinate, Xr the R-channel value of the image, Xg the G-channel value, and Xb the B-channel value.
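This feature extraction can be sketched in numpy as follows; the function name is ours, and w = 0.5 is only an illustrative value, since the patent merely requires 0 < w < 1:

```python
import numpy as np

def pixel_features(rgb, w=0.5):
    """Build X = [w*Xx, w*Xy, Xr, Xg, Xb] for every pixel of an H x W x 3 RGB image."""
    h, width, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:width]                     # Xy, Xx for every pixel
    feats = np.stack([w * xs, w * ys,
                      rgb[..., 0], rgb[..., 1], rgb[..., 2]], axis=-1)
    return feats.reshape(-1, 5).astype(np.float64)      # one 5-dim row per pixel
```

The weight w down-scales the position components relative to the color components, so that spatial continuity influences the clustering without dominating it.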
2) The number of clusters and the cluster centers of the FCM algorithm are determined, FCM clustering is performed, and the text layer is obtained. This includes:
(1) A large number of comparative experiments shows that a cluster number of 3 usually gives relatively good results; the number of clusters is therefore set to 3.
(2) For the initial cluster centres: for visual effect, the artificial text in an image either has a color of very high brightness, or is darker but with a certain contrast against the background, or consists of black characters on a white background. On this basis, the cluster centres of the 3 clusters are set respectively to:
double c1[5] = {0, 0, 25, 25, 25}, double c2[5] = {0, 0, 100, 100, 100} and double c3[5] = {0, 0, 200, 200, 200};
(3) In the FCM algorithm used by the present invention, the weighting exponent m is set to 2.
In FCM, each pixel has a degree of membership u_ij, with a value between 0 and 1, representing the possibility that pixel j belongs to class i. The membership degrees satisfy formula (1), where the number of clusters in the present invention is s = 3 and n is the number of pixels in the image containing artificial text:

$$\sum_{i=1}^{s} u_{ij} = 1, \qquad j = 1, \dots, n \tag{1}$$

The objective function of FCM has the form of formula (2):

$$J = \sum_{i=1}^{s} \sum_{j=1}^{n} u_{ij}^{m} \, d_{ij} \tag{2}$$

where c_i is the cluster centre of the i-th cluster, d_ij = ||x_j - c_i||^2 is the squared Euclidean distance between the centre of the i-th cluster and the j-th data point, and m ∈ [1, ∞) is a weighting exponent, set to 2 in the present invention.

The constrained problem is solved with the method of Lagrange multipliers by constructing the following new objective function:

$$\bar{J} = \sum_{i=1}^{s} \sum_{j=1}^{n} u_{ij}^{m} d_{ij} + \sum_{j=1}^{n} \lambda_{j} \Big( \sum_{i=1}^{s} u_{ij} - 1 \Big) \tag{3}$$

where λ_j, j = 1, …, n, are the Lagrange multipliers of the n constraints of formula (1). Setting the derivatives with respect to the input parameters to zero, the necessary conditions for formula (3) to reach a minimum are:

$$c_{i} = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_{j}}{\sum_{j=1}^{n} u_{ij}^{m}} \tag{4}$$

and

$$u_{ij} = \frac{1}{\sum_{k=1}^{s} \left( d_{ij} / d_{kj} \right)^{1/(m-1)}} \tag{5}$$
The membership degrees are computed according to formula (5), and the objective function value according to formula (2). If the objective value is below a set threshold, or its change relative to the previous objective value is below a set change threshold, this round of clustering terminates, and each data point is assigned to a class according to its membership degrees: points of class 1 form the background layer, points of class 2 the noise layer, and points of class 3 the text layer, as shown in Fig. 3a, Fig. 3b, Fig. 3c and Fig. 3d. Otherwise new cluster centres are computed according to formula (4), new membership degrees according to formula (5), and a new objective value according to formula (2); the iteration continues until the clustering termination condition is met.
(4) Of the character layer, background layer and noise layer obtained by clustering, character pixels concentrate in the character layer, background pixels in the background layer, and noise in the noise layer.
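The alternating updates of formulas (2), (4) and (5) can be sketched compactly in numpy; the tolerance, iteration cap and function name are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def fcm(X, centers, m=2.0, max_iter=100, tol=1e-5):
    """Fuzzy C-means: alternate the membership update (5) and centre update (4)
    until the objective (2) stops changing."""
    C = np.asarray(centers, dtype=np.float64)   # s initial cluster centres
    prev_obj = np.inf
    for _ in range(max_iter):
        d = ((X[None, :, :] - C[:, None, :]) ** 2).sum(-1)  # d_ij = ||x_j - c_i||^2
        d = np.maximum(d, 1e-12)                            # guard against zero distance
        inv = d ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)                           # u_ij, formula (5)
        um = u ** m
        obj = (um * d).sum()                                # objective, formula (2)
        C = (um @ X) / um.sum(axis=1, keepdims=True)        # centres, formula (4)
        if abs(prev_obj - obj) < tol:
            break
        prev_obj = obj
    return u, C
```

With the 5-dimensional features and the three initial centres c1, c2, c3 given above, the rows of u would give each pixel's graded membership in the character, background and noise layers.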
3) The resulting text-layer image is converted into a gray-scale map, and the gray variance of the gray-scale map is computed.
4) Whether clustering is complete is judged: if so, the text-layer image is taken as the finally extracted artificial text; otherwise the text-layer image is used as the input image and the method returns to step 1).
The judgement is: if the gray variance of the gray-scale map is below the set variance threshold, or the change of the gray variance between two successive rounds is below the set variance-change threshold, the text-layer image is taken as the finally extracted artificial text; otherwise the text-layer image is used as the input image and the method returns to step 1).
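A sketch of this gray-variance stopping test follows. The luminance weights and both threshold values are our assumptions, since the patent fixes neither the gray conversion formula nor the concrete thresholds:

```python
import numpy as np

def gray_variance(rgb):
    """Convert an RGB text-layer image to gray (assumed ITU-R 601 weights) and return its variance."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(gray.var())

def clustering_done(var_curr, var_prev, var_thresh=50.0, change_thresh=5.0):
    """Stop when the variance is below a threshold, or its change between
    two successive rounds is below a change threshold (illustrative values)."""
    return var_curr < var_thresh or abs(var_curr - var_prev) < change_thresh
```

A low variance indicates that the extracted layer is dominated by uniform character pixels; a stalled variance indicates that further re-clustering no longer cleans the layer.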
To verify the performance of the present invention, a comparative analysis was carried out with several common text segmentation methods. The compared algorithms mainly include threshold segmentation and clustering methods based on gray-level information, clustering methods based on color information, and a three-channel processing method. The experimental results are as follows:
From the observation and analysis of the experimental results, the following conclusions can be drawn:
(1) As shown in Fig. 2b, Fig. 2c and Fig. 2d, using gray-level information alone, neither the local-threshold Niblack method nor the global-threshold OTSU method can recover the character strokes by simple thresholding. The background of artificial text is complex, and simple threshold segmentation cannot achieve a good segmentation result; K-means clustering on gray pixel values likewise fails to obtain good segmentation results.
(2) As shown in Fig. 2e and Fig. 2f, these experiments use the color information of the text region. The results show that K-means clustering on RGB three-channel pixel values, and heuristic aggregation after separate threshold binarization of the RGB channels, both give unsatisfactory text-segmentation results: although both methods can make good use of the color information of the text region, they are not sufficient to accurately separate background from text.
(3) As shown in Fig. 2g and Fig. 2h, the text segmentation method based on FCM color-cluster layering used here works well. In this example, after the first FCM layered clustering the gray-variance requirement was not met and the segmentation was still poor; after the second FCM layered clustering the threshold condition was satisfied and the iteration stopped, with text and background well separated and full, clear characters, showing good performance.
For extracting artificial text under complex backgrounds, previous algorithms mostly used gray-level and color information. Considering the continuity of artificial text in both position and color, the present invention proposes a text segmentation method based on fuzzy C-means clustering and a 5-dimensional position-color feature. Through flexible membership degrees and within-class distance measurement, the update iteration reaches convergence and completes the clustering process, while a heuristic gray-variance criterion on the artificial text controls the number of clustering rounds, finally completing the extraction of the artificial text. Many experiments show the validity of this method.
Claims (4)
1. An artificial text extraction method under complex background based on the FCM algorithm, characterized by comprising the following steps:
1) extracting pixel features from an image region containing artificial text, that is, extracting for each pixel in the image region containing artificial text a 5-dimensional feature of position and color;
2) determining the number of clusters and the cluster centers of the FCM algorithm, performing FCM clustering, and obtaining a text layer;
3) converting the resulting text-layer image into a gray-scale map and computing the gray variance of the gray-scale map;
4) judging whether clustering is complete: if so, taking the text-layer image as the finally extracted artificial text; otherwise, taking the text-layer image as the input image and returning to step 1).
2. The FCM-based artificial text extraction method under complex background according to claim 1, characterized in that the pixel feature in step 1) is expressed as follows:
X = [w*Xx, w*Xy, Xr, Xg, Xb]
where w is a weight-adjustment parameter taking a value between 0 and 1, Xx represents the abscissa of the pixel in the image, Xy its ordinate, Xr the R-channel value of the image, Xg the G-channel value, and Xb the B-channel value.
3. The FCM-based artificial text extraction method under complex background according to claim 1, characterized in that step 2) comprises:
(1) setting the number of clusters to 3;
(2) setting the cluster centres of the 3 clusters respectively to:
double c1[5] = {0, 0, 25, 25, 25}, double c2[5] = {0, 0, 100, 100, 100} and double c3[5] = {0, 0, 200, 200, 200};
(3) in the FCM algorithm, setting the weighting exponent m to 2;
(4) calling the three layers obtained by clustering the character layer, the background layer and the noise layer, wherein character pixels are concentrated in the character layer, background pixels in the background layer, and noise in the noise layer.
4. The FCM-based artificial text extraction method under complex background according to claim 1, characterized in that the judgement in step 4) is:
if the gray variance of the gray-scale map is below a set variance threshold, or the change of the gray variance between two successive rounds is below a set variance-change threshold, taking the text-layer image as the finally extracted artificial text; otherwise, taking the text-layer image as the input image and returning to step 1).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810017727.9A CN108154188B (en) | 2018-01-08 | 2018-01-08 | FCM-based artificial text extraction method under complex background |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108154188A true CN108154188A (en) | 2018-06-12 |
CN108154188B CN108154188B (en) | 2021-11-19 |
Family
ID=62460888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810017727.9A Active CN108154188B (en) | 2018-01-08 | 2018-01-08 | FCM-based artificial text extraction method under complex background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108154188B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111027546A (en) * | 2019-12-05 | 2020-04-17 | 北京嘉楠捷思信息技术有限公司 | Character segmentation method and device and computer readable storage medium |
CN111104936A (en) * | 2019-11-19 | 2020-05-05 | 泰康保险集团股份有限公司 | Text image recognition method, device, equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049636A (en) * | 2012-09-12 | 2013-04-17 | 江苏大学 | Method and system for possibly fuzzy K-harmonic means clustering |
CN103268481A (en) * | 2013-05-29 | 2013-08-28 | 焦点科技股份有限公司 | Method for extracting text in complex background image |
KR20140000601A (en) * | 2012-06-25 | 2014-01-03 | 수원대학교산학협력단 | Method of controlling sensibility lighting system using rgbw led |
CN105069788A (en) * | 2015-08-05 | 2015-11-18 | 中北大学 | Cluster segmentation method for ancient architecture wall inscription contaminated writing brush character image |
CN106326895A (en) * | 2015-06-16 | 2017-01-11 | 富士通株式会社 | Image processing device and image processing method |
CN106339661A (en) * | 2015-07-17 | 2017-01-18 | 阿里巴巴集团控股有限公司 | Method and device for detecting text regions in images |
- 2018
- 2018-01-08 CN CN201810017727.9A patent/CN108154188B/en Active
Non-Patent Citations (1)
Title |
---|
Liu Yuhui et al., "A fuzzy C-means text clustering method based on the black hole algorithm", Journal of Northeastern University (Natural Science) * |
Also Published As
Publication number | Publication date |
---|---|
CN108154188B (en) | 2021-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109977812B (en) | Vehicle-mounted video target detection method based on deep learning | |
CN109583425B (en) | Integrated remote sensing image ship recognition method based on deep learning | |
CN107452010B (en) | Automatic matting algorithm and device | |
US10474874B2 (en) | Applying pixelwise descriptors to a target image that are generated by segmenting objects in other images | |
CN111738064B (en) | Haze concentration identification method for hazy images | |
CN108009518A (en) | Hierarchical traffic sign recognition method based on fast binary convolutional neural networks | |
CN108537239B (en) | Image salient object detection method | |
CN102902956B (en) | Ground-based visible cloud image recognition and processing method | |
CN110309781B (en) | House damage remote sensing identification method based on multi-scale adaptive fusion of spectral texture | |
CN107392968B (en) | Image saliency detection method fusing a color contrast map and a color spatial distribution map | |
CN108647602B (en) | Aerial remote sensing image scene classification method based on image complexity determination | |
CN111582111B (en) | Cell component segmentation method based on semantic segmentation | |
CN105205804A (en) | Nucleus-cytoplasm separation method and apparatus for white blood cells in blood cell images, and classification method and apparatus for white blood cells in blood cell images | |
CN104599271A (en) | Gray threshold segmentation method based on the CIE Lab color space | |
CN110264454B (en) | Cervical cancer histopathological image diagnosis method based on a multi-hidden-layer conditional random field | |
CN108305253A (en) | Pathology whole-slide diagnosis method based on multi-magnification deep learning | |
CN106991686A (en) | Level-set contour tracking method based on a superpixel optical flow field | |
CN110276764A (en) | Improved K-means underwater image background segmentation algorithm based on K-value estimation | |
CN106373096A (en) | Multi-feature-weight adaptive shadow elimination method | |
CN105335949A (en) | Video image rain removal method and system | |
CN112950780A (en) | Intelligent network map generation method and system based on remote sensing images | |
CN108154158A (en) | Building image segmentation method for augmented reality applications | |
CN112396619A (en) | Small particle segmentation method based on semantic segmentation for particles with complex internal composition | |
CN113052228A (en) | Liver cancer pathological slide classification method based on SE-Inception | |
Tyagi et al. | Performance comparison and analysis of medical image segmentation techniques | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||