CN114972254A - Cervical cell image segmentation method based on convolutional neural network - Google Patents

Cervical cell image segmentation method based on convolutional neural network

Info

Publication number
CN114972254A
CN114972254A
Authority
CN
China
Prior art keywords
network
feature map
cervical cell
module
training
Prior art date
Legal status
Pending
Application number
CN202210581902.3A
Other languages
Chinese (zh)
Inventor
赵艳丽
王军
吕杰
王培培
李小敏
王彦力
刘凯
周红丽
Current Assignee
Ningxia Institute Of Science And Technology
Original Assignee
Ningxia Institute Of Science And Technology
Priority date
Filing date
Publication date
Application filed by Ningxia Institute Of Science And Technology filed Critical Ningxia Institute Of Science And Technology
Priority to CN202210581902.3A priority Critical patent/CN114972254A/en
Publication of CN114972254A publication Critical patent/CN114972254A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012 — Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/045 — Combinations of networks (G06N 3/04 Neural network architecture)
    • G06N 3/08 — Learning methods (G06N 3/02 Neural networks)
    • G06T 7/11 — Region-based segmentation (G06T 7/10 Segmentation; edge detection)
    • G06T 2207/20021 — Dividing image into blocks, subimages or windows
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30024 — Cell structures in vitro; tissue sections in vitro (G06T 2207/30004 Biomedical image processing)


Abstract

A cervical cell image segmentation method based on a convolutional neural network comprises the following steps: obtain a training sample set from a cervical cell data set, uniformly resize the samples to a fixed size, split them into a training set and a test set in a fixed ratio, and expand the training set with data augmentation; construct a lightweight convolution module that uses depthwise separable convolution layers for feature extraction to reduce the number of network parameters; construct a multi-scale feature extraction module that uses dilated group convolution layers and an SE channel attention module to attend to feature information and improve segmentation accuracy; construct a cervical cell segmentation network model by establishing a base feature-extraction network comprising a network encoder and a network decoder; train the cervical cell segmentation network model on the training set, then select and save the best-performing checkpoints; finally, test the saved trained model on the test set to verify the model's actual segmentation performance.

Description

Cervical cell image segmentation method based on convolutional neural network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a cervical cell image segmentation method based on a convolutional neural network.
Background
In recent years the incidence of cervical cancer has risen steadily, making it one of the malignant tumors that most seriously threaten women's health. Regular cervical cancer screening can therefore detect cervical lesions as early as possible, so that timely and effective treatment improves patient survival.
Cervical cytology is an important means of screening for precancerous cervical lesions and cervical cancer. It is particularly suited to large-scale screening of at-risk populations and can reduce both the incidence and the mortality of cervical cancer. In current cytological examination, the morphology of cells and nuclei is crucial to disease classification, so cell segmentation is the first step in diagnosing cervical disease, and its accuracy determines the quality of the entire subsequent diagnostic process. A single cervical cell image typically contains hundreds of cells, whose contours a pathologist labels from experience. Differences in cell shape, size and position, together with uneven staining, overlapping cells and impurities, all interfere with segmentation, and the sheer number of cells and the complex background make manual segmentation time-consuming and labor-intensive. Meanwhile, as the number of diagnosed patients grows, physicians' diagnostic efficiency gradually declines. Using a computer-aided diagnosis system for cervical cancer screening therefore helps physicians improve efficiency and reduce workload, and is of great significance for the early diagnosis and treatment of cervical cancer.
Traditional cervical cell image segmentation algorithms segment cells through image preprocessing, feature extraction, and marking of cell and nucleus regions. This approach has the following disadvantages. First, cell and/or nucleus regions are marked manually, so the segmentation workload is large. Second, physicians analyze images using their own medical knowledge and clinical experience; there is no standardized, quantitative description of cervical cell appearance, and physicians of different levels and experience may reach different diagnoses for the same image, making the method highly subjective. Third, traditional algorithms based on hand-crafted features are limited in the morphological, textural and local features they can capture, their pipelines are cumbersome, and their segmentation accuracy still falls short of clinical requirements.
The development of deep learning has greatly advanced research on computer-aided diagnosis: features no longer need to be extracted manually, and both speed and accuracy have improved markedly. The convolutional neural network is one of the representative architectures in current deep learning. Compared with traditional image processing methods, it automatically learns sample features, fully exploits high-level semantic information in the image, and reduces hand-crafted feature design, giving the model stronger robustness and fault tolerance. In cervical cancer auxiliary diagnosis, replacing traditional methods with deep learning enables automatic, intelligent screening and has good development prospects.
Existing deep-learning-based auxiliary diagnosis algorithms for cervical cancer have the following main shortcomings. (1) Cervical cells have so far received little attention in computer-aided medical diagnosis, yet cervical cell images exhibit overlapping cells, blurred boundaries, low contrast, and diverse shapes. (2) Current medical image segmentation networks are not designed around the different scales of cervical cells and nuclei; researchers often simply fine-tune a generic network, so the segmentation of cells and nuclei is poor and the resulting diagnostic support is unsatisfactory. (3) Current multi-scale segmentation networks tend to be complex in design, with large, redundant structures and low parameter utilization; they cannot extract feature information quickly and accurately, which hinders deployment in practical applications.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a cervical cell image segmentation method based on a convolutional neural network that ensures the accuracy of cell and nucleus segmentation while improving segmentation speed, thereby reducing the image-reading burden on physicians and improving their working efficiency.
The technical scheme for solving the above technical problem is as follows: a cervical cell image segmentation method based on a convolutional neural network comprises the following steps:
S1, obtaining a training sample set from the Herlev cervical cell data set, uniformly resizing the samples to a fixed size, splitting them into a training set and a test set in a fixed ratio, and expanding the training set with rotation, flipping, cropping and elastic-deformation data augmentation;
S2, constructing a lightweight convolution module that uses depthwise separable convolution layers for feature extraction, thereby reducing the number of network parameters;
S3, constructing a multi-scale feature extraction module that uses dilated group convolution layers and an SE channel attention module to attend to feature information, thereby improving segmentation accuracy;
S4, constructing a cervical cell segmentation network model by establishing a base feature-extraction network comprising a network encoder and a network decoder, and applying a binary cross-entropy loss function to improve the model's predictions;
S5, training the cervical cell segmentation network model on the training set, then selecting and saving the best-performing checkpoints;
S6, testing the trained model saved in step S5 on the test set to verify the model's actual segmentation performance.
As a preferred technical solution, the network encoder in step S4 encodes as follows:
A1. extract shallow features of the original input image with a 3 × 3 ordinary convolution layer and a 3 × 3 group convolution layer, and output a shallow feature map;
A2. pass the shallow feature map from step A1 through a 2 × 2 max-pooling downsampling operation and a lightweight convolution module to obtain a deeper lightweight feature map;
A3. repeat step A2 three times, then extract multi-scale features with the multi-scale feature extraction module to obtain a multi-scale feature map.
The network decoder in step S4 decodes as follows:
B1. take the multi-scale feature map output by the network encoder as input and pass it through a dropout layer with drop rate 0.5 followed by a 2 × 2 group deconvolution layer to obtain the first upsampled feature map;
B2. concatenate the first upsampled feature map from step B1, via a skip connection, with the encoder's lightweight feature map of the same resolution, then pass the result through a lightweight convolution module and a 2 × 2 group deconvolution layer to obtain the second upsampled feature map;
B3. concatenate the second upsampled feature map from step B2, via a skip connection, with the encoder's lightweight feature map of the same resolution, then pass the result through a lightweight convolution module and a 2 × 2 group deconvolution layer to obtain the third upsampled feature map;
B4. feed the third upsampled feature map into a lightweight convolution module to obtain a lightweight feature map with the same resolution as the original input image, and finally obtain the binary prediction of cervical cells and nuclei through a 1 × 1 convolution layer with 2 output channels followed by a Softmax activation.
As a preferred technical solution, the lightweight convolution module in step S2 consists of a 1 × 1 ordinary convolution layer and a 3 × 3 depthwise separable convolution layer, which extract features from the input in sequence; the extracted features are then spliced through a residual structure to obtain the lightweight feature map.
As a preferred technical solution, the multi-scale feature extraction module in step S3 consists of a first 3 × 3 dilated group convolution layer, a first Ghost module, a second 3 × 3 dilated group convolution layer, an SE channel attention module, and a second Ghost module connected in sequence; the module extracts features from the input image, then splices the output of the first 3 × 3 dilated group convolution layer with the output of the second Ghost module through a residual structure to obtain the multi-scale feature map.
As a preferred technical solution, in step S1 the images in the training sample set are uniformly resized to 128 × 128 pixels by an affine transformation, and the ratio of the training set to the test set is 8:2.
The invention has the following beneficial effects:
(1) The invention adopts a deep learning algorithm that extracts semantic features of the image with a deep convolutional neural network, realizing automatic segmentation of cervical cell images.
(2) A multi-scale feature attention module is designed around the different scales of cervical cells and nuclei, improving the segmentation accuracy of cells and nuclei in cervical cell images.
(3) In the cervical cell segmentation network model, integrating the lightweight convolution module into the network encoder greatly improves the segmentation efficiency for cervical cells and nuclei while reducing computational cost, and the skip connections in the network decoder mitigate the vanishing-gradient problem of error back-propagation, improving the feature extraction capability of the network model.
Drawings
Fig. 1 is a flowchart of an overall cervical cell image segmentation process.
Fig. 2 is a schematic structural diagram of a lightweight convolution module.
Fig. 3 is a schematic structural diagram of a multi-scale feature extraction module.
Fig. 4 is a schematic diagram of the overall structure of the cervical cell segmentation network model.
Fig. 5 is a diagram showing the result of cervical cell segmentation.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and examples, but the present invention is not limited to the embodiments described below.
Example 1
As shown in fig. 1, the cervical cell image segmentation method based on a convolutional neural network of this embodiment comprises the following steps:
s1: establishing a training sample set
S11. acquire the Herlev cervical cell data set from the open-source website provided by the Management and Decision Engineering Laboratory (MDE-Lab) of the University of the Aegean, Greece. Herlev contains 917 color microscopic images of single cervical cells and serves as the training sample set; the images are uniformly resized to 128 × 128 pixels by an affine transformation so that the targets in the samples are not distorted;
S12. split the training sample set into a training set and a test set at a ratio of 8:2, giving 825 training images and 92 test images;
S13. augment the training set with rotation, flipping, cropping, elastic deformation, and random saturation and contrast adjustment to increase the number of samples, strengthening the model's generalization ability and preventing overfitting during training.
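As a concrete illustration, the resizing and augmentation of steps S11–S13 can be sketched in NumPy. The pad-to-square step and nearest-neighbour resize stand in for the patent's unspecified affine transform, and elastic deformation and colour jitter are omitted; all function names here are illustrative, not from the patent:

```python
import numpy as np

def pad_to_square(img):
    """Zero-pad to a square so resizing does not distort the cell
    (a stand-in for the patent's unspecified affine transform)."""
    h, w = img.shape[:2]
    s = max(h, w)
    out = np.zeros((s, s) + img.shape[2:], dtype=img.dtype)
    top, left = (s - h) // 2, (s - w) // 2
    out[top:top + h, left:left + w] = img
    return out

def resize_nearest(img, size):
    """Nearest-neighbour resize to size x size (pure NumPy)."""
    h, w = img.shape[:2]
    rows = (np.arange(size) * h) // size
    cols = (np.arange(size) * w) // size
    return img[rows][:, cols]

def augment(img, rng):
    """Random rotation / flips, as in step S13 (elastic deformation
    and saturation/contrast jitter omitted for brevity)."""
    img = np.rot90(img, k=rng.integers(4))
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    return img

rng = np.random.default_rng(0)
sample = rng.random((140, 90, 3))                 # dummy non-square cell image
fixed = resize_nearest(pad_to_square(sample), 128)
aug = augment(fixed, rng)
print(fixed.shape, aug.shape)                     # (128, 128, 3) (128, 128, 3)
```

In practice the same transform must be applied jointly to the image and its segmentation label so the mask stays aligned.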
S2. Constructing the lightweight convolution module
The lightweight convolution module consists of a 1 × 1 ordinary convolution layer and a 3 × 3 depthwise separable convolution layer, which extract features from the input in sequence; the extracted features are then spliced through a residual structure to obtain the lightweight feature map, as shown in fig. 2.
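One plausible PyTorch sketch of this lightweight module follows. The channel split, the batch-norm/ReLU placement, and reading the "splice" as a channel-wise concatenation are assumptions; the patent only fixes the 1 × 1 ordinary layer and the 3 × 3 depthwise separable layer:

```python
import torch
import torch.nn as nn

class LightweightConv(nn.Module):
    """Sketch of the lightweight convolution module: a 1x1 pointwise layer
    followed by a 3x3 depthwise-separable layer, with the two outputs
    spliced by channel-wise concatenation (one reading of the patent's
    "residual structure")."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = out_ch // 2
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        # depthwise 3x3 (groups == channels) + 1x1 projection = separable conv
        self.separable = nn.Sequential(
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, groups=mid, bias=False),
            nn.Conv2d(mid, out_ch - mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch - mid), nn.ReLU(inplace=True))

    def forward(self, x):
        y1 = self.pointwise(x)
        y2 = self.separable(y1)
        return torch.cat([y1, y2], dim=1)   # channel-wise splice

x = torch.randn(1, 32, 64, 64)
print(LightweightConv(32, 64)(x).shape)     # torch.Size([1, 64, 64, 64])
```

The depthwise layer costs roughly 1/out_ch of a full 3 × 3 convolution's multiply-adds, which is where the parameter reduction claimed in step S2 comes from.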
S3. Constructing the multi-scale feature extraction module
The multi-scale feature extraction module consists of a first 3 × 3 dilated group convolution layer, a first Ghost module, a second 3 × 3 dilated group convolution layer, an SE channel attention module, and a second Ghost module connected in sequence. It performs multi-scale feature extraction on the input image, then splices the output of the first 3 × 3 dilated group convolution layer with the output of the second Ghost module through a residual structure to obtain the multi-scale feature map, as shown in fig. 3.
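A hedged PyTorch sketch of that module chain is below. The dilation rates, group counts, SE reduction ratio, and the minimal Ghost module are all assumptions the patent leaves open:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (reduction ratio assumed)."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pool
        return x * w[:, :, None, None]       # excite: channel re-weighting

class GhostModule(nn.Module):
    """Minimal Ghost module: half the channels from a normal 1x1 conv,
    the other half from a cheap depthwise 3x3 on that output."""
    def __init__(self, ch):
        super().__init__()
        half = ch // 2
        self.primary = nn.Conv2d(ch, half, 1, bias=False)
        self.cheap = nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class MultiScaleBlock(nn.Module):
    """Sketch of the multi-scale module: dilated group conv -> Ghost ->
    dilated group conv -> SE -> Ghost, splicing the first dilated-conv
    output with the final output (dilations 2 and 4 are assumptions)."""
    def __init__(self, ch, groups=4):
        super().__init__()
        self.dil1 = nn.Conv2d(ch, ch, 3, padding=2, dilation=2, groups=groups, bias=False)
        self.ghost1 = GhostModule(ch)
        self.dil2 = nn.Conv2d(ch, ch, 3, padding=4, dilation=4, groups=groups, bias=False)
        self.se = SEBlock(ch)
        self.ghost2 = GhostModule(ch)

    def forward(self, x):
        y1 = self.dil1(x)
        y = self.ghost2(self.se(self.dil2(self.ghost1(y1))))
        return torch.cat([y1, y], dim=1)     # residual-style splice

print(MultiScaleBlock(64)(torch.randn(1, 64, 16, 16)).shape)  # [1, 128, 16, 16]
```

The two different dilation rates give the block two receptive-field sizes, which is how it captures cell-scale and nucleus-scale context at once.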
S4. Constructing the cervical cell segmentation network model
Using the lightweight convolution module constructed in step S2 and the multi-scale feature extraction module constructed in step S3, establish a base network for feature extraction comprising a network encoder that extracts features and a network decoder that recovers them; based on the ground-truth labels and the predictions on the training set, apply a binary cross-entropy loss function to improve the model's predictions, as shown in fig. 4.
The network encoder encodes as follows:
A1. extract shallow features of the original input image with a 3 × 3 ordinary convolution layer and a 3 × 3 group convolution layer, and output a shallow feature map;
A2. pass the shallow feature map from step A1 through a 2 × 2 max-pooling downsampling operation and a lightweight convolution module to obtain a deeper lightweight feature map;
A3. repeat step A2 three times, then extract multi-scale features with the multi-scale feature extraction module to obtain a multi-scale feature map.
The network decoder decodes as follows:
B1. take the multi-scale feature map output by the network encoder as input and pass it through a dropout layer with drop rate 0.5 followed by a 2 × 2 group deconvolution layer to obtain the first upsampled feature map;
B2. concatenate the first upsampled feature map from step B1, via a skip connection, with the encoder's lightweight feature map of the same resolution, then pass the result through a lightweight convolution module and a 2 × 2 group deconvolution layer to obtain the second upsampled feature map;
B3. concatenate the second upsampled feature map from step B2, via a skip connection, with the encoder's lightweight feature map of the same resolution, then pass the result through a lightweight convolution module and a 2 × 2 group deconvolution layer to obtain the third upsampled feature map;
B4. feed the third upsampled feature map into a lightweight convolution module to obtain a lightweight feature map with the same resolution as the original input image, and finally obtain the binary prediction of cervical cells and nuclei through a 1 × 1 ordinary convolution layer with 2 output channels followed by a Softmax activation.
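The encoder-decoder wiring of steps A1–A3 and B1–B4 can be sketched as follows. Plain conv blocks stand in for the patent's lightweight and multi-scale modules, and all channel widths are assumptions; only the 2 × 2 max-pooling, the 0.5 dropout, the 2 × 2 grouped deconvolutions, the skip concatenations, and the 1 × 1 two-channel Softmax head come from the text:

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    """Plain conv stand-in for the patent's lightweight/multi-scale modules."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class CervicalSegNet(nn.Module):
    """Skeleton of the encoder-decoder of steps A1-A3 / B1-B4
    (three down/up stages assumed from 'repeat step A2 three times')."""
    def __init__(self):
        super().__init__()
        ch = [32, 64, 128, 256]
        self.stem = block(3, ch[0])                        # A1: shallow features
        self.down = nn.MaxPool2d(2)                        # A2: 2x2 max pooling
        self.enc = nn.ModuleList([block(ch[i], ch[i + 1]) for i in range(3)])
        self.drop = nn.Dropout2d(0.5)                      # B1: dropout rate 0.5
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(ch[3 - i], ch[2 - i], 2, stride=2, groups=2)
             for i in range(3)])                           # 2x2 group deconvolutions
        self.dec = nn.ModuleList([block(2 * ch[2 - i], ch[2 - i]) for i in range(3)])
        self.head = nn.Conv2d(ch[0], 2, 1)                 # B4: 1x1 conv, 2 channels

    def forward(self, x):
        skips, f = [], self.stem(x)
        for enc in self.enc:                               # A2-A3: encoder path
            skips.append(f)
            f = enc(self.down(f))
        f = self.drop(f)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            f = dec(torch.cat([up(f), skip], dim=1))       # B1-B3: skip connections
        return torch.softmax(self.head(f), dim=1)          # B4: per-pixel softmax

out = CervicalSegNet()(torch.randn(1, 3, 128, 128))
print(out.shape)                                           # [1, 2, 128, 128]
```

Swapping `block` for the lightweight convolution module in the encoder/decoder, and inserting the multi-scale module at the bottleneck, recovers the architecture the patent describes.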
the two-class cross-loss function is:
Figure BDA0003662360970000081
wherein, K is the number of categories, and K is 2; y is i Is a label of class i, p i Is the probability of the ith class prediction;
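Numerically, the loss above is the standard cross-entropy over the two classes, averaged over pixels; a minimal NumPy version (the example labels and probabilities are illustrative):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """L = -sum_i y_i * log(p_i) over K = 2 classes, averaged over samples.
    `y` is one-hot, `p` the softmax probabilities (last axis = class)."""
    return -np.sum(y * np.log(p + eps), axis=-1).mean()

y = np.array([[1.0, 0.0], [0.0, 1.0]])       # two pixels, one-hot labels
p = np.array([[0.9, 0.1], [0.2, 0.8]])       # predicted class probabilities
loss = binary_cross_entropy(y, p)
print(round(loss, 4))                         # mean of -log(0.9), -log(0.8): 0.1643
```

The `eps` guard keeps the logarithm finite when a predicted probability underflows to zero.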
S5. Train the cervical cell segmentation network model on the training set, then select and save the best-performing model:
S51. set the hyper-parameters of the training process, including the network optimizer, learning rate, decay factor, and number of training epochs;
S52. feed the training set into the cervical cell segmentation network model, randomly withholding a fixed proportion of it as a validation set, and begin training;
S53. record each epoch's performance on the validation set during training, and select and save the best model.
S6. Test the trained model saved in step S5 on the test set, using the Dice coefficient as the performance metric of the cervical cell segmentation network model, and verify its actual segmentation performance.
Experiment of
To verify the beneficial effects of the invention, the inventors applied the trained convolutional-neural-network cervical cell segmentation method to predict a number of cervical cell images. The segmentation result for one sample image is shown in fig. 5: the left part is the original image, the middle the label, and the right the visualized model prediction.
The performance of the method was analyzed with the Dice coefficient. 46 samples were selected for the experiment and multiple tests were run on the test set. After 65 training epochs, the Dice score reached 83.7% for cervical cell segmentation and 94.4% for nucleus segmentation. The experimental results show that the proposed convolutional-neural-network method segments complex cervical cell images well.
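The Dice coefficient used for evaluation can be computed for binary masks as follows (a minimal NumPy sketch; the example masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; `eps` avoids
    division by zero when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1], [0, 0]])                   # predicted mask
b = np.array([[1, 0], [0, 0]])                   # ground-truth mask
print(round(float(dice_coefficient(a, b)), 3))   # 2*1 / (2+1) = 0.667
```

In a cell/nucleus setting the metric is computed per class (cytoplasm mask and nucleus mask separately) and averaged over the test images.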

Claims (5)

1. A cervical cell image segmentation method based on a convolutional neural network, characterized by comprising the following steps:
S1, obtaining a training sample set from the Herlev cervical cell data set, uniformly resizing the samples to a fixed size, splitting them into a training set and a test set in a fixed ratio, and expanding the training set with rotation, flipping, cropping and elastic-deformation data augmentation;
S2, constructing a lightweight convolution module that uses depthwise separable convolution layers for feature extraction, thereby reducing the number of network parameters;
S3, constructing a multi-scale feature extraction module that uses dilated group convolution layers and an SE channel attention module to attend to feature information, thereby improving segmentation accuracy;
S4, constructing a cervical cell segmentation network model by establishing a base feature-extraction network comprising a network encoder and a network decoder, and applying a binary cross-entropy loss function to improve the model's predictions;
S5, training the cervical cell segmentation network model on the training set, then selecting and saving the best-performing checkpoints;
S6, testing the trained model saved in step S5 on the test set to verify the model's actual segmentation performance.
2. The cervical cell image segmentation method based on a convolutional neural network according to claim 1, characterized in that the network encoder in step S4 encodes as follows:
A1. extract shallow features of the original input image with a 3 × 3 ordinary convolution layer and a 3 × 3 group convolution layer, and output a shallow feature map;
A2. pass the shallow feature map from step A1 through a 2 × 2 max-pooling downsampling operation and a lightweight convolution module to obtain a deeper lightweight feature map;
A3. repeat step A2 three times, then extract multi-scale features with the multi-scale feature extraction module to obtain a multi-scale feature map;
and that the network decoder in step S4 decodes as follows:
B1. take the multi-scale feature map output by the network encoder as input and pass it through a dropout layer with drop rate 0.5 followed by a 2 × 2 group deconvolution layer to obtain the first upsampled feature map;
B2. concatenate the first upsampled feature map from step B1, via a skip connection, with the encoder's lightweight feature map of the same resolution, then pass the result through a lightweight convolution module and a 2 × 2 group deconvolution layer to obtain the second upsampled feature map;
B3. concatenate the second upsampled feature map from step B2, via a skip connection, with the encoder's lightweight feature map of the same resolution, then pass the result through a lightweight convolution module and a 2 × 2 group deconvolution layer to obtain the third upsampled feature map;
B4. feed the third upsampled feature map into a lightweight convolution module to obtain a lightweight feature map with the same resolution as the original input image, and finally obtain the binary prediction of cervical cells and nuclei through a 1 × 1 convolution layer with 2 output channels followed by a Softmax activation.
3. The cervical cell image segmentation method based on a convolutional neural network according to claim 1 or 2, characterized in that: the lightweight convolution module in step S2 consists of a 1 × 1 ordinary convolution layer and a 3 × 3 depthwise separable convolution layer, which extract features from the input in sequence; the extracted features are then spliced through a residual structure to obtain the lightweight feature map.
4. The cervical cell image segmentation method based on a convolutional neural network according to claim 1 or 2, characterized in that: the multi-scale feature extraction module in step S3 consists of a first 3 × 3 dilated group convolution layer, a first Ghost module, a second 3 × 3 dilated group convolution layer, an SE channel attention module, and a second Ghost module connected in sequence; the module extracts features from the input image, then splices the output of the first 3 × 3 dilated group convolution layer with the output of the second Ghost module through a residual structure to obtain the multi-scale feature map.
5. The cervical cell image segmentation method based on a convolutional neural network according to claim 1, characterized in that: in step S1 the images in the training sample set are uniformly resized to 128 × 128 pixels by an affine transformation, and the ratio of the training set to the test set is 8:2.
CN202210581902.3A (filed 2022-05-25, priority 2022-05-25) — Cervical cell image segmentation method based on convolutional neural network — Pending — published as CN114972254A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210581902.3A CN114972254A (en) 2022-05-25 2022-05-25 Cervical cell image segmentation method based on convolutional neural network


Publications (1)

Publication Number Publication Date
CN114972254A (published 2022-08-30)

Family

ID=82955459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210581902.3A Pending CN114972254A (en) 2022-05-25 2022-05-25 Cervical cell image segmentation method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN114972254A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115444A (en) * 2023-09-08 2023-11-24 北京卓视智通科技有限责任公司 Multitasking image segmentation method, system, computer equipment and storage medium
CN117115444B (en) * 2023-09-08 2024-04-16 北京卓视智通科技有限责任公司 Multitasking image segmentation method, system, computer equipment and storage medium
CN117218443A (en) * 2023-09-22 2023-12-12 东北大学 Pap smear cervical cell image classification method and system
CN117218443B (en) * 2023-09-22 2024-03-05 东北大学 Pap smear cervical cell image classification method and system
CN117351202A (en) * 2023-09-28 2024-01-05 河北翔拓航空科技有限公司 Image segmentation method for lightweight multi-scale UNet network
CN117351202B (en) * 2023-09-28 2024-04-30 河北翔拓航空科技有限公司 Image segmentation method for lightweight multi-scale UNet network
CN117710285A (en) * 2023-10-20 2024-03-15 重庆理工大学 Cervical lesion cell mass detection method and system based on self-adaptive feature extraction
CN117710285B (en) * 2023-10-20 2024-07-16 重庆理工大学 Cervical lesion cell mass detection method and system based on self-adaptive feature extraction
CN118053031A (en) * 2024-02-27 2024-05-17 珠海市人民医院 Peripheral blood image recognition method, system and terminal based on deep learning

Similar Documents

Publication Publication Date Title
CN114972254A (en) Cervical cell image segmentation method based on convolutional neural network
CN112101451B (en) Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block
CN110245657B (en) Pathological image similarity detection method and detection device
CN109410219A (en) A kind of image partition method, device and computer readable storage medium based on pyramid fusion study
CN110472676A (en) Stomach morning cancerous tissue image classification system based on deep neural network
CN112396621B (en) High-resolution microscopic endoscope image nucleus segmentation method based on deep learning
CN110633758A (en) Method for detecting and locating cancer region aiming at small sample or sample unbalance
CN114038037B (en) Expression label correction and identification method based on separable residual error attention network
CN113610859B (en) Automatic thyroid nodule segmentation method based on ultrasonic image
CN110675411A (en) Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN110189293A (en) Cell image processing method, device, storage medium and computer equipment
CN112508953A (en) Meningioma rapid segmentation qualitative method based on deep neural network
CN113269799A (en) Cervical cell segmentation method based on deep learning
Xiang et al. A novel weight pruning strategy for light weight neural networks with application to the diagnosis of skin disease
CN112233085A (en) Cervical cell image segmentation method based on pixel prediction enhancement
CN113902669A (en) Method and system for reading urine exfoliative cell fluid-based smear
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN116310329A (en) Skin lesion image segmentation method based on lightweight multi-scale UNet
CN115471701A (en) Lung adenocarcinoma histology subtype classification method based on deep learning and transfer learning
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
CN113222944B (en) Cell nucleus segmentation method and cancer auxiliary analysis system and device based on pathological image
CN109919216B (en) Counterlearning method for computer-aided diagnosis of prostate cancer
CN113538422B (en) Pathological image automatic classification method based on dyeing intensity matrix
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination