CN113570633A - Method for segmenting and counting fat cell images based on deep learning model - Google Patents

Method for segmenting and counting fat cell images based on deep learning model

Info

Publication number
CN113570633A
CN113570633A (application CN202110861762.0A)
Authority
CN
China
Prior art keywords
segmentation
images
deep learning
counting
fat
Prior art date
Legal status
Pending
Application number
CN202110861762.0A
Other languages
Chinese (zh)
Inventor
沈红斌 (Shen Hongbin)
王春晖 (Wang Chunhui)
王计秋 (Wang Jiqiu)
宁光 (Ning Guang)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202110861762.0A
Publication of CN113570633A
Pending legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A method for segmenting and counting fat cell images based on a deep learning model. A fat image is input into a deep learning network to obtain the segmentation probability of each pixel in the image; a fat cell edge image is then generated from the probability map; bubbles are removed by morphological processing and the image is segmented by a watershed algorithm to generate a fat cell segmentation image; finally, the cell area distribution of the segmentation image is analyzed by connected domain analysis and the number of fat cells in the current target image is counted. The time required for manual counting of fat cells is significantly reduced.

Description

Method for segmenting and counting fat cell images based on deep learning model
Technical Field
The invention relates to a technology in the field of image processing, in particular to a method for segmenting and counting fat cell images based on a deep learning model.
Background
Image segmentation is a key operation in existing cell image processing: accurate segmentation improves the accuracy of cell counting and of cell area analysis, and therefore of the overall analysis result. However, existing cell segmentation algorithms remain inefficient when analyzing high-definition cell images, which limits the development of cell-statistics technology.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a method for segmenting and counting fat cell images based on a deep learning model, which significantly shortens the time required for manual counting of fat cells.
The invention is realized by the following technical scheme:
the invention relates to a method for segmenting and counting fat cell images based on a deep learning model, which comprises the steps of inputting the fat images into a deep learning network to obtain the segmentation probability of each pixel in the images, further generating fat cell edge images based on the probability images, sequentially removing bubbles through morphological processing and generating the fat cell segmentation images through segmentation processing by a watershed algorithm, finally analyzing the cell area distribution of the fat cell segmentation images through connected domain analysis and counting the number of fat cells on the current target images.
The deep learning network is a Unet++ network based on up-sampling and down-sampling.
The deep learning network is trained on a training set subjected to data enhancement including rotation, flipping, scaling and scale transformation, with cross entropy as the loss function; the points labeled black and white in the annotated image are multiplied by the corresponding final output probabilities to obtain the final loss value.
The cross-entropy loss function is

J(w) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y_i \log h_w(x_i) + (1 - y_i) \log\bigl(1 - h_w(x_i)\bigr) \right]

wherein: x_i is the input, y_i is the binary label in the training set, h_w(x_i) is the probability, output by the network, that the point is identified as a membrane, m is the total number of pixels in the image, and J(w) is the value of the error function.
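As a minimal NumPy sketch (not the patent's actual training code), the per-pixel binary cross entropy above can be evaluated as:

```python
import numpy as np

def cross_entropy_loss(probs, labels, eps=1e-7):
    """Mean binary cross entropy J(w) over the m pixels of one image.

    probs  -- h_w(x_i): network output probability that each pixel is membrane
    labels -- y_i: binary ground-truth mask (1 = membrane/edge, 0 = background)
    """
    p = np.clip(np.asarray(probs, dtype=float).ravel(), eps, 1.0 - eps)
    y = np.asarray(labels, dtype=float).ravel()
    m = y.size  # total number of pixels
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)) / m
```

The clipping by `eps` only guards against log(0); it is an implementation detail, not part of the patent's formula.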
The fat cell edge image is obtained by generating a grayscale image from the probability map and converting it to a binary image through thresholding (binarization).
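A minimal sketch of that binarization step (NumPy assumed; the 0.5 threshold is illustrative, the patent leaves the value to configuration):

```python
import numpy as np

def binarize_edges(prob_map, thresh=0.5):
    """Turn a [0, 1] edge-probability map into a black-and-white image:
    first scale it to an 8-bit grayscale image, then threshold so that
    pixels at or above `thresh` become white (255) and the rest black (0)."""
    gray = (np.asarray(prob_map, dtype=float) * 255).astype(np.uint8)
    return np.where(gray >= int(thresh * 255), 255, 0).astype(np.uint8)
```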
The bubble removal refers to: removing bubbles that were misidentified as cells due to image stitching, using the Gaussian filtering function provided by Matlab.
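The patent performs this filtering with Matlab's Gaussian filter; as a library-free sketch of the same idea (NumPy assumed, function names hypothetical), a separable Gaussian blur can be written as:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D normalized Gaussian kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Separable Gaussian filtering (rows, then columns). Small bright
    specks, such as the bubbles introduced by image stitching, are
    smoothed out before the probability map is binarized."""
    k = gaussian_kernel(size, sigma)
    img = np.asarray(img, dtype=float)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)
```

The kernel size of 5 matches the Gaussian operator size used in the experiment below.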
Segmentation by the watershed algorithm specifically means: obtain all ridge lines (watersheds) identified by the watershed algorithm, and add a watershed line to the original image only when it is judged to be a true cell edge.
A watershed line is judged to be a cell edge only when all of the following conditions are satisfied simultaneously:
1. the length of the current watershed line is smaller than a set threshold;
2. the ellipticity of the cell split by the watershed, defined as (major axis - minor axis) / major axis × 100%, is smaller than a set threshold;
3. the ratio of the areas of the two regions produced by the split is approximately 1.
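A hedged sketch of the three acceptance tests (the thresholds and the area-ratio tolerance are illustrative defaults; the patent leaves L, c and the ratio tolerance to configuration):

```python
def accept_watershed_ridge(ridge_len, major_axis, minor_axis, area_a, area_b,
                           len_thresh=50, ellip_thresh=10, ratio_tol=0.5):
    """Return True when a candidate watershed ridge passes all three tests:
    1. ridge shorter than the length threshold L;
    2. ellipticity of the cell being split, (major - minor) / major * 100%,
       below the threshold c;
    3. the two regions produced by the split have comparable areas
       (area ratio close to 1, within ratio_tol)."""
    ellipticity = (major_axis - minor_axis) / major_axis * 100.0
    ratio = min(area_a, area_b) / max(area_a, area_b)
    return bool(ridge_len < len_thresh
                and ellipticity < ellip_thresh
                and abs(ratio - 1.0) <= ratio_tol)
```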
The connected domain analysis specifically comprises: labeling the connected domains of the image; recording the area, position and other information of each connected domain; filtering out connected domains whose area is smaller than the threshold T; the number of remaining connected domains is the number of fat cells.
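As an illustrative sketch of the connected-domain count (pure NumPy and the standard library, 4-connectivity assumed; the patent does not fix a particular implementation):

```python
import numpy as np
from collections import deque

def count_cells(binary, area_thresh):
    """Label 4-connected foreground regions (cells = nonzero pixels) and
    count those whose area is at least area_thresh (the threshold T).
    Returns (cell count, list of kept region areas)."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    areas = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # breadth-first flood fill of one connected domain
                q, area = deque([(sy, sx)]), 0
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area >= area_thresh:  # filter domains smaller than T
                    areas.append(area)
    return len(areas), areas
```

In practice a library routine such as OpenCV's connected-components analysis would replace the explicit flood fill; the sketch shows the logic only.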
The cell area distribution is obtained by a connected domain analysis mode.
The invention also relates to a system for realizing the method, comprising: a deep network segmentation unit, a binarization processing unit, a watershed re-segmentation unit and a connected domain analysis unit, wherein: the deep network segmentation unit is connected with the binarization processing unit and transmits probability image information, the binarization processing unit is connected with the watershed re-segmentation unit and transmits binary image information, and the watershed re-segmentation unit is connected with the connected domain analysis unit and transmits segmentation image information.
Technical effects
The invention as a whole overcomes the defects of the prior art, namely insufficient segmentation precision and unclear segmentation results. It integrates fat cell segmentation with the subsequent analysis and processing: it automatically extracts fat cell edges, automatically fills unsegmented regions, and automatically re-segments under-segmented regions, while providing threshold conversion, cell-number statistics, image coloring, manual post-processing and histogram analysis, achieving high precision at the application level. Compared with the prior art, the precision of the method reaches 99.65%, the recall reaches 98.38%, and the F1-score reaches 99.01%.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is an image of adipocytes input in the example;
FIG. 3 is a probability map of the output of the deep learning model of an embodiment;
FIG. 4 is the result of the filtering process applied to FIG. 3;
FIG. 5 is the result of the binarization in FIG. 4;
FIG. 6 shows the result of a re-segmentation using the watershed method;
FIG. 7 shows the result of coloring adipocytes;
FIG. 8 is a schematic diagram of the operation of the Unet++ network.
Detailed Description
As shown in fig. 1, the present embodiment relates to a method for segmenting and counting fat cell images based on a deep learning model, which specifically includes the following steps:
step 1) inputting a fat image I shown in fig. 2, and setting initial parameters: the method comprises the following steps of an area threshold T, the size of a morphological closed operator, a watershed length threshold L and a threshold c of connected domain ellipticity.
Step 2) Gray the image.
Step 3) cell edge extraction, which specifically comprises the following steps:
3.1. Input the image into the Unet++ model shown in fig. 8; the computed output probability map is shown in fig. 3.
3.2. Gaussian-filter the probability map, as shown in fig. 4.
3.3. Binarize the probability map to obtain a black-and-white image, as shown in fig. 5.
Step 4) Image post-processing: re-segment using the watershed algorithm; select watershed lines and add them to the cell edge image to obtain the re-segmentation result, as shown in fig. 6.
Step 5) Cell counting: first perform connected-region analysis and extract the area, perimeter and position information of each connected region, filtering out regions with area smaller than T; then color each connected region randomly, as shown in fig. 7. Specifically, three integers between 0 and 255 are generated and written into the R, G and B color channels.
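The random coloring step (three integers in 0..255 per region, written to the RGB channels) can be sketched as follows (NumPy assumed, function name hypothetical):

```python
import numpy as np

def color_regions(labels, seed=0):
    """Fill each labeled connected region with a random RGB color: three
    integers in 0..255 written into the R, G and B channels. Label 0 is
    treated as background and stays black."""
    rng = np.random.default_rng(seed)
    h, w = labels.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for lab in np.unique(labels):
        if lab == 0:
            continue
        out[labels == lab] = rng.integers(0, 256, size=3, dtype=np.uint8)
    return out
```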
Finally, the segmentation accuracy of the Unet++ model reaches 0.9606 (96.06%), and the final loss function value is computed to be 0.0908.
In a concrete practical experiment, the above apparatus/method was run with T = 2500, c = 10, L = 50 and a Gaussian operator of size 5, yielding the following experimental data: precision 99.65%, recall 98.38%, and F1-score 99.01%.
For a cell image containing 107 cells in total, the number of accurately segmented cells rose from 59 to 94; for a cell image containing 169 cells, it rose from 112 to 140.
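As a sanity check, the reported precision, recall and F1-score are mutually consistent, since F1 is the harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# With the reported precision and recall:
# f1_score(0.9965, 0.9838) ≈ 0.9901, matching the reported F1-score of 99.01%.
```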
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (10)

1. A method for segmenting and counting fat cell images based on a deep learning model, characterized in that: a fat image is input into a deep learning network to obtain the segmentation probability of each pixel in the image; a fat cell edge image is generated from the probability map; bubbles are then removed by morphological processing and the image is segmented by a watershed algorithm to generate a fat cell segmentation image; finally, the cell area distribution of the segmentation image is analyzed and the number of fat cells in the current target image is counted through connected domain analysis.
2. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein the deep learning network is a Unet++ network based on up-sampling and down-sampling.
3. The method as claimed in claim 1, wherein the deep learning network is trained on a training set subjected to data enhancement including rotation, flipping, scaling and scale transformation, with cross entropy as the loss function; the points labeled black and white in the annotated image are multiplied by the corresponding final output probabilities to obtain the final loss value.
4. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 3, wherein the cross-entropy loss function is

J(w) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y_i \log h_w(x_i) + (1 - y_i) \log\bigl(1 - h_w(x_i)\bigr) \right]

wherein: x_i is the input, y_i is the binary label in the training set, h_w(x_i) is the probability, output by the network, that the point is identified as a membrane, m is the total number of pixels in the image, and J(w) is the value of the error function.
5. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein the fat cell edge map is obtained by generating a grayscale image from the probability map and converting it to a binary image through thresholding.
6. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein removing bubbles means: removing bubbles that were misidentified as cells due to image stitching, using the Gaussian filtering function provided by Matlab.
7. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein segmentation by the watershed algorithm specifically comprises: obtaining all ridge lines (watersheds) identified by the watershed algorithm, and adding a watershed line to the original image only when it is judged to be a true cell edge.
8. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 7, wherein a watershed line is judged to be a cell edge only when all of the following conditions are satisfied simultaneously:
1. the length of the current watershed line is smaller than a set threshold;
2. the ellipticity of the cell split by the watershed, defined as (major axis - minor axis) / major axis × 100%, is smaller than a set threshold;
3. the ratio of the areas of the two regions produced by the split is approximately 1.
9. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein the connected domain analysis specifically comprises: labeling the connected domains of the image; recording the area, position and other information of each connected domain; filtering out connected domains whose area is smaller than a threshold T; the number of remaining connected domains is the number of fat cells;
the cell area distribution is obtained by means of connected domain analysis.
10. A system for segmenting and counting fat cell images based on a deep learning model, realizing the method of any one of claims 1 to 9, characterized by comprising: a deep network segmentation unit, a binarization processing unit, a watershed re-segmentation unit and a connected domain analysis unit, wherein: the deep network segmentation unit is connected with the binarization processing unit and transmits probability image information, the binarization processing unit is connected with the watershed re-segmentation unit and transmits binary image information, and the watershed re-segmentation unit is connected with the connected domain analysis unit and transmits segmentation image information.
Application CN202110861762.0A, priority and filing date 2021-07-29: Method for segmenting and counting fat cell images based on deep learning model (pending; published as CN113570633A).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110861762.0A CN113570633A (en) 2021-07-29 2021-07-29 Method for segmenting and counting fat cell images based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110861762.0A CN113570633A (en) 2021-07-29 2021-07-29 Method for segmenting and counting fat cell images based on deep learning model

Publications (1)

Publication Number Publication Date
CN113570633A 2021-10-29

Family

ID=78168919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110861762.0A Pending CN113570633A (en) 2021-07-29 2021-07-29 Method for segmenting and counting fat cell images based on deep learning model

Country Status (1)

Country Link
CN (1) CN113570633A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316077A (en) * 2017-06-21 2017-11-03 上海交通大学 A kind of fat cell automatic counting method based on image segmentation and rim detection
US20200074271A1 (en) * 2018-08-29 2020-03-05 Arizona Board Of Regents On Behalf Of Arizona State University Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging
CN112070772A (en) * 2020-08-27 2020-12-11 闽江学院 Blood leukocyte image segmentation method based on UNet + + and ResNet
CN112964712A (en) * 2021-02-05 2021-06-15 中南大学 Method for rapidly detecting state of asphalt pavement

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114943723A (en) * 2022-06-08 2022-08-26 北京大学口腔医学院 Method for segmenting and counting irregular cells and related equipment
CN114943723B (en) * 2022-06-08 2024-05-28 北京大学口腔医学院 Method for dividing and counting irregular cells and related equipment
CN115715994A (en) * 2022-11-18 2023-02-28 深圳大学 Image excitation ultramicro injection method, system and equipment
CN115715994B (en) * 2022-11-18 2023-11-21 深圳大学 Image excitation ultramicro injection method, system and equipment

Similar Documents

Publication Publication Date Title
CN107944452B (en) Character recognition method for circular seal
Yousif et al. Toward an optimized neutrosophic K-means with genetic algorithm for automatic vehicle license plate recognition (ONKM-AVLPR)
CN109886974B (en) Seal removing method
CN111145209B (en) Medical image segmentation method, device, equipment and storage medium
CN107316077B (en) Automatic adipose cell counting method based on image segmentation and edge detection
CN110619642B (en) Method for separating seal and background characters in bill image
CN112017191A (en) Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
CN113570633A (en) Method for segmenting and counting fat cell images based on deep learning model
CN112614062B (en) Colony counting method, colony counting device and computer storage medium
CN109934828B (en) Double-chromosome image cutting method based on Compact SegUnet self-learning model
CN106384112A (en) Rapid image text detection method based on multi-channel and multi-dimensional cascade filter
JP2015065654A (en) Color document image segmentation using automatic recovery and binarization
CN110838100A (en) Colonoscope pathological section screening and segmenting system based on sliding window
CN107085726A (en) Oracle bone rubbing individual character localization method based on multi-method denoising and connected component analysis
Shaikh et al. A novel approach for automatic number plate recognition
CN110110667B (en) Processing method and system of diatom image and related components
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN110991439A (en) Method for extracting handwritten characters based on pixel-level multi-feature joint classification
CN114331869B (en) Dam face crack semantic segmentation method
CN112270317A (en) Traditional digital water meter reading identification method based on deep learning and frame difference method
CN111126162A (en) Method, device and storage medium for identifying inflammatory cells in image
Chakraborty et al. An improved template matching algorithm for car license plate recognition
CN107145888A (en) Video caption real time translating method
CN111104944A (en) License plate character detection and segmentation method based on R-FCN
CN112767321B (en) Random forest based tubercle bacillus fluorescence detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination