CN116665210A - Cell classification method and device based on multichannel information fusion - Google Patents

Cell classification method and device based on multichannel information fusion

Info

Publication number
CN116665210A
Authority
CN
China
Prior art keywords
cell
image
channel
fusion
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310935341.7A
Other languages
Chinese (zh)
Other versions
CN116665210B (en)
Inventor
吕行
王华嘉
邝英兰
叶莘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Original Assignee
Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Hengqin Shengao Yunzhi Technology Co ltd filed Critical Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Priority to CN202310935341.7A priority Critical patent/CN116665210B/en
Publication of CN116665210A publication Critical patent/CN116665210A/en
Application granted granted Critical
Publication of CN116665210B publication Critical patent/CN116665210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides a cell classification method and device based on multi-channel information fusion. A cell instance segmentation model performs instance segmentation on a cell nucleus image to obtain the cell region of each cell in the image. Based on these regions, the cell fluorescence image of the same cell in each channel is cut out from the fluorescent signal point image of that channel. The cell fluorescence images of any cell in all channels are then input into a classification model, which performs image feature extraction on the fluorescence image of each channel, extracting and fusing the signal point features under each channel to obtain the fused image features of the cell, and derives the cell classification result from those fused features. In this way the signal point features of the cell fluorescence images can be better extracted, learned, and effectively fused, so that a more reasonable judgment is made on each cell, improving both the efficiency and the accuracy of cell classification.

Description

Cell classification method and device based on multichannel information fusion
Technical Field
The invention relates to the technical field of cell classification, in particular to a cell classification method and device based on multichannel information fusion.
Background
Fluorescence in situ hybridization (Fluorescent In Situ Hybridization, FISH) is a cytogenetic technique that can be used to detect and localize nucleic acids. Fluorescently labeled nucleic acid probes hybridize only with nucleic acids of high sequence similarity, and can be used to locate genes on chromosomes or, in molecular ecology, to label ribosomal RNA in bacteria or archaea of different taxa. FISH microscopic imaging based on multiple fluorescent markers can detect diseases specifically and with high sensitivity: by examining the fluorescence imaging of cells under different channels, it can be confirmed whether any given cell is a target cell. For example, whether a cell is a circulating genetically abnormal cell (Circulating Genetically Abnormal Cell, CAC) can be determined by using the four-channel fluorescent signal point images and the Dapi image to detect and judge the cells in the field of view one by one.
The current practice for detecting CACs is to locate cells based on the Dapi image and then determine whether each cell is a CAC based on the number of fluorescent signal points in the four-channel signal point images of the located cell. The conventional approach, however, identifies and judges the signal points of each channel manually, which is time-consuming, labor-intensive, and inefficient. Alternatively, the fluorescent signal points of cells can be detected with a deep-learning target detection model, and the cell judged to be a CAC or not according to the number of identified signal points. The problem with this approach is its high degree of risk: if the fluorescent signal points of one channel are not correctly identified, the determination of the cell type is affected, resulting in erroneous cell classification.
Disclosure of Invention
The invention provides a cell classification method and device based on multichannel information fusion, to overcome the defects of the prior art: the traditional manual detection method is inefficient, while the deep-learning-based detection method carries a high degree of risk and is prone to classification errors.
The invention provides a cell classification method based on multichannel information fusion, which comprises the following steps:
acquiring a cell nucleus image and fluorescent signal point images of each channel, captured in the same field of view;
performing instance segmentation on the cell nucleus image based on a cell instance segmentation model to obtain the cell region of each cell in the cell nucleus image;
based on the cell areas of the cells in the cell nucleus image, cutting out the cell fluorescence image of the same cell in each channel from the fluorescence signal point image of each channel;
inputting the cell fluorescence image of any cell in each channel into a classification model, extracting image features of the cell fluorescence image of any cell in each channel based on the classification model to obtain fusion image features of any cell, and classifying any cell based on the fusion image features of any cell to obtain a cell classification result of any cell.
According to the cell classification method based on multi-channel information fusion provided by the invention, performing image feature extraction on the cell fluorescence image of any cell in each channel based on the classification model to obtain the fused image features of any cell, and obtaining the cell classification result of any cell based on those fused image features, specifically comprises:
based on the feature extraction layer in the classification model, respectively extracting image features of the cell fluorescence image of any cell in each channel to obtain channel image features of any cell in each channel;
based on the fusion layer in the binary classification model, fusing the channel image features of any cell in each channel to obtain the fused image features of any cell;
and classifying any cell based on the classification layer in the classification model and the fused image features of any cell, to obtain the cell classification result of any cell.
According to the cell classification method based on multi-channel information fusion provided by the invention, extracting image features of the cell fluorescence image of any cell in each channel based on the feature extraction layer of the binary classification model, to obtain the channel image features of any cell in each channel, specifically comprises:
sequentially extracting image features of the cell fluorescence images of any cell in each channel based on a plurality of successive depthwise separable convolution layers in the feature extraction layer of the binary classification model, to obtain the channel image features of any cell in each channel output by the last depthwise separable convolution layer;
wherein the number of groups of convolution kernels in each depthwise separable convolution layer is the same as the number of cell fluorescence images, each group of convolution kernels is used to extract image features from the cell fluorescence image of the corresponding channel, and the numbers of convolution kernels of the successive depthwise separable convolution layers in the feature extraction layer increase in sequence.
According to the cell classification method based on multi-channel information fusion, fusing the channel image features of any cell in each channel based on the fusion layer of the binary classification model, to obtain the fused image features of any cell, specifically comprises:
based on a primary fusion layer in the fusion layer, performing primary fusion on the concatenation of the channel image features of any cell in each channel, to obtain the initial fusion features of any cell;
and performing feature coding on the initial fusion features of any cell based on a fusion coding layer in the fusion layer, to obtain the fused image features of any cell.
According to the cell classification method based on multichannel information fusion, the primary fusion layer is composed of a convolution layer, and the fusion coding layer is composed of an encoder.
According to the cell classification method based on multichannel information fusion, the classification layer in the classification model consists of two fully connected layers.
According to the cell classification method based on multichannel information fusion, the cell instance segmentation model and the classification model are obtained based on training of the following steps:
training to obtain the cell instance segmentation model based on a sample cell nucleus image and coordinate information of each cell outline in the sample cell nucleus image;
cropping the sample fluorescent signal point images of each channel, captured in the same field of view as the sample cell nucleus image, based on the coordinate information of each cell outline in the sample cell nucleus image, to obtain a sample cell fluorescence image of each sample cell in each channel;
and training to obtain the classification model based on the sample cell fluorescence images of the sample cells in each channel and the classification labels of the sample cells.
The invention also provides a cell classification device based on multichannel information fusion, which comprises:
the image acquisition unit is used for acquiring a cell nucleus image and fluorescent signal point images of each channel, captured in the same field of view;
the cell segmentation unit is used for performing instance segmentation on the cell nucleus image based on a cell instance segmentation model, to obtain the cell region of each cell in the cell nucleus image;
the image cutting unit is used for cutting out a cell fluorescence image of the same cell in each channel from the fluorescence signal point image of each channel based on the cell area of each cell in the cell nucleus image;
the cell classification unit is used for inputting the cell fluorescence image of any cell in each channel into a classification model, extracting the image features of the cell fluorescence image of any cell in each channel based on the classification model to obtain the fusion image features of any cell, and classifying any cell based on the fusion image features of any cell to obtain the cell classification result of any cell.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the cell classification method based on multi-channel information fusion as described in any one of the above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a cell classification method based on multichannel information fusion as described in any of the above.
The invention also provides a computer program product comprising a computer program which when executed by a processor implements a method of cell classification based on multi-channel information fusion as described in any of the above.
According to the cell classification method and device based on multi-channel information fusion, a cell instance segmentation model performs instance segmentation on the cell nucleus image to obtain the cell region of each cell; cell fluorescence images of the same cell in each channel are then cut out from the fluorescent signal point images of each channel. The cell fluorescence images of any cell in all channels are input into the classification model, which performs image feature extraction on the fluorescence image of each channel, extracting and fusing the signal point features under each channel to obtain the fused image features of the cell, and classifies the cell based on those fused features to obtain its cell classification result. The signal point features of the cell fluorescence images can thus be better extracted and learned, and effective feature fusion is performed, so that a more reasonable judgment is made on each cell and the efficiency and accuracy of cell classification are improved.
Drawings
In order to more clearly illustrate the technical solutions of the invention or of the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the drawings described below illustrate some embodiments of the invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a cell classification method based on multi-channel information fusion according to the present invention;
FIG. 2 is a second schematic flow chart of the cell classification method based on multi-channel information fusion according to the present invention;
FIG. 3 is a schematic flow chart of a method for operating a classification model according to the present invention;
FIG. 4 is a schematic structural diagram of a cell classification device based on multi-channel information fusion according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of a cell classification method based on multi-channel information fusion according to the present invention, as shown in fig. 1, the method includes:
step 110, acquiring a cell nucleus image and fluorescent signal point images of each channel, captured in the same field of view;
step 120, performing instance segmentation on the cell nucleus image based on a cell instance segmentation model to obtain cell areas of each cell in the cell nucleus image;
step 130, based on the cell area of each cell in the cell nucleus image, cutting out the cell fluorescence image of the same cell in each channel from the fluorescence signal point image of each channel;
and 140, inputting the cell fluorescence image of any cell in each channel into a classification model, extracting image features of the cell fluorescence image of any cell in each channel based on the classification model to obtain fusion image features of any cell, and classifying any cell based on the fusion image features of any cell to obtain a cell classification result of any cell.
Specifically, a cell nucleus image (for example, a Dapi image) and fluorescent signal point images of each channel (for example, fluorescence images captured under the four channels Gold, Red, Blue, and Green), captured by a microscope in the same field of view, are obtained, where the fluorescent signal point image of each channel contains the signal points inside each cell.
Subsequently, as shown in fig. 2, the embodiment of the present invention designs a two-stage deep learning network to process the cell nucleus image and the fluorescent signal point images of each channel. The first stage is a cell instance segmentation model: it uses the cell nucleus image of the Dapi channel (for example, a Dapi image with more than 3 layers) to segment each cell and obtain the cell region of each cell in the image. The cell instance segmentation model can be constructed based on the Mask R-CNN network. The second stage, building on the segmentation result of the first stage (namely the cell region of each cell), adopts a deep learning network that fuses multichannel fluorescence signal information, the binary classification model, as the second-stage classifier.
Specifically, based on the cell region of each cell in the nuclear image, the cell fluorescence image of the same cell in each channel can be cut out from the fluorescent signal point images of each channel (such as the Green, Red, Blue, and Gold channels). That is, the cell fluorescence image of any channel is the sub-image of the corresponding cell in that channel's fluorescent signal point image. The cell fluorescence images of any cell in all channels are then input into the classification model, which extracts image features from the cell fluorescence image of each channel to obtain the fused image features of the cell. These fused image features combine the image semantic information of the cell's signal points across the cell fluorescence images of all channels. The cell is then classified into one of two classes based on its fused image features, yielding a cell classification result that indicates whether the cell is a target cell (e.g., a CAC).
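The per-cell cropping step just described can be sketched as follows; this is a minimal illustration and not the patent's implementation (the function name, channel names as dictionary keys, and the bounding-box convention are all assumptions):

```python
import numpy as np

def crop_cell_images(channel_images, bbox):
    """Crop the same cell region from every channel's fluorescent signal
    point image. `channel_images` maps a channel name to a 2-D array;
    `bbox` is (y0, y1, x0, x1) taken from the instance segmentation of the
    nucleus image. (Names and conventions are illustrative assumptions.)"""
    y0, y1, x0, x1 = bbox
    return {name: img[y0:y1, x0:x1] for name, img in channel_images.items()}

# Four 64x64 channel images and one cell spanning rows 10-41, cols 5-36.
channels = {c: np.random.rand(64, 64) for c in ("Gold", "Red", "Blue", "Green")}
cell = crop_cell_images(channels, (10, 42, 5, 37))
```

Each value in `cell` is then a 32x32 sub-image of the corresponding channel, ready to serve as one input of the binary classification model.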
Because the signal point semantic information of any cell under each channel is fused in the image feature extraction stage, classification based on this fused semantic information distinguishes target cells from non-target cells more accurately, yielding an accurate cell classification result. Moreover, this classification approach better extracts and learns the signal point features of the cell fluorescence images and performs effective feature fusion, so that a more reasonable judgment is made on each cell, reducing the number of cells requiring manual interpretation and recheck; the method is efficient and maintains stable, high accuracy.
In some embodiments, as shown in fig. 3, performing image feature extraction on the cell fluorescence image of any cell in each channel based on the classification model to obtain the fused image features of any cell, and classifying any cell based on those fused features to obtain its cell classification result, specifically includes:
step 310, respectively extracting image features of the cell fluorescence image of any cell in each channel based on the feature extraction layer in the binary classification model, to obtain the channel image features of any cell in each channel;
step 320, based on the fusion layer in the binary classification model, fusing the channel image features of any cell in each channel to obtain the fused image features of any cell;
and step 330, classifying any cell based on the classification layer in the classification model and the fused image features of any cell, to obtain the cell classification result of any cell.
Specifically, the binary classification model includes a feature extraction layer, a fusion layer, and a classification layer. The feature extraction layer independently extracts image features from the cell fluorescence image of the cell in each channel, yielding the channel image features of the cell in each channel. Although the cell fluorescence images of the different channels belong to the same cell and are therefore correlated to some extent, they contain rich differential information, such as signal point positions, signal point areas, and signal point intensities. Extracting features from each channel's cell fluorescence image separately, without mutual interference, therefore allows the binary classification model to better learn and classify the features of each channel, improving its cell classification precision.
In other embodiments, the feature extraction layer of the binary classification model includes a plurality of successive depthwise separable convolution layers. Specifically, image features are extracted from the cell fluorescence images of the cell in each channel sequentially by these layers, and the channel image features of the cell in each channel are output by the last depthwise separable convolution layer. Each depthwise separable convolution layer is a single convolution layer containing a plurality of convolution kernels, divided into groups; the number of groups equals the number of cell fluorescence images of the cell (i.e., the number of channels; with Green, Red, Blue, and Gold channels, the kernels are divided into 4 groups), and each group of kernels extracts image features from the cell fluorescence image of one channel. The groups of kernels in the last depthwise separable convolution layer output the channel image features of the cell in each channel. In addition, the number of convolution kernels increases from one depthwise separable convolution layer to the next, so as to gradually strengthen the feature extraction capability of each layer.
Taking three depthwise separable convolution layers and the four channels Green, Red, Blue, and Gold as an example, the first group of convolution kernels of the first layer extracts features from the cell fluorescence image of the Green channel (this assignment is only an example; the embodiment of the present invention does not limit which group processes which channel), the second group from the Red channel, the third group from the Blue channel, and the fourth group from the Gold channel. The second and third layers behave similarly: their first group of kernels extracts features from the feature maps output by the first group of kernels of the previous layer, their second group from the output of the previous layer's second group, and so on.
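As a hedged sketch of the grouped, channel-wise convolution just described, the following NumPy code implements a "valid" cross-correlation (the convolution convention used by deep learning frameworks) in which each group of kernels sees only its own channel's feature maps. It illustrates the mechanism rather than the patent's implementation; the shapes and the four-channel grouping are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def grouped_conv2d(x, kernels, groups):
    """Channel-grouped 'valid' convolution: x is (C_in, H, W), kernels is
    (C_out, C_in // groups, k, k). Each group of kernels only sees the
    feature maps of its own fluorescence channel, so the channels are
    processed without interfering with one another."""
    c_in, h, w = x.shape
    c_out, _, k, _ = kernels.shape
    gi, go = c_in // groups, c_out // groups
    out = np.empty((c_out, h - k + 1, w - k + 1))
    for g in range(groups):
        xs = x[g * gi:(g + 1) * gi]                          # this group's inputs
        win = sliding_window_view(xs, (k, k), axis=(1, 2))   # (gi, H-k+1, W-k+1, k, k)
        ker = kernels[g * go:(g + 1) * go]                   # (go, gi, k, k)
        out[g * go:(g + 1) * go] = np.einsum('ihwkl,oikl->ohw', win, ker)
    return out

# Four single-map channels, 8 output maps (2 per group), 3x3 kernels.
x = np.random.rand(4, 16, 16)
kernels = np.random.rand(8, 1, 3, 3)
feats = grouped_conv2d(x, kernels, groups=4)
```

Zeroing one channel's input changes only that group's output maps, which is exactly the non-interference property the feature extraction layer relies on.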
The fusion layer of the classification model fuses the channel image features of the cell in each channel to obtain the fused image features of the cell. In some embodiments, the fusion layer includes a primary fusion layer and a fusion coding layer. When fusing the channel image features, the primary fusion layer first performs primary fusion on the concatenation of the channel image features of each channel, yielding the initial fusion features of the cell. The primary fusion layer may be a convolution layer that performs a convolution operation on the concatenated channel image features, achieving a primary fusion of the signal point features of each channel. The fusion coding layer then performs feature coding on the initial fusion features to obtain the fused image features of the cell. The fusion coding layer is formed by an encoder; encoding the initial fusion features with an encoder network allows the correlations and differences among the signal point features of each channel contained in the initial fusion features to be better extracted and fused. The encoder may adopt ResNet, DenseNet, or Swin Transformer V2 models, and the embodiment of the present invention is not limited in this respect.
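A minimal sketch of the primary fusion step, under the assumption that the primary fusion layer is a 1x1 convolution over the concatenated channel features (the patent names a convolution layer but fixes no kernel size; all shapes below are illustrative):

```python
import numpy as np

def primary_fusion(channel_feats, weights):
    """Concatenate the per-channel feature maps along the channel axis and
    mix them with a 1x1 convolution, i.e. a per-pixel linear map across
    channels. `channel_feats` is a list of (C, H, W) arrays; `weights` is
    (C_fused, C_total). Shapes are illustrative assumptions."""
    x = np.concatenate(channel_feats, axis=0)      # (C_total, H, W)
    return np.einsum('oc,chw->ohw', weights, x)    # (C_fused, H, W)

# Four channels' features (8 maps each, C_total = 32) fused into 16 maps.
feats = [np.random.rand(8, 14, 14) for _ in range(4)]
fused = primary_fusion(feats, np.random.rand(16, 32))
```

The fusion coding layer that follows (ResNet, DenseNet, or Swin Transformer V2 in the patent's examples) would then consume `fused`; it is omitted here.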
The classification layer of the classification model classifies the cell based on its fused image features to obtain the cell classification result of the cell. The classification layer may consist of two fully connected layers.
In some embodiments, the cell instance segmentation model and the classification model may be trained based on the following steps:
training to obtain the cell instance segmentation model based on a sample cell nucleus image and coordinate information of each cell outline in the sample cell nucleus image;
based on the coordinate information of each cell outline in the sample cell nucleus image, cutting the sample fluorescent signal point image of each channel shot under the same visual field with the sample cell nucleus image to obtain a sample cell fluorescent image of each sample cell in each channel;
and training to obtain the classification model based on the sample cell fluorescence images of the sample cells in each channel and the classification labels of the sample cells.
Specifically, a large amount of sample data is first collected as the basis for model training. The sample data comprises sample cell nucleus images, the sample fluorescent signal point images of each channel in the same field of view, and the corresponding data labels. The data labels can be stored in json form: each json file corresponds to one sample cell nucleus image and contains the coordinate information of each cell outline in that image, together with a class label indicating whether each cell is a target cell. The labels for the cell instance segmentation model can be generated from the contour coordinate information in the json files, completing the training of the cell instance segmentation network; specifically, the cell instance segmentation model is trained on the sample cell nucleus images and the coordinate information of each cell outline in them. Before training the cell instance segmentation model, the sample cell nucleus images can undergo data preprocessing, mainly data enhancement such as random crop, HSV enhancement, and random vertical flip.
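The json label format described above can be consumed as sketched below; the field names (`cells`, `contour`, `is_target`) are assumptions, since the patent only states that each file stores the contour coordinates and a target-cell class label:

```python
import json
import tempfile
import numpy as np

def load_cell_labels(json_path):
    """Read a per-image label file and return each cell's bounding box
    (y0, y1, x0, x1) and class label. The field names are illustrative
    assumptions, not the patent's schema."""
    with open(json_path) as f:
        labels = json.load(f)
    cells = []
    for cell in labels["cells"]:
        pts = np.asarray(cell["contour"], dtype=float)   # (N, 2) x, y points
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        cells.append({"bbox": (int(y0), int(y1), int(x0), int(x1)),
                      "is_target": bool(cell["is_target"])})
    return cells

# Tiny example label file with one triangular cell contour.
example = {"cells": [{"contour": [[5, 10], [37, 10], [21, 42]], "is_target": True}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(example, f)
    path = f.name
cells = load_cell_labels(path)
```

The bounding boxes recovered this way can drive both mask generation for the segmentation network and the per-channel cropping used to build the classification training set.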
According to the contour coordinates in the JSON file and the class label indicating whether each cell belongs to the target cells, the sample fluorescent signal point images of each channel in the same field of view can be cropped, yielding sample cell fluorescence images of each type of sample cell in each channel on which the classification model can be trained. The classification model, a binary classifier, is obtained by training on the sample cell fluorescence images of the sample cells in each channel and the class labels of the corresponding sample cells. Similarly, before training the classification model, the sample cell fluorescence images can be preprocessed, mainly with data augmentation such as random cropping, HSV augmentation, and random vertical flipping.
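The per-cell cropping step can be sketched as follows, assuming the cell region has already been reduced to a bounding box derived from its contour. This is a generic illustration, not the patent's exact cropping routine: the key point is that the same coordinates are cut out of every channel's signal point image.

```python
import numpy as np

def crop_cell_patches(channel_images, bbox):
    """Cut the same bounding box out of every channel's fluorescent
    signal point image, yielding one per-channel patch stack per cell.

    channel_images: list of HxW arrays, one per fluorescence channel,
                    all captured in the same field of view.
    bbox: (x0, y0, x1, y1) derived from the cell contour in the nucleus image.
    """
    x0, y0, x1, y1 = bbox
    return np.stack([img[y0:y1, x0:x1] for img in channel_images])

# Three hypothetical fluorescence channels in the same 100x100 field of view.
rng = np.random.default_rng(0)
channels = [rng.random((100, 100)) for _ in range(3)]
patches = crop_cell_patches(channels, (10, 12, 40, 45))
print(patches.shape)  # (3, 33, 30)
```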
In summary, in the method provided by the embodiment of the invention, the cell instance segmentation model segments the cell nucleus image to obtain the cell area of each cell, and the cell fluorescence images of the same cell in each channel are cropped from the fluorescent signal point images of each channel. The cell fluorescence images of any cell in each channel are then input into the classification model, which extracts image features from the cell fluorescence images in each channel, extracts and fuses the signal point features under each channel to obtain the fused image features of the cell, and classifies the cell based on those features to obtain its cell classification result. The signal point features of the cell fluorescence images can thus be better extracted and learned, and effective feature fusion performed, so that the cell is judged more reasonably and the efficiency and accuracy of cell classification are improved.
The cell classification device based on multichannel information fusion provided by the invention is described below; the device described below and the cell classification method based on multichannel information fusion described above may be referred to correspondingly.
Based on any of the above embodiments, fig. 4 is a schematic structural diagram of a cell classification device based on multichannel information fusion according to the present invention. As shown in fig. 4, the device includes:
an image acquisition unit 410 for acquiring a nuclear image and fluorescent signal point images of each channel photographed under the same field of view;
a cell segmentation unit 420, configured to perform an instance segmentation on the cell nucleus image based on a cell instance segmentation model, so as to obtain a cell region of each cell in the cell nucleus image;
an image cutting unit 430, configured to cut out a cell fluorescence image of the same cell in each channel from the fluorescence signal point images of each channel based on the cell area of each cell in the cell nucleus image;
the cell classification unit 440 is configured to input the cell fluorescence images of any cell in each channel into a classification model, extract image features from the cell fluorescence images of the cell in each channel based on the classification model to obtain fused image features of the cell, and classify the cell based on the fused image features to obtain the cell classification result of the cell.
With the device provided by the embodiment of the invention, the cell instance segmentation model segments the cell nucleus image to obtain the cell area of each cell, and the cell fluorescence images of the same cell in each channel are cropped from the fluorescent signal point images of each channel. The cell fluorescence images of any cell in each channel are then input into the classification model, which extracts image features from the cell fluorescence images in each channel, extracts and fuses the signal point features under each channel to obtain the fused image features of the cell, and classifies the cell based on those features to obtain its cell classification result. The signal point features of the cell fluorescence images can thus be better extracted and learned, and effective feature fusion performed, so that the cell is judged more reasonably and the efficiency and accuracy of cell classification are improved.
Based on any of the above embodiments, the extracting the image features of the fluorescence image of the cells in each channel based on the classification model to obtain the fused image features of the cells, and classifying the cells based on the fused image features of the cells to obtain the cell classification result of the cells specifically includes:
based on the feature extraction layer in the classification model, respectively extracting image features of the cell fluorescence image of any cell in each channel to obtain channel image features of any cell in each channel;
based on the fusion layer in the classification model, fusing the channel image features of any cell in each channel to obtain the fused image features of any cell;
and classifying any cell based on the classification layer in the classification model by combining the fusion image characteristics of any cell to obtain a cell classification result of any cell.
Based on any one of the above embodiments, the extracting image features of the fluorescence image of the cell in each channel based on the feature extracting layer in the classification model, to obtain the channel image features of the cell in each channel, specifically includes:
sequentially extracting image features of the cell fluorescence images of any cell in each channel based on a plurality of consecutive depthwise separable convolution layers in the feature extraction layer of the classification model, to obtain the channel image features of any cell in each channel output by the last depthwise separable convolution layer;
wherein the number of groups of convolution kernels in each depthwise separable convolution layer is the same as the number of cell fluorescence images, each group of convolution kernels is used for extracting image features from the cell fluorescence image of its corresponding channel, and the numbers of convolution kernels of the consecutive depthwise separable convolution layers in the feature extraction layer increase sequentially.
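The depthwise separable structure described above can be sketched in plain NumPy. This is a generic illustration of the technique, not the patent's exact architecture: it assumes 'valid' padding, stride 1, a 3x3 depthwise filter per input channel, and a 1x1 pointwise convolution that grows the channel count from layer to layer.

```python
import numpy as np

def depthwise_separable(x, dw_kernels, pw_weights):
    """One depthwise separable convolution ('valid' padding, stride 1):
    each input channel is filtered by its own kernel (depthwise), then a
    1x1 pointwise convolution mixes channels and grows their number."""
    c, h, w = x.shape
    kh, kw = dw_kernels.shape[1:]
    out = np.empty((c, h - kh + 1, w - kw + 1))
    for ch in range(c):                      # depthwise: one kernel per channel
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i+kh, j:j+kw] * dw_kernels[ch])
    # pointwise 1x1: a (c_out, c_in) weight mixes channels at every pixel
    return np.tensordot(pw_weights, out, axes=([1], [0]))

rng = np.random.default_rng(1)
x = rng.random((3, 32, 32))                  # 3 fluorescence-channel patches
# Two consecutive layers with an increasing channel count (3 -> 8 -> 16)
x = depthwise_separable(x, rng.random((3, 3, 3)), rng.random((8, 3)))
x = depthwise_separable(x, rng.random((8, 3, 3)), rng.random((16, 8)))
print(x.shape)  # (16, 28, 28)
```

In a framework this is typically written as a grouped convolution (`groups` equal to the channel count) followed by a 1x1 convolution; the loop version above only makes the per-channel filtering explicit.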
Based on any of the above embodiments, based on the fusion layer in the classification model, fusing the channel image features of any cell in each channel to obtain a fused image feature of any cell, which specifically includes:
based on a primary fusion layer in the fusion layers, carrying out primary fusion on the splicing result of the channel image characteristics of any cell in each channel to obtain the initial fusion characteristics of any cell;
and carrying out feature coding on the initial fusion characteristics of any cell based on the fusion coding layer in the fusion layer to obtain the fusion image characteristics of any cell.
Based on any of the above embodiments, the primary fusion layer is comprised of a convolutional layer and the fusion coding layer is comprised of an encoder.
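Under the assumption that the encoder is a single self-attention pass over spatial positions (the patent does not specify the encoder's internals, so this part is a guess for illustration), the concatenate-then-convolve primary fusion followed by fusion coding might be sketched as:

```python
import numpy as np

def primary_fuse(per_channel_feats, w):
    """Primary fusion layer: concatenate the per-channel feature maps
    along the channel axis and mix them with a 1x1 convolution."""
    x = np.concatenate(per_channel_feats, axis=0)        # (C_total, H, W)
    return np.tensordot(w, x, axes=([1], [0]))           # (C_fused, H, W)

def encode(x, wq, wk, wv):
    """Fusion coding layer sketched as one self-attention pass: spatial
    positions become tokens, attention re-weights them, and mean pooling
    yields a single fused feature vector."""
    c, h, w = x.shape
    tokens = x.reshape(c, h * w).T                       # (N, C)
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    att = q @ k.T / np.sqrt(q.shape[1])
    att = np.exp(att - att.max(axis=1, keepdims=True))   # stable softmax
    att /= att.sum(axis=1, keepdims=True)
    return (att @ v).mean(axis=0)                        # (C,)

rng = np.random.default_rng(2)
feats = [rng.random((16, 7, 7)) for _ in range(3)]       # 3 channels' features
fused_map = primary_fuse(feats, rng.random((32, 48)))
vec = encode(fused_map, *(rng.random((32, 32)) for _ in range(3)))
print(fused_map.shape, vec.shape)  # (32, 7, 7) (32,)
```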
Based on any of the above embodiments, the classification layer in the classification model is composed of two fully connected layers.
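A minimal sketch of such a two-layer fully connected head, with a softmax over the two classes (target / non-target cell); all weights are random placeholders for illustration:

```python
import numpy as np

def classify(fused_vec, w1, b1, w2, b2):
    """Classification layer: two fully connected layers applied to the
    fused image feature vector, softmax over the two classes."""
    h = np.maximum(fused_vec @ w1 + b1, 0.0)   # FC1 + ReLU
    logits = h @ w2 + b2                       # FC2 -> 2 class logits
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(3)
probs = classify(rng.random(32), rng.random((32, 16)), np.zeros(16),
                 rng.random((16, 2)), np.zeros(2))
print(probs.shape)  # (2,)
```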
Based on any of the above embodiments, the cell instance segmentation model and the classification model are trained based on the following steps:
training to obtain the cell instance segmentation model based on a sample cell nucleus image and coordinate information of each cell outline in the sample cell nucleus image;
based on the coordinate information of each cell outline in the sample cell nucleus image, cutting the sample fluorescent signal point image of each channel shot under the same visual field with the sample cell nucleus image to obtain a sample cell fluorescent image of each sample cell in each channel;
and training to obtain the classification model based on the sample cell fluorescence images of the sample cells in each channel and the classification labels of the sample cells.
Fig. 5 is a schematic structural diagram of an electronic device according to the present invention. As shown in fig. 5, the electronic device may include: a processor 510, a memory 520, a communication interface (Communications Interface) 530, and a communication bus 540, wherein the processor 510, the memory 520, and the communication interface 530 communicate with each other via the communication bus 540. The processor 510 may invoke logic instructions in the memory 520 to perform the cell classification method based on multichannel information fusion, the method comprising: acquiring a cell nucleus image and fluorescent signal point images of each channel captured in the same field of view; performing instance segmentation on the cell nucleus image based on a cell instance segmentation model to obtain the cell areas of the cells in the cell nucleus image; cropping the cell fluorescence image of the same cell in each channel from the fluorescent signal point images of each channel based on the cell areas of the cells in the cell nucleus image; and inputting the cell fluorescence images of any cell in each channel into a classification model, extracting image features from the cell fluorescence images of the cell in each channel based on the classification model to obtain fused image features of the cell, and classifying the cell based on the fused image features to obtain the cell classification result of the cell.
Further, the logic instructions in the memory 520 may be implemented as software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the cell classification method based on multichannel information fusion provided by the above methods, the method comprising: acquiring a cell nucleus image and fluorescent signal point images of each channel captured in the same field of view; performing instance segmentation on the cell nucleus image based on a cell instance segmentation model to obtain the cell areas of the cells in the cell nucleus image; cropping the cell fluorescence image of the same cell in each channel from the fluorescent signal point images of each channel based on the cell areas of the cells in the cell nucleus image; and inputting the cell fluorescence images of any cell in each channel into a classification model, extracting image features from the cell fluorescence images of the cell in each channel based on the classification model to obtain fused image features of the cell, and classifying the cell based on the fused image features to obtain the cell classification result of the cell.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the cell classification method based on multichannel information fusion provided above, the method comprising: acquiring a cell nucleus image and fluorescent signal point images of each channel captured in the same field of view; performing instance segmentation on the cell nucleus image based on a cell instance segmentation model to obtain the cell areas of the cells in the cell nucleus image; cropping the cell fluorescence image of the same cell in each channel from the fluorescent signal point images of each channel based on the cell areas of the cells in the cell nucleus image; and inputting the cell fluorescence images of any cell in each channel into a classification model, extracting image features from the cell fluorescence images of the cell in each channel based on the classification model to obtain fused image features of the cell, and classifying the cell based on the fused image features to obtain the cell classification result of the cell.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or, of course, by hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the various embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of classifying cells based on multi-channel information fusion, comprising:
acquiring a cell nucleus image and fluorescent signal point images of all channels, which are shot in the same visual field;
performing example segmentation on the cell nucleus image based on a cell example segmentation model to obtain cell areas of cells in the cell nucleus image;
based on the cell areas of the cells in the cell nucleus image, cutting out the cell fluorescence image of the same cell in each channel from the fluorescence signal point image of each channel;
inputting the cell fluorescence image of any cell in each channel into a classification model, extracting image features of the cell fluorescence image of any cell in each channel based on the classification model to obtain fusion image features of any cell, and classifying any cell based on the fusion image features of any cell to obtain a cell classification result of any cell.
2. The method for classifying cells based on multi-channel information fusion according to claim 1, wherein the extracting image features of the fluorescence images of cells of any cell in each channel based on the classification model to obtain fusion image features of any cell, and classifying any cell based on the fusion image features of any cell to obtain a cell classification result of any cell, specifically comprises:
based on the feature extraction layer in the classification model, respectively extracting image features of the cell fluorescence image of any cell in each channel to obtain channel image features of any cell in each channel;
based on the fusion layer in the classification model, fusing the channel image characteristics of any cell in each channel to obtain fused image characteristics of any cell;
and classifying any cell based on the classification layer in the classification model by combining the fusion image characteristics of any cell to obtain a cell classification result of any cell.
3. The method for classifying cells based on multi-channel information fusion according to claim 2, wherein the feature extraction layer based on the classification model performs image feature extraction on the fluorescence images of cells of any cell in each channel to obtain the channel image features of any cell in each channel, and specifically comprises:
sequentially extracting image features of the cell fluorescence images of any cell in each channel based on a plurality of consecutive depthwise separable convolution layers in the feature extraction layer of the classification model, to obtain channel image features of any cell in each channel output by the last depthwise separable convolution layer;
wherein the number of groups of convolution kernels in each depthwise separable convolution layer is the same as the number of the cell fluorescence images, each group of convolution kernels is used for extracting image features from the cell fluorescence image of the corresponding channel, and the numbers of convolution kernels of the plurality of consecutive depthwise separable convolution layers in the feature extraction layer increase sequentially.
4. The method for classifying cells based on multi-channel information fusion according to claim 2, wherein the fusing layer based on the classification model fuses the channel image features of any cell in each channel to obtain fused image features of any cell, and specifically comprises:
based on a primary fusion layer in the fusion layers, carrying out primary fusion on the splicing result of the channel image characteristics of any cell in each channel to obtain the initial fusion characteristics of any cell;
and carrying out feature coding on the initial fusion characteristics of any cell based on the fusion coding layer in the fusion layer to obtain the fusion image characteristics of any cell.
5. The method of claim 4, wherein the primary fusion layer is formed of a convolutional layer and the fusion coding layer is formed of an encoder.
6. The method of claim 2, wherein the classification layer in the classification model is composed of two fully connected layers.
7. The method of cell classification based on multi-channel information fusion according to any of claims 1 to 6, wherein the cell instance segmentation model and the classification model are trained based on the steps of:
training to obtain the cell instance segmentation model based on a sample cell nucleus image and coordinate information of each cell outline in the sample cell nucleus image;
based on the coordinate information of each cell outline in the sample cell nucleus image, cutting the sample fluorescent signal point image of each channel shot under the same visual field with the sample cell nucleus image to obtain a sample cell fluorescent image of each sample cell in each channel;
and training to obtain the classification model based on the sample cell fluorescence images of the sample cells in each channel and the classification labels of the sample cells.
8. A cell sorter based on multichannel information fusion, comprising:
the image acquisition unit is used for acquiring a nuclear image and fluorescent signal point images of all channels, which are shot under the same visual field;
the cell segmentation unit is used for carrying out example segmentation on the cell nucleus image based on a cell example segmentation model to obtain cell areas of cells in the cell nucleus image;
the image cutting unit is used for cutting out a cell fluorescence image of the same cell in each channel from the fluorescence signal point image of each channel based on the cell area of each cell in the cell nucleus image;
the cell classification unit is used for inputting the cell fluorescence image of any cell in each channel into a classification model, extracting the image features of the cell fluorescence image of any cell in each channel based on the classification model to obtain the fusion image features of any cell, and classifying any cell based on the fusion image features of any cell to obtain the cell classification result of any cell.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-channel information fusion based cell classification method of any one of claims 1 to 7 when the program is executed by the processor.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the multi-channel information fusion based cell classification method according to any of claims 1 to 7.
CN202310935341.7A 2023-07-28 2023-07-28 Cell classification method and device based on multichannel information fusion Active CN116665210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310935341.7A CN116665210B (en) 2023-07-28 2023-07-28 Cell classification method and device based on multichannel information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310935341.7A CN116665210B (en) 2023-07-28 2023-07-28 Cell classification method and device based on multichannel information fusion

Publications (2)

Publication Number Publication Date
CN116665210A true CN116665210A (en) 2023-08-29
CN116665210B CN116665210B (en) 2023-10-17

Family

ID=87728237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310935341.7A Active CN116665210B (en) 2023-07-28 2023-07-28 Cell classification method and device based on multichannel information fusion

Country Status (1)

Country Link
CN (1) CN116665210B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116913532A (en) * 2023-09-12 2023-10-20 四川互慧软件有限公司 Clinical path recommendation method

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150110381A1 (en) * 2013-09-22 2015-04-23 The Regents Of The University Of California Methods for delineating cellular regions and classifying regions of histopathology and microanatomy
US20150133321A1 (en) * 2013-11-13 2015-05-14 General Electric Company Quantitative in situ characterization of biological samples
CN106226247A (en) * 2016-07-15 2016-12-14 暨南大学 A kind of cell detection method based on EO-1 hyperion micro-imaging technique
CN106650796A (en) * 2016-12-06 2017-05-10 国家纳米科学中心 Artificial intelligence based cell fluorescence image classification method and system
US20190258846A1 (en) * 2018-02-20 2019-08-22 The Regents Of The University Of Michigan Three-Dimensional Cell and Tissue Image Analysis For Cellular And Sub-Cellular Morphological Modeling And Classification
US20190279360A1 (en) * 2018-03-09 2019-09-12 Case Western Reserve University Predicting overall survival in early stage lung cancer with feature driven local cell graphs (fedeg)
CN111062296A (en) * 2019-12-11 2020-04-24 武汉兰丁医学高科技有限公司 Automatic white blood cell identification and classification method based on computer
CN111492368A (en) * 2017-12-22 2020-08-04 Ventana Medical Systems System and method for classifying cells in tissue images based on membrane characteristics
US20200302606A1 (en) * 2019-03-22 2020-09-24 Becton, Dickinson And Company Spectral unmixing of fluorescence imaging using radiofrequency-multiplexed excitation data
CN113177927A (en) * 2021-05-17 2021-07-27 西安交通大学 Bone marrow cell classification and identification method and system based on multiple features and multiple classifiers
CN113689396A (en) * 2021-08-20 2021-11-23 深圳先进技术研究院 Cell fluorescence image thresholding method, system, terminal and storage medium
CN113763370A (en) * 2021-09-14 2021-12-07 佰诺全景生物技术(北京)有限公司 Digital pathological image processing method and device, electronic equipment and storage medium
CN114092934A (en) * 2020-07-31 2022-02-25 骏实生物科技(上海)有限公司 Method for classifying circulating tumor cells
US20220076067A1 (en) * 2020-09-08 2022-03-10 Insitro, Inc. Biological image transformation using machine-learning models
CN114332855A (en) * 2021-12-24 2022-04-12 杭州电子科技大学 Unmarked leukocyte three-classification method based on bright field microscopic imaging
CN114898866A (en) * 2022-05-24 2022-08-12 广州锟元方青医疗科技有限公司 Thyroid cell auxiliary diagnosis method, equipment and storage medium
CN115063796A (en) * 2022-08-18 2022-09-16 珠海横琴圣澳云智科技有限公司 Cell classification method and device based on signal point content constraint
CN115100474A (en) * 2022-06-30 2022-09-23 武汉兰丁智能医学股份有限公司 Thyroid gland puncture image classification method based on topological feature analysis
CN115100647A (en) * 2022-07-29 2022-09-23 烟台至公生物医药科技有限公司 Cervical cancer cell identification method and device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116913532A (en) * 2023-09-12 2023-10-20 四川互慧软件有限公司 Clinical path recommendation method
CN116913532B (en) * 2023-09-12 2023-12-08 四川互慧软件有限公司 Clinical path recommendation method

Also Published As

Publication number Publication date
CN116665210B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
CN106248559B (en) A kind of five sorting technique of leucocyte based on deep learning
CN111739010B (en) Method and device for detecting abnormal circulating cells
CN109492706B (en) Chromosome classification prediction device based on recurrent neural network
CN113454733A (en) Multi-instance learner for prognostic tissue pattern recognition
US8379961B2 (en) Mitotic figure detector and counter system and method for detecting and counting mitotic figures
EP2719754B1 (en) Image processing apparatus, image processing method and image processing system
CN112069921A (en) Small sample visual target identification method based on self-supervision knowledge migration
CN115063796B (en) Cell classification method and device based on signal point content constraint
CN116665210B (en) Cell classification method and device based on multichannel information fusion
CN114038037B (en) Expression label correction and identification method based on separable residual error attention network
WO2024032623A1 (en) Method and device for recognizing fluorescence staining signal point in cell nucleus image
CN114998332B (en) Method and device for determining karyotype abnormal cells
CN106023159A (en) Disease spot image segmentation method and system for greenhouse vegetable leaf
CN115862073A (en) Transformer substation harmful bird species target detection and identification method based on machine vision
US20220390735A1 (en) Method and device for capturing microscopy objects in image data
CN104573701B (en) A kind of automatic testing method of Tassel of Corn
CN114049330A (en) Method and system for fusing fluorescence characteristics in fluorescence in-situ hybridization image
CN110414317B (en) Full-automatic leukocyte classification counting method based on capsule network
KR20190114241A (en) Apparatus for algae classification and cell countion based on deep learning and method for thereof
Marcuzzo et al. Automated Arabidopsis plant root cell segmentation based on SVM classification and region merging
CN114387596A (en) Automatic interpretation system for cytopathology smear
Aggarwal et al. Protein Subcellular Localization Prediction by Concatenation of Convolutional Blocks for Deep Features Extraction from Microscopic Images
CN115115939B (en) Remote sensing image target fine-grained identification method based on characteristic attention mechanism
CN114037868B (en) Image recognition model generation method and device
CN114863163A (en) Method and system for cell classification based on cell image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant