CN116363123A - Fluorescence microscopic imaging system and method for detecting circulating tumor cells - Google Patents

Fluorescence microscopic imaging system and method for detecting circulating tumor cells

Info

Publication number
CN116363123A
CN116363123A (Application CN202310581412.8A)
Authority
CN
China
Prior art keywords
fluorescence
training
feature map
shallow
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310581412.8A
Other languages
Chinese (zh)
Other versions
CN116363123B (en)
Inventor
张开山
饶浪晴
田华
赵丹
于磊
马宁
李超
郭志敏
刘艳省
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU WATSON BIOTECH Inc
Original Assignee
HANGZHOU WATSON BIOTECH Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU WATSON BIOTECH Inc filed Critical HANGZHOU WATSON BIOTECH Inc
Priority to CN202310581412.8A priority Critical patent/CN116363123B/en
Publication of CN116363123A publication Critical patent/CN116363123A/en
Application granted granted Critical
Publication of CN116363123B publication Critical patent/CN116363123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10064Fluorescence image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the technical field of fluorescence microscopic imaging, and specifically discloses a fluorescence microscopic imaging system for detecting circulating tumor cells and a method thereof. A fluorescence development image of a detected sample is first obtained and subjected to image preprocessing; the preprocessed image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; the fluorescence shallow feature map is then passed through a spatial attention module and a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; finally, the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused, image semantic segmentation is performed by a decoder to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result. In this way, the number of CTC cells is accurately calculated.

Description

Fluorescence microscopic imaging system and method for detecting circulating tumor cells
Technical Field
The present application relates to the field of fluorescence microscopy imaging technology, and more particularly, to a fluorescence microscopy imaging system for detecting circulating tumor cells and a method thereof.
Background
Circulating Tumor Cells (CTCs) are rare cells shed into the blood circulation from primary or metastatic tumors and can serve as important markers for tumor biology and clinical treatment. The detection and analysis of CTCs are of great significance for early diagnosis, prognosis evaluation, treatment effect monitoring and personalized treatment of tumors.
However, the detection and isolation of CTCs is a very challenging task, because the number of CTCs in the blood is extremely low (about 1-10 per milliliter of blood) and their morphology and size are similar to those of normal blood cells. At present, the commonly used CTC detection methods mainly include surface antigen-based immune enrichment methods, physical property-based separation methods, functional property-based separation methods and the like, but these methods have certain limitations such as low specificity, complex operation and large sample loss.
Thus, an optimized solution is desired.
Disclosure of Invention
The application provides a fluorescence microscopic imaging system for detecting circulating tumor cells and a method thereof. A fluorescence development image of a detected sample is first obtained and subjected to image preprocessing; the preprocessed image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; the fluorescence shallow feature map is then passed through a spatial attention module and a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; finally, the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused, image semantic segmentation is performed by a decoder to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result. In this way, the number of CTC cells is accurately calculated.
In a first aspect, there is provided a fluorescence microscopy imaging system for detection of circulating tumor cells, the system comprising: a data acquisition unit for acquiring a fluorescence development image of a detected sample; an image preprocessing unit for performing image preprocessing on the fluorescence development image to obtain a preprocessed fluorescence development image; a shallow feature extraction unit for passing the preprocessed fluorescence development image through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; a spatial enhancement unit for passing the fluorescence shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map; a deep feature extraction unit for passing the spatially enhanced fluorescence shallow feature map through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; a feature fusion unit for fusing the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map to obtain a fluorescence feature map; a decoding unit for passing the fluorescence feature map through a decoder to obtain a decoded image; and a counting unit for performing image semantic segmentation on the decoded image to obtain a segmentation result and determining the number of CTC cells based on the segmentation result.
With reference to the first aspect, in an implementation manner of the first aspect, the shallow feature extraction unit is configured to perform, with each layer of the first convolutional neural network model in forward pass of the layers, the following operations on the input data: performing convolution processing on the input data to obtain a convolution feature map; performing pooling based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and performing non-linear activation on the pooled feature map to obtain an activated feature map; wherein the input of the first layer of the first convolutional neural network model is the preprocessed fluorescence development image, the output of the last layer is the fluorescence shallow feature map, and the number of layers of the first convolutional neural network model is greater than or equal to 1 and less than or equal to 6.
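For illustration, a minimal PyTorch sketch of such a shallow extractor follows; the depth of 3 layers, the kernel sizes and the channel widths are assumptions within the stated 1-6 layer bound, not values fixed by the application.

```python
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    """Shallow extractor: each layer applies convolution, local pooling and a
    non-linear activation. Depth is kept between 1 and 6 layers; the channel
    schedule below is an illustrative assumption."""
    def __init__(self, in_channels: int = 3, depth: int = 3):
        super().__init__()
        assert 1 <= depth <= 6
        layers, channels = [], in_channels
        for i in range(depth):
            out_channels = 32 * (2 ** i)           # assumed channel schedule
            layers += [
                nn.Conv2d(channels, out_channels, kernel_size=3, padding=1),
                nn.MaxPool2d(kernel_size=2),        # pooling over local feature regions
                nn.ReLU(inplace=True),              # non-linear activation
            ]
            channels = out_channels
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: preprocessed fluorescence development image, shape (B, C, H, W)
        return self.body(x)                         # fluorescence shallow feature map
```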
With reference to the first aspect, in an implementation manner of the first aspect, the spatial enhancement unit includes: a pooling unit for performing average pooling and max pooling along the channel dimension on the fluorescence shallow feature map, respectively, to obtain an average feature matrix and a maximum feature matrix; a cascade unit for concatenating the average feature matrix and the maximum feature matrix and performing channel adjustment to obtain a channel feature matrix; a convolutional encoding unit for performing convolutional encoding on the channel feature matrix using a convolution layer of the spatial attention module to obtain a convolution feature matrix; a score matrix acquisition unit for passing the convolution feature matrix through an activation function to obtain a spatial attention score matrix; and an attention applying unit for multiplying, point-wise by position, the spatial attention score matrix with each feature matrix of the fluorescence shallow feature map along the channel dimension to obtain the spatially enhanced fluorescence shallow feature map.
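A minimal sketch of a spatial attention block of this kind, assuming a CBAM-style layout: channel-wise average and maximum pooling, concatenation, a convolution layer, a Sigmoid activation producing the score matrix, and position-wise multiplication with the input. The 7x7 kernel size is an assumption.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Pool along the channel dimension, concatenate, convolve, activate,
    then re-weight the input feature map position by position."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = torch.mean(x, dim=1, keepdim=True)      # average pooling along channels
        max_map, _ = torch.max(x, dim=1, keepdim=True)    # max pooling along channels
        scores = self.act(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * scores                                  # spatially enhanced shallow feature map
```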
With reference to the first aspect, in an implementation manner of the first aspect, the deep feature extraction unit is configured to perform, with each layer of the second convolutional neural network model in forward pass of the layers, convolution processing, pooling based on a local feature matrix, and non-linear activation on the input data, so that the last layer of the second convolutional neural network model outputs the fluorescence deep feature map.
With reference to the first aspect, in an implementation manner of the first aspect, the feature fusion unit is configured to fuse the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map using the following fusion formula to obtain the fluorescence feature map:

F = α·F_s ⊕ β·F_d

wherein F is the fluorescence feature map, F_s is the spatially enhanced fluorescence shallow feature map, F_d is the fluorescence deep feature map, ⊕ denotes adding the elements at the corresponding positions of the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map, and α and β are weighting parameters for controlling the balance between the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map in the fluorescence feature map.
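Assuming the reconstruction of the fusion formula above, the fusion step reduces to a weighted position-wise addition. The sketch below assumes the two maps share the same channel count (e.g. reconciled beforehand by a 1x1 convolution) and upsamples the deep map if the spatial sizes differ; both choices are assumptions, not details fixed by the application.

```python
import torch
import torch.nn.functional as F

def fuse_feature_maps(shallow: torch.Tensor, deep: torch.Tensor,
                      alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    """Weighted position-wise addition of the spatially enhanced shallow feature
    map and the deep feature map; alpha and beta are the weighting parameters."""
    if deep.shape[-2:] != shallow.shape[-2:]:
        # assumed: bring the deep map back to the shallow map's resolution
        deep = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                             align_corners=False)
    return alpha * shallow + beta * deep
```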
With reference to the first aspect, in an implementation manner of the first aspect, the system further includes a training module for training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder; wherein the training module includes: a training data acquisition unit for acquiring training data, the training data including training fluorescence development images of the detected sample and true values of the number of CTC cells; a training image preprocessing unit for performing image preprocessing on the training fluorescence development image to obtain a training preprocessed fluorescence development image; a training shallow feature extraction unit for passing the training preprocessed fluorescence development image through the shallow feature extractor based on the first convolutional neural network model to obtain a training fluorescence shallow feature map; a training spatial enhancement unit for passing the training fluorescence shallow feature map through the spatial attention module to obtain a training spatially enhanced fluorescence shallow feature map; a training deep feature extraction unit for passing the training spatially enhanced fluorescence shallow feature map through the deep feature extractor based on the second convolutional neural network model to obtain a training fluorescence deep feature map; a training feature fusion unit for fusing the training spatially enhanced fluorescence shallow feature map and the training fluorescence deep feature map to obtain a training fluorescence feature map; an optimizing unit for performing feature manifold surface optimization on the training fluorescence feature map to obtain an optimized fluorescence feature map; a training decoding unit for passing the optimized fluorescence feature map through the decoder to obtain a training decoded image; a training counting unit for performing image semantic segmentation on the training decoded image to obtain a segmentation result and determining the number of CTC cells based on the segmentation result; and a training unit for calculating the mean square error between the number of CTC cells and the true value of the number of CTC cells as a loss function value, and training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder through back propagation of gradient descent.
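A schematic training step consistent with the described training module is sketched below. It assumes a differentiable soft count (the summed foreground probability of the decoder output) as a surrogate for the CTC count so that the mean-square-error loss can be back-propagated; the optimizer, the equal fusion weights and the single-channel decoder output are likewise assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(batch, shallow_net, attention, deep_net, decoder, optimizer,
                  alpha: float = 0.5, beta: float = 0.5):
    """One gradient-descent step over all four trainable parts."""
    images, true_counts = batch                       # training images and ground-truth CTC counts
    shallow = shallow_net(images)                     # training fluorescence shallow feature map
    enhanced = attention(shallow)                     # training spatially enhanced shallow feature map
    deep = deep_net(enhanced)                         # training fluorescence deep feature map
    fused = alpha * enhanced + beta * deep            # weighted fusion (manifold optimization omitted here)
    logits = decoder(fused)                           # per-pixel CTC logits, shape (B, 1, H, W) assumed
    soft_count = torch.sigmoid(logits).sum(dim=(1, 2, 3))   # differentiable count surrogate (assumption)
    loss = F.mse_loss(soft_count, true_counts.float())       # mean square error against the true count
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```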
With reference to the first aspect, in an implementation manner of the first aspect, the optimizing unit is configured to perform feature manifold surface optimization on the training fluorescence feature map using the following optimization formula to obtain the optimized fluorescence feature map:

[The optimization formula is presented as an image in the original publication.]

wherein f_i is the feature value of the i-th position of the training fluorescence feature map, μ and σ are the mean and standard deviation of the set of feature values at all positions of the training fluorescence feature map, max(·) denotes the maximum function, and f_i' is the feature value of the i-th position of the optimized fluorescence feature map.
In a second aspect, there is provided a fluorescence microscopy imaging method for detection of circulating tumor cells, the method comprising the following steps: acquiring a fluorescence development image of a detected sample; performing image preprocessing on the fluorescence development image to obtain a preprocessed fluorescence development image; passing the preprocessed fluorescence development image through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; passing the fluorescence shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map; passing the spatially enhanced fluorescence shallow feature map through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; fusing the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map to obtain a fluorescence feature map; passing the fluorescence feature map through a decoder to obtain a decoded image; and performing image semantic segmentation on the decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result.
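A hedged sketch of the inference pipeline of the second aspect, chaining the modules sketched above. The callables `preprocess` (denoising, contrast and brightness adjustment) and `count_from_segmentation` (semantic segmentation plus counting) are hypothetical placeholders, not functions defined by the application.

```python
import torch

@torch.no_grad()
def detect_ctc(image: torch.Tensor, shallow_net, attention, deep_net, decoder,
               preprocess, count_from_segmentation) -> int:
    """End-to-end inference roughly following the steps of the second aspect."""
    x = preprocess(image)                       # preprocessed fluorescence development image
    shallow = shallow_net(x)                    # fluorescence shallow feature map
    enhanced = attention(shallow)               # spatially enhanced fluorescence shallow feature map
    deep = deep_net(enhanced)                   # fluorescence deep feature map
    fused = 0.5 * enhanced + 0.5 * deep         # weighted fusion; equal weights assumed
    decoded = decoder(fused)                    # decoded image / per-pixel class scores
    return count_from_segmentation(decoded)     # segmentation result -> number of CTC cells
```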
In a third aspect, there is provided a chip comprising an input-output interface, at least one processor, at least one memory and a bus, the at least one memory to store instructions, the at least one processor to invoke the instructions in the at least one memory to perform the method in the second aspect.
In a fourth aspect, a computer readable medium is provided for storing a computer program comprising instructions for performing the method of the second aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when executed by a computer, perform the method of the second aspect described above.
According to the fluorescence microscopic imaging system and method for detecting circulating tumor cells provided by the present application, a fluorescence development image of a detected sample is first obtained and subjected to image preprocessing; the preprocessed image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; the fluorescence shallow feature map is then passed through a spatial attention module and a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; finally, the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused, image semantic segmentation is performed by a decoder to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result. In this way, the number of CTC cells is accurately calculated.
Drawings
FIG. 1 is a schematic block diagram of a fluorescence microscopy imaging system for circulating tumor cell detection in accordance with an embodiment of the present application.
FIG. 2 is a schematic diagram of the structure of a training module in a fluorescence microscopy imaging system for circulating tumor cell detection in accordance with an embodiment of the present application.
FIG. 3 is a schematic flow chart of a fluorescence microscopy imaging method for detection of circulating tumor cells according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a model architecture of a fluorescence microscopy imaging method for circulating tumor cell detection in accordance with an embodiment of the present application.
Fig. 5 is a schematic flow chart of a training phase in a fluorescence microscopy imaging method for detection of circulating tumor cells according to an embodiment of the present application.
FIG. 6 is a schematic diagram of a model architecture of a training phase in a fluorescence microscopy imaging method for circulating tumor cell detection in accordance with an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
Since the embodiments of the present application involve deep neural network models based on deep learning, related terms and concepts of deep neural network models that may be involved in the embodiments of the present application are described below.
1. Deep neural network model
In the deep neural network model, the hidden layers may be convolutional layers and pooling layers. The set of weight values corresponding to a convolutional layer is referred to as a filter, also called a convolution kernel. Both the filter and the input feature values are represented as multi-dimensional matrices; correspondingly, a filter represented as a multi-dimensional matrix is also called a filter matrix, and input feature values represented as a multi-dimensional matrix are also called an input feature matrix. Of course, besides a feature matrix, a feature vector may also be input; the input feature matrix is used here only as an example. The operation of the convolutional layer is called a convolution operation, which performs an inner product operation between a part of the feature values of the input feature matrix and the weight values of the filter matrix.
The operation process of each convolutional layer in the deep neural network model can be programmed into software, and the output result of each layer of the network, namely the output feature matrix, is then obtained by running the software on a computing device. For example, the software takes the upper-left corner of the input feature matrix of each layer as a starting point and, using the size of the filter as a window, slides the window over the feature matrix, extracting the data of one window at a time for the inner product operation. After the data in the lower-right corner window of the input feature matrix has completed the inner product operation with the filter, a two-dimensional output feature matrix of that layer is obtained. The software repeats the above process until the entire output feature matrix of each layer of the network is generated.
The operation process of a convolutional layer is thus to slide a window of the filter size across the whole input image (i.e., the input feature matrix), performing at each step an inner product operation between the input feature values covered by the window and the filter, with a window sliding stride of 1. Specifically, with the upper-left corner of the input feature matrix as the starting point, the size of the filter as the window and a sliding stride of 1, the input feature values of one window are extracted from the feature matrix at a time for the inner product operation with the filter; when the data in the lower-right corner of the input feature matrix has completed the inner product operation with the filter, the two-dimensional output feature matrix of the input feature matrix is obtained.
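As a toy illustration of this sliding-window inner product (stride 1, no padding), a 4x4 input and a 3x3 filter yield a 2x2 output feature matrix:

```python
import numpy as np

def conv2d_valid(feature: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the filter over the input feature matrix with stride 1 and take
    the inner product at each window position, as described above."""
    fh, fw = feature.shape
    kh, kw = kernel.shape
    out = np.zeros((fh - kh + 1, fw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feature[i:i + kh, j:j + kw] * kernel)
    return out

# A 4x4 input with a 3x3 filter of ones gives a 2x2 output of window sums.
print(conv2d_valid(np.arange(16.0).reshape(4, 4), np.ones((3, 3))))
```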
Since it is often necessary to reduce the number of training parameters, the convolutional layer often requires a periodic introduction of a pooling layer, the only purpose of which is to reduce the spatial size of the image during image processing. The pooling layer may include an average pooling operator and/or a maximum pooling operator for sampling the input image to obtain a smaller size image. The average pooling operator may calculate pixel values in the image over a particular range to produce an average as a result of the average pooling. The max pooling operator may take the pixel with the largest value in a particular range as the result of max pooling. In addition, just as the size of the weighting matrix used in the convolutional layer should be related to the image size, the operators in the pooling layer should also be related to the image size. The size of the image output after the processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel point in the image output by the pooling layer represents the average value or the maximum value of the corresponding sub-region of the image input to the pooling layer.
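For example, 2x2 average and max pooling each halve the spatial size of a feature map, one output pixel summarizing one 2x2 sub-region:

```python
import torch
import torch.nn as nn

x = torch.arange(16.0).reshape(1, 1, 4, 4)     # one 4x4 single-channel image
avg = nn.AvgPool2d(kernel_size=2)(x)           # each output pixel = mean of a 2x2 region
mx = nn.MaxPool2d(kernel_size=2)(x)            # each output pixel = max of a 2x2 region
print(avg.shape, mx.shape)                     # both torch.Size([1, 1, 2, 2])
```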
Since the functions that actually need to be simulated in a deep neural network are nonlinear, while the preceding convolution and pooling operations can only simulate linear functions, an activation layer is further arranged after the pooling layer in order to introduce nonlinear factors into the deep neural network model and increase the characterization capability of the whole network. An activation function is arranged in the activation layer, and commonly used activation functions include the sigmoid, tanh and ReLU functions.
2. Softmax classification function
The Softmax classification function is also called the soft maximum function or normalized exponential function. It can "compress" a K-dimensional vector containing arbitrary real numbers into another K-dimensional real vector such that each element lies in the range (0, 1) and the sum of all elements is 1. The Softmax classification function is commonly used for classification problems.
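A small numerical example of this compression into the (0, 1) range with unit sum:

```python
import torch

logits = torch.tensor([2.0, 1.0, -1.0])   # arbitrary real-valued K-dimensional vector
probs = torch.softmax(logits, dim=0)
print(probs)                               # each element lies in (0, 1)
print(probs.sum())                         # the elements sum to 1
```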
Having described the relevant terms and concepts of the deep neural network model that may be involved in embodiments of the present application, the following description will be made of the basic principles of the present application for the convenience of understanding by those skilled in the art.
3. Attention (attention) mechanism
In colloquial terms, the attention mechanism focuses attention on the important points while ignoring other, unimportant factors. It is similar to the human visual attention mechanism: when facing an image, human vision rapidly scans the global image to locate the target area that needs to be focused on, namely the focus of attention, then devotes more attention resources to that area to acquire more detailed information about the target, while suppressing other useless information. The determination of the importance level may depend, among other things, on the application scenario.
In view of the above technical problems, the technical idea of the present application is to acquire a fluorescence developed image of a sample to be detected, to segment CTC cells from the fluorescence developed image using a deep learning-based image processing technique, and to accurately calculate the number of CTC cells.
Specifically, in the technical scheme of the present application, first, a fluorescence developed image of a sample to be detected is acquired. Here, the fluorescence developed image is obtained, using the principle of a fluorescence microscope, by irradiating the detected sample with ultraviolet rays so that it emits fluorescence, and then observing its shape and position under the microscope. Acquiring a fluorescence development image of the detected sample provides effective information and a basis for subsequent CTC detection and analysis.
Then, the fluorescence developed image is subjected to image preprocessing to obtain a preprocessed fluorescence developed image. Here, the use of image preprocessing can suppress unwanted distortion or enhance important image features, and specifically, can be performed using operations such as noise removal, contrast enhancement, brightness adjustment, and the like. In general, image preprocessing can improve the quality of images, reduce the information quantity, reduce the computational complexity and improve the accuracy and efficiency of subsequent feature extraction and segmentation.
Considering that the edge, texture and shape information in the preprocessed fluorescence development image is of great significance for distinguishing CTC cells from normal blood cells, in the technical scheme of the present application, the preprocessed fluorescence development image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map. That is, the shallow feature extractor based on the first convolutional neural network model is utilized to extract local shallow feature information in the preprocessed fluorescence development image.
And then, the fluorescence shallow feature map passes through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map. Here, the spatial attention module may dynamically adjust the weights of the spatial locations such that useful information is enhanced while irrelevant information is suppressed. Specifically, the spatial attention module can be used for strengthening the spatial characteristics of the data by weighting different parts of the input data. That is, the fluorescence shallow feature map is input into the spatial attention module, and the module automatically selects and intensifies the feature parts having discrimination on CTC cells and suppresses the feature parts irrelevant to classification, thereby obtaining the spatially intensified fluorescence shallow feature map. The fluorescence shallow feature map after spatial enhancement can better reflect the difference and relationship between CTC cells and other cells.
Further, the space enhanced fluorescence shallow feature map is passed through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map. The deep feature extractor based on the second convolutional neural network model performs further abstraction on the space enhanced fluorescence shallow feature map through a plurality of convolutional layers and a pooling layer, so that semantic information is enhanced.
In order to keep the detail information of the shallow features and the semantic information of the deep features, in the technical scheme of the application, the space enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused to obtain a fluorescence feature map. Thus, the fluorescence characteristic diagram has more abundant characteristic expression capability.
The fluorescence feature map is then passed through a decoder to obtain a decoded image. Specifically, the decoder can decode, by deconvolution, the feature information regressed in the fluorescence feature map regarding the position and morphology of CTC cells. Subsequently, image semantic segmentation is performed on the decoded image to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result. The image semantic segmentation learns features of the image and predicts the class of each pixel from these features. In this way, CTC cells can be distinguished from other cells or the background.
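A hedged sketch of such a deconvolution-based decoder that regresses per-pixel class scores from the fused fluorescence feature map; the number of upsampling stages, the channel widths and the two-class output are assumptions.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Transposed-convolution (deconvolution) decoder producing per-pixel class
    scores (background vs. CTC) from the fused fluorescence feature map."""
    def __init__(self, in_channels: int = 128, num_classes: int = 2):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 64, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, kernel_size=1),   # per-pixel class scores
        )

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        return self.up(fused)

# Per-pixel class prediction: argmax over the class dimension gives the segmentation mask.
# mask = Decoder()(fused).argmax(dim=1)
```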
Here, when the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused to obtain the fluorescence feature map, in order to fully utilize the image-semantic shallow features and deep features expressed by the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map, the fluorescence feature map is preferably obtained by directly cascading the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map along the channel dimension. However, when the fluorescence feature map is decoded, since it simultaneously contains image-semantic shallow features and deep features that differ in consistency, these consistency differences between the feature values of the fluorescence feature map degrade the convergence effect of the fluorescence feature map through the decoder and reduce the training speed of the model.
Thus, in the technical solution of the present application, the fluorescence feature map, denoted for example as F, is subjected to tangent-plane directed distance normalization of the feature manifold surface based on neighborhood points, which is specifically expressed as:

[The normalization formula is presented as an image in the original publication.]

wherein μ and σ are the mean and standard deviation of the set of feature values of the fluorescence feature map F, and f_i' is the feature value of the i-th position of the optimized classification feature map.
Here, the tangent-plane directed distance normalization of the feature manifold surface based on neighborhood points constructs, for each feature value of the fluorescence feature map F, a local linear tangent space based on a statistical neighborhood, orients the feature values by selecting the maximum geometric measure of the tangent vectors within the local linear tangent space, and normalizes the expression of local non-Euclidean geometric properties of points on the manifold surface based on an inner-product distance expression of the orientation vectors. In this way, the expression consistency of the feature values of the high-dimensional feature set of the fluorescence feature map F is promoted by means of geometric correction of the manifold surface, thereby improving the convergence effect of the optimized fluorescence feature map through the decoder and increasing the training speed of the model.
The application has the following technical effects:
1. a fluorescence microscopy imaging protocol for the detection of circulating tumor cells is provided, and more particularly, an intelligent detection and counting protocol for circulating tumor cells.
2. According to the scheme, the CTC cells can be effectively separated from the fluorescence development image, so that the CTC cells and other blood cells are distinguished, and false detection or omission caused by similar shapes and sizes is avoided. Meanwhile, the number of CTC cells can be accurately calculated, and valuable information is provided for early diagnosis and treatment of tumors.
Having described the basic principles of the present application, various non-limiting embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic block diagram of a fluorescence microscopy imaging system for circulating tumor cell detection in accordance with an embodiment of the present application. As shown in fig. 1, a fluorescence microscopy imaging system 100 for detection of circulating tumor cells, comprising:
a data acquisition unit 110 for acquiring a fluorescence developed image of the sample to be detected. It should be understood that a fluorescence developed image is an image of the shape and position of a sample to be inspected by irradiating it with ultraviolet rays using the principle of a fluorescence microscope, causing it to fluoresce, and then observing it under the microscope. The effective information and basis can be provided for subsequent CTC detection and analysis by acquiring a fluorescence development image of the detected sample.
The image preprocessing unit 120 is configured to perform image preprocessing on the fluorescence developed image to obtain a preprocessed fluorescence developed image. It will be appreciated that the use of image pre-processing may suppress unwanted distortion or enhance important image features, and in particular may be done using operations such as noise removal, contrast enhancement, brightness adjustment, etc. In general, image preprocessing can improve the quality of images, reduce the information quantity, reduce the computational complexity and improve the accuracy and efficiency of subsequent feature extraction and segmentation.
And the shallow feature extraction unit 130 is used for passing the preprocessed fluorescence development image through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map. It should be understood that, considering that the edge, texture and shape information in the preprocessed fluorescence development image is of great importance for distinguishing CTC cells from normal blood cells, in the technical solution of the present application, the preprocessed fluorescence development image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map. That is, the shallow feature extractor based on the first convolutional neural network model is utilized to extract local shallow feature information in the preprocessed fluorescence development image.
Optionally, in an embodiment of the present application, the shallow feature extraction unit is configured to perform, with each layer of the first convolutional neural network model in forward pass of the layers, the following operations on the input data: performing convolution processing on the input data to obtain a convolution feature map; performing pooling based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and performing non-linear activation on the pooled feature map to obtain an activated feature map; wherein the input of the first layer of the first convolutional neural network model is the preprocessed fluorescence development image, the output of the last layer is the fluorescence shallow feature map, and the number of layers of the first convolutional neural network model is greater than or equal to 1 and less than or equal to 6.
The spatial enhancement unit 140 is configured to pass the fluorescence shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map. It should be appreciated that the spatial attention module can dynamically adjust the weights of spatial locations so that useful information is enhanced while irrelevant information is suppressed. Specifically, the spatial attention module strengthens the spatial characteristics of the data by weighting different parts of the input data. That is, the fluorescence shallow feature map is input into the spatial attention module, and the module automatically selects and strengthens the feature parts that are discriminative for CTC cells while suppressing the feature parts irrelevant to classification, thereby obtaining the spatially enhanced fluorescence shallow feature map. The spatially enhanced fluorescence shallow feature map can better reflect the differences and relationships between CTC cells and other cells.
Optionally, in an embodiment of the present application, the spatial enhancement unit includes: a pooling unit for performing average pooling and max pooling along the channel dimension on the fluorescence shallow feature map, respectively, to obtain an average feature matrix and a maximum feature matrix; a cascade unit for concatenating the average feature matrix and the maximum feature matrix and performing channel adjustment to obtain a channel feature matrix; a convolutional encoding unit for performing convolutional encoding on the channel feature matrix using a convolution layer of the spatial attention module to obtain a convolution feature matrix; a score matrix acquisition unit for passing the convolution feature matrix through an activation function to obtain a spatial attention score matrix; and an attention applying unit for multiplying, point-wise by position, the spatial attention score matrix with each feature matrix of the fluorescence shallow feature map along the channel dimension to obtain the spatially enhanced fluorescence shallow feature map.
And a deep feature extraction unit 150, configured to pass the spatially enhanced fluorescence shallow feature map through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map. It should be appreciated that when a neural network model is used for feature extraction, extracting only shallow features yields limited accuracy, whereas additionally extracting deep features undoubtedly enhances the accuracy of the final result. Thus, the spatially enhanced fluorescence shallow feature map is passed through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map. The deep feature extractor based on the second convolutional neural network model further abstracts the spatially enhanced fluorescence shallow feature map through a plurality of convolutional layers and pooling layers, thereby enhancing its semantic information.
Optionally, in an embodiment of the present application, the deep feature extraction unit is configured to perform, with each layer of the second convolutional neural network model in forward pass of the layers, convolution processing, pooling based on a local feature matrix, and non-linear activation on the input data, so that the last layer of the second convolutional neural network model outputs the fluorescence deep feature map. Here, the ratio of the number of layers of the second convolutional neural network model to the number of layers of the first convolutional neural network model is 5 or more.
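A sketch pairing the two extractors so that the deep extractor has at least five times the depth of the shallow one, per the stated ratio; the concrete depths, the 128-channel width and the stride-1 pooling (used so that a deep stack keeps a usable spatial size) are all assumptions.

```python
import torch.nn as nn

def layer(cin: int, cout: int) -> nn.Sequential:
    """One layer of the deep extractor: convolution, local pooling and
    non-linear activation. Stride-1 pooling is an assumption made here."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.MaxPool2d(kernel_size=2, stride=1),
        nn.ReLU(inplace=True),
    )

shallow_depth = 3                        # assumed, within the stated 1-6 bound
deep_depth = 5 * shallow_depth           # layer-count ratio of at least 5, per the text
assert deep_depth / shallow_depth >= 5

# 128 channels assumed to match the width of the shallow extractor's output.
deep_extractor = nn.Sequential(*[layer(128, 128) for _ in range(deep_depth)])
```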
And a feature fusion unit 160, configured to fuse the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map to obtain a fluorescence feature map. It should be understood that, in order to preserve the detail information of the shallow features and the semantic information of the deep features, in the technical solution of the present application, the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused to obtain a fluorescence feature map. Thus, the fluorescence characteristic diagram has more abundant characteristic expression capability.
Optionally, in an embodiment of the present application, the feature fusion unit is configured to fuse the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map using the following fusion formula to obtain the fluorescence feature map:

F = α·F_s ⊕ β·F_d

wherein F is the fluorescence feature map, F_s is the spatially enhanced fluorescence shallow feature map, F_d is the fluorescence deep feature map, ⊕ denotes adding the elements at the corresponding positions of the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map, and α and β are weighting parameters for controlling the balance between the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map in the fluorescence feature map.
And a decoding unit 170 for passing the fluorescence feature map through a decoder to obtain a decoded image. It will be appreciated that the decoder can decode, by deconvolution, the feature information regressed in the fluorescence feature map regarding the position and morphology of CTC cells.
And a counting unit 180 for performing image semantic segmentation on the decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result. It should be appreciated that the image semantic segmentation learns features of the image and predicts the class of each pixel from these features; in this way, CTC cells can be distinguished from other cells or the background. Here, as will be understood by those skilled in the art, an image is made up of many pixels, and semantic segmentation, as the name implies, groups or segments the pixels according to the differences in the semantic meaning they express in the image.
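A sketch of one way the counting unit could derive a count from the segmentation result: take the per-pixel argmax as the CTC mask and count connected foreground components. The connected-component approach, the CTC class index and the area threshold are assumptions, not details fixed by the application.

```python
import numpy as np
from scipy import ndimage

def count_ctc_cells(class_scores: np.ndarray, min_area: int = 20) -> int:
    """class_scores: (num_classes, H, W) per-pixel scores from the decoder.
    Pixels whose argmax is the CTC class (assumed to be class 1) form the
    segmentation mask; each connected component larger than a small, assumed
    area threshold is counted as one CTC cell."""
    mask = class_scores.argmax(axis=0) == 1
    labels, num = ndimage.label(mask)
    if num == 0:
        return 0
    sizes = ndimage.sum(mask, labels, index=range(1, num + 1))
    return int(np.sum(np.asarray(sizes) >= min_area))
```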
FIG. 2 is a schematic block diagram of a training module in a fluorescence microscopy imaging system for circulating tumor cell detection in accordance with an embodiment of the present application. As shown in fig. 2, in an embodiment of the present application, the fluorescence microscopy imaging system for detecting circulating tumor cells further includes a training module 200 for training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder; wherein the training module 200 includes: a training data acquisition unit 210 for acquiring training data including training fluorescence development images of the detected sample and true values of the number of CTC cells; a training image preprocessing unit 220 for performing image preprocessing on the training fluorescence development image to obtain a training preprocessed fluorescence development image; a training shallow feature extraction unit 230 for passing the training preprocessed fluorescence development image through the shallow feature extractor based on the first convolutional neural network model to obtain a training fluorescence shallow feature map; a training spatial enhancement unit 240 for passing the training fluorescence shallow feature map through the spatial attention module to obtain a training spatially enhanced fluorescence shallow feature map; a training deep feature extraction unit 250 for passing the training spatially enhanced fluorescence shallow feature map through the deep feature extractor based on the second convolutional neural network model to obtain a training fluorescence deep feature map; a training feature fusion unit 260 for fusing the training spatially enhanced fluorescence shallow feature map and the training fluorescence deep feature map to obtain a training fluorescence feature map; an optimizing unit 270 for performing feature manifold surface optimization on the training fluorescence feature map to obtain an optimized fluorescence feature map; a training decoding unit 280 for passing the optimized fluorescence feature map through the decoder to obtain a training decoded image; a training counting unit 290 for performing image semantic segmentation on the training decoded image to obtain a segmentation result and determining the number of CTC cells based on the segmentation result; and a training unit 300 for calculating the mean square error between the number of CTC cells and the true value of the number of CTC cells as a loss function value, and training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder through back propagation of gradient descent.
When the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused to obtain the fluorescence feature map, in order to fully utilize the image-semantic shallow features and deep features expressed by the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map, the fluorescence feature map is preferably obtained by directly cascading the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map along the channel dimension. However, when the fluorescence feature map is decoded, since it simultaneously contains image-semantic shallow features and deep features that differ in consistency, these consistency differences between the feature values of the fluorescence feature map degrade the convergence effect of the fluorescence feature map through the decoder and reduce the training speed of the model. Thus, in the technical solution of the present application, the fluorescence feature map, denoted for example as F, is subjected to tangent-plane directed distance normalization of the feature manifold surface based on neighborhood points.
Optionally, in an embodiment of the present application, the optimizing unit is configured to perform feature manifold surface optimization on the training fluorescence feature map using the following optimization formula to obtain the optimized fluorescence feature map:

[The optimization formula is presented as an image in the original publication.]

wherein f_i is the feature value of the i-th position of the training fluorescence feature map, μ and σ are the mean and standard deviation of the set of feature values at all positions of the training fluorescence feature map, max(·) denotes the maximum function, and f_i' is the feature value of the i-th position of the optimized fluorescence feature map.
Here, the tangent-plane directed distance normalization of the feature manifold surface based on neighborhood points constructs, for each feature value of the fluorescence feature map F, a local linear tangent space based on a statistical neighborhood, orients the feature values by selecting the maximum geometric measure of the tangent vectors within the local linear tangent space, and normalizes the expression of local non-Euclidean geometric properties of points on the manifold surface based on an inner-product distance expression of the orientation vectors. The geometric correction of the manifold surface thus promotes the expression consistency of the feature values of the high-dimensional feature set of the fluorescence feature map F, thereby improving the convergence effect of the optimized fluorescence feature map through the decoder and increasing the training speed of the model.
In summary, the fluorescence microscopic imaging system for detecting circulating tumor cells first obtains a fluorescence development image of a detected sample and performs image preprocessing on it; the preprocessed image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; the fluorescence shallow feature map is then passed through a spatial attention module and a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; finally, the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused, image semantic segmentation is performed through a decoder to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result. In this way, the number of CTC cells is accurately calculated.
FIG. 3 is a schematic flow chart of a fluorescence microscopy imaging method for detection of circulating tumor cells according to an embodiment of the present application. As shown in fig. 3, the fluorescence microscopic imaging method for detecting the circulating tumor cells comprises the following steps: s110, acquiring a fluorescence development image of a detected sample; s120, carrying out image preprocessing on the fluorescence development image to obtain a preprocessed fluorescence development image; s130, passing the preprocessed fluorescence development image through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map; s140, the fluorescence shallow feature map passes through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map; s150, passing the space enhanced fluorescence shallow feature map through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map; s160, fusing the space enhanced fluorescence shallow layer characteristic map and the fluorescence deep layer characteristic map to obtain a fluorescence characteristic map; s170, passing the fluorescence characteristic map through a decoder to obtain a decoded image; and S180, performing image semantic segmentation on the decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result.
Fig. 4 is a schematic diagram of a model architecture of a fluorescence microscopy imaging method for circulating tumor cell detection in accordance with an embodiment of the present application. As shown in fig. 4, the input of the model architecture of the fluorescence microscopic imaging method for detecting circulating tumor cells is a fluorescence development image of the detected sample. First, image preprocessing is performed on the fluorescence development image to obtain a preprocessed fluorescence development image, and the preprocessed fluorescence development image is passed through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map. Then, the fluorescence shallow feature map is passed through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map, and the spatially enhanced fluorescence shallow feature map is passed through a deep feature extractor based on a second convolutional neural network model to obtain a fluorescence deep feature map. Next, the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are fused to obtain a fluorescence feature map, and the fluorescence feature map is passed through a decoder to obtain a decoded image. Finally, image semantic segmentation is performed on the decoded image to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result.
Fig. 5 is a schematic flow chart of a training phase in a fluorescence microscopy imaging method for detection of circulating tumor cells according to an embodiment of the present application. As shown in fig. 5, the method further includes a training phase for training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder; wherein the training phase comprises: s210, acquiring training data, wherein the training data comprises training fluorescence development images of detected samples and true values of the number of CTC cells; s220, performing image preprocessing on the training fluorescence development image to obtain a fluorescence development image after training preprocessing; s230, passing the fluorescence development image after training pretreatment through the shallow feature extractor based on the first convolutional neural network model to obtain a training fluorescence shallow feature map; s240, passing the training fluorescence shallow feature map through the spatial attention module to obtain a training spatial enhanced fluorescence shallow feature map; s250, passing the training space enhanced fluorescence shallow feature map through the deep feature extractor based on the second convolutional neural network model to obtain a training fluorescence deep feature map; s260, fusing the training space enhanced fluorescence shallow feature map and the training fluorescence deep feature map to obtain a training fluorescence feature map; s270, optimizing the characteristic manifold curved surface of the training fluorescence characteristic map to obtain an optimized fluorescence characteristic map; s280, enabling the optimized fluorescence characteristic diagram to pass through the decoder to obtain a training decoded image; s290, performing image semantic segmentation on the decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result; and S300, calculating the mean square error between the number of the CTC cells and the true value of the number of the CTC cells as a loss function value, and training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model and the decoder through the propagation direction of gradient descent.
Fig. 6 is a schematic diagram of the model architecture of the training phase in the fluorescence microscopy imaging method for circulating tumor cell detection in accordance with an embodiment of the present application. As shown in Fig. 6, the input to the model architecture of the training phase is the training fluorescence development image of the detected sample. First, image preprocessing is performed on the training fluorescence development image to obtain a training preprocessed fluorescence development image, which is passed through the shallow feature extractor based on the first convolutional neural network model to obtain a training fluorescence shallow feature map. The training fluorescence shallow feature map is then passed through the spatial attention module to obtain a training spatially enhanced fluorescence shallow feature map, which is passed through the deep feature extractor based on the second convolutional neural network model to obtain a training fluorescence deep feature map. Next, the training spatially enhanced fluorescence shallow feature map and the training fluorescence deep feature map are fused to obtain a training fluorescence feature map, and feature manifold curved-surface optimization is performed on the training fluorescence feature map to obtain an optimized fluorescence feature map. The optimized fluorescence feature map is then passed through the decoder to obtain a training decoded image, image semantic segmentation is performed on the training decoded image to obtain a segmentation result, and the number of CTC cells is determined based on the segmentation result. Finally, the mean square error between the determined number of CTC cells and the true value of the number of CTC cells is calculated as a loss function value, and the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder are trained through back propagation of gradient descent.
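As a companion to the sketch after Fig. 4, the following hypothetical training step illustrates the loss and update described above: the mean square error between the predicted number of CTC cells and its true value is minimized by gradient descent. The soft, differentiable count used here is purely an assumption introduced so that the example can back-propagate (the description does not state how the count is obtained during training), and the feature manifold curved-surface optimization step is marked but not implemented, since its formula is given only as an image.

import torch
import torch.nn as nn

def soft_count(decoded):
    # differentiable surrogate for "segment the decoded image, then count CTC cells":
    # the sum of the sigmoid mask over each image (an assumption made for this sketch)
    return torch.sigmoid(decoded).sum(dim=(1, 2, 3))

def train_step(shallow_net, attention, deep_net, decoder, optimizer, image, true_count):
    # image: training preprocessed fluorescence development image, shape (B, C, H, W)
    # true_count: true value of the number of CTC cells, float tensor of shape (B,)
    shallow = shallow_net(image)                       # training fluorescence shallow feature map
    enhanced = attention(shallow)                      # training spatially enhanced shallow feature map
    deep = deep_net(enhanced)                          # training fluorescence deep feature map
    deep_up = nn.functional.interpolate(deep, size=enhanced.shape[-2:],
                                        mode="bilinear", align_corners=False)
    fused = 0.5 * enhanced + 0.5 * deep_up             # training fluorescence feature map
    # (the claimed feature manifold curved-surface optimization would be applied to `fused` here)
    decoded = decoder(fused)                           # training decoded image
    loss = nn.functional.mse_loss(soft_count(decoded), true_count)  # mean square error loss value
    optimizer.zero_grad()
    loss.backward()                                    # back propagation of gradient descent
    optimizer.step()
    return loss.item()

A typical call would pass the four modules from the previous sketch together with an optimizer such as torch.optim.SGD built over their joint parameters.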
Optionally, in an embodiment of the present application, performing feature manifold surface optimization on the training fluorescence feature map to obtain an optimized fluorescence feature map includes: optimizing the characteristic manifold curved surface of the training fluorescence characteristic map by using the following optimization formula to obtain an optimized fluorescence characteristic map;
wherein, the optimization formula is:
Figure SMS_43
wherein f_i is the feature value of the i-th position of the training fluorescence feature map, μ and σ are the mean and the standard deviation of the set of feature values of all positions in the training fluorescence feature map, max(·) denotes the maximum-value function, and f_i' is the feature value of the i-th position of the optimized fluorescence feature map.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described fluorescence microscopy imaging method for detection of circulating tumor cells have been described in detail in the above description of the fluorescence microscopy imaging system for detection of circulating tumor cells with reference to fig. 1 to 2, and thus, repetitive descriptions thereof will be omitted.
The embodiment of the invention also provides a chip system, which comprises at least one processor; when program instructions are executed by the at least one processor, the method provided by the embodiments of the present application is implemented.
The embodiment of the invention also provides a computer storage medium, on which a computer program is stored, which when executed by a computer causes the computer to perform the method of the above-described method embodiment.
The present invention also provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiment described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.

Claims (10)

1. A fluorescence microscopy imaging system for detection of circulating tumor cells, comprising:
the data acquisition unit is used for acquiring a fluorescence development image of the detected sample;
the image preprocessing unit is used for preprocessing the fluorescence development image to obtain a preprocessed fluorescence development image;
the shallow feature extraction unit is used for enabling the preprocessed fluorescence development image to pass through a shallow feature extractor based on a first convolutional neural network model so as to obtain a fluorescence shallow feature map;
the spatial enhancement unit is used for enabling the fluorescence shallow feature map to pass through a spatial attention module to obtain a space enhanced fluorescence shallow feature map;
The deep feature extraction unit is used for enabling the space enhanced fluorescence shallow feature map to pass through a deep feature extractor based on a second convolutional neural network model so as to obtain a fluorescence deep feature map;
the feature fusion unit is used for fusing the space enhanced fluorescence shallow feature map and the fluorescence deep feature map to obtain a fluorescence feature map;
a decoding unit for passing the fluorescence characteristic map through a decoder to obtain a decoded image; and
and the counting unit is used for carrying out image semantic segmentation on the decoded image to obtain a segmentation result and determining the number of CTC cells based on the segmentation result.
2. The fluorescence microscopy imaging system for detection of circulating tumor cells of claim 1, wherein the shallow feature extraction unit is configured to perform the following operations on input data, in forward transfer of layers, using each layer of the first convolutional neural network model respectively:
carrying out convolution processing on the input data to obtain a convolution feature map;
carrying out pooling processing based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and
carrying out nonlinear activation on the pooled feature map to obtain an activated feature map;
wherein the input of the first layer of the first convolutional neural network model is the preprocessed fluorescence development image, the output of the last layer of the first convolutional neural network model is the fluorescence shallow feature map, and the number of layers of the first convolutional neural network model is greater than or equal to 1 and less than or equal to 6.
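A minimal sketch of this per-layer forward transfer is given below; the channel widths and the choice of max pooling and ReLU are hypothetical, while the depth is kept within the claimed range of 1 to 6 layers.

import torch
import torch.nn as nn

class FirstCNN(nn.Module):
    # shallow feature extractor: each layer applies convolution, pooling based on a
    # local feature matrix, and nonlinear activation in forward transfer of layers
    def __init__(self, in_ch=3, ch=32, num_layers=3):
        super().__init__()
        assert 1 <= num_layers <= 6          # claimed layer-count range
        layers = []
        for i in range(num_layers):
            layers.append(nn.ModuleDict({
                "conv": nn.Conv2d(in_ch if i == 0 else ch, ch, 3, padding=1),
                "pool": nn.MaxPool2d(kernel_size=2),   # pooling over a local feature matrix
                "act": nn.ReLU(),                      # nonlinear activation
            }))
        self.layers = nn.ModuleList(layers)

    def forward(self, x):                    # x: preprocessed fluorescence development image
        for layer in self.layers:
            x = layer["act"](layer["pool"](layer["conv"](x)))
        return x                             # fluorescence shallow feature map

feature_map = FirstCNN()(torch.rand(1, 3, 256, 256))
print(feature_map.shape)                     # torch.Size([1, 32, 32, 32])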
3. The fluorescence microscopy imaging system for detection of circulating tumor cells of claim 2, wherein the spatial enhancement unit comprises:
the pooling unit is used for carrying out average pooling and maximum pooling along the channel dimension on the fluorescence shallow feature map respectively so as to obtain an average feature matrix and a maximum feature matrix;
the cascade unit is used for carrying out cascade connection and channel adjustment on the average characteristic matrix and the maximum characteristic matrix to obtain a channel characteristic matrix;
the convolution coding unit is used for carrying out convolution coding on the channel characteristic matrix by using a convolution layer of the spatial attention module so as to obtain a convolution characteristic matrix;
the score matrix acquisition unit is used for passing the convolution feature matrix through an activation function to obtain a spatial attention score matrix;
and the attention applying unit is used for multiplying, according to position points, the spatial attention score matrix with each feature matrix of the fluorescence shallow feature map along the channel dimension to obtain the space enhanced fluorescence shallow feature map.
4. The fluorescence microscopy imaging system for detection of circulating tumor cells of claim 3, wherein the deep feature extraction unit is configured to: respectively carry out convolution processing, pooling processing based on a local feature matrix, and nonlinear activation processing on input data in forward transfer of layers by using each layer of the second convolutional neural network model, so that the last layer of the second convolutional neural network model outputs the fluorescence deep feature map.
5. The fluorescence microscopy imaging system for detection of circulating tumor cells of claim 4, wherein the feature fusion unit is configured to: fusing the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map using the following fusion formula to obtain the fluorescence feature map;
wherein, the fusion formula is:
Figure QLYQS_1
wherein F is the fluorescence feature map, F1 is the spatially enhanced fluorescence shallow feature map, F2 is the fluorescence deep feature map, "⊕" denotes that the elements at the corresponding positions of the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map are added up, and α and β are weighting parameters for controlling the balance between the spatially enhanced fluorescence shallow feature map and the fluorescence deep feature map in the fluorescence feature map.
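Taken together, these definitions describe a weighted position-wise sum. The formula itself is available only as an image, so the following is an inferred reconstruction rather than a verbatim reproduction: F = α·F1 ⊕ β·F2, i.e., each element of the fluorescence feature map is α times the corresponding element of the spatially enhanced fluorescence shallow feature map added to β times the corresponding element of the fluorescence deep feature map.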
6. The fluorescence microscopy imaging system of claim 5, further comprising a training module for training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder;
wherein, training module includes:
the training data acquisition unit is used for acquiring training data, wherein the training data comprises training fluorescence development images of the detected sample and a true value of the number of CTC cells;
the training image preprocessing unit is used for carrying out image preprocessing on the training fluorescence development image to obtain a fluorescence development image after training preprocessing;
the training shallow feature extraction unit is used for enabling the fluorescence development image after training pretreatment to pass through the shallow feature extractor based on the first convolutional neural network model so as to obtain a training fluorescence shallow feature map;
the training space strengthening unit is used for enabling the training fluorescence shallow feature map to pass through the space attention module so as to obtain a training space strengthening fluorescence shallow feature map;
The training deep feature extraction unit is used for enabling the training space enhanced fluorescence shallow feature map to pass through the deep feature extractor based on the second convolutional neural network model so as to obtain a training fluorescence deep feature map;
the training feature fusion unit is used for fusing the training space enhanced fluorescence shallow feature map and the training fluorescence deep feature map to obtain a training fluorescence feature map;
the optimizing unit is used for optimizing the characteristic manifold curved surface of the training fluorescence characteristic map to obtain an optimized fluorescence characteristic map;
the training decoding unit is used for enabling the optimized fluorescence characteristic diagram to pass through the decoder so as to obtain a training decoded image;
the training counting unit is used for carrying out image semantic segmentation on the training decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result; and
the training unit is used for calculating the mean square error between the number of the CTC cells and the true value of the number of the CTC cells as a loss function value, and training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model and the decoder through back propagation of gradient descent.
7. The fluorescence microscopy imaging system for detection of circulating tumor cells of claim 6, wherein the optimization unit is configured to: optimizing the characteristic manifold curved surface of the training fluorescence characteristic map by using the following optimization formula to obtain the optimized fluorescence characteristic map;
wherein, the optimization formula is:
Figure QLYQS_8
wherein f_i is the feature value of the i-th position of the training fluorescence feature map, μ and σ are the mean and the standard deviation of the set of feature values of all positions in the training fluorescence feature map, max(·) denotes the maximum-value function, and f_i' is the feature value of the i-th position of the optimized fluorescence feature map.
8. A fluorescence microscopy imaging method for detecting circulating tumor cells, comprising:
acquiring a fluorescence development image of a detected sample;
performing image preprocessing on the fluorescence development image to obtain a preprocessed fluorescence development image;
passing the preprocessed fluorescence development image through a shallow feature extractor based on a first convolutional neural network model to obtain a fluorescence shallow feature map;
the fluorescence shallow feature map passes through a spatial attention module to obtain a spatially enhanced fluorescence shallow feature map;
the space enhanced fluorescence shallow feature map passes through a deep feature extractor based on a second convolution neural network model to obtain a fluorescence deep feature map;
Fusing the space enhanced fluorescence shallow layer characteristic map and the fluorescence deep layer characteristic map to obtain a fluorescence characteristic map;
passing the fluorescence signature through a decoder to obtain a decoded image; and
and performing image semantic segmentation on the decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result.
9. The fluorescence microscopy imaging method for circulating tumor cell detection of claim 8, further comprising a training phase for training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model, and the decoder;
wherein the training phase comprises:
acquiring training data, wherein the training data comprises training fluorescence development images of detected samples and a true value of the number of CTC cells;
performing image preprocessing on the training fluorescence development image to obtain a fluorescence development image after training preprocessing;
the fluorescence development image after training pretreatment passes through the shallow feature extractor based on the first convolutional neural network model to obtain a training fluorescence shallow feature map;
The training fluorescence shallow feature map passes through the spatial attention module to obtain a training space enhanced fluorescence shallow feature map;
passing the training space enhanced fluorescence shallow feature map through the deep feature extractor based on the second convolutional neural network model to obtain a training fluorescence deep feature map;
fusing the training space enhanced fluorescence shallow feature map and the training fluorescence deep feature map to obtain a training fluorescence feature map;
optimizing the characteristic manifold curved surface of the training fluorescence characteristic map to obtain an optimized fluorescence characteristic map;
passing the optimized fluorescence signature through the decoder to obtain a training decoded image;
performing image semantic segmentation on the training decoded image to obtain a segmentation result, and determining the number of CTC cells based on the segmentation result; and
calculating the mean square error between the number of CTC cells and the true value of the number of CTC cells as a loss function value, and training the shallow feature extractor based on the first convolutional neural network model, the spatial attention module, the deep feature extractor based on the second convolutional neural network model and the decoder through back propagation of gradient descent.
10. The fluorescence microscopy imaging method for detection of circulating tumor cells of claim 9, wherein feature manifold surface optimization of the training fluorescence signature to obtain an optimized fluorescence signature comprises: optimizing the characteristic manifold curved surface of the training fluorescence characteristic map by using the following optimization formula to obtain the optimized fluorescence characteristic map;
wherein, the optimization formula is:
Figure QLYQS_16
wherein f_i is the feature value of the i-th position of the training fluorescence feature map, μ and σ are the mean and the standard deviation of the set of feature values of all positions in the training fluorescence feature map, max(·) denotes the maximum-value function, and f_i' is the feature value of the i-th position of the optimized fluorescence feature map.
CN202310581412.8A 2023-05-23 2023-05-23 Fluorescence microscopic imaging system and method for detecting circulating tumor cells Active CN116363123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310581412.8A CN116363123B (en) 2023-05-23 2023-05-23 Fluorescence microscopic imaging system and method for detecting circulating tumor cells

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310581412.8A CN116363123B (en) 2023-05-23 2023-05-23 Fluorescence microscopic imaging system and method for detecting circulating tumor cells

Publications (2)

Publication Number Publication Date
CN116363123A true CN116363123A (en) 2023-06-30
CN116363123B CN116363123B (en) 2023-12-22

Family

ID=86922509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310581412.8A Active CN116363123B (en) 2023-05-23 2023-05-23 Fluorescence microscopic imaging system and method for detecting circulating tumor cells

Country Status (1)

Country Link
CN (1) CN116363123B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116586924A (en) * 2023-07-17 2023-08-15 浙江一益医疗器械有限公司 Stainless steel needle tube with needle tip five-inclined-plane structure and preparation process thereof
CN116612472A (en) * 2023-07-21 2023-08-18 北京航空航天大学杭州创新研究院 Single-molecule immune array analyzer based on image and method thereof
CN116630313A (en) * 2023-07-21 2023-08-22 北京航空航天大学杭州创新研究院 Fluorescence imaging detection system and method thereof
CN117392134A (en) * 2023-12-12 2024-01-12 苏州矩度电子科技有限公司 On-line visual detection system for high-speed dispensing
CN117475241A (en) * 2023-12-27 2024-01-30 山西省水利建筑工程局集团有限公司 Geological mutation detection system and method for tunnel excavation of cantilever type heading machine

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107748256A (en) * 2017-10-11 2018-03-02 上海医盈网络科技有限公司 A kind of liquid biopsy detection method of circulating tumor cell
CN109886273A (en) * 2019-02-26 2019-06-14 四川大学华西医院 A kind of CMR classification of image segmentation system
US20230036359A1 (en) * 2020-01-07 2023-02-02 Raycan Technology Co., Ltd. (Suzhou) Image reconstruction method, device,equipment, system, and computer-readable storage medium
US11222217B1 (en) * 2020-08-14 2022-01-11 Tsinghua University Detection method using fusion network based on attention mechanism, and terminal device
WO2022257408A1 (en) * 2021-06-10 2022-12-15 南京邮电大学 Medical image segmentation method based on u-shaped network
CN113688826A (en) * 2021-07-05 2021-11-23 北京工业大学 Pollen image detection method and system based on feature fusion
WO2023070447A1 (en) * 2021-10-28 2023-05-04 京东方科技集团股份有限公司 Model training method, image processing method, computing processing device, and non-transitory computer readable medium
CN115868923A (en) * 2022-04-21 2023-03-31 华中科技大学 Fluorescence molecule tomography method and system based on expanded cyclic neural network
CN115619320A (en) * 2022-11-09 2023-01-17 西安巨子生物基因技术股份有限公司 Product external packing informatization error-preventing system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dawei Yang, et al.: "Image semantic segmentation with hierarchical feature fusion based on deep neural network", Connection Science, pages 158 *
Zhou Peng et al.: "Mars terrain segmentation algorithm based on improved DeepLab-v3+", Aerospace Control and Application *
Jiang Xinxin: "Research on multi-object tracking combined with instance segmentation and *** implementation", China Master's Theses Full-text Database, Information Science and Technology, no. 1, pages 4 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116586924A (en) * 2023-07-17 2023-08-15 浙江一益医疗器械有限公司 Stainless steel needle tube with needle tip five-inclined-plane structure and preparation process thereof
CN116586924B (en) * 2023-07-17 2024-02-27 浙江一益医疗器械有限公司 Stainless steel needle tube with needle tip five-inclined-plane structure and preparation process thereof
CN116612472A (en) * 2023-07-21 2023-08-18 北京航空航天大学杭州创新研究院 Single-molecule immune array analyzer based on image and method thereof
CN116630313A (en) * 2023-07-21 2023-08-22 北京航空航天大学杭州创新研究院 Fluorescence imaging detection system and method thereof
CN116612472B (en) * 2023-07-21 2023-09-19 北京航空航天大学杭州创新研究院 Single-molecule immune array analyzer based on image and method thereof
CN116630313B (en) * 2023-07-21 2023-09-26 北京航空航天大学杭州创新研究院 Fluorescence imaging detection system and method thereof
CN117392134A (en) * 2023-12-12 2024-01-12 苏州矩度电子科技有限公司 On-line visual detection system for high-speed dispensing
CN117392134B (en) * 2023-12-12 2024-02-27 苏州矩度电子科技有限公司 On-line visual detection system for high-speed dispensing
CN117475241A (en) * 2023-12-27 2024-01-30 山西省水利建筑工程局集团有限公司 Geological mutation detection system and method for tunnel excavation of cantilever type heading machine

Also Published As

Publication number Publication date
CN116363123B (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN116363123B (en) Fluorescence microscopic imaging system and method for detecting circulating tumor cells
CN116189179B (en) Circulating tumor cell scanning analysis equipment
JP7273215B2 (en) Automated assay evaluation and normalization for image processing
CN113344849B (en) Microemulsion head detection system based on YOLOv5
US10621412B2 (en) Dot detection, color classification of dots and counting of color classified dots
CN111488921B (en) Intelligent analysis system and method for panoramic digital pathological image
CN110853011B (en) Method for constructing convolutional neural network model for pulmonary nodule detection
CN109102498B (en) Method for segmenting cluster type cell nucleus in cervical smear image
CN113393443B (en) HE pathological image cell nucleus segmentation method and system
CN112926652B (en) Fish fine granularity image recognition method based on deep learning
CN115410050A (en) Tumor cell detection equipment based on machine vision and method thereof
CN117015796A (en) Method for processing tissue images and system for processing tissue images
CN112990015B (en) Automatic identification method and device for lesion cells and electronic equipment
CN116287138B (en) FISH-based cell detection system and method thereof
CN111680575B (en) Human epithelial cell staining classification device, equipment and storage medium
CN114581698A (en) Target classification method based on space cross attention mechanism feature fusion
Abbasi-Sureshjani et al. Molecular subtype prediction for breast cancer using H&E specialized backbone
CN113536896B (en) Insulator defect detection method and device based on improved Faster RCNN and storage medium
CN112703531A (en) Generating annotation data for tissue images
CN112991281B (en) Visual detection method, system, electronic equipment and medium
CN117746077A (en) Chip defect detection method, device, equipment and storage medium
CN116843974A (en) Breast cancer pathological image classification method based on residual neural network
CN113837255B (en) Method, apparatus and medium for predicting cell-based antibody karyotype class
KR20230063147A (en) Efficient Lightweight CNN and Ensemble Machine Learning Classification of Prostate Tissue Using Multilevel Feature Analysis Method and System
Princy Magdaline et al. Detection of lung cancer using novel attention gate residual U-Net model and KNN classifier from computer tomography images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant