CN114332854A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium Download PDF

Info

Publication number
CN114332854A
CN114332854A (application CN202111547404.9A)
Authority
CN
China
Prior art keywords
image
target
cell nucleus
medical
target cell
Prior art date
Legal status
Pending
Application number
CN202111547404.9A
Other languages
Chinese (zh)
Inventor
张闻华
张军
韩骁
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111547404.9A
Publication of CN114332854A

Landscapes

  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image processing method, apparatus, device, and storage medium in the technical field of artificial intelligence. The method includes: acquiring an identification image of a target cell nucleus based on a medical microscopic image, the identification image being a multi-channel image composed of a target image block and a mask image; and performing cell type identification on the identification image to obtain the cell type of the cell corresponding to the target cell nucleus. The scheme can extract features related to cell classification more accurately, thereby improving the accuracy of classifying cells in medical microscopic images.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
With the continuous development of the application of Artificial Intelligence (AI) in the medical field, cells in medical microscopic images can be classified by AI to assist medical personnel in making medical-related decisions.
In the related art, a microscopic image containing cells may be processed by a deep neural network to obtain the cell types of the cells in it; for example, a developer may pre-train an image recognition model on a model training device, and in the application phase a computer device inputs the microscopic image containing the cells into the image recognition model, which outputs the cell types of the cells.
However, the image recognition model in the related art has a poor ability to extract features related to cell classification from an input microscopic image, resulting in low accuracy in classification of cells.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, image processing equipment and a storage medium, which can improve the accuracy of classifying cells in a microscopic image.
In one aspect, an image processing method is provided, and the method includes:
acquiring a medical microscopic image, the medical microscopic image comprising at least one cell nucleus;
acquiring an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscopic image; the identification image is a multi-channel image composed of a target image block and a mask image; the target image block is an image block containing the target cell nucleus in the medical microscopic image; the mask image is used for indicating the position of the target cell nucleus in the target image block;
identifying the cell type based on the identification image to obtain the classification probability distribution of the target cell nucleus; the classification probability distribution is used for indicating the probability that the cell corresponding to the target cell nucleus belongs to various cell types;
and acquiring the cell type of the cell corresponding to the target cell nucleus based on the classification probability distribution of the target cell nucleus.
In still another aspect, there is provided an image processing apparatus, the apparatus including:
a microscopic image acquisition module for acquiring a medical microscopic image, the medical microscopic image including at least one cell nucleus;
the identification image acquisition module is used for acquiring an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscopic image; the identification image is a multi-channel image composed of a target image block and a mask image; the target image block is an image block containing the target cell nucleus in the medical microscopic image; the mask image is used for indicating the position of the target cell nucleus in the target image block;
a probability distribution obtaining module, configured to perform cell type identification based on the identification image, and obtain a classification probability distribution of the target cell nucleus; the classification probability distribution is used for indicating the probability that the cell corresponding to the target cell nucleus belongs to various cell types;
and the cell type acquisition module is used for acquiring the cell type of the cell corresponding to the target cell nucleus based on the classification probability distribution of the target cell nucleus.
In one possible implementation, the identification image acquisition module is configured to,
carrying out cell nucleus position identification on the medical microscopic image to obtain position information of the target cell nucleus in the medical microscopic image;
intercepting the target image block from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image;
generating the mask image based on the position information of the target cell nucleus in the target image block;
combining the target image block and the mask image to generate the identification image.
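The combining step above can be sketched in a few lines of plain Python; this is a minimal illustration of our own (the function name and the channel-first layout are assumptions, not taken from the patent):

```python
def combine_patch_and_mask(patch_rgb, mask):
    """Stack a 3-channel target image block and a single-channel mask
    into one 4-channel identification image (channel-first layout)."""
    # patch_rgb: [R, G, B], each an H x W grid of color values
    # mask: one H x W grid (1 inside the target nucleus, 0 elsewhere)
    assert all(len(ch) == len(mask) for ch in patch_rgb)
    return patch_rgb + [mask]  # channels: R, G, B, mask

# A 2x2 toy image block and its mask:
patch = [[[10, 20], [30, 40]],     # R channel
         [[50, 60], [70, 80]],     # G channel
         [[90, 100], [110, 120]]]  # B channel
mask = [[1, 0], [0, 0]]
ident = combine_patch_and_mask(patch, mask)  # a 4-channel image
```

The point of the single stacked input is that one tensor carries both the appearance (RGB) and the position cue (mask), matching the single-input-port design discussed later in the description.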
In one possible implementation, the identification image acquisition module is configured to,
intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image; the target cell nucleus is located at different positions in different target image blocks;
generating at least two mask images based on the position information of the target cell nucleus in at least two target image blocks;
and combining each of the at least two target image blocks with its corresponding mask image to generate at least two identification images.
In a possible implementation manner, the identification image obtaining module is configured to randomly intercept at least two target image blocks from the medical microscope image based on the position information of the target cell nucleus in the medical microscope image.
In a possible implementation manner, the identification image obtaining module is configured to intercept at least two target image blocks from the medical microscope image based on the position information of the target cell nucleus in the medical microscope image and at least two position restriction conditions;
wherein the position limitation condition is used for limiting the distance between the position of the target cell nucleus in the target image block and the edge of the target image block.
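One way to realize such a position limitation condition is to constrain the top-left corner of the crop window. The sketch below is an assumption of ours (the patent fixes no concrete formula, and all names are hypothetical): the nucleus centre must stay at least `margin` pixels from every edge of the patch.

```python
import random

def crop_box(cx, cy, patch, margin, img_w, img_h):
    """Pick a patch window of side `patch` so that the nucleus centre
    (cx, cy) stays at least `margin` pixels from every patch edge,
    while the window stays inside the image."""
    def pick(c, limit):
        lo = max(0, c + margin - patch)      # keep margin to the far edge
        hi = min(limit - patch, c - margin)  # keep margin to the near edge
        assert lo <= hi, "condition infeasible for this nucleus"
        return random.randint(lo, hi)
    x0, y0 = pick(cx, img_w), pick(cy, img_h)
    return x0, y0, x0 + patch, y0 + patch

random.seed(0)
# Two different limitation conditions give two patches of the same nucleus:
box_a = crop_box(cx=100, cy=120, patch=64, margin=16, img_w=512, img_h=512)
box_b = crop_box(cx=100, cy=120, patch=64, margin=24, img_w=512, img_h=512)
```

Varying `margin` (or sampling `x0`, `y0` randomly within the feasible range) yields image blocks in which the nucleus sits at different positions, as required above.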
In one possible implementation, the cell type acquisition module is configured to,
fusing the classification probability distribution of at least two identification images to obtain fused classification probability distribution;
and acquiring the cell type of the cell corresponding to the target cell nucleus based on the fusion classification probability distribution.
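Fusing the per-image distributions can be as simple as averaging them and taking the argmax. The following is a minimal sketch; averaging is one plausible fusion rule, not one the patent mandates:

```python
def fuse_distributions(dists):
    """Average per-patch classification probability distributions and
    return the fused distribution plus the most probable class index."""
    n, k = len(dists), len(dists[0])
    fused = [sum(d[i] for d in dists) / n for i in range(k)]
    return fused, max(range(k), key=fused.__getitem__)

# Two identification images of the same nucleus, three candidate cell types:
fused, cell_type = fuse_distributions([[0.6, 0.3, 0.1],
                                       [0.4, 0.5, 0.1]])
# fused is approximately [0.5, 0.4, 0.1]; cell_type is 0
```

Because each input is a probability distribution, the average is again a distribution, and the fused result is less sensitive to where the nucleus happened to land in any single patch.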
In yet another aspect, a computer device is provided, which includes a processor and a memory, where at least one computer instruction is stored, and the at least one computer instruction is loaded and executed by the processor to implement the image processing method.
In yet another aspect, a computer-readable storage medium is provided, in which at least one computer instruction is stored, the at least one computer instruction being loaded and executed by a processor to implement the image processing method described above.
In yet another aspect, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image processing method.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
for a target cell nucleus to be classified in the medical microscopic image, acquiring a multi-channel identification image which is composed of a target image block containing the target cell nucleus in the medical microscopic image and a mask image used for indicating the target cell nucleus in the target image block, and identifying the multi-channel identification image to obtain a cell type to which a cell corresponding to the target cell nucleus belongs; in the above process, the input information for identifying the cell type includes not only the image block in the medical microscopic image but also the mask image indicating the position of the target cell nucleus, so that the image identification model can distinguish the target cell nucleus and the background of the target cell nucleus from the input data, thereby more accurately extracting the features related to the cell classification, and further improving the accuracy of classifying the cells in the medical microscopic image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a system configuration diagram of a cell sorting system according to each embodiment of the present application;
FIG. 2 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 3 is a flow diagram of a cell identification scheme according to an exemplary embodiment;
FIG. 4 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 5 is a schematic view of HE staining according to the embodiment of FIG. 4;
FIG. 6 is a diagram illustrating the result of the segmentation of the cell nucleus according to the embodiment shown in FIG. 4;
FIG. 7 is a diagram of the main algorithmic flow of the scheme shown in the present application;
FIG. 8 is a flow diagram of a cell identification scheme according to an example embodiment;
FIG. 9 is a schematic diagram illustrating cell sorting results according to an exemplary embodiment;
fig. 10 is a block diagram showing a configuration of an image processing apparatus according to an exemplary embodiment;
FIG. 11 is a block diagram illustrating a computer device in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1, a system configuration diagram of a cell sorting system according to various embodiments of the present application is shown. As shown in fig. 1, the system may include a medical microscopic image acquisition apparatus 120, a terminal 140, and a server 160; optionally, the system may further include a database 180.
The medical microscopic image acquiring device 120 may be a medical device for acquiring medical microscopic images, such as a medical microscope. Accordingly, the medical microscopic image may be, for example, a color microscope image.
The medical microscopic image collecting apparatus 120 may include an image output interface, such as a Universal Serial Bus (USB) interface, a High Definition Multimedia Interface (HDMI), an Ethernet interface, or the like; alternatively, the image output interface may be a wireless interface, such as a Wireless Local Area Network (WLAN) interface, a Bluetooth interface, or the like.
Accordingly, depending on the type of the image output interface, an operator can export the medical microscope image in various ways; for example, the medical microscope image may be imported to the terminal 140 over a wired or short-range wireless connection, or imported to the terminal 140 or the server 160 over a local area network or the Internet.
The terminal 140 may be a terminal device with certain processing capability and interface display function, for example, the terminal 140 may be a mobile phone, a tablet computer, an e-book reader, smart glasses, a laptop computer, a desktop computer, and the like.
Terminals 140 may include terminals used by developers, as well as terminals used by medical personnel.
When the terminal 140 is implemented as a terminal used by a developer, the developer can develop a machine learning model for classifying cells in the medical microscopic image through the terminal 140 and deploy the machine learning model into the server 160 or a terminal used by the medical staff.
When the terminal 140 is implemented as a terminal used by medical staff, an application program for classifying cells in medical microscopic images and presenting the classification result may be installed in the terminal 140. After the terminal 140 acquires the medical microscopic image collected by the medical microscopic image acquisition device 120, it classifies and identifies the image through the application program, obtains a classification result, and presents the result so that a doctor can perform operations such as pathological diagnosis.
In the system shown in fig. 1, the terminal 140 and the medical microscopic image acquisition device 120 are physically separate physical devices. Optionally, in another possible implementation manner, when the terminal 140 is implemented as a terminal used by a medical staff, the terminal 140 and the medical microscopic image collecting device 120 may also be integrated into a single entity device; for example, the terminal 140 may be a terminal device having a medical microscopic image acquisition function.
The server 160 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The server 160 may be a server that provides background services for the application program installed in the terminal 140; the background server may perform version management of the application program, perform background classification processing on medical microscopic images acquired by the application program and return the processing results, perform background training of machine learning models developed by developers, and the like.
The database 180 may be a Redis database, or may be another type of database. The database 180 is used for storing various types of data.
Optionally, the terminal 140 and the server 160 are connected via a communication network. Optionally, the medical microscopic image collecting apparatus 120 is connected to the server 160 through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the system may further include a management device (not shown in fig. 1), which is connected to the server 160 through a communication network. Optionally, the communication network is a wired network or a wireless network.
FIG. 2 is a flow diagram illustrating an image processing method according to an exemplary embodiment. The method may be performed by a computer device, for example, the computer device may be a server, or the computer device may also be a terminal, or the computer device may include a server and a terminal, wherein the server may be the server 160 in the embodiment shown in fig. 1, and the terminal may be the terminal 140 used by medical staff in the embodiment shown in fig. 1. The computer device may be implemented as a classification device for classifying cells in a medical microscopic image. As shown in fig. 2, the image processing method may include the following steps.
Step 201, a medical microscopic image is acquired, wherein the medical microscopic image comprises at least one cell nucleus.
In the embodiment of the present application, the medical microscopic image may be an image obtained by image-capturing stained tissue under a microscope.
For example, after collecting a cell tissue sample, medical staff may stain it so that different structures in the sample, such as the cell nuclei and the cytoplasm, take on different colors and can thus be distinguished.
Step 202, acquiring an identification image of a target cell nucleus in at least one cell nucleus based on the medical microscopic image; the identification image is a multi-channel image consisting of a target image block and a mask image; the target image block is an image block containing target cell nuclei in the medical microscopic image; the mask image is used to indicate the location of the target cell nucleus in the target image block.
In an embodiment of the present application, the target cell nucleus is one of at least one cell nucleus included in the medical microscopic image.
The identification image is provided with a plurality of image channels which respectively correspond to the target image block and the mask image.
For example, the target image block is a three-channel image and the mask image is a single-channel image, with each image channel corresponding to one color. In this case the identification image may be a four-channel image, in which the pixel value of each color channel is the color value of the corresponding color at that pixel point in the target image block.
The mask image may be a binary image. For example, in the binarized image, the pixel value corresponding to the position of the target cell nucleus may be 1, and the pixel values other than the position corresponding to the target cell nucleus may be 0. Alternatively, the pixel values in the binarized image may be other values than 0 and 1, as long as it is ensured that the pixel values at the positions of the target cell nuclei are different from the pixel values at the positions other than the target cell nuclei.
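A binarized mask as described above can be built directly from the pixel coordinates of the target nucleus. This is a toy sketch; the function name and the list-of-lists representation are our own:

```python
def nucleus_mask(h, w, nucleus_pixels):
    """Build a binarized mask: 1 at pixels belonging to the target
    nucleus, 0 everywhere else (any two distinct values would do,
    per the passage above)."""
    m = [[0] * w for _ in range(h)]
    for y, x in nucleus_pixels:
        m[y][x] = 1
    return m

mask = nucleus_mask(3, 3, [(1, 1), (1, 2)])
# mask == [[0, 0, 0], [0, 1, 1], [0, 0, 0]]
```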
Step 203, identifying the cell type based on the identification image to obtain the classification probability distribution of the target cell nucleus; the classification probability distribution is used to indicate the probability that the cell corresponding to the target nucleus belongs to various cell types.
In the embodiment of the application, the computer device may perform cell type recognition on the recognition image through a pre-trained image recognition model to output the classification probability distribution of the target cell nucleus. The input of the image recognition model is a multi-channel image (such as a four-channel image), and the image recognition model performs feature extraction and classification by taking the multi-channel image as a whole.
And 204, acquiring the cell type of the cell corresponding to the target cell nucleus based on the classification probability distribution of the target cell nucleus.
For each cell nucleus in the medical microscopic image, the computer device may perform the above steps 201 to 204 with each cell nucleus as a target cell nucleus, respectively, to obtain a cell type to which a cell of each cell nucleus in the medical microscopic image belongs.
Optionally, the computer device may further display cell types to which the respective cells of the respective cell nuclei belong, corresponding to the medical microscopic image. For example, in one exemplary approach, the computer device may label individual nuclei with different colored boxes based on the medical microscopy images.
To sum up, in the scheme shown in the embodiment of the present application, for a target cell nucleus to be classified in a medical microscopic image, a multi-channel identification image composed of a target image block including the target cell nucleus in the medical microscopic image and a mask image for indicating the target cell nucleus in the target image block is obtained, and the multi-channel identification image is identified to obtain a cell type to which a cell corresponding to the target cell nucleus belongs; in the above process, the input information for identifying the cell type includes not only the image block in the medical microscopic image but also the mask image indicating the position of the target cell nucleus, so that the image identification model can distinguish the target cell nucleus and the background of the target cell nucleus from the input data, thereby more accurately extracting the features related to the cell classification, and further improving the accuracy of classifying the cells in the medical microscopic image.
In addition, in the scheme shown in the embodiment of the application, the position of the target cell nucleus in the target image block is indicated by adding a mask image alongside the target image block; compared with schemes that supply the nucleus position through a separate input, the model in this scheme needs only a single input port. This keeps the model structure simple and ensures training and classification efficiency while improving classification accuracy.
The solution in the embodiment shown in fig. 2 described above in the present application may be implemented based on AI, for example, the step of performing cell type recognition on the recognition image may be performed by an image recognition model trained based on AI technology.
AI is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making. The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
The scheme in the embodiment shown in fig. 2 described above can be applied to a medical cloud. The medical cloud is a medical health service cloud platform established by applying cloud computing, together with new technologies such as mobile technology, multimedia, wireless communication, big data, and the Internet of Things, in combination with medical technology, so as to realize the sharing of medical resources and the expansion of medical services. By incorporating cloud computing technology, the medical cloud improves the efficiency of medical institutions and makes it more convenient for residents to seek medical care. Existing hospital services such as appointment registration, electronic medical records, and medical insurance are all products combining cloud computing with the medical field, and the medical cloud also has the advantages of data security, information sharing, dynamic expansion, and overall layout.
Referring to fig. 3, a flow diagram of a medical cloud-based cell identification scheme is shown in accordance with an exemplary embodiment of the present application. As shown in fig. 3, the medical cloud-based cell identification scheme may include the following steps:
and S1, after the medical staff collects the medical microscopic image of the cell tissue based on the microscope 31, the medical microscopic image is stored in the cloud.
S2, during the cell classification process, the computer device extracts the medical microscopy image 32 from the cloud.
S3, the computer device acquires the identification image 33 of the target cell nucleus (including the target image patch in the medical microscope image 32 and the mask image) from the medical microscope image 32.
S4, the computer device inputs the recognition image 33 into the image recognition model 34.
S5, the image recognition model 34 outputs the classification probability distribution 35.
S6, the computer device determines the cell type to which the cell of the target cell nucleus belongs from the classification probability distribution 35, and presents the recognition result image 36 based on the recognized cell type.
The target cell nucleus in the recognition result image 36 may be outlined with a line in a color corresponding to its cell type.
FIG. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment. The method may be performed by a computer device, for example, the computer device may be a server, or the computer device may also be a terminal, or the computer device may include a server and a terminal, wherein the server may be the server 160 in the embodiment shown in fig. 1, and the terminal may be the terminal 140 used by medical staff in the embodiment shown in fig. 1. The computer device may be implemented as a classification device for classifying cells in a medical microscopic image. As shown in fig. 4, the image processing method may include the following steps.
Step 401, a medical microscopic image is acquired, the medical microscopic image including at least one cell nucleus.
In one possible implementation, the computer device may acquire the medical microscopic image from a cloud.
In another possible implementation manner, the computer device can also receive the medical microscopic image transmitted by the medical microscopic image acquisition device.
In another possible implementation, when the computer device is a medical microscopic image acquisition device, the computer device may directly acquire the medical microscopic image. For example, when the computer device is a medical device integrated with a microscope, the computer device may directly acquire the medical microscope image under the microscope field of view through the integrated microscope.
In one possible implementation, the medical microscopic image may be a microscopic image of a stained section of tissue.
For example, the above-mentioned tissue-stained sections may include Hematoxylin-Eosin (HE) stained sections, ThinPrep Cytologic Test (TCT) sections, Immunohistochemical (IHC) sections, and the like.
For example, HE staining is one of the most common staining methods in microscopic medical analysis and uses two stains: hematoxylin stains basophilic structures, such as cell nuclei, bluish purple, while eosin stains most eosinophilic structures, such as cytoplasm, pink.
Please refer to fig. 5, which shows a schematic view of HE staining according to an embodiment of the present application. As shown in fig. 5, the position of the dot 51 is a cell nucleus position, stained bluish purple, while the other locations in the dashed box 52, apart from the bluish-purple nuclei, are cytoplasm and stained pink.
After the medical microscopic image is acquired, the computer device may acquire an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscopic image. The process of acquiring the identification image may refer to the subsequent steps 402 to 405.
Step 402, performing nucleus position identification on the medical microscopic image to obtain position information of the target nucleus in the medical microscopic image.
In the scheme shown in the embodiment of the present application, the cell nucleus position identification on the medical microscopic image, which yields the position information of the target cell nucleus, may be performed with a deep learning method. For example, the computer device may perform feature extraction and cell nucleus segmentation on the input medical microscopic image through a pre-trained cell nucleus detection model, so as to obtain the position of each cell nucleus in the medical microscopic image.
For example, taking the above-mentioned cell nucleus detection model as a model of the Hover-Net architecture as an example, the computer device inputs an RGB (Red, Green, Blue) image (i.e., a medical microscopic image) into the cell nucleus detection model, and the model outputs, for each pixel point, the horizontal and vertical offsets from its predicted centroid, together with a binary judgment of whether the pixel point belongs to a cell nucleus. The computer device combines the two outputs to realize segmentation and detection of the cell nuclei.
For example, please refer to fig. 6, which shows a schematic diagram of a result of cell nucleus segmentation according to an embodiment of the present application. As shown in fig. 6, the computer device inputs the medical microscopic image 61 into the cell nucleus detection model 62, and based on the output of the cell nucleus detection model 62, a medical microscopic image 63 in which each cell nucleus is labeled can be obtained. Optionally, in fig. 6, the wire frames in the medical microscopic image 63 are the labeled results of cell detection and segmentation, and different nuclei may correspond to different colors; for example, each nucleus may be assigned a random color to distinguish the nuclei of different cells.
In an embodiment of the present application, the input and output of the cell nucleus detection model are as follows:
The input is an RGB image, and the output is a mask I_mask of all detected cell nuclei.
The length and width of the mask are consistent with those of the input image; the value i of each pixel in the mask indicates that the pixel belongs to the i-th cell nucleus, and i = 0 indicates that the pixel belongs to the background.
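The instance-mask format described above can be illustrated with a short sketch (the array values and the helper name `binary_mask_for` are illustrative, not part of the patent):

```python
import numpy as np

# Illustrative instance mask: same height/width as the input image;
# pixel value i means "belongs to the i-th nucleus", 0 means background.
i_mask = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 0],
], dtype=np.int32)

def binary_mask_for(instance_mask, i):
    """Single-channel binary mask of the i-th detected nucleus."""
    return (instance_mask == i).astype(np.uint8)

num_nuclei = int(i_mask.max())            # number of detected nuclei
nucleus_2 = binary_mask_for(i_mask, 2)    # mask of the second nucleus
```

A per-nucleus binary mask extracted this way is what the later steps superimpose on the cropped image block.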
The architecture of the cell nucleus detection model is not limited to Hover-Net; for example, the architecture may also be a Mask Region-based Convolutional Neural Network (Mask R-CNN) architecture or the like.
The cell nucleus detection model can be trained through medical microscopic image samples marked with cell nucleus positions in advance. For example, in the training process, the model training device may input the medical microscopic image sample into the cell nucleus detection model to obtain the position information of each cell nucleus predicted by the cell nucleus detection model. Then, the model training device calculates a loss function value according to the difference between the position information of each cell nucleus predicted by the cell nucleus detection model and the position of each marked cell nucleus, and adjusts the parameters in the cell nucleus detection model based on the loss function value. The model training device may iteratively perform the above process until the cell nucleus detection model converges.
Step 403, intercepting a target image block from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image.
In the embodiment of the application, the computer device may intercept a target image block containing the target cell nucleus from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image and according to a preset image block size.
When classifying cells, it is necessary to consider both the morphological characteristics of the cells themselves and the positions of the cells. Since the environment around the cell nucleus (the shapes of other cells, the foreground-background color difference, the texture of the background, and so on) has a great influence on the classification of the cell nucleus, in the embodiment of the present application an image block size (e.g., 224 × 224 pixels) may be preset in the computer device, and for each cell nucleus to be classified in the medical microscopic image, an image block of size 224 × 224 containing the corresponding cell nucleus is cropped out and used as part of the model input in the subsequent classification process.
In the embodiment of the present application, the image block size of 224 × 224 is merely an example; the computer device may also set a smaller image block size to crop a smaller background environment and thereby emphasize the cell itself and its immediate background, or set a larger image block size to crop a larger background and thereby emphasize the environment in which the cell is located.
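The cropping step above can be sketched as follows; a minimal illustration with a toy 16 × 16 image and an 8 × 8 patch standing in for the 224 × 224 size, where shifting the window inward at image borders is an assumed design choice, not specified by the patent:

```python
import numpy as np

def crop_patch(image, center_rc, size=8):
    """Crop a size x size patch around center (row, col). At image borders
    the window is shifted inward so the patch always stays inside the image
    (an assumed design choice). Here size=8 stands in for 224."""
    h, w = image.shape[:2]
    r, c = center_rc
    top = min(max(r - size // 2, 0), h - size)
    left = min(max(c - size // 2, 0), w - size)
    return image[top:top + size, left:left + size]

img = np.arange(16 * 16 * 3).reshape(16, 16, 3)  # toy RGB image
patch = crop_patch(img, (2, 15))  # nucleus near the top-right border
```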
In a possible implementation manner, the intercepting a target image block from a medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image includes:
intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image; the position of the target cell nucleus differs among the different target image blocks.
In the embodiment of the present application, when segmenting the target image block, the computer device may not limit the specific location of the cell in the background environment, that is, the computer device may obtain a plurality of different target image blocks including the target cell nucleus as a data enhancement mode.
In one possible implementation, the intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image includes:
at least two target image blocks are randomly intercepted from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image.
In the embodiment of the present application, the computer device may intercept at least two target image blocks from the medical microscopic image by means of random interception.
In one possible implementation, the intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image includes:
intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image and at least two position limiting conditions;
the position limiting condition is used for limiting the distance between the position of the target cell nucleus in the target image block and the edge of the target image block.
Alternatively, the position limiting condition may include at least one of: a condition defining a magnitude relation between the distances from the position of the target cell nucleus in the target image block to the upper and lower edges of the target image block, and a condition defining a magnitude relation between the distances from the position of the target cell nucleus in the target image block to the left and right edges of the target image block.
For example, the position limitation condition may limit the distance between the position of the target cell nucleus in the target image block and the upper edge of the target image block to be greater than or less than the distance between the position of the target cell nucleus in the target image block and the lower edge of the target image block. Alternatively, the position limitation condition may limit the distance between the position of the target cell nucleus in the target image block and the left edge of the target image block to be greater than or less than the distance between the position of the target cell nucleus in the target image block and the right edge of the target image block.
Alternatively, the position limiting condition may include at least one of: a condition defining a ratio between the distances from the position of the target cell nucleus in the target image block to the upper and lower edges of the target image block, and a condition defining a ratio between the distances from the position of the target cell nucleus in the target image block to the left and right edges of the target image block.
For example, the position limitation condition may limit a ratio between a distance between the position of the target cell nucleus in the target image block and the upper edge of the target image block and a distance between the position of the target cell nucleus in the target image block and the lower edge of the target image block to be greater than or less than a first ratio threshold. Alternatively, the position limiting condition may limit a ratio of a distance between the position of the target cell nucleus in the target image block and the left edge of the target image block to a distance between the position of the target cell nucleus in the target image block and the right edge of the target image block to be greater than or less than the second ratio threshold.
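One such position limiting condition can be sketched as follows; a minimal illustration that enforces "nucleus closer to the left edge than to the right edge" on a randomly chosen crop origin (the function name and the rejection-style filtering are illustrative assumptions):

```python
import random

def crop_origin_with_constraint(nucleus_rc, img_hw, size, rng):
    """Pick a random crop origin (top, left) such that the crop contains
    the nucleus and the nucleus lies closer to the crop's left edge than
    to its right edge (one example position limiting condition).
    Assumes at least one valid origin exists."""
    r, c = nucleus_rc
    h, w = img_hw
    # any top whose size-tall window contains the nucleus row
    top = rng.randint(max(0, r - size + 1), min(r, h - size))
    # candidate left positions whose window contains the nucleus column
    candidates = range(max(0, c - size + 1), min(c, w - size) + 1)
    # keep those where distance to left edge < distance to right edge
    valid = [l for l in candidates if (c - l) < (l + size - 1 - c)]
    return top, rng.choice(valid)

rng = random.Random(0)
top, left = crop_origin_with_constraint((10, 10), (32, 32), 8, rng)
```

Defining one such condition per crop yields image blocks in which the nucleus occupies systematically different positions, which is the data-enhancement effect described above.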
Step 404, generating a mask image based on the position information of the target cell nucleus in the target image block.
For the target image block, the computer device may determine the position information of the target cell nucleus in the target image block based on the position information of the target cell nucleus in the medical microscopic image and the position information of the target image block in the medical microscopic image. The computer device may then generate a mask image according to the position information of the target cell nucleus in the target image block; for example, in the mask image, the pixel value at the position corresponding to the target cell nucleus differs from the pixel value at the other positions in the mask image.
In one possible implementation, generating a mask image based on the position information of the target cell nucleus in the target image block includes:
at least two mask images are generated based on the position information of the target cell nuclei in the at least two target image blocks.
In the embodiment of the present application, in the case that the computer device acquires at least two target image blocks including the target cell nucleus, the computer device may generate a corresponding mask image corresponding to each target image block.
Step 405, combining the target image block and the mask image to generate an identification image.
Wherein the computer device may combine the target image block (e.g., a three-channel RGB image) with the single-channel mask image to generate a four-channel image as the identification image.
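The four-channel combination can be sketched in a few lines (shapes and values are illustrative):

```python
import numpy as np

patch = np.zeros((4, 4, 3), dtype=np.uint8)  # three-channel target image block
mask = np.zeros((4, 4), dtype=np.uint8)      # single-channel mask image
mask[1:3, 1:3] = 1                           # 1 marks the target nucleus pixels

# Stack the mask as a fourth channel to form the identification image.
identification_image = np.dstack([patch, mask])
```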
In one possible implementation, the computer device may combine the at least two target image blocks with respective mask images of the at least two target image blocks, respectively, to generate at least two recognition images.
Wherein, when the computer device acquires at least two target image blocks containing the target cell nucleus, the computer device combines each target image block with its corresponding mask image to generate at least two identification images. That is, in each identification image, the position of the target cell nucleus indicated by the mask image is the position of the target cell nucleus in the target image block of that identification image.
Step 406, performing cell type identification based on the identification image to obtain the classification probability distribution of the target cell nucleus.
Wherein the classification probability distribution is used to indicate the probability that the cell corresponding to the target nucleus belongs to various cell types.
In this embodiment, the computer device may input the recognition image into the image recognition model, perform feature extraction on the recognition image as a whole by the image recognition model, and map the extracted features into a classification space containing each cell type, so as to obtain a probability that a cell corresponding to the target cell nucleus belongs to each cell type.
In the embodiment of the present application, for the target image block containing the target cell nucleus, the mask of the cell nucleus itself is also superimposed to emphasize the target cell nucleus to be classified. This scheme of three RGB channels plus a cell nucleus mask can effectively and comprehensively utilize both the information surrounding the cell nucleus and the characteristics of the cell nucleus itself, thereby greatly improving the cell nucleus classification accuracy. The three RGB channels provide the image recognition model with the environment around the cell and the morphology of nearby cells, helping the network better understand the position and state of the cell, while the cell nucleus mask channel indicates to the model which cell on the background image is to be classified and emphasizes the shape characteristics of that cell. The identification image thus combines both the cellular environment and the cellular characteristics.
The image recognition model can be trained by the recognition image sample labeled with the cell type in advance. For example, in the training process, the model training device may input the recognition image sample into the image recognition model to obtain the probability distribution of the cell type predicted by the image recognition model. Then, the model training device calculates a loss function value through the difference between the probability distribution of the cell type predicted by the image recognition model and the marked cell type, and adjusts the parameters in the image recognition model based on the loss function value. The model training device may iteratively perform the above process until the image recognition model converges.
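The loss computed from the difference between the predicted distribution and the labeled type is typically a cross-entropy-style loss; a hedged numpy sketch (the probability values are made up for illustration, and the exact loss used by the patent is not specified here):

```python
import numpy as np

def cross_entropy(pred_probs, true_class):
    """Cross-entropy between a predicted class distribution and a one-hot
    ground-truth label; the small epsilon guards against log(0)."""
    return -float(np.log(pred_probs[true_class] + 1e-12))

pred = np.array([0.1, 0.7, 0.2])    # an illustrative predicted distribution
loss_good = cross_entropy(pred, 1)  # label matches the confident class
loss_bad = cross_entropy(pred, 0)   # label matches a low-probability class
```

The loss is small when the model assigns high probability to the labeled type and large otherwise, which is the gradient signal used to adjust the model parameters.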
Step 407, based on the classification probability distribution of the target cell nucleus, a cell type to which a cell corresponding to the target cell nucleus belongs is acquired.
Optionally, the computer device may determine the cell type to which the cell corresponding to the target cell nucleus belongs according to the probability that the cell corresponding to the target cell nucleus belongs to various cell types.
For example, if the probability that the cell corresponding to the target cell nucleus belongs to each cell type includes probabilities that five cell types respectively correspond, and the probability that the cell type a corresponds to is the largest and is greater than a preset probability threshold (e.g., 90%), the computer device may determine that the cell corresponding to the target cell nucleus belongs to the cell type a.
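The decision rule in this example, highest probability that also exceeds a preset threshold, can be sketched as follows (the 0.9 threshold follows the 90% example above; the function name is illustrative):

```python
import numpy as np

def decide_cell_type(probs, threshold=0.9):
    """Return the index of the most probable cell type if its probability
    exceeds the threshold, otherwise None (no confident decision)."""
    best = int(np.argmax(probs))
    return best if probs[best] > threshold else None

probs = np.array([0.02, 0.01, 0.93, 0.02, 0.02])  # five illustrative types
decided = decide_cell_type(probs)
undecided = decide_cell_type(np.array([0.5, 0.3, 0.2]))
```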
In one possible implementation manner, obtaining a cell type to which a cell corresponding to the target cell nucleus belongs based on the classification probability distribution of the target cell nucleus includes:
fusing the classification probability distribution of at least two identification images to obtain fusion classification probability distribution; and acquiring the cell type of the cell corresponding to the target cell nucleus based on the fusion classification probability distribution.
In the embodiment of the present application, when the target cell nucleus corresponds to two or more identification images, the image recognition model may output a classification probability distribution for each identification image. The computer device may then fuse the classification probability distributions of the identification images, for example by taking their average, to obtain a fused classification probability distribution, and take the cell type with the highest probability that is also greater than a preset probability threshold as the cell type to which the cell corresponding to the target cell nucleus belongs.
In another exemplary scheme, when the target cell nucleus corresponds to two or more identification images intercepted by the computer device based on different position limiting conditions, the computer device may further perform a weighted average on the classification probability distributions of the identification images to obtain the fused classification probability distribution.
For example, the computer device may be preset with a weight corresponding to each position limiting condition, and may obtain the fused classification probability distribution by multiplying the classification probability distribution of each identification image by the weight of its corresponding position limiting condition and then averaging.
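The plain and weighted fusion described above can be sketched as follows (the distributions and weights are illustrative; the renormalization step is an assumed detail):

```python
import numpy as np

def fuse_distributions(dists, weights=None):
    """Fuse per-identification-image classification distributions by a
    plain or weighted average, then renormalize to sum to 1."""
    dists = np.asarray(dists, dtype=float)
    if weights is None:
        fused = dists.mean(axis=0)                        # plain average
    else:
        w = np.asarray(weights, dtype=float)
        fused = (dists * w[:, None]).sum(axis=0) / w.sum()
    return fused / fused.sum()

dists = [[0.6, 0.3, 0.1], [0.4, 0.5, 0.1]]            # two identification images
plain = fuse_distributions(dists)
weighted = fuse_distributions(dists, weights=[3, 1])  # illustrative weights
```

Note that the weighting can change the winning class relative to a single identification image, which is the point of fusing crops taken under different position limiting conditions.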
To sum up, in the scheme shown in the embodiment of the present application, for a target cell nucleus to be classified in a medical microscopic image, a multi-channel identification image composed of a target image block including the target cell nucleus in the medical microscopic image and a mask image for indicating the target cell nucleus in the target image block is obtained, and the multi-channel identification image is identified to obtain a cell type to which a cell corresponding to the target cell nucleus belongs; in the above process, the input information for identifying the cell type includes not only the image block in the medical microscopic image but also the mask image indicating the position of the target cell nucleus, so that the image identification model can distinguish the target cell nucleus and the background of the target cell nucleus from the input data, thereby more accurately extracting the features related to the cell classification, and further improving the accuracy of classifying the cells in the medical microscopic image.
In addition, according to the scheme shown in the embodiment of the application, the position of the target cell nucleus in the target image block is indicated by adding a mask image outside the target image block, and compared with the mode of specifying the position of the target cell nucleus by using an image, the model of the scheme of the embodiment of the application only needs to be provided with a single input port, so that the simplicity of the model structure is ensured, and the training and classification efficiency of the model is ensured while the accuracy of model classification is improved.
According to the scheme shown in the embodiment of the application, a new input format is designed, which can solve the problem that in cell classification on HE staining images the input cannot take both the environment and the cell into consideration. The cell nucleus's own mask and the three color channels of the cell nucleus background form a four-channel image, making full use of the cell nucleus information for model training and classification. It can accurately classify cells into the following seven categories: inflammatory cells, healthy epithelial cells, malignant epithelial cells, fibroblasts, muscle cells, endothelial cells, and other cells.
Please refer to fig. 7, which shows the main algorithm flow of the scheme presented in the present application. As shown in fig. 7, the algorithm flow of the scheme shown in the present application can be divided into three steps: image input 71, cell detection and segmentation 72, and cell classification 73. In the third step of cell classification, the input information is image blocks in the originally input medical microscopic image and mask images of target cell nuclei to be classified in the image blocks.
The scheme shown in the above embodiments of the present application can be used in combination with a pathology cloud platform system. For example, the pathology cloud platform system may extract a pathology image (i.e., a medical microscopic image) from a database in real time, detect, segment, and classify the pathology image according to the scheme shown in the embodiment of the present application, and output the classification result to assist pathologists in viewing, learning, and diagnosis.
For example, please refer to fig. 8, which illustrates a flowchart of a medical cloud-based cell identification scheme according to an exemplary embodiment of the present application. As shown in fig. 8, the medical cloud-based cell identification scheme may include the following steps:
and S1, after the medical staff collects the medical microscopic image of the cell tissue based on the microscope 81, the medical microscopic image is stored in a cloud.
S2, during the cell classification process, the computer device extracts the medical microscopy image 82 from the cloud.
S3, the computer device acquires at least two identification images 83 of the target nucleus (comprising different target image blocks and corresponding mask images in the medical microscopy image 82) from the medical microscopy image 82.
S4, the computer device inputs the recognition image 83 into the image recognition model 84.
S5, the image recognition model 84 outputs at least two classification probability distributions 85.
S6, the computer device performs probability fusion according to the at least two classification probability distributions 85, determines a cell type to which the cell of the target cell nucleus belongs, and displays the recognition result image 86 based on the recognized cell type.
The target cell nucleus in the recognition result image 86 may be identified by a line corresponding to the color of the cell type.
The scheme shown in fig. 8 of the present application is built around a pathology cloud platform system and performs cell nucleus classification for each cell nucleus in the extracted pathology image. A special input structure is designed so that the image recognition model can combine the environment in which the cell is located with the characteristics of the cell itself, allowing fine-grained cell classification. For example, the cell nuclei may be classified into the above seven categories according to a commonly used cell nucleus classification standard.
In the scheme shown in the embodiment of the present application, the position of the target cell nucleus in the mask image may be unrestricted; therefore, the mask image containing the target cell nucleus and the target image block corresponding to the mask image may be cropped at will, as long as the target cell nucleus is present in the mask image, thereby implementing data enhancement.
According to the scheme shown in each embodiment of the application, the accuracy of the cell type recognition algorithm can be greatly improved through the newly designed input format. The backbone of the image recognition model may use any classification network, such as a Vision Transformer (ViT) or a ResNet, and the loss function may be any of various classification loss functions, such as the cross-entropy loss function.
In the embodiment of the application, the image recognition model can fully combine and utilize the environment of the cell nucleus and the characteristics of the cell nucleus, improve the input of a classification network and improve the classification accuracy.
The scheme shown in the embodiment of the application can be further applied to an application scene of a customized microscope, and the cell types corresponding to each cell nucleus in the HE staining picture are fully automatically detected, segmented and classified through the intelligent microscope, so that a foundation is laid for the next medical analysis.
For example, please refer to fig. 9, which shows a schematic diagram of a cell classification result according to an exemplary embodiment of the present application. As shown in fig. 9, taking as an example an input medical microscopic image 91 that is an HE stained section image captured under a microscope at 40× magnification, the detection, segmentation, and classification system according to the above embodiment of the present application can automatically output a high-quality classification result 92. As shown in fig. 9, different types of cells may be labeled in the classification result 92 with different colors, e.g., red for tumor cells 92a, green for inflammatory cells 92b, blue for connective tissue cells 92c, earth green for dead cells 92d, dark green for epithelial cells 92e, etc. Correspondingly, medical personnel can judge the patient's condition according to the automatically output cell nucleus classification result and guide subsequent medication and the like.
Please refer to table 1, which shows the effect of the scheme shown in the embodiments of the present application (i.e. the scheme using the target image block + the mask image as input) when testing in a certain data set, compared with the scheme using only the image block as input.
TABLE 1

Input                            Accuracy (%)    F1 score
Image block (64 × 64)            63.3            0.64
Target image block + mask image  73.7            0.76
As can be seen from table 1, the accuracy is greatly improved by adopting the scheme shown in each embodiment of the present application.
Optionally, the scheme shown in the above embodiment of the present application may be integrated into an intelligent microscope end and a scanner end, so as to perform direct cell segmentation and classification on a pathological image taken by a microscope.
In the present embodiment, the seven cell classification types are merely examples; alternatively, there may be more or fewer classification types.
In the embodiment of the present application, the magnification of the microscope used for the medical microscopic images is not limited; for example, a 10×, 20×, or 40× microscope may be used.
The scheme shown in the above embodiments of the present application may be implemented or executed in combination with a blockchain. For example, some or all of the steps in the above embodiments may be performed in a blockchain system; or, the data required for executing each step in the above embodiments, or the data generated thereby, may be stored in the blockchain system. For example, training samples used for training the models, as well as model input data such as medical images used during model application, may be acquired by the computer device from the blockchain system; for another example, the model parameters obtained after training may be stored in the blockchain system.
Fig. 10 is a block diagram showing a configuration of an image processing apparatus according to an exemplary embodiment. The device can realize all or part of the steps in the method provided by the embodiment shown in fig. 2 or fig. 4, and the image processing device comprises:
a microscopic image acquisition module 1001 configured to acquire a medical microscopic image, where the medical microscopic image includes at least one cell nucleus;
an identification image acquisition module 1002, configured to acquire an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscope image; the identification image is a multi-channel image composed of a target image block and a mask image; the target image block is an image block containing the target cell nucleus in the medical microscopic image; the mask image is used for indicating the position of the target cell nucleus in the target image block;
a probability distribution obtaining module 1003, configured to perform cell type identification based on the identification image, so as to obtain a classification probability distribution of the target cell nucleus; the classification probability distribution is used for indicating the probability that the cell corresponding to the target cell nucleus belongs to various cell types;
a cell type obtaining module 1004, configured to obtain a cell type to which a cell corresponding to the target cell nucleus belongs based on the classification probability distribution of the target cell nucleus.
In one possible implementation, the identification image acquisition module is configured to,
carrying out cell nucleus position identification on the medical microscopic image to obtain position information of the target cell nucleus in the medical microscopic image;
intercepting the target image block from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image;
generating the mask image based on the position information of the target cell nucleus in the target image block;
combining the target image block and the mask image to generate the identification image.
In one possible implementation, the identification image acquisition module is configured to,
intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image; the target cell nucleus is positioned differently in different target image blocks;
generating at least two mask images based on the position information of the target cell nucleus in at least two target image blocks;
and combining at least two target image blocks with the mask images of the at least two target image blocks respectively to generate at least two identification images.
In a possible implementation manner, the identification image obtaining module is configured to randomly intercept at least two target image blocks from the medical microscope image based on the position information of the target cell nucleus in the medical microscope image.
In a possible implementation manner, the identification image obtaining module is configured to intercept at least two target image blocks from the medical microscope image based on the position information of the target cell nucleus in the medical microscope image and at least two position restriction conditions;
wherein the position limitation condition is used for limiting the distance between the position of the target cell nucleus in the target image block and the edge of the target image block.
In one possible implementation, the cell type acquisition module is configured to,
fusing the classification probability distribution of at least two identification images to obtain fused classification probability distribution;
and acquiring the cell type of the cell corresponding to the target cell nucleus based on the fusion classification probability distribution.
To sum up, in the scheme shown in the embodiment of the present application, for a target cell nucleus to be classified in a medical microscopic image, a multi-channel identification image composed of a target image block including the target cell nucleus in the medical microscopic image and a mask image for indicating the target cell nucleus in the target image block is obtained, and the multi-channel identification image is identified to obtain a cell type to which a cell corresponding to the target cell nucleus belongs; in the above process, the input information for identifying the cell type includes not only the image block in the medical microscopic image but also the mask image indicating the position of the target cell nucleus, so that the image identification model can distinguish the target cell nucleus and the background of the target cell nucleus from the input data, thereby more accurately extracting the features related to the cell classification, and further improving the accuracy of classifying the cells in the medical microscopic image.
In addition, according to the scheme shown in the embodiment of the application, the position of the target cell nucleus in the target image block is indicated by adding a mask image outside the target image block, and compared with the mode of specifying the position of the target cell nucleus by using an image, the model of the scheme of the embodiment of the application only needs to be provided with a single input port, so that the simplicity of the model structure is ensured, and the training and classification efficiency of the model is ensured while the accuracy of model classification is improved.
FIG. 11 is a block diagram illustrating a computer device in accordance with an exemplary embodiment. The computer device may be implemented as the computer device for training the models in the above-described method embodiments, or may be implemented as the computer device for performing the image processing method in the above-described method embodiments. The computer device 1100 includes a Central Processing Unit (CPU) 1101, a system Memory 1104 including a Random Access Memory (RAM) 1102 and a Read-Only Memory (ROM) 1103, and a system bus 1105 connecting the system Memory 1104 and the Central Processing Unit 1101. The computer device 1100 also includes a basic input/output system 1106, which facilitates the transfer of information between devices within the computer, and a mass storage device 1107 for storing an operating system 1113, application programs 1114, and other program modules 1115.
The mass storage device 1107 is connected to the central processing unit 1101 through a mass storage controller (not shown) that is connected to the system bus 1105. The mass storage device 1107 and its associated computer-readable media provide non-volatile storage for the computer device 1100. That is, the mass storage device 1107 may include a computer-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, flash memory or other solid state storage technology, CD-ROM, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1104 and mass storage device 1107 described above may be collectively referred to as memory.
The computer device 1100 may connect to the internet or other network devices through the network interface unit 1111 that is connected to the system bus 1105.
The memory further stores one or more programs, and the central processing unit 1101 implements all or part of the steps of the method shown in FIG. 2 or FIG. 4 by executing the one or more programs.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example a memory including a computer program (instructions), executable by a processor of a computer device to perform the methods shown in the various embodiments of the present application. For example, the non-transitory computer-readable storage medium may be a Read-Only Memory, a random access memory, a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods shown in the various embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a medical microscopic image, the medical microscopic image comprising at least one cell nucleus;
acquiring an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscopic image; the identification image is a multi-channel image composed of a target image block and a mask image; the target image block is an image block containing the target cell nucleus in the medical microscopic image; the mask image is used for indicating the position of the target cell nucleus in the target image block;
identifying the cell type based on the identification image to obtain the classification probability distribution of the target cell nucleus; the classification probability distribution is used for indicating the probability that the cell corresponding to the target cell nucleus belongs to various cell types;
and acquiring the cell type of the cell corresponding to the target cell nucleus based on the classification probability distribution of the target cell nucleus.
2. The method of claim 1, wherein the acquiring an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscopic image comprises:
carrying out cell nucleus position identification on the medical microscopic image to obtain position information of the target cell nucleus in the medical microscopic image;
intercepting the target image block from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image;
generating the mask image based on the position information of the target cell nucleus in the target image block;
combining the target image patch and the mask image to generate the identification image.
3. The method of claim 2, wherein the intercepting the target image block from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image comprises:
intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image; the position of the target cell nucleus differs among the different target image blocks;
generating the mask image based on the position information of the target cell nucleus in the target image block, including:
generating at least two mask images based on the position information of the target cell nucleus in at least two target image blocks;
said combining the target patch and the mask image to generate the identification image comprises:
and combining the at least two target image blocks respectively with their corresponding mask images to generate at least two identification images.
4. The method of claim 3, wherein the intercepting at least two of the target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image comprises:
and randomly intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image.
5. The method of claim 3, wherein the intercepting at least two of the target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image comprises:
intercepting at least two target image blocks from the medical microscopic image based on the position information of the target cell nucleus in the medical microscopic image and at least two position limiting conditions;
wherein the position limitation condition is used for limiting the distance between the position of the target cell nucleus in the target image block and the edge of the target image block.
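One way to read the position-limiting condition of claim 5 is as a minimum margin between the nucleus position and every edge of the cropped block. The sketch below implements that reading; the patch size, margin value, and random sampling strategy are illustrative assumptions, since the claim does not fix any of them.

```python
import numpy as np

def crop_with_margin(image, cy, cx, size, margin, rng):
    """Crop a size x size target image block in which the nucleus centre
    (cy, cx) stays at least `margin` pixels away from every block edge."""
    h, w = image.shape[:2]
    # valid top-left corners keep the centre >= margin from each edge
    y0 = rng.integers(max(0, cy - size + 1 + margin), min(h - size, cy - margin) + 1)
    x0 = rng.integers(max(0, cx - size + 1 + margin), min(w - size, cx - margin) + 1)
    block = image[y0:y0 + size, x0:x0 + size]
    return block, (cy - y0, cx - x0)  # block plus nucleus position inside it

rng = np.random.default_rng(0)
img = np.zeros((256, 256, 3), dtype=np.float32)
# two blocks under the same margin constraint, nucleus centred at (128, 128)
blocks = [crop_with_margin(img, 128, 128, size=64, margin=8, rng=rng) for _ in range(2)]
```

Each crop yields a different in-block nucleus position, which is how the "positioned differently in different target image blocks" condition of claim 3 would be satisfied under this reading.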
6. The method of claim 3, wherein the acquiring the cell type to which the cell corresponding to the target cell nucleus belongs based on the classification probability distribution of the target cell nucleus comprises:
fusing the classification probability distributions of the at least two identification images to obtain a fused classification probability distribution;
and acquiring the cell type of the cell corresponding to the target cell nucleus based on the fusion classification probability distribution.
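The fusion step of claim 6 can be realized by, for example, averaging the per-view classification probability distributions and taking the most probable class. Averaging is only one plausible fusion rule assumed for this sketch; the claim itself does not prescribe a particular fusion operation.

```python
import numpy as np

def fuse_distributions(dists):
    """Average the classification probability distributions predicted
    from several identification images of the same target cell nucleus."""
    fused = np.mean(np.stack(dists), axis=0)
    return fused / fused.sum()  # renormalize against rounding drift

# Illustrative per-view distributions over three cell types.
d1 = np.array([0.7, 0.2, 0.1])  # from identification image 1
d2 = np.array([0.5, 0.4, 0.1])  # from identification image 2
fused = fuse_distributions([d1, d2])
cell_type = int(np.argmax(fused))
print(fused, cell_type)  # [0.6 0.3 0.1] 0
```

Fusing several views of the same nucleus smooths out crop-dependent variation, which is the motivation for cropping multiple position-shifted blocks in claims 3 to 5.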
7. An image processing apparatus, characterized in that the apparatus comprises:
a microscopic image acquisition module for acquiring a medical microscopic image, the medical microscopic image including at least one cell nucleus;
the identification image acquisition module is used for acquiring an identification image of a target cell nucleus in the at least one cell nucleus based on the medical microscopic image; the identification image is a multi-channel image composed of a target image block and a mask image; the target image block is an image block containing the target cell nucleus in the medical microscopic image; the mask image is used for indicating the position of the target cell nucleus in the target image block;
a probability distribution obtaining module, configured to perform cell type identification based on the identification image, and obtain a classification probability distribution of the target cell nucleus; the classification probability distribution is used for indicating the probability that the cell corresponding to the target cell nucleus belongs to various cell types;
and the cell type acquisition module is used for acquiring the cell type of the cell corresponding to the target cell nucleus based on the classification probability distribution of the target cell nucleus.
8. A computer device comprising a processor and a memory, the memory having stored therein at least one computer instruction, the at least one computer instruction being loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 6.
9. A computer-readable storage medium having stored therein at least one computer instruction, which is loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 6.
10. A computer program product, characterized in that it comprises computer instructions which are read and executed by a processor of a computer device, causing the computer device to carry out the image processing method according to any one of claims 1 to 6.
CN202111547404.9A 2021-12-16 2021-12-16 Image processing method, device, equipment and storage medium Pending CN114332854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111547404.9A CN114332854A (en) 2021-12-16 2021-12-16 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111547404.9A CN114332854A (en) 2021-12-16 2021-12-16 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114332854A true CN114332854A (en) 2022-04-12

Family

ID=81052488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111547404.9A Pending CN114332854A (en) 2021-12-16 2021-12-16 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114332854A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115994918A (en) * 2023-03-23 2023-04-21 深圳市大数据研究院 Cell segmentation method and system
WO2024032623A1 (en) * 2022-08-11 2024-02-15 珠海圣美生物诊断技术有限公司 Method and device for recognizing fluorescence staining signal point in cell nucleus image


Similar Documents

Publication Publication Date Title
US11967069B2 (en) Pathological section image processing method and apparatus, system, and storage medium
CN111260677B (en) Cell analysis method, device, equipment and storage medium based on microscopic image
Ghahremani et al. Deep learning-inferred multiplex immunofluorescence for immunohistochemical image quantification
Lafarge et al. Learning domain-invariant representations of histological images
CN103020585B (en) A kind of immuning tissue's positive cell and negative cells recognition methods
CN113454733A (en) Multi-instance learner for prognostic tissue pattern recognition
CN110570352B (en) Image labeling method, device and system and cell labeling method
JP5394485B2 (en) Signet ring cell detector and related methods
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN111798425B (en) Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning
CN114332854A (en) Image processing method, device, equipment and storage medium
CN109544507A (en) A kind of pathological image processing method and system, equipment, storage medium
CN111723815B (en) Model training method, image processing device, computer system and medium
US11176412B2 (en) Systems and methods for encoding image features of high-resolution digital images of biological specimens
Öztürk et al. Cell‐type based semantic segmentation of histopathological images using deep convolutional neural networks
CN112132827A (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN112966792B (en) Blood vessel image classification processing method, device, equipment and storage medium
Kolla et al. CNN‐Based Brain Tumor Detection Model Using Local Binary Pattern and Multilayered SVM Classifier
CN114550169A (en) Training method, device, equipment and medium for cell classification model
Ghaye et al. Image thresholding techniques for localization of sub‐resolution fluorescent biomarkers
CN110490159B (en) Method, device, equipment and storage medium for identifying cells in microscopic image
Khoshdeli et al. Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes
CN114283406A (en) Cell image recognition method, device, equipment, medium and computer program product
CN113763370A (en) Digital pathological image processing method and device, electronic equipment and storage medium
Khoshdeli et al. Deep learning models delineates multiple nuclear phenotypes in h&e stained histology sections

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination