CN109671049B - Medical image processing method, system, equipment and storage medium - Google Patents
- Publication number
- CN109671049B (application CN201811320629.9A)
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- focus
- medical image
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0012 — Image analysis; inspection of images; biomedical image inspection
- G06F18/24 — Pattern recognition; classification techniques
- G06T7/11 — Segmentation; region-based segmentation
- G06T2207/20081 — Special algorithmic details; training, learning
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30041 — Biomedical image processing; eye, retina, ophthalmic
- G06T2207/30096 — Biomedical image processing; tumor, lesion
- G06T2207/30101 — Biomedical image processing; blood vessel, artery, vein, vascular
Abstract
The invention discloses a medical image processing method, system, device and storage medium. On the one hand, a plurality of first abnormality probabilities are obtained from lesion images; on the other hand, a plurality of second abnormality probabilities are obtained from the medical image together with the lesion images. The final probability that the medical image belongs to each image-abnormality degree is then computed from the first abnormality probabilities, the second abnormality probabilities and their respective weight coefficients, and the image-abnormality degree with the largest final probability is taken as the image-abnormality degree of the medical image. The abnormality degree of the medical image is thereby analysed automatically, overcoming the prior-art problems that pathological image processing and analysis depend on the naked eye and are inefficient.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a medical image processing method, system, device and storage medium.
Background
The rapid growth of diabetes over recent decades has attracted wide attention, and the accompanying growth of diabetes-induced diseases has become a major challenge for the healthcare industry. Unfortunately, the number of patients suffering from such diseases continues to increase at a striking rate, and, more worryingly, only about 70% of patients are aware that they have the disease. From a medical point of view, diabetes underlies many health problems and late complications, causing a range of lesions including severe heart disease, diabetic retinopathy (DR) and kidney problems. DR is one of the most common complications of diabetes and one of the leading causes of blindness. Studies show that diabetes is growing at a striking rate in all parts of the world, regardless of population size or socioeconomic background. One study showed that nearly 75% of DR patients live in developing countries, where limited living conditions and treatment facilities make the likelihood of blindness for a DR patient almost 25 times that of a person without DR. Because medical professionals specialised in diabetic retinopathy are scarce, most DR patients worldwide cannot obtain timely and effective lesion detection and treatment, and many seek treatment only once the retinopathy has progressed to the point where treatment is highly complex and sometimes nearly impossible. Yet DR has a cure rate of up to 90% when caught at the initial stage, so early detection and effective treatment can greatly reduce the risk of DR-induced blindness. DR detection technology is therefore particularly important, and medical image analysis is currently a research field of great interest to scientists and physicians.
DR can be divided into two major categories: non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). NPDR has three subclasses: mild, moderate and severe NPDR. Non-proliferative lesions occur early in the pathology, are confined to the retina, and manifest as microaneurysms, hemorrhages, hard and soft exudates, and retinal artery and vein lesions. Proliferative lesions extend at least partially beyond the inner limiting membrane, and the appearance of new blood vessels is a sign of proliferation. The DR lesions are described in detail as follows:
(1) Microaneurysm
Microaneurysms represent the earliest perceptible indication of retinal damage; abnormal permeability of the retinal blood vessels leads to their formation. On medical images a microaneurysm appears as a small, circular, dark-red spot, typically around the macula. It can be seen as a red dot with sharp edges, with a diameter between 20 μm and 200 μm, approximately 8.25% of the size of the optic disc.
(2) Hard exudates
Unlike microaneurysms, hard exudates are formed by leakage of lipoproteins and other proteins from the retinal blood vessels. Visually they look like small white or yellowish-white deposits with distinct edges, typically arranged in the form of a ring, and are usually found in the outer layers of the retina. Hard exudates are often irregular and shiny, and appear near the edges of microaneurysms or regions of retinal edema.
(3) Soft exudates
Generally, soft exudates form as a result of arteriolar occlusion. Reduced blood flow to the retina causes ischemia of the retinal nerve fiber layer (RNFL), which ultimately affects axonal flow and leads to the accumulation of axonal fragments on the retinal ganglion cell axons. This accumulation appears as a fluffy white lesion in the RNFL, commonly referred to as a soft exudate.
(4) Hemorrhage
Hemorrhages occur due to leakage from weakened blood vessels, and appear as red spots of varying density with uneven edges, typically up to about 125 μm in size. Hemorrhages fall broadly into two categories: flame hemorrhages and dot-blot hemorrhages. The first originates in the pre-capillary arterioles and occurs along the nerve fibers. The second, the dot-blot hemorrhage, is circular and smaller than a microaneurysm; it can occur at different levels of the retina, but in most cases occurs at the venous ends of the capillaries.
(5) Neovascularization
Neovascularization refers to the atypical appearance of new blood vessels on the inner surface of the retina. These new vessels are very fine and repeatedly infiltrate the vitreous cavity, significantly reducing and blurring vision and ultimately leading to blindness.
(6) Macular edema
Macular edema is identified as a swollen portion of the retina, usually caused by abnormal retinal capillary permeability. It causes leakage of fluid or other solutes around the macula and severely affects vision.
In the prior art, therefore, whether a condition such as diabetic retinopathy (DR) is present, and to what degree, is judged only by a doctor observing the pathological image with the naked eye. Such visual observation and analysis takes considerable time and energy and is inefficient, and different doctors may reach different judgments, so the problem remains to be solved.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. To this end, an object of the present invention is to provide a medical image processing method, system, device and storage medium that improve the efficiency of processing and analysing medical images.
The technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides a medical image processing method comprising the steps of:
a lesion image acquisition step of extracting lesion images from the medical image;
a score generation step of obtaining a plurality of lesion scores from the lesion images, the lesion scores being first abnormality probabilities that the lesion images belong to different image-abnormality degrees;
a second abnormality probability acquisition step of obtaining a plurality of second abnormality probabilities of the medical image from the medical image and the lesion images, the second abnormality probabilities being probabilities that the medical image belongs to different image-abnormality degrees;
and a classification step of obtaining, for each image-abnormality degree, the final probability that the medical image belongs to that degree from the first abnormality probabilities, the second abnormality probabilities and the respective weight coefficients, and taking the image-abnormality degree with the largest final probability as the image-abnormality degree of the medical image.
Further, the medical image includes a fundus photograph.
Further, the lesion image acquisition step includes:
obtaining a plurality of lesion mask images from the medical image using a segmentation neural network;
obtaining a plurality of lesion images from the lesion mask images and the medical image.
Further, the score generation step includes:
extracting lesion features, including color or shape features, from each of the lesion images;
concatenating the lesion features and inputting them into a first machine learning classification algorithm to obtain a plurality of lesion scores.
Further, the second abnormality probability acquisition step includes:
obtaining a plurality of second abnormality probabilities of the medical image from the medical image, the lesion images and a second machine learning classification algorithm.
Further, the image abnormality degree includes normal, mild first abnormality, moderate first abnormality, severe first abnormality, and second abnormality.
Further, the weight coefficient of the first anomaly probability is 0.2, and the weight coefficient of the second anomaly probability is 0.8.
In a second aspect, the present invention provides a medical image processing system comprising:
a lesion image acquisition unit for extracting lesion images from the medical image;
a lesion score generation unit for obtaining a plurality of lesion scores from the lesion images, the lesion scores being first abnormality probabilities that the lesion images belong to different image-abnormality degrees;
a second abnormality probability acquisition unit for obtaining a plurality of second abnormality probabilities of the medical image from the medical image and the lesion images, the second abnormality probabilities being probabilities that the medical image belongs to different image-abnormality degrees;
a classification unit for obtaining, for each image-abnormality degree, the final probability that the medical image belongs to that degree from the first abnormality probabilities, the second abnormality probabilities and the respective weight coefficients, and taking the image-abnormality degree with the largest final probability as the image-abnormality degree of the medical image.
In a third aspect, the present invention provides a medical image processing apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the medical image processing method.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the medical image processing method.
The beneficial effects of the invention are as follows:
according to the invention, on one hand, a plurality of first abnormal probabilities are acquired through the focus image, on the other hand, a plurality of second abnormal probabilities are acquired through the medical image and the focus image, and then the final probabilities that the medical image belongs to different image abnormal degrees are acquired according to the plurality of first abnormal probabilities, the plurality of second abnormal probabilities and different weight coefficients and different image abnormal degrees, and the image abnormal degree of the maximum final probability is taken as the image abnormal degree of the medical image, so that the analysis of the abnormal degree of the medical image is realized, and the technical problems that pathological image processing and analysis depend on naked eyes and have low efficiency in the prior art are overcome.
In addition, the invention obtains a plurality of lesion mask images of the medical image through a segmentation neural network, from which a plurality of lesion images are extracted, ensuring accurate lesion-image extraction.
Drawings
FIG. 1 is a flow chart of a medical image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a segmented neural network of a medical image processing method according to an embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
Example 1
A medical image processing method comprising the steps of:
a lesion image acquisition step of extracting lesion images from the medical image;
a score generation step of obtaining a plurality of lesion scores from the lesion images, the lesion scores being first abnormality probabilities that the lesion images belong to different image-abnormality degrees. For example, if the image-abnormality degrees are simply normal and abnormal, this step obtains the probability that the lesion image is normal and the probability that it is abnormal, i.e. two first abnormality probabilities, and so on;
a second abnormality probability acquisition step of obtaining a plurality of second abnormality probabilities of the medical image from the medical image and the lesion images, the second abnormality probabilities being probabilities that the medical image belongs to different image-abnormality degrees;
and a classification step of obtaining the final probability of each image-abnormality degree from the first abnormality probabilities, the second abnormality probabilities and the respective weight coefficients, and taking the image-abnormality degree with the largest final probability as the image-abnormality degree of the medical image. Specifically, the first and second abnormality probabilities for the same image-abnormality degree are added with their respective weight coefficients to obtain that degree's final probability; since there are several different image-abnormality degrees, several final probabilities are obtained for the medical image.
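The weighted fusion in the classification step can be sketched as follows in Python/NumPy, assuming the five fundus abnormality degrees described in this embodiment and the 0.2/0.8 weight coefficients given in the disclosure; the two probability vectors are made-up examples, not real model outputs:

```python
import numpy as np

# The five image-abnormality degrees used for fundus photographs.
DEGREES = ["normal", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]

def fuse(first_probs, second_probs, w1=0.2, w2=0.8):
    """Add the two probability vectors per abnormality degree with the
    weight coefficients; return the degree with the largest final
    probability and the final probability vector."""
    final = (w1 * np.asarray(first_probs, dtype=float)
             + w2 * np.asarray(second_probs, dtype=float))
    return DEGREES[int(np.argmax(final))], final

# Made-up first/second abnormality probabilities for one image:
label, final = fuse([0.10, 0.50, 0.20, 0.10, 0.10],
                    [0.05, 0.15, 0.60, 0.10, 0.10])
```

Because both inputs are probability distributions and the weights sum to 1, the fused vector is again a distribution; here the third entry (0.2·0.2 + 0.8·0.6 = 0.52) wins.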
This medical image processing method obtains a plurality of first abnormality probabilities from the lesion images, obtains a plurality of second abnormality probabilities from the medical image and the lesion images, computes the final probability of each image-abnormality degree from these probabilities and their weight coefficients, and takes the image-abnormality degree with the largest final probability as that of the medical image. The abnormality degree of the medical image is thereby analysed automatically, improving processing and analysis efficiency and overcoming the prior-art problems that pathological image processing and analysis depend on the naked eye and are inefficient.
Further, the medical image includes, but is not limited to, fundus photographs. In this embodiment a fundus photograph is used as the example; other kinds of medical images can be processed similarly by reference to the detailed description below, and the description is not repeated. The image-abnormality degrees include, but are not limited to, normal, mild first abnormality, moderate first abnormality, severe first abnormality and second abnormality. Taking fundus photographs as the example, the first abnormality is a non-proliferative lesion and the second abnormality is a proliferative lesion, so the image-abnormality degrees of a fundus photograph include, but are not limited to, normal, mild non-proliferative lesion, moderate non-proliferative lesion, severe non-proliferative lesion and proliferative lesion. Referring to FIG. 1, a flowchart of a medical image processing method according to an embodiment of the present invention, the method is described in detail below:
the first step: the data is preprocessed. The data used in the invention is a two-dimensional color fundus photo, the original image size is 3000×3000, the image is preprocessed, and the redundant parts such as the background of the image are cut off to adjust the image size to 128×128.
The second step: a lesion image acquisition step, comprising:
obtaining a plurality of lesion mask images from the medical image using the segmentation neural network;
obtaining a plurality of lesion images from the lesion mask images and the medical image.
The main task of the segmentation neural network is to segment and extract the various lesions. In this embodiment the lesion types are microaneurysms, hard exudates and new retinal blood vessels; microaneurysms and hard exudates belong to non-proliferative lesions, while new retinal blood vessels belong to proliferative lesions. The image under test is input into the trained segmentation neural network, which predicts whether each pixel belongs to the background, a microaneurysm, a hard exudate or a retinal blood vessel, and outputs the predictions as three masks: a microaneurysm mask, a hard exudate mask and a retinal blood vessel mask. A mask consists of 0s and 1s, 0 for background and 1 for target. For example, when generating the microaneurysm mask, the segmentation network predicts whether each pixel belongs to a microaneurysm region; if so, the pixel position is marked 1, otherwise 0, finally yielding a 128×128 microaneurysm mask of the same size as the original image. The mask accurately describes the position of the target, and applying the mask to the original image extracts the segmented image, i.e. the lesion image.
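Applying a 0/1 mask to the original image to extract a lesion image can be sketched as follows (toy data; in the method the masks come from the segmentation network, not from hand-placed rectangles):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in 128x128 RGB fundus image.
image = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

# Toy microaneurysm mask: 1 inside the (pretend) lesion region, 0 elsewhere.
ma_mask = np.zeros((128, 128), dtype=np.uint8)
ma_mask[40:44, 60:64] = 1

# Multiplying zeroes out all background pixels, leaving only the lesion.
lesion_img = image * ma_mask[..., None]
```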
Referring to Table 1 and FIG. 2, FIG. 2 is a schematic diagram of the segmentation neural network of a medical image processing method according to an embodiment of the present invention. The segmentation neural network is implemented within a multi-task learning framework, with three segmentation tasks: segmenting microaneurysms, segmenting hard exudates and segmenting retinal blood vessels. The first part of the network consists of "shared layers", whose weights are common to all three tasks. Next come the layers specific to each task, the "task-specific layers", whose parameters are computed independently and are not shared across tasks. In these task-specific layers the network learns task-specific information, and each task branch produces a separate output, namely the segmentation mask for microaneurysms, hard exudates or retinal blood vessels.
Table 1 Segmentation neural network layers
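The shared-layer/task-specific-layer split can be illustrated with a toy NumPy forward pass; dense layers and a 16×16 sigmoid output stand in for the real convolution/deconvolution architecture of Table 1, and all weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Shared-layer weights: common to all three segmentation tasks.
w_shared = rng.standard_normal((256, 64)) * 0.05

# Task-specific heads: parameters are independent per task.
tasks = ["microaneurysm", "hard_exudate", "retinal_vessel"]
w_head = {t: rng.standard_normal((64, 16 * 16)) * 0.05 for t in tasks}

def forward(x):
    h = relu(x @ w_shared)  # shared representation
    # Each head emits its own per-pixel sigmoid "mask".
    return {t: (1 / (1 + np.exp(-(h @ w_head[t])))).reshape(16, 16)
            for t in tasks}

masks = forward(rng.standard_normal(256))
```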
The segmentation neural network described above can segment a plurality of lesion images simultaneously and is not limited to the three lesion types of this embodiment. Before use the network must be trained: a model is first initialised, then samples annotated with mask data are input for training. Whole images are used during training, without patch sampling; experiments show that whole images are more effective. After an image is input, three convolution-and-downsampling stages produce a small feature map (5×5×128), a further convolution produces a 1×1×1024 feature, and three fully connected branches, one per task, are then deconvolved and upsampled back to the original image size, yielding the segmentation masks. The segmentation neural network model is optimised continuously during training, adjusting the parameters until convergence; the optimisation objective is to keep reducing the difference between the segmentation maps (lesion images) predicted by the network and the sample annotations, i.e. to minimise the loss function L1:
in the above formula, N is the number of training samples, L MA Pixel level loss, L, representing microaneurysm segmentation HES Representing the loss of segmentation of the hard exudates, L NV Represents the segmentation loss of retinal blood vessels; f (x, θ) is a segmentation result of the segmented neural network prediction, where x is one pixel of the sample, θ is a learning rate; s is S 1 ,S 2 ,S 3 Sample marking positions of the microaneurysms, the hard exudates and the retinal blood vessels are respectively; Φ (θ) is a regularization term.
Using the trained segmentation neural network, a plurality of lesion mask images can be obtained from a fundus photograph, including a microaneurysm mask image, a hard exudate mask image and a retinal blood vessel mask image; a plurality of lesion images, namely a microaneurysm lesion image, a hard exudate lesion image and a retinal blood vessel lesion image, can then be segmented out from the lesion mask images and the medical image.
Third step: a score generation step comprising:
respectively extracting focus features of various focus images, wherein the focus features comprise colors or shapes;
the plurality of lesion features are stitched and input into a first machine learning classification algorithm to obtain a plurality of lesion scores.
Specifically, after each lesion image is obtained, features of color, shape, and so on are extracted from it as listed in Table 2. Each lesion image yields an 86-dimensional feature vector; the features of the 3 lesion images are concatenated, in the order microaneurysm lesion image, hard exudate lesion image, retinal vessel lesion image, into a 258-dimensional feature vector, which is input into the score generator (i.e., the first machine learning classification algorithm) to obtain a plurality of lesion scores S_i(X), where i is the image abnormality degree category and X is the medical image number. That is, the probabilities that the lesion images belong to the different image abnormality degrees (normal, mild non-proliferative lesions, moderate non-proliferative lesions, severe non-proliferative lesions, and proliferative lesions) are obtained, giving 5 lesion scores.
Table 2 List of extracted lesion features

Feature | Description
---|---
f1 | Mean on the RGB image
f2 | Variance on the RGB image
f3–f12 | Color moments on the RGB image
f13 | Perimeter of the target region
f14 | Area of the target region
f15–f74 | LBP features of the target region
f75–f86 | HSV features
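Assembling the 258-dimensional vector from three 86-dimensional per-lesion feature vectors can be sketched as follows (illustrative only; the toy extractor fills just f1 and f2 of Table 2 and leaves the remaining slots as zeros):

```python
import numpy as np

def extract_features(lesion_img):
    """Toy stand-in for the 86-dimensional extractor of Table 2: only f1
    (RGB mean) and f2 (RGB variance) are computed here; the remaining slots
    (color moments, perimeter, area, LBP, HSV) are left as zeros."""
    feats = np.zeros(86)
    feats[0] = lesion_img.mean()
    feats[1] = lesion_img.var()
    return feats

ma_img, hes_img, nv_img = (np.random.rand(128, 128, 3) for _ in range(3))
# Fixed concatenation order: microaneurysm, hard exudate, retinal vessel
feature_vector = np.concatenate(
    [extract_features(img) for img in (ma_img, hes_img, nv_img)]
)
```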
In this embodiment, the score generator employs a random forest algorithm. It must be trained before use: the 258-dimensional lesion feature vectors of the training samples, together with the lesion degrees labeled for each sample, are input into the score generator for training. The trained score generator computes the probabilities that an input lesion belongs to the 5 image abnormality degrees (namely normal, mild non-proliferative lesions, moderate non-proliferative lesions, severe non-proliferative lesions, and proliferative lesions) and takes these probability values as the lesion scores of the sample.
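A score generator of this kind can be sketched with scikit-learn's `RandomForestClassifier`; the hyperparameters and training data below are made up, since the patent does not specify them. The per-class output of `predict_proba` plays the role of the five lesion scores:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical training set: 100 samples of 258-dim lesion features,
# labels 0..4 = normal .. proliferative lesion (cycled so every class occurs)
X_train = rng.random((100, 258))
y_train = np.arange(100) % 5

score_generator = RandomForestClassifier(n_estimators=100, random_state=0)
score_generator.fit(X_train, y_train)

# The five lesion scores S_i(X) are the per-class probabilities of one image
lesion_scores = score_generator.predict_proba(rng.random((1, 258)))[0]
```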
Fourth step: a second abnormality probability obtaining step including:
a plurality of second anomaly probabilities for the medical image are obtained based on the medical image, the plurality of lesion images, and a second machine learning classification algorithm.
In this embodiment, the image abnormality degrees include normal, mild non-proliferative lesions, moderate non-proliferative lesions, severe non-proliferative lesions, and proliferative lesions. The second machine learning classification algorithm is designed on the basis of a 3D convolutional neural network, which takes the original medical image (i.e., the fundus photo), the microaneurysm lesion image, the hard exudate lesion image, and the retinal vessel lesion image as simultaneous inputs, extracts features from the multiple dimensions, and then applies 3D convolution to capture the feature information obtained from the multiple images. Specifically, referring to Table 3, the 3D convolutional neural network comprises one hardwired layer, 4 convolutional layers, 3 downsampling layers, one fully connected layer, and one softmax layer. The cube convolved by each convolution kernel consists of 4 pictures, namely the original image, the microaneurysm lesion image, the retinal vessel lesion image, and the hard exudate lesion image, each picture being 128×128 in size. In the first layer, a fixed, hardwired kernel processes the input images to generate information of a plurality of channels; the channel information is then processed separately and finally merged to obtain the final feature description. This hardwired layer encodes prior knowledge about the lesions and performs better than random initialization. Five channels are extracted from each image: the R, G, and B color channels and the gradients in the x and y directions, giving 4×5=20 lesion maps.

These maps are then convolved with a 9×9×2 3D convolution kernel in each of the five channels. To increase the number of feature maps, i.e., to extract different features, two different convolution kernels are used at each position, so the L2 layer contains two sets of feature maps, each set containing (4-2+1)×5=15 feature maps. Next comes the downsampling layer L3: the L2 feature maps are downsampled with a 3×3 window, yielding the same number of lesion maps at reduced spatial resolution. L4 is computed by applying a 9×9×2 3D convolution kernel in each of the 5 channels; to increase the number of feature maps, three different convolution kernels are used at each location, giving six different sets of (4-2+1-2+1)×5=10 feature maps each. The L5 layer uses a 2×2 downsampling window; two further 2D convolutions and downsamplings then yield a 128×1 feature vector, the 128 dimensions being chosen from previous experience. Thus, after a medical image is input, the multi-layer convolution and downsampling convert the inputs into a 128-dimensional feature vector that captures the feature information of the retinal fundus picture. This feature vector is input into a softmax layer whose number of nodes is consistent with the number of image abnormality degree categories and whose nodes are fully connected to L9. The final softmax layer outputs a series of predicted values, i.e., the probabilities P_i(X) that the medical image belongs to the 5 image abnormality degrees; these are the second abnormality probabilities, where i (i=0, 1, 2, 3, 4) is the image abnormality degree category (i.e., normal, mild non-proliferative lesions, moderate non-proliferative lesions, severe non-proliferative lesions, and proliferative lesions) and X is the input medical image number.
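A simplified PyTorch sketch of the layer structure described above (an approximation, not the patent's exact network: the hardwired channel extraction is assumed to happen outside the module, the per-channel kernel sets are collapsed into ordinary `Conv3d` layers, and layers L6–L9 are collapsed into one linear projection to 128 dimensions):

```python
import torch
import torch.nn as nn

class Fundus3DCNN(nn.Module):
    """Sketch of the 10-layer network of Table 3. Each sample is assumed to
    arrive as a (5, 4, 128, 128) tensor: 5 hardwired channels (R, G, B,
    grad-x, grad-y) extracted from 4 pictures (original image plus the
    microaneurysm, hard exudate, and retinal vessel lesion images)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(5, 10, kernel_size=(2, 9, 9)),   # L2: depth-2, 9x9 spatial
            nn.MaxPool3d((1, 3, 3)),                   # L3: 3x3 spatial downsampling
            nn.Conv3d(10, 30, kernel_size=(2, 9, 9)),  # L4: second 3D convolution
            nn.MaxPool3d((1, 2, 2)),                   # L5: 2x2 downsampling
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128),            # L6-L9 collapsed: 128-dim feature vector
            nn.ReLU(),
            nn.Linear(128, n_classes),     # L10: one node per abnormality degree
        )

    def forward(self, x):
        logits = self.head(self.features(x))
        return torch.softmax(logits, dim=1)  # second abnormality probabilities

model = Fundus3DCNN()
x = torch.randn(1, 5, 4, 128, 128)  # (batch, channels, pictures, H, W)
probs = model(x)
```

The softmax output of shape (1, 5) corresponds to the five P_i(X) values described in the text.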
Table 3 3D convolutional neural network hierarchy
Before the 3D convolutional neural network is used, the model must be trained. Samples labeled with abnormality degrees are input for training; for each sample, 4 pictures are input: the original image, the microaneurysm lesion image, the hard exudate lesion image, and the retinal vessel lesion image. The 3D convolutional neural network model is continuously optimized during training, its parameters adjusted until convergence; the optimization goal is to continuously reduce the difference between the classification result predicted by the network and the labeled abnormality degree of the sample, i.e., to minimize the loss function L2:

L2 = (1/N) Σ L_softmax(f(x, θ), c) + Φ(θ)
In the above formula, N is the number of training samples; L_softmax denotes the classification loss between the network output and the labeled data; f(x, θ) is the classification result predicted by the network, where x is one sample and θ denotes the network parameters; c is the labeled category; Φ(θ) is a regularization term.
Fifth step: score fusion classification. Specifically, after the second abnormality probabilities P_i(X) (5 of them in this embodiment) and the first abnormality probabilities S_i(X) (5 of them in this embodiment, i.e., the lesion scores) are obtained, the final abnormality probability is computed according to different weight coefficients. In this embodiment, the weight coefficient of the first abnormality probability is 0.2 and that of the second abnormality probability is 0.8, so the final abnormality probability, i.e., the score fusion value, is F_i(X) = 0.2·S_i(X) + 0.8·P_i(X) (5 score fusion values in this embodiment). Among the computed results, the image abnormality degree category with the highest score fusion value is the category to which the medical image belongs, i.e., C = argmax_i F_i(X).
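The score fusion step can be sketched directly from the stated weights (0.2 and 0.8); the probability values below are made up for illustration, and "NPDR"/"PDR" are shorthand for non-proliferative/proliferative lesions:

```python
import numpy as np

def fuse_scores(lesion_scores, second_probs, w1=0.2, w2=0.8):
    """Weighted score fusion: F_i(X) = w1 * S_i(X) + w2 * P_i(X)."""
    return w1 * np.asarray(lesion_scores) + w2 * np.asarray(second_probs)

CATEGORIES = ["normal", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]
S = [0.10, 0.50, 0.20, 0.10, 0.10]   # first abnormality probabilities (lesion scores)
P = [0.05, 0.70, 0.15, 0.05, 0.05]   # second abnormality probabilities

F = fuse_scores(S, P)                       # fused values per category
prediction = CATEGORIES[int(np.argmax(F))]  # C = argmax_i F_i(X)
```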
The medical image processing method provides a score fusion mechanism that feeds the lesion scores obtained from the lesion images into the classification network, improving the sensitivity to each lesion type and the classification accuracy. The score fusion classification framework is not limited to the three lesion types detected in this invention; it can be extended as more lesion types are detected, giving the system strong adaptability.
Example 2
A medical image processing system, comprising:
a focus image acquisition unit for extracting a focus image from the medical image;
the score generating unit is used for acquiring a plurality of focus scores according to the focus images, wherein the focus scores are first abnormal probabilities that the focus images belong to different image abnormal degrees;
a second abnormality probability obtaining unit configured to obtain a plurality of second abnormality probabilities of the medical image according to the medical image and the lesion image, the second abnormality probabilities being probabilities that the medical image belongs to different image abnormality degrees;
a classifying unit for acquiring final probabilities that the medical image belongs to different image abnormality degrees according to the plurality of first abnormality probabilities, the plurality of second abnormality probabilities, and different weight coefficients, and taking the image abnormality degree with the maximum final probability as the image abnormality degree of the medical image.
The specific working process and beneficial effects of the medical image processing system are described with reference to the specific description of the medical image processing method in embodiment 1, and will not be repeated.
Example 3
A medical image processing apparatus comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the medical image processing method. A specific description of the medical image processing method refers to the description in embodiment 1, and will not be repeated.
Example 4
A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the medical image processing method. A specific description of the medical image processing method refers to the description in embodiment 1, and will not be repeated.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.
Claims (7)
1. A medical image processing method, characterized by comprising the steps of:
a focus image obtaining step, namely obtaining a plurality of focus mask images according to the medical image and a segmentation neural network, wherein focus types comprise a micro-aneurysm, hard exudates and new retinal blood vessels, the segmentation neural network is used for predicting each pixel on the medical image, a picture to be tested is input into the trained segmentation neural network, the segmentation neural network predicts whether each pixel on the picture to be tested belongs to a background, the micro-aneurysm, the hard exudates or the retinal blood vessels, and the predicted result is output into three masks, namely the micro-aneurysm mask, the hard exudates mask and the retinal blood vessels mask, the masks consist of 0 and 1, the background is 0, the target is 1, the segmentation neural network comprises a sharing layer and a task specific layer, the weight of the sharing layer is universal for tasks, the parameters of the task specific layer are independently calculated, and each task specific layer can generate a single output result; acquiring a plurality of focus images according to the focus mask images and the medical images; the mask in the focus mask image is used for describing the position of a target, and the obtaining a plurality of focus images according to the focus mask image and the medical image comprises the following steps: performing image extraction on masks in the focus mask image corresponding to the medical image to obtain a plurality of focus images;
a score generation step of respectively extracting a plurality of lesion features from the lesion images, concatenating the plurality of lesion features, and inputting them into a first machine learning classification algorithm to obtain a plurality of lesion scores, wherein the lesion scores are first abnormality probabilities that the lesion images belong to different image abnormality degrees, and the lesion features are the mean on the RGB image, the variance on the RGB image, the color moments on the RGB image, the perimeter of the target region, the area of the target region, the LBP features of the target region, and the HSV features; the step of respectively extracting a plurality of lesion features from the lesion images, concatenating the plurality of lesion features, and inputting them into the first machine learning classification algorithm to obtain a plurality of lesion scores comprises: extracting 86-dimensional features from each lesion image, concatenating the extracted 86-dimensional features in the order of the microaneurysm lesion image, the hard exudate lesion image, and the new retinal vessel lesion image to obtain a 258-dimensional feature vector, and inputting the 258-dimensional feature vector into the first machine learning classification algorithm to obtain a plurality of lesion scores S_i(X), wherein i is the image abnormality degree category, i = 0, 1, 2, 3, 4, and X is the medical image number, the image abnormality degrees comprising normal, mild non-proliferative lesions, moderate non-proliferative lesions, severe non-proliferative lesions, and proliferative lesions;
a second abnormality probability acquiring step of acquiring a plurality of second abnormality probabilities of the medical image according to the medical image, the lesion images, and a second machine learning classification algorithm, wherein the second abnormality probabilities are probabilities that the medical image belongs to different image abnormality degrees; the second machine learning classification algorithm comprises 1 hardwired layer, 4 convolution layers, 3 downsampling layers, 1 fully connected layer, and 1 softmax layer, in which the hardwired layer of the first layer, the convolution layer of the second layer, the downsampling layer of the third layer, the convolution layer of the fourth layer, the downsampling layer of the fifth layer, the convolution layer of the sixth layer, the downsampling layer of the seventh layer, the convolution layer of the eighth layer, the fully connected layer of the ninth layer, and the softmax layer of the tenth layer are connected in sequence; the cube convolved by each convolution kernel is a plurality of pictures comprising the medical image, the microaneurysm lesion image, the hard exudate lesion image, and the new retinal vessel lesion image; the hardwired layer of the first layer is used for extracting a plurality of channels of information from each image, the channel information comprising the R, G, B color channels and the gradients in the x and y directions, 5 channels in total; the convolution layer of the second layer is used for convolving with a 9×9×2 3D convolution kernel in each channel, using two different convolution kernels at each location, to obtain two feature map sets, each feature map set comprising 15 feature maps; the downsampling layer of the third layer is used for downsampling the feature maps of the second layer with a 3×3 window to obtain lesion maps of reduced spatial resolution; the convolution layer of the fourth layer is used for convolving each channel with a 9×9×2 3D convolution kernel, using three different convolution kernels at each location, to obtain six different sets of feature maps, each set having 10 feature maps; the downsampling layer of the fifth layer uses a 2×2 downsampling window, after which two 2D convolutions and downsamplings are performed through the convolution layer of the sixth layer, the downsampling layer of the seventh layer, and the convolution layer of the eighth layer to obtain a 128×1 feature vector; the softmax layer of the tenth layer is used for obtaining the second abnormality probabilities from the 128-dimensional feature vector of the ninth layer, the number of nodes of the softmax layer being consistent with the number of image abnormality degree categories, each node of the softmax layer being fully connected with the 128 nodes of the fully connected layer of the ninth layer, the second abnormality probabilities being P_i(X); a classification step of acquiring final probabilities that the medical image belongs to different image abnormality degrees according to the plurality of first abnormality probabilities, the plurality of second abnormality probabilities, and different weight coefficients, and taking the image abnormality degree with the maximum final probability as the image abnormality degree of the medical image;
the segmented neural network is trained by:
initializing a model, inputting a sample marked with mask mark data for training, wherein the sample is a whole picture;
after the image is input, three rounds of convolution and downsampling are respectively performed to obtain a small feature map of 5×5×128; a further convolution is performed to obtain features of 1×1×1024; three groups of full connections are then performed to obtain the features of the three tasks, and the features of the three tasks are respectively deconvolved and upsampled to obtain, for each task, a map the same size as the input image, the map being the segmentation mask;
the training of the segmented neural network further includes optimizing the segmented neural network, including:
continuously reducing the difference between the segmentation maps predicted by the segmentation neural network and the sample annotation data, wherein the segmentation maps are the lesion images, and reducing the difference between the segmentation maps and the sample annotation data means minimizing the loss function L1, L1 = (1/N) Σ [ L_MA(f(x, θ), S1) + L_HES(f(x, θ), S2) + L_NV(f(x, θ), S3) ] + Φ(θ), wherein N is the number of training samples, L_MA denotes the pixel-level loss of microaneurysm segmentation, L_HES denotes the segmentation loss of hard exudates, L_NV denotes the segmentation loss of the new retinal vessels, f(x, θ) is the segmentation result predicted by the segmentation neural network, wherein x is one pixel of the sample and θ denotes the network parameters, S1 is the sample-labeled position of the microaneurysms, S2 is the sample-labeled position of the hard exudates, S3 is the sample-labeled position of the new retinal vessels, and Φ(θ) is a regularization term.
2. The medical image processing method according to claim 1, wherein the medical image includes a fundus photo.
3. The medical image processing method according to claim 1, wherein the score generating step includes:
the lesion characterization includes a color or shape.
4. A medical image processing method according to any one of claims 1 to 3, wherein the weight coefficient of the first abnormality probability is 0.2 and the weight coefficient of the second abnormality probability is 0.8.
5. A medical image processing system, comprising:
a lesion image acquiring unit for acquiring a plurality of lesion mask images according to the medical image and a segmentation neural network, wherein the lesion types comprise microaneurysms, hard exudates, and new retinal vessels; the segmentation neural network is used for predicting each pixel on the medical image: a picture to be tested is input into the trained segmentation neural network, the segmentation neural network predicts whether each pixel on the picture to be tested belongs to the background, a microaneurysm, a hard exudate, or a retinal vessel, and the predicted result is output as three masks, namely the microaneurysm mask, the hard exudate mask, and the retinal vessel mask, where each mask consists of 0s and 1s, the background being 0 and the target being 1; the segmentation neural network comprises a shared layer and task-specific layers, the weights of the shared layer are common to the tasks, the parameters of the task-specific layers are calculated independently, and each task-specific layer generates a single output result; acquiring a plurality of lesion images according to the lesion mask images and the medical image, wherein the masks in the lesion mask images are used for describing the positions of the targets, and acquiring a plurality of lesion images according to the lesion mask images and the medical image comprises: performing image extraction on the medical image with the masks in the lesion mask images to obtain a plurality of lesion images; the segmentation neural network is trained by: initializing a model, and inputting samples annotated with mask data for training, each sample being a whole picture; after the image is input, three rounds of convolution and downsampling are respectively performed to obtain a small feature map of 5×5×128; a further convolution is performed to obtain features of 1×1×1024; three groups of full connections are then performed to obtain the features of the three tasks, and the features of the three tasks are respectively deconvolved and upsampled to obtain, for each task, a map the same size as the input image, the map being the segmentation mask; the training of the segmentation neural network further comprises optimizing the segmentation neural network, including: continuously reducing the difference between the segmentation maps predicted by the segmentation neural network and the sample annotation data, wherein the segmentation maps are the lesion images, and reducing the difference between the segmentation maps and the sample annotation data means minimizing the loss function L1, L1 = (1/N) Σ [ L_MA(f(x, θ), S1) + L_HES(f(x, θ), S2) + L_NV(f(x, θ), S3) ] + Φ(θ), wherein N is the number of training samples, L_MA denotes the pixel-level loss of microaneurysm segmentation, L_HES denotes the segmentation loss of hard exudates, L_NV denotes the segmentation loss of the new retinal vessels, f(x, θ) is the segmentation result predicted by the segmentation neural network, wherein x is one pixel of the sample and θ denotes the network parameters, S1 is the sample-labeled position of the microaneurysms, S2 is the sample-labeled position of the hard exudates, S3 is the sample-labeled position of the new retinal vessels, and Φ(θ) is a regularization term;
a score generating unit for respectively extracting a plurality of lesion features from the lesion images, concatenating the plurality of lesion features, and inputting them into a first machine learning classification algorithm to obtain a plurality of lesion scores, wherein the lesion scores are first abnormality probabilities that the lesion images belong to different image abnormality degrees, and the lesion features are the mean on the RGB image, the variance on the RGB image, the color moments on the RGB image, the perimeter of the target region, the area of the target region, the LBP features of the target region, and the HSV features; wherein the step of respectively extracting a plurality of lesion features from the lesion images, concatenating the plurality of lesion features, and inputting them into the first machine learning classification algorithm to obtain a plurality of lesion scores comprises: extracting 86-dimensional features from each lesion image, concatenating the extracted 86-dimensional features in the order of the microaneurysm lesion image, the hard exudate lesion image, and the new retinal vessel lesion image to obtain a 258-dimensional feature vector, and inputting the 258-dimensional feature vector into the first machine learning classification algorithm to obtain a plurality of lesion scores S_i(X), wherein i is the image abnormality degree category, i = 0, 1, 2, 3, 4, and X is the medical image number, the image abnormality degrees comprising normal, mild non-proliferative lesions, moderate non-proliferative lesions, severe non-proliferative lesions, and proliferative lesions;
a second abnormality probability acquiring unit for acquiring a plurality of second abnormality probabilities of the medical image according to the medical image, the lesion images, and a second machine learning classification algorithm, wherein the second abnormality probabilities are probabilities that the medical image belongs to different image abnormality degrees; the second machine learning classification algorithm comprises 1 hardwired layer, 4 convolution layers, 3 downsampling layers, 1 fully connected layer, and 1 softmax layer, in which the hardwired layer of the first layer, the convolution layer of the second layer, the downsampling layer of the third layer, the convolution layer of the fourth layer, the downsampling layer of the fifth layer, the convolution layer of the sixth layer, the downsampling layer of the seventh layer, the convolution layer of the eighth layer, the fully connected layer of the ninth layer, and the softmax layer of the tenth layer are connected in sequence; the cube convolved by each convolution kernel is a plurality of pictures comprising the medical image, the microaneurysm lesion image, the hard exudate lesion image, and the new retinal vessel lesion image; the hardwired layer of the first layer is used for extracting a plurality of channels of information from each image, the channel information comprising the R, G, B color channels and the gradients in the x and y directions, 5 channels in total; the convolution layer of the second layer is used for convolving with a 9×9×2 3D convolution kernel in each channel, using two different convolution kernels at each location, to obtain two feature map sets, each feature map set comprising 15 feature maps; the downsampling layer of the third layer is used for downsampling the feature maps of the second layer with a 3×3 window to obtain lesion maps of reduced spatial resolution; the convolution layer of the fourth layer is used for convolving each channel with a 9×9×2 3D convolution kernel, using three different convolution kernels at each location, to obtain six different sets of feature maps, each set having 10 feature maps; the downsampling layer of the fifth layer uses a 2×2 downsampling window, after which two 2D convolutions and downsamplings are performed through the convolution layer of the sixth layer, the downsampling layer of the seventh layer, and the convolution layer of the eighth layer to obtain a 128×1 feature vector; the softmax layer of the tenth layer is used for obtaining the second abnormality probabilities from the 128-dimensional feature vector of the ninth layer, the number of nodes of the softmax layer being consistent with the number of image abnormality degree categories, each node of the softmax layer being fully connected with the 128 nodes of the fully connected layer of the ninth layer, the second abnormality probabilities being P_i(X);
a classifying unit for acquiring final probabilities that the medical image belongs to different image abnormality degrees according to the plurality of first abnormality probabilities, the plurality of second abnormality probabilities, and different weight coefficients, and taking the image abnormality degree with the maximum final probability as the image abnormality degree of the medical image.
6. A medical image processing apparatus, characterized by comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the medical image processing method according to any one of claims 1 to 4.
7. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the medical image processing method according to any one of claims 1 to 4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811320629.9A CN109671049B (en) | 2018-11-07 | 2018-11-07 | Medical image processing method, system, equipment and storage medium |
PCT/CN2018/124660 WO2020093563A1 (en) | 2018-11-07 | 2018-12-28 | Medical image processing method, system, device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811320629.9A CN109671049B (en) | 2018-11-07 | 2018-11-07 | Medical image processing method, system, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109671049A CN109671049A (en) | 2019-04-23 |
CN109671049B true CN109671049B (en) | 2024-03-01 |
Family
ID=66142041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811320629.9A Active CN109671049B (en) | 2018-11-07 | 2018-11-07 | Medical image processing method, system, equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109671049B (en) |
WO (1) | WO2020093563A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111144296B (en) * | 2019-12-26 | 2023-04-18 | 湖南大学 | Retina fundus picture classification method based on improved CNN model |
CN111932562B (en) * | 2020-09-22 | 2021-01-19 | 平安科技(深圳)有限公司 | Image identification method and device based on CT sequence, electronic equipment and medium |
CN112652394A (en) * | 2021-01-14 | 2021-04-13 | 浙江工商大学 | Multi-focus target detection-based retinopathy of prematurity diagnosis system |
CN115311188B (en) * | 2021-05-08 | 2023-12-22 | 数坤科技股份有限公司 | Image recognition method and device, electronic equipment and storage medium |
CN115816834B (en) * | 2023-02-20 | 2023-04-25 | 常熟理工学院 | Method and system for real-time monitoring of printing quality of printer |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548192A (en) * | 2016-09-23 | 2017-03-29 | 北京市商汤科技开发有限公司 | Based on the image processing method of neutral net, device and electronic equipment |
DE202017104953U1 (en) * | 2016-08-18 | 2017-12-04 | Google Inc. | Processing fundus images using machine learning models |
CN108010021A (en) * | 2017-11-30 | 2018-05-08 | 上海联影医疗科技有限公司 | A kind of magic magiscan and method |
CN108230294A (en) * | 2017-06-14 | 2018-06-29 | 北京市商汤科技开发有限公司 | Image detecting method, device, electronic equipment and storage medium |
CN108634934A (en) * | 2018-05-07 | 2018-10-12 | 北京长木谷医疗科技有限公司 | The method and apparatus that spinal sagittal bit image is handled |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5935146B2 (en) * | 2011-08-30 | 2016-06-15 | 大日本印刷株式会社 | Ophthalmic disease image analysis apparatus, ophthalmic image analysis method, and ophthalmic image analysis program |
JP2016002380A (en) * | 2014-06-18 | 2016-01-12 | キヤノン株式会社 | Image processing system, operation method for the same, and program |
US9972092B2 (en) * | 2016-03-31 | 2018-05-15 | Adobe Systems Incorporated | Utilizing deep learning for boundary-aware image segmentation |
CN106530290A (en) * | 2016-10-27 | 2017-03-22 | 朱育盼 | Medical image analysis method and device |
CN106530295A (en) * | 2016-11-07 | 2017-03-22 | 首都医科大学 | Fundus image classification method and device of retinopathy |
CN107045720B (en) * | 2017-05-04 | 2018-11-30 | 深圳硅基仿生科技有限公司 | Processing system for identifying fundus image lesions based on artificial neural network |
CN107145756A (en) * | 2017-05-17 | 2017-09-08 | 上海辉明软件有限公司 | Stroke type prediction method and device |
CN107679525B (en) * | 2017-11-01 | 2022-11-29 | 腾讯科技(深圳)有限公司 | Image classification method and device, and computer-readable storage medium |
CN108734209A (en) * | 2018-05-16 | 2018-11-02 | 上海鹰瞳医疗科技有限公司 | Feature recognition method and device based on multiple images |
- 2018-11-07 CN CN201811320629.9A patent/CN109671049B/en active Active
- 2018-12-28 WO PCT/CN2018/124660 patent/WO2020093563A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2020093563A1 (en) | 2020-05-14 |
CN109671049A (en) | 2019-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109671049B (en) | Medical image processing method, system, equipment and storage medium | |
Li et al. | IterNet: Retinal image segmentation utilizing structural redundancy in vessel networks |
CN110689083B (en) | Context pyramid fusion network and image segmentation method | |
CN110570421B (en) | Multitask fundus image classification method and apparatus | |
Abràmoff et al. | Retinal imaging and image analysis | |
Akram et al. | Automated detection of dark and bright lesions in retinal images for early detection of diabetic retinopathy | |
Li et al. | DeepRetina: layer segmentation of retina in OCT images using deep learning | |
Sinthanayothin | Image analysis for automatic diagnosis of diabetic retinopathy | |
CN112017185B (en) | Focus segmentation method, device and storage medium | |
AU2021202217B2 (en) | Methods and systems for ocular imaging, diagnosis and prognosis | |
EP4018413A1 (en) | Computerised tomography image processing | |
CN108764342B (en) | Semantic segmentation method for optic discs and optic cups in fundus image | |
CN110599480A (en) | Multi-source input fundus image classification method and device | |
CN113450305B (en) | Medical image processing method, system, equipment and readable storage medium | |
Suriyasekeran et al. | Algorithms for diagnosis of diabetic retinopathy and diabetic macular edema: a review |
CN112957005A (en) | Automatic identification and laser photocoagulation region recommendation algorithm for fundus contrast image non-perfusion region | |
CN110598652B (en) | Fundus data prediction method and device | |
Zhou et al. | Computer aided diagnosis for diabetic retinopathy based on fundus image | |
Liu et al. | A curriculum learning-based fully automated system for quantification of the choroidal structure in highly myopic patients | |
Haloi | Towards ophthalmologist level accurate deep learning system for OCT screening and diagnosis | |
Pappu et al. | EANet: Multiscale autoencoder based edge attention network for fluid segmentation from SD‐OCT images | |
Ahmed et al. | Morphological technique for detection of microaneurysms from RGB fundus images | |
Ramasubramanian et al. | A novel efficient approach for the screening of new abnormal blood vessels in color fundus images | |
Akram et al. | Gabor wavelet based vessel segmentation in retinal images | |
Joshi et al. | Fundus image analysis for detection of fovea: A review |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventor after: Xu Yong; Luo Xiaoling; Pu Zuhui; Mou Lisha; Hu Jiying |
Inventor before: Xu Yong; Luo Xiaoling; Pu Zhihui; Mou Lisha; Hu Jiying |
GR01 | Patent grant | ||