CN111476772B - Focus analysis method and device based on medical image - Google Patents
Lesion analysis method and device based on medical images
- Publication number
- CN111476772B CN111476772B CN202010259844.3A CN202010259844A CN111476772B CN 111476772 B CN111476772 B CN 111476772B CN 202010259844 A CN202010259844 A CN 202010259844A CN 111476772 B CN111476772 B CN 111476772B
- Authority
- CN
- China
- Prior art keywords
- focus
- lesion
- sample
- machine learning
- learning model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
Embodiments of the present application provide a lesion analysis method, a lesion analysis device, an electronic device, and a computer-readable storage medium for medical images, which address the prior-art problem that lesion analysis in medical images is insufficiently comprehensive and accurate. The medical-image-based lesion analysis method comprises the following steps: extracting lesion characterization data based on medical image data; extracting lesion feature information based on the lesion characterization data; inputting the lesion feature information into a first machine learning model to obtain a first lesion feature vector; inputting the medical image data and the lesion characterization data into a second machine learning model to obtain a second lesion feature vector; merging the first lesion feature vector and the second lesion feature vector to obtain a fusion feature vector corresponding to the lesion; and obtaining an analysis result of the lesion according to the fusion feature vector.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular to a lesion analysis method, apparatus, electronic device, and computer-readable storage medium based on medical images.
Background
Pneumonia is a common inflammatory disease of the respiratory tract, mainly a lung infection caused by bacteria, viruses, and the like. In early-stage pneumonia, the lesion areas shown on CT (Computed Tomography) images are small and inconspicuous, so physicians must spend a great deal of time locating the corresponding lesions. Recently, CT images have become one of the important indicators for diagnosing novel coronavirus pneumonia, which has greatly increased the workload of radiologists.
In the prior art, one approach uses a neural network to detect lesions in a medical image: lesion feature information is extracted, the lesion region is returned as a bounding box, and the lesion is identified and marked. Because a bounding box cannot represent the specific shape of a lesion, this approach can only determine the lesion's position and cannot analyze the lesion more comprehensively. Another approach uses an artificial-intelligence network to identify and analyze lesion features in medical images by comparing them with the lesion features of medical images in a database, identifying the lesion from the comparison result. Because lesion morphology is highly varied, merely comparing a lesion against a database, rather than analyzing the lesion itself, yields inaccurate identification results. Therefore, how to perform comprehensive feature extraction and accurate analysis of lesions in CT images, so that lesion analysis results are more comprehensive and accurate, is an important problem urgently in need of a solution.
Disclosure of Invention
In order to solve the above-mentioned problems in the prior art, embodiments of the present application provide a lesion analysis method, apparatus, electronic device and computer readable storage medium based on medical images.
According to an aspect of the embodiments of the present invention, there is provided a medical-image-based lesion analysis method, including: extracting lesion characterization data based on medical image data; extracting lesion feature information based on the lesion characterization data; inputting the lesion feature information into a first machine learning model to obtain a first lesion feature vector; inputting the medical image data and the lesion characterization data into a second machine learning model to obtain a second lesion feature vector; merging the first lesion feature vector and the second lesion feature vector to obtain a fusion feature vector corresponding to the lesion; and obtaining an analysis result of the lesion according to the fusion feature vector.
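The two-branch flow recited above can be sketched in a few lines. The models below are hypothetical stand-in callables (toy statistics, not the patent's trained networks), and all names and values are illustrative:

```python
# Hypothetical sketch of the claimed pipeline: two model branches each
# produce a lesion feature vector, and the two are merged into one
# fusion feature vector.

def first_model(feature_info):
    # Stand-in for the first machine learning model: maps hand-engineered
    # lesion feature information (e.g. lesion/organ ratio, infected-unit
    # count) to a feature vector.
    return [float(x) for x in feature_info]

def second_model(image_data, characterization_data):
    # Stand-in for the second machine learning model: abstracts features
    # directly from the image and the lesion characterization data; here
    # reduced to toy statistics.
    return [sum(image_data) / len(image_data), float(len(characterization_data))]

def fuse(vec_a, vec_b):
    # The patent merges the two lesion feature vectors; concatenation is
    # one common way to realize such a fusion feature vector.
    return vec_a + vec_b

feature_info = [0.12, 3]               # e.g. lesion/organ ratio, infected lobes
image_data = [0.1, 0.4, 0.7]           # stand-in for medical image data
contours = ["lesion", "lobe", "lung"]  # stand-in for characterization data

fused = fuse(first_model(feature_info), second_model(image_data, contours))
```

The fused vector then feeds the downstream analysis step (the third machine learning model in later embodiments).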
In one embodiment of the present application, obtaining the analysis result of the lesion according to the fusion feature vector includes: inputting the fusion feature vector into a third machine learning model to obtain the analysis result of the lesion.
In one embodiment of the present application, inputting the fusion feature vector into a third machine learning model to obtain the analysis result of the lesion includes: inputting the fusion feature vector into the third machine learning model to obtain the category and/or severity of the lesion.
In one embodiment of the present application, after inputting the fusion feature vector into the third machine learning model to obtain the category and/or severity of the lesion, the method further includes: issuing an early warning according to the category and/or severity of the lesion.
In one embodiment of the present application, extracting the lesion characterization data based on the medical image data includes: inputting the medical image data into a fourth machine learning model to obtain the lesion characterization data; wherein the lesion characterization data comprises one or a combination of more of: the contour of the lesion, the contour of the structural unit of the organ where the lesion is located, and the overall contour of the organ where the lesion is located.
In one embodiment of the present application, the lesion feature information includes one or a combination of more of: the proportion of the lesion within the organ where the lesion is located, and the number of infected structural units of that organ.
In one embodiment of the present application, the lesion comprises pneumonia; the categories of pneumonia include: bacterial pneumonia, novel coronavirus pneumonia, viral pneumonia other than novel coronavirus pneumonia, mycoplasma pneumonia, chlamydia pneumonia, fungal pneumonia, and protozoal pneumonia; and/or the severity of the pneumonia includes: mild, moderate, and severe.
In one embodiment of the present application, the severity of the pneumonia further includes: a probability value for novel coronavirus pneumonia.
In one embodiment of the present application, the lesion comprises pneumonia, and issuing an early warning according to the category and/or severity of the lesion includes: issuing a low-level early warning when the severity of the pneumonia is moderate; issuing a medium-level early warning when the severity of the pneumonia is severe; and issuing a high-level early warning when the category of the pneumonia is novel coronavirus pneumonia.
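The three warning tiers described in this embodiment map naturally onto a small dispatch function. The function name and the string labels below are assumptions chosen for illustration, not identifiers from the patent:

```python
def pneumonia_alert_level(category, severity):
    """Map an analyzed pneumonia category/severity to a warning level,
    following the tiers described in the embodiment (labels illustrative)."""
    if category == "novel coronavirus pneumonia":
        return "high"    # high-level early warning
    if severity == "severe":
        return "medium"  # medium-level early warning
    if severity == "moderate":
        return "low"     # low-level early warning
    return "none"        # no early warning issued
```

Note that the category check comes first, so novel coronavirus pneumonia triggers a high-level warning regardless of severity, matching the ordering implied by the embodiment.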
In one embodiment of the present application, extracting the pneumonia lesion characterization data based on the lung medical image data includes: inputting the lung medical image into a fourth machine learning model to obtain the pneumonia lesion characterization data; wherein the fourth machine learning model comprises a pneumonia segmentation model, a lung segmentation model, and a lung lobe segmentation model, and the pneumonia lesion characterization data includes: a pneumonia contour, a lung lobe contour, and a lung contour.
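The contour-style characterization data such segmentation models produce can be illustrated with a toy binary mask. The mask and the `boundary_pixels` helper are hypothetical, showing only how a contour can be read off a per-slice segmentation output; a real pneumonia, lung, or lobe model would produce such masks per CT slice:

```python
# Toy 4x5 binary segmentation output: 1 = foreground (e.g. pneumonia region).
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def boundary_pixels(mask):
    # A foreground pixel belongs to the contour if any of its 4-neighbours
    # is background (or lies outside the image).
    h, w = len(mask), len(mask[0])
    contour = []
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]
                   for rr, cc in nbrs):
                contour.append((r, c))
    return contour

contour = boundary_pixels(mask)  # every pixel of this thin region is boundary
```

In this tiny example the foreground region is only two pixels thick, so all six foreground pixels are contour pixels.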
According to another aspect of the embodiments of the present invention, there is provided a medical-image-based lesion analysis device, including: a first extraction module configured to extract lesion characterization data based on medical image data; a second extraction module configured to extract lesion feature information based on the lesion characterization data; a first lesion feature extraction module configured to input the lesion feature information into a first machine learning model to obtain a first lesion feature vector; a second lesion feature extraction module configured to input the medical image data and the lesion characterization data into a second machine learning model to obtain a second lesion feature vector; a merging module configured to merge the first lesion feature vector and the second lesion feature vector to obtain a fusion feature vector corresponding to the lesion; and an analysis module configured to obtain an analysis result of the lesion according to the fusion feature vector.
In one embodiment of the present application, the analysis module is further configured to input the fusion feature vector into a third machine learning model to obtain the analysis result of the lesion.
In one embodiment of the present application, the analysis module is further configured to input the fusion feature vector into the third machine learning model to obtain the category and/or severity of the lesion.
In one embodiment of the present application, the device further comprises an early-warning module that issues an early warning according to the category and/or severity of the lesion.
In one embodiment of the present application, the first extraction module is further configured to input the medical image data into a fourth machine learning model to obtain the lesion characterization data; wherein the lesion characterization data comprises one or a combination of more of: the contour of the lesion, the contour of the structural unit of the organ where the lesion is located, and the overall contour of the organ where the lesion is located.
In one embodiment of the present application, the medical-image-based lesion analysis device includes a lung-medical-image-based lesion analysis device, comprising: a first pneumonia lesion extraction module configured to extract pneumonia lesion characterization data based on lung medical image data; a second pneumonia lesion extraction module configured to extract pneumonia lesion feature information based on the pneumonia lesion characterization data; a first pneumonia lesion feature extraction module configured to input the pneumonia lesion feature information into a first machine learning model to obtain a first pneumonia lesion feature vector; a second pneumonia lesion feature extraction module configured to input the lung medical image data and the pneumonia lesion characterization data into a second machine learning model to obtain a second pneumonia lesion feature vector; a pneumonia lesion merging module configured to merge the first and second pneumonia lesion feature vectors to obtain a fusion feature vector corresponding to the pneumonia lesion; and a pneumonia lesion analysis module configured to obtain an analysis result of the pneumonia lesion according to the fusion feature vector.
In one embodiment of the present application, the lesion comprises pneumonia; the categories of pneumonia include: bacterial pneumonia, novel coronavirus pneumonia, viral pneumonia other than novel coronavirus pneumonia, mycoplasma pneumonia, chlamydia pneumonia, fungal pneumonia, and protozoal pneumonia; and/or the severity of the pneumonia includes: mild, moderate, and severe.
In one embodiment of the present application, the severity of the pneumonia further includes: a probability value for novel coronavirus pneumonia.
In one embodiment of the present application, the pneumonia lesion analysis module is further configured to input the fusion feature vector into a third machine learning model to obtain an analysis result of the pneumonia.
In one embodiment of the present application, the pneumonia lesion analysis module is further configured to input the fusion feature vector into the third machine learning model to obtain the category and/or severity of the pneumonia.
In one embodiment of the present application, the device further comprises a pneumonia lesion early-warning module that issues an early warning according to the category and/or severity of the pneumonia.
In one embodiment of the present application, the lesion comprises pneumonia, and the pneumonia lesion early-warning module is further configured to: issue a low-level early warning when the severity of the pneumonia is moderate; issue a medium-level early warning when the severity of the pneumonia is severe; and issue a high-level early warning when the category of the pneumonia is novel coronavirus pneumonia.
In one embodiment of the present application, the first pneumonia lesion extraction module is further configured to input the lung medical image into a fourth machine learning model to obtain the pneumonia lesion characterization data; wherein the fourth machine learning model comprises a pneumonia segmentation model, a lung segmentation model, and a lung lobe segmentation model, and the pneumonia lesion characterization data includes: a pneumonia contour, a lung lobe contour, and a lung contour.
According to another aspect of the embodiments of the present application, there is provided a network model training method, including: inputting sample medical image data into a fourth machine learning model to obtain sample lesion characterization data, wherein the sample medical image data includes label data; extracting sample lesion feature information based on the sample lesion characterization data; inputting the sample lesion feature information into a first machine learning model to obtain a first lesion feature vector sample; inputting the sample medical image data and the sample lesion characterization data into a second machine learning model to obtain a second lesion feature vector sample; merging the first lesion feature vector sample and the second lesion feature vector sample to obtain a fusion feature vector sample corresponding to the lesion; inputting the fusion feature vector sample into a third machine learning model to obtain a sample analysis result of the lesion; and adjusting the network parameters of the first, second, third, and fourth machine learning models according to the difference between the sample analysis result and the label data.
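As a rough illustration of the end-to-end training idea (one shared loss adjusting all four models), the following is a deliberately scalar toy in which each "model" is a single parameter and the update rule is a crude stand-in for a gradient. Real embodiments would be neural networks trained by backpropagation; every name and number here is made up:

```python
# Toy joint training: the same error signal (sample result minus label)
# adjusts the parameters of all four stand-in "models" at once.

params = {"m1": 0.5, "m2": 0.5, "m3": 0.5, "m4": 0.5}

def forward(x):
    seg = params["m4"] * x       # fourth model: segmentation/characterization
    v1 = params["m1"] * seg      # first model: engineered-feature branch
    v2 = params["m2"] * seg      # second model: image-feature branch
    fused = v1 + v2              # fusion feature vector (a scalar here)
    return params["m3"] * fused  # third model: sample analysis result

def train_step(x, label, lr=0.05):
    pred = forward(x)
    err = pred - label           # difference between result and label data
    for k in params:             # adjust every model from the shared error
        params[k] -= lr * err * x
    return err

# Fit the toy pipeline so that input 1.0 maps to label 1.0.
errs = [abs(train_step(1.0, 1.0)) for _ in range(50)]
```

The error shrinks monotonically in this toy because all four parameters receive the same corrective signal, mirroring how the patent's joint objective couples the four models.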
According to another aspect of the embodiments of the present application, there is provided a network model training apparatus, including: a first sample extraction module configured to input sample medical image data into a fourth machine learning model to obtain sample lesion characterization data, wherein the sample medical image data includes label data; a second sample extraction module configured to extract sample lesion feature information based on the sample lesion characterization data; a first sample lesion feature extraction module configured to input the sample lesion feature information into a first machine learning model to obtain a first lesion feature vector sample; a second sample lesion feature extraction module configured to input the sample medical image data and the sample lesion characterization data into a second machine learning model to obtain a second lesion feature vector sample; a sample merging module configured to merge the first lesion feature vector sample and the second lesion feature vector sample to obtain a fusion feature vector sample corresponding to the lesion; a sample analysis module configured to input the fusion feature vector sample into a third machine learning model to obtain a sample analysis result of the lesion; and a parameter adjustment module configured to adjust the network parameters of the first, second, third, and fourth machine learning models according to the difference between the sample analysis result and the label data.
According to another aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing a computer program for performing any of the medical-image-based lesion analysis methods described above.
According to another aspect of the embodiments of the present application, there is provided an electronic device, including: a processor; and a memory configured to store instructions executable by the processor; wherein the processor, when executing the instructions, implements any of the medical-image-based lesion analysis methods described above.
It can be seen that, in the lesion analysis method, apparatus, electronic device, and computer-readable storage medium based on medical images provided by the embodiments of the present application, lesion feature information is extracted from the lesion characterization data, i.e., attribute features of the lesion are extracted using preset extraction rules and statistics, and this feature information is input into a first machine learning model to obtain a first lesion feature vector. The first lesion feature vector therefore encodes feature information derived from preset extraction rules and statistics, which effectively helps avoid overfitting. Meanwhile, the medical image data and the lesion characterization data are input directly into a second machine learning model to obtain a second lesion feature vector, which encodes feature information abstracted directly from the medical image data and the lesion characterization data that cannot be obtained from preset extraction rules and statistics. The two lesion feature vectors are then fused into a fusion feature vector, in which the information of the first and second lesion feature vectors is complementary, so the lesion's feature information is expressed more comprehensively. Obtaining the analysis result of the lesion from this fusion feature vector thus draws on more comprehensive feature information and can significantly improve the accuracy of the lesion analysis result.
Drawings
Fig. 1 is a flowchart of a lesion analysis method based on medical images according to an embodiment of the present application.
Fig. 2 is a flowchart of a lesion analysis and early-warning method based on medical images according to another embodiment of the present application.
Fig. 3 is a flowchart of a method for analyzing a lung lesion based on a lung medical image according to another embodiment of the present application.
Fig. 4 is a schematic structural diagram of a lesion analysis device based on medical images according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a lesion analysis and early-warning device based on medical images according to another embodiment of the present application.
Fig. 6 is a schematic structural diagram of a pneumonia lesion analysis device according to another embodiment of the present application.
Fig. 7 is a flow chart of a network model training method according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a network model training device according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Summary of the application
As described above, existing lesion analysis of medical images either only frames the position of a lesion with a bounding box and performs no further analysis of the framed lesion, or compares the lesion features in a medical image with lesion features in a database rather than analyzing the lesion itself; in both cases the resulting lesion analysis is insufficiently comprehensive and accurate. In view of these technical problems, the present application provides a medical-image-based lesion analysis method: the medical image data, together with lesion characterization data extracted from it, is input into one machine learning model to obtain one lesion feature vector; lesion feature information derived from the lesion characterization data is input into another machine learning model to obtain another lesion feature vector; the two lesion feature vectors are merged; and the lesion analysis result is obtained from the merged result. The first lesion feature vector is obtained by analyzing the lesion characterization data with preset extraction rules and statistics, that is, from the basic attribute features of the lesion, while the second is obtained by having a machine learning model analyze the medical image data and the lesion characterization data directly, so the feature vector obtained by merging the two expresses the lesion information more comprehensively and concretely. Compared with merely marking lesion positions in the medical image or comparing lesion features against a database, the lesion analysis result obtained by the embodiments of the present application is more comprehensive, and because different machine learning models are used to extract the feature vectors, the final analysis result is more accurate.
It should be understood that the method may be executed by a processor of a local electronic device (e.g., a local medical device or other computer device), or by a cloud server, in which case the local electronic device communicates with the cloud server to obtain the lesion analysis result. The specific hardware scenario in which the medical-image-based lesion analysis method is applied is not strictly limited.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary lesion analysis method
Fig. 1 is a flow chart of a lesion analysis method based on medical images according to an embodiment of the present application. As shown in Fig. 1, the lesion analysis method includes the following steps:
Step 110: Lesion characterization data is extracted based on the medical image data.
The medical image data may come from a medical image such as computed tomography (CT), magnetic resonance imaging (MRI), computed radiography (CR), or digital radiography (DR), which is not specifically limited in this application. The medical image data can be obtained directly from the corresponding instrument, such as a CT or MRI scanner, or from a hospital database.
In addition, the content of the medical image itself can be chosen according to the actual application scenario; for example, it may be an image from a neurosurgical brain examination or from a thoracic-surgery lung examination. The lesion characterization data describes the characteristic appearance of a lesion in the medical image, and may be a single feature or a combination of features, such as the contour of the lesion, the contour of the structural unit of the organ where the lesion is located, and the overall contour of the organ where the lesion is located. As another example, when the medical image data includes lung medical image data, the lesion characterization data includes pneumonia lesion characterization data, such as a pneumonia contour, a lung lobe contour, and a lung contour. The application is not specifically limited in this respect.
Step 120: focal feature information is extracted based on the focal characterization data.
The lesion feature information is information about the lesion obtained by processing the lesion characterization data according to preset extraction rules and statistical methods, and includes: the proportion of the lesion in the organ where it is located, the number of infected structural units of that organ, the maximum infected area of a structural unit of that organ, the HU (Hounsfield Unit) value distribution of the lesion regions in different structural units of the organ, and so on. For example, when the medical image data are CT images, the lesion volume and the volume of the organ where the lesion is located can be obtained by calculating the lesion area in each image slice and combining it with the slice spacing, so as to calculate the lesion-to-organ ratio. For another example, when the lesion is pneumonia, the whole-lung ratio of the pneumonia and the number of infected lung lobes can be calculated from the pneumonia lesion characterization data obtained from the CT images.
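The slice-based volume calculation described above can be sketched as follows. This is an illustrative sketch under assumed inputs (binary NumPy masks per CT slice, a known pixel area, and a known slice spacing), not the patent's own implementation; all function names are hypothetical.

```python
# Illustrative sketch (not the patent's code): computing the lesion-to-organ
# volume ratio from per-slice binary segmentation masks. Volume is
# approximated as (sum of per-slice mask areas) * slice spacing.
import numpy as np

def stack_volume(masks, pixel_area_mm2, slice_spacing_mm):
    """Approximate volume (mm^3) from a list of binary masks, one per CT slice."""
    total_area_mm2 = sum(mask.sum() * pixel_area_mm2 for mask in masks)
    return total_area_mm2 * slice_spacing_mm

def lesion_organ_ratio(lesion_masks, organ_masks, pixel_area_mm2, slice_spacing_mm):
    """Ratio of lesion volume to the volume of the organ where it is located."""
    lesion_vol = stack_volume(lesion_masks, pixel_area_mm2, slice_spacing_mm)
    organ_vol = stack_volume(organ_masks, pixel_area_mm2, slice_spacing_mm)
    return lesion_vol / organ_vol if organ_vol > 0 else 0.0
```

In practice the pixel area and slice spacing would be read from the image metadata (e.g., DICOM pixel spacing and slice thickness).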
Step 130: the lesion feature information is input into a first machine learning model to obtain a first lesion feature vector.
The first machine learning model is used to analyze the lesion feature information extracted from the lesion characterization data. It may adopt a convolutional neural network architecture, a fully-connected neural network architecture, or the like, and is established through a pre-training process.
It should be noted that the first machine learning model may be replaced by an expert system to analyze the lesion feature information and obtain the first lesion feature vector. An expert system includes a knowledge base storing the knowledge of the problem domain and expert experience, together with an inference engine that applies algorithms or decision strategies to the knowledge in the knowledge base, so that the expert system can simulate a human expert in solving problems of that domain. Machine learning models described below, such as the second and third machine learning models, may likewise be replaced by expert systems.
Step 140: the medical image data and the lesion characterization data are input into a second machine learning model to obtain a second lesion feature vector.
The second machine learning model may also adopt a convolutional neural network architecture, a fully-connected neural network architecture, or the like. Its specific type and architecture can be adjusted according to the requirements of the actual application scenario and are not strictly limited here.
Step 150: and merging the first focus feature vector and the second focus feature vector to obtain a fusion feature vector corresponding to the focus.
The first lesion feature vector is obtained by analyzing the lesion feature information derived from preset rules and statistical methods, while the second lesion feature vector is the feature extraction result of the medical image data and the lesion characterization data. Merging the two therefore synthesizes the lesion feature information, the medical image data, and the lesion characterization data, providing more comprehensive data support for the subsequent lesion analysis result. When the lengths of the first and second lesion feature vectors differ, they can be spliced in parallel (concatenated) to obtain the fused feature vector. When their lengths are the same, they can be added directly, spliced with weights, or added with weights; giving the two vectors different weights makes the resulting fused feature vector more expressive of the lesion and improves the accuracy of the subsequent analysis result. However, the specific manner of fusing the first and second lesion feature vectors is not specifically limited here.
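The two fusion strategies described above can be sketched as follows. This is an illustrative example (not the patent's code) assuming the feature vectors are one-dimensional NumPy arrays; the function name and default weights are assumptions.

```python
# Illustrative sketch of the fusion step: concatenation (parallel splicing)
# when the vector lengths differ, weighted addition when they are the same.
import numpy as np

def fuse(v1, v2, w1=1.0, w2=1.0):
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    if v1.shape == v2.shape:
        return w1 * v1 + w2 * v2               # weighted addition
    return np.concatenate([w1 * v1, w2 * v2])  # parallel splicing
```

With `w1 = w2 = 1.0` the same-length branch reduces to the direct addition mentioned above; unequal weights implement the weighted addition variant.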
Step 160: and obtaining the analysis result of the focus according to the fusion feature vector.
The fused feature vector synthesizes the lesion feature information, the medical image data, and the lesion characterization data, and the lesion to which it belongs is analyzed according to this vector; for example, the category of the lesion or the severity of the lesion may be analyzed. It should be understood that the lesion analysis result is an intermediate data result obtained from the medical image data, which may assist a doctor in diagnosing a patient in combination with other indicators. For example, when the medical image data is lung medical image data, the lesion analysis result may be a lung cancer lesion analysis result, but during diagnosis the doctor must still perform examinations such as bronchoscopy, lung puncture, or genetic testing, and comprehensively consider all of these results to determine whether the patient is ill. The lesion analysis result provided by the embodiments of the present application thus serves to assist the doctor in diagnosis.
Therefore, according to the medical image-based lesion analysis method, lesion feature information is extracted from the lesion characterization data, so that attribute features of the lesion are extracted based on preset extraction rules and statistical methods; this feature information is input into the first machine learning model to obtain the first lesion feature vector, which therefore contains feature information obtained by the preset rules and statistics and can effectively avoid overfitting. Meanwhile, the medical image data and the lesion characterization data are input directly into the second machine learning model to obtain the second lesion feature vector, which contains feature information abstracted directly from the medical image data and the lesion characterization data that cannot be obtained by the preset rules and statistics. The two lesion feature vectors are then fused, achieving information complementarity between them, so that the fused feature vector expresses the feature information of the lesion more comprehensively. Obtaining the lesion analysis result from this fused feature vector therefore significantly improves the accuracy of the analysis.
In another embodiment of the present application, a machine learning model may be used to analyze a fused feature vector of a lesion, and specifically, obtaining an analysis result of the lesion according to the fused feature vector includes: and inputting the fusion feature vector into a third machine learning model to obtain a focus analysis result.
The third machine learning model may include one machine learning sub-model or a combination of several, and may be implemented, for example, with a convolutional neural network, to analyze the fused feature vector of the lesion. If only the category of the lesion is required, one machine learning sub-model processes the fused feature vector and analyzes the category. If only the severity of the lesion is required, one machine learning sub-model processes the fused feature vector and analyzes the severity. When both the category and the severity of the lesion are required, two machine learning sub-models are used to obtain the category and the severity respectively.
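A third model with two sub-model heads, one for the lesion category and one for the severity, could be sketched as follows. This is an illustrative sketch only: the dimensions are hypothetical and the weights are randomly initialized stand-ins for a pre-trained model, so the outputs are not meaningful predictions.

```python
# Illustrative sketch (hypothetical, untrained weights): a shared hidden
# layer feeding two heads, one producing category probabilities via softmax,
# the other a severity score in (0, 1) via a sigmoid.
import numpy as np

rng = np.random.default_rng(0)
FUSED_DIM, HIDDEN, NUM_CATEGORIES = 8, 16, 7   # assumed dimensions

W1 = rng.normal(size=(FUSED_DIM, HIDDEN))      # shared layer weights
Wc = rng.normal(size=(HIDDEN, NUM_CATEGORIES)) # category head weights
Ws = rng.normal(size=(HIDDEN, 1))              # severity head weights

def analyze_lesion(fused):
    h = np.maximum(fused @ W1, 0.0)            # shared hidden layer (ReLU)
    logits = h @ Wc
    category_probs = np.exp(logits - logits.max())
    category_probs /= category_probs.sum()     # softmax over lesion categories
    severity = 1.0 / (1.0 + np.exp(-(h @ Ws)[0]))  # sigmoid severity score
    return category_probs, severity
```

A production model would of course be trained end to end; the sketch only shows how one fused vector can feed two analysis heads at once.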
The severity of the lesion may be trained into the third machine learning model in advance; for example, the model is pre-trained to classify severity as mild, moderate, severe, and so on, analyzes the fused features of the lesion, and outputs the severity. Alternatively, the severity may be expressed as a value in the interval [0, 1] divided by thresholds: the third machine learning model analyzes the fused feature vector and outputs a value in [0, 1], and with severity thresholds set at 0, 0.25, 0.50, 0.75, and 1, the severity of the lesion is divided into four levels. The present application does not specifically limit the way the severity of the lesion is expressed.
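The thresholding scheme just described can be sketched as follows. This is an illustrative example using the thresholds 0, 0.25, 0.50, 0.75, and 1 from the text; the level names are assumptions, since the embodiment does not name the four levels.

```python
# Illustrative sketch: mapping a model output in [0, 1] to one of four
# severity levels using the example thresholds 0, 0.25, 0.50, 0.75, 1.
def severity_level(score):
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    if score < 0.25:
        return "level 1"
    if score < 0.50:
        return "level 2"
    if score < 0.75:
        return "level 3"
    return "level 4"
```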
In addition, the fused feature vector contains a large amount of feature information and is high-dimensional; analyzing it with a machine learning model can speed up the analysis. Moreover, according to different requirements for lesion analysis, the third machine learning model can adopt different machine learning sub-models to meet those requirements respectively, making the lesion analysis result more comprehensive.
In another embodiment of the present application, for more comprehensive analysis of the lesion, inputting the fused feature vector into a third machine learning model to obtain a lesion analysis result includes: the fused feature vectors are input into a third machine learning model to obtain the category and/or severity of the lesion.
Specifically, the lesion is classified into its category and/or severity according to the various feature expressions of the lesion in the fused feature vector. For example, when the lesion categories include pneumonia lesions, the pneumonia lesions are classified by etiology into bacterial, viral, mycoplasma, chlamydia, fungal, protozoal pneumonia lesions, and so on, and the severity of each pneumonia lesion can be classified as mild, moderate, severe, and so on. The lesion is thus assigned in detail to the corresponding category and/or severity, allowing the doctor to diagnose further according to the analyzed category and/or severity and reducing the doctor's workload.
In another embodiment of the present application, as shown in Fig. 2, analyzing the lesion in the medical image to obtain its category and/or severity can assist the doctor in rapidly diagnosing the patient and carrying out the corresponding diagnosis and treatment. The analysis result of the lesion in the medical image is therefore fed back to the doctor in a timely manner. Specifically, after the fused feature vector is obtained, the method further includes the following steps:
step 200: the fused feature vectors are input into a third machine learning model to obtain the category and/or severity of the lesion.
Step 210: and sending out early warning according to the type and/or the severity of the focus.
After the category and/or severity of the lesion is analyzed from the medical image, the analysis result is fed back to the doctor, so that a patient in serious condition can be diagnosed and treated in time, or, when the patient's lesion category corresponds to an infectious lesion, the doctor can diagnose promptly and isolate the patient through the early warning to avoid infecting others. The early warning may take the form of a program instruction on a computer, an e-mail push, a short message pushed to a mobile terminal, and the like, notifying the doctor of the warning information.
In another embodiment of the present application, the lesion characterization data is data, obtained from the medical image data, that can represent the basic characteristics of the lesion, and the obtaining operation may be performed using a machine learning model. Specifically, extracting lesion characterization data based on medical image data includes: inputting the medical image data into a fourth machine learning model to obtain the lesion characterization data, wherein the lesion characterization data includes one or more of the following combinations: the contour of the lesion, the contour of the structural unit of the organ where the lesion is located, and the overall contour of the organ where the lesion is located.
Specifically, the fourth machine learning model may be a combination of one or more machine learning sub-models, and different forms of machine learning sub-models are selected according to the difference of the lesion characterization data, for example, when the lesion characterization data includes a lesion contour, the fourth machine learning model includes a lesion contour segmentation model. For another example, when the lesion characterization data includes a lesion contour, a contour of a structural unit of an organ in which the lesion is located, and an overall contour of an organ in which the lesion is located, the fourth machine learning model includes a lesion contour segmentation model, a contour segmentation model of a structural unit of an organ in which the lesion is located, and an overall contour segmentation model of an organ in which the lesion is located. It should be appreciated that the fourth machine learning model includes one or more machine learning sub-models, which are determined from lesion characterization data to be extracted from the medical image data, as not specifically limited in this application.
The fourth machine learning model may be a segmentation network such as U-Net (a neural network capable of performing image segmentation on two-dimensional images) or FCN (Fully Convolutional Network), and may be optimized with structures such as ResNet (residual neural network).
For example, the fourth machine learning model may adopt the U-Net image segmentation framework: noise is removed by preprocessing the sample medical image data, a training set of sample medical images is obtained after data enhancement, and this training set is input into the U-Net model to train it. It should be understood that the machine learning models in this application are all pre-trained models; image segmentation is performed on the medical image by the trained U-Net model to obtain the contour of the lesion, the contour of the structural unit of the organ where the lesion is located, the overall contour of the organ where the lesion is located, and so on.
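When training segmentation networks such as U-Net on lesion and organ contours, the Dice coefficient is a commonly used overlap metric and loss term. The sketch below is an illustrative assumption, not taken from the patent itself.

```python
# Illustrative sketch: Dice overlap between a predicted and a ground-truth
# binary segmentation mask (1.0 = perfect overlap, 0.0 = disjoint).
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```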
In another embodiment of the present application, the lesion feature information is information about the lesion obtained by processing the lesion characterization data according to preset extraction rules and statistical methods. Specifically, the lesion feature information includes one or more of the following combinations: the proportion of the lesion in the organ where it is located, and the number of infected structural units of that organ.
To illustrate the technical solution more clearly, the following takes a pneumonia lesion as an example. It should be understood, however, that the methods provided herein are applicable to a variety of medical image-based lesion analysis procedures, and the present application does not strictly limit the specific category of lesion.
In another embodiment of the present application, the medical image data includes lung medical image data. Pneumonia lesion characterization data is extracted based on the lung medical image data, and pneumonia lesion feature information is extracted based on the pneumonia lesion characterization data. The pneumonia lesion feature information is input into the first machine learning model to obtain a first pneumonia lesion feature vector, and the lung medical image data and the pneumonia lesion characterization data are input into the second machine learning model to obtain a second pneumonia lesion feature vector. The two vectors are merged to obtain the fused feature vector corresponding to the pneumonia lesion, and the analysis result of the pneumonia lesion is obtained from the fused feature vector.
In another embodiment of the present application, the lesion includes pneumonia. The categories of pneumonia include: bacterial pneumonia, novel coronavirus pneumonia, viral pneumonia other than novel coronavirus pneumonia, mycoplasma pneumonia, chlamydia pneumonia, fungal pneumonia, and protozoal pneumonia; and/or the severity of the pneumonia includes: mild, moderate, and severe. The fused feature vector corresponding to the pneumonia lesion is analyzed to obtain the analysis result of the pneumonia lesion; specifically, the fused feature vector is input into the third machine learning model to obtain the category and/or severity of the pneumonia.
In another embodiment of the present application, the severity of the pneumonia further comprises: probability values for novel coronavirus pneumonia. Specifically, in the training sample data, the training sample of the novel coronavirus pneumonia is marked with different labels by a doctor, and then the machine learning model is trained, so that the trained machine learning model can analyze the focus of the novel coronavirus pneumonia more comprehensively, and the probability that the focus is the novel coronavirus pneumonia can be obtained when the lung medical image data is analyzed by the method.
In another embodiment of the present application, after inputting the fused feature vector into the third machine learning model to obtain the category and/or severity of pneumonia, further comprising: and sending out early warning according to the type and/or severity of the pneumonia.
Pneumonia is contagious to some degree; viral pneumonia in particular is transmitted by inhalation, and the virus easily infects others when the patient coughs. Therefore, after the category and/or severity of the pneumonia is determined, an early warning is issued to remind the doctor, so that the patient can be further diagnosed and a judgment made on whether isolation is required.
In another embodiment of the present application, different warnings can be issued according to the different categories and/or severities of pneumonia. Specifically, as shown in Fig. 3, the lesion includes pneumonia, and issuing an early warning according to the category and/or severity of the lesion may specifically include:
Step 320: when the severity of the pneumonia is moderate, sending out low-level early warning;
step 330: when the severity of the pneumonia is severe, sending out a medium-grade early warning;
step 340: when the type of pneumonia is novel coronavirus pneumonia, advanced early warning is sent out.
When the analysis result of the lesion is a pneumonia other than novel coronavirus pneumonia, the early warning is issued according to the severity of the pneumonia: a low-level warning when the severity is moderate, and a medium-level warning when the severity is severe. When the analysis result of the lesion is novel coronavirus pneumonia, a high-level warning is issued directly, regardless of the probability that the pneumonia is novel coronavirus pneumonia. Because novel coronavirus pneumonia spreads through droplets, contact, and the like, it is extremely contagious, and a patient can easily infect others while waiting to see a doctor; therefore, once the category of the pneumonia is analyzed to be novel coronavirus pneumonia, the high-level warning reminds the medical staff to isolate the patient in time and avoid infecting others.
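The warning rule of steps 320 to 340 can be sketched as follows. This is an illustrative example; the string labels for categories, severities, and warning levels are assumptions standing in for whatever representation an implementation would use.

```python
# Illustrative sketch of the early-warning rule: a high-level warning for
# novel coronavirus pneumonia regardless of severity; otherwise the warning
# level is chosen by the severity of the pneumonia.
def warning_level(category, severity):
    if category == "novel coronavirus pneumonia":
        return "high"
    if severity == "severe":
        return "medium"
    if severity == "moderate":
        return "low"
    return "none"
```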
In another embodiment of the present application, the extracting the pneumonia focus characterization data based on the lung medical image data includes: inputting the lung medical image into a fourth machine learning model to obtain pneumonia focus characterization data; the fourth machine learning model comprises a pneumonia segmentation model, a lung segmentation model and a lung lobe segmentation model; the pneumonia lesion characterization data included: pneumonia profile, lung lobe profile, lung profile.
Pneumonia segmentation, lung segmentation, and lung lobe segmentation are performed on the medical image by the pre-trained pneumonia segmentation model, lung segmentation model, and lung lobe segmentation model respectively. For example, the lung lobe segmentation model extracts the lung lobes from the whole lung, including the contour of each lobe, and may adopt a deep learning segmentation network such as the U-Net image segmentation network. The principles of the pneumonia segmentation model and the lung segmentation model are similar to that of the lung lobe segmentation model and will not be repeated here.
Exemplary lesion analysis device
The following are embodiments of the lesion analysis device of the present application, which can execute the lesion analysis method embodiments of the present application. For details not disclosed in the device embodiments, please refer to the lesion analysis method embodiments of the present application.
Fig. 4 is a schematic structural diagram of a focus analysis device 40 based on medical images according to an embodiment of the present application. As shown in fig. 4, the lesion analyzing device 40 includes:
the first extraction module 410 is configured to extract lesion characterization data based on medical image data.
The second extraction module 420 is configured to extract lesion characterization information based on the lesion characterization data.
The first lesion feature extraction module 430 is configured to input lesion feature information into a first machine learning model to obtain a first lesion feature vector.
The second lesion feature extraction module 440 is configured to input medical image data and lesion characterization data into a second machine learning model to obtain a second lesion feature vector.
A merging module 450 configured to merge the first lesion feature vector and the second lesion feature vector to obtain a merged feature vector corresponding to the lesion.
The analysis module 460 is configured to obtain an analysis result of the lesion according to the fusion feature vector.
In another embodiment of the present application, the analysis module 460 is further configured to: and inputting the fusion feature vector into a third machine learning model to obtain a focus analysis result.
In another embodiment of the present application, the analysis module 460 is further configured to: the fused feature vectors are input into a third machine learning model to obtain the category and/or severity of the lesion.
Fig. 5 is a schematic structural diagram of a focus analysis device 50 based on medical images according to another embodiment of the present application. As shown in fig. 5, the apparatus 50 further includes: the early warning module 570 sends out an early warning according to the type and/or severity of the lesion.
In another embodiment of the present application, the first extraction module 410 is configured to: inputting the medical image data into a fourth machine learning model to obtain lesion characterization data; wherein the lesion characterization data comprises one or more combinations of: the outline of the focus, the outline of the structural unit of the organ where the focus is located, and the whole outline of the organ where the focus is located.
In one embodiment of the present application, the medical image-based lesion analysis device 40 includes a lung medical image-based pneumonia lesion analysis device 60, as shown in fig. 6, the device 60 includes: a first pneumonia focus extraction module 610 configured to extract pneumonia focus characterization data based on lung medical image data; a second pneumonia focus extraction module 620 configured to extract pneumonia focus feature information based on the pneumonia focus characterization data; a first pneumonia focus feature extraction module 630 configured to input pneumonia focus feature information into a first machine learning model to obtain a first pneumonia focus feature vector; a second pneumonia focus feature extraction module 640 configured to input lung medical image data and pneumonia focus characterization data into a second machine learning model to obtain a second pneumonia focus feature vector; a pneumonia focus combining module 650 configured to combine the first pneumonia focus feature vector and the second pneumonia focus feature vector to obtain a fusion feature vector corresponding to the pneumonia focus; and a pneumonia focus analysis module 660 configured to obtain an analysis result of the pneumonia focus according to the fusion feature vector.
In one embodiment of the present application, the lesion comprises pneumonia; the types of pneumonia include: bacterial pneumonia, novel coronavirus pneumonia, other viral pneumonia than novel coronavirus pneumonia, mycoplasma pneumonia, chlamydia pneumonia, fungal pneumonia, protozoal pneumonia; and/or, wherein the severity of the pneumonia comprises: mild, moderate and severe.
In one embodiment of the present application, the severity of the pneumonia further includes: probability values for novel coronavirus pneumonia.
In one embodiment of the present application, the pneumonia lesion analysis module 660 is further configured to: and inputting the fusion feature vector into a third machine learning model to obtain an analysis result of the pneumonia.
In one embodiment of the present application, the pneumonia lesion analysis module 660 is further configured to: the fused feature vectors are input into a third machine learning model to obtain the category and/or severity of pneumonia.
In one embodiment of the present application, the apparatus 60 further comprises: the pneumonia focus early warning module 670 sends out early warning according to the type and/or severity of the pneumonia.
In one embodiment of the present application, the lesion comprises pneumonia; wherein the pneumonia focus early warning module 670 is further configured to:
low-level early warning unit 601: when the severity of the pneumonia is moderate, sending out low-level early warning;
intermediate-level early warning unit 602: when the severity of the pneumonia is severe, sending out a medium-grade early warning;
advanced early warning unit 603: when the type of pneumonia is novel coronavirus pneumonia, advanced early warning is sent out.
In one embodiment of the present application, the first pneumonia lesion extraction module 610 is further configured to: inputting the lung medical image into a fourth machine learning model to obtain pneumonia focus characterization data; the fourth machine learning model comprises a pneumonia segmentation model, a lung segmentation model and a lung lobe segmentation model; the pneumonia lesion characterization data included: pneumonia profile, lung lobe profile, lung profile.
The specific functions and operations of the respective modules in the above-described medical image-based lesion analysis device 40 and the pulmonary medical image-based pneumonia lesion analysis device 60 have been described in detail in the medical image-based lesion analysis method described above with reference to fig. 1 to 3, and thus, repetitive descriptions thereof will be omitted herein.
It should be noted that the medical image-based lesion analysis device according to the embodiments of the present application may be integrated into the electronic device as a software module and/or a hardware module, in other words, the electronic device may include the medical image-based lesion analysis device. For example, the medical image-based lesion analysis device may be a software module in the operating system of the electronic device, or may be an application developed for it; of course, the lesion analysis device based on medical image may also be one of a plurality of hardware modules of the electronic device.
In another embodiment of the present application, the medical image-based lesion analysis device and the electronic device may also be separate devices (e.g., servers); the lesion analysis device may be connected to the electronic device via a wired and/or wireless network and transmit interaction information in an agreed data format.
Exemplary network model training methods
Fig. 7 is a flow chart of a network model training method according to an embodiment of the present application. As shown in fig. 7, the network model training method includes the steps of:
step 710: the sample medical image data is input into a fourth machine learning model to obtain sample lesion characterization data, wherein the sample medical image data includes marker data.
Specifically, the sample lesion characterization data includes one or more of the following combinations: the contour of the sample lesion, the contour of the structural unit of the organ where the sample lesion is located, and the overall contour of that organ. The fourth machine learning model may be formed by combining one or more machine learning sub-models, with different forms of sub-models selected according to the sample lesion characterization data. The fourth machine learning model may be a segmentation network such as U-Net (a neural network capable of performing image segmentation on two-dimensional images) or FCN (Fully Convolutional Network), and may be optimized with structures such as ResNet (residual neural network).
The marker data is labeled in advance: it may be medical image data annotated by a doctor according to experience or medical records, or medical image data annotated by a computer by computing against lesion information stored on the computer or on a network, which is not particularly limited here.
Step 720: sample lesion characterization information is extracted based on the sample lesion characterization data.
Specifically, the sample lesion feature information is information about the sample lesion obtained by processing the sample lesion characterization data according to preset extraction rules and statistical methods, and includes: the proportion of the sample lesion in the organ where it is located, the number of infected structural units of that organ, the maximum infected area of a structural unit of that organ, the HU (Hounsfield Unit) value distribution of the lesion regions in different structural units of the organ, and so on.
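The HU value distribution mentioned above can be summarized, for instance, as a histogram over the lesion region of each slice. The sketch below is an illustrative assumption (the bin edges are arbitrary examples), not the patent's implementation.

```python
# Illustrative sketch: histogram of HU values inside the lesion region of
# one CT slice, using assumed HU bin edges.
import numpy as np

def hu_distribution(ct_slice, lesion_mask, bins=(-1000, -500, 0, 500, 1000)):
    """Histogram counts of the HU values inside the lesion region."""
    hu_values = np.asarray(ct_slice)[np.asarray(lesion_mask).astype(bool)]
    counts, _ = np.histogram(hu_values, bins=bins)
    return counts
```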
Step 730: sample lesion feature information is input into a first machine learning model to obtain a first lesion feature vector sample.
The first machine learning model is used to perform training analysis on the sample lesion feature information extracted from the sample lesion characterization data, and may adopt a convolutional neural network architecture, a fully-connected neural network architecture, or the like. The second and third machine learning models select their network architectures in the same way as the first machine learning model, which is not repeated here.
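For intuition only, a single fully-connected layer — one building block of the fully-connected architecture mentioned above — maps the lesion feature information to a feature vector. The toy weights, input values, and ReLU choice below are illustrative assumptions, not values from the patent:

```python
def dense_layer(features, weights, bias):
    """One fully-connected layer with ReLU: out_j = max(0, b_j + sum_i w_ji * x_i)."""
    out = []
    for w_row, b in zip(weights, bias):
        z = b + sum(w * x for w, x in zip(w_row, features))
        out.append(max(0.0, z))  # ReLU keeps the non-negative part
    return out

# Toy input: e.g. lesion/organ ratio, infected-unit count, max infected area.
features = [0.25, 3.0, 120.0]
weights = [[0.1, 0.2, 0.0],
           [-1.0, 0.0, 0.01]]
bias = [0.0, -0.5]
vector = dense_layer(features, weights, bias)  # a (toy) first lesion feature vector
```

A real model would stack several such layers and learn the weights during the training of step 770.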
Step 740: the sample medical image data and the sample lesion characterization data are input into a second machine learning model to obtain a second lesion feature vector sample.
Step 750: the first lesion feature vector sample and the second lesion feature vector sample are combined to obtain a fused feature vector sample corresponding to the lesion.
When the first and second lesion feature vector samples have different lengths, they can be concatenated to obtain the fused feature vector. When they have the same length, they can be added directly, concatenated with weights, or added with weights; the two vectors may carry different weights, so that the resulting fused feature vector represents the lesion more expressively and improves the accuracy of the subsequent analysis result. However, the specific manner of fusing the first and second lesion feature vector samples is not specifically limited here.
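The fusion options above can be sketched in a few lines; this is a schematic reading of the text, with the mode names and weight defaults assumed for illustration:

```python
def fuse(v1, v2, mode="auto", w1=1.0, w2=1.0):
    """Fuse two lesion feature vectors, as lists of floats.

    Different lengths: concatenate (optionally weighted).
    Same length: element-wise (optionally weighted) addition.
    """
    if len(v1) != len(v2) or mode == "concat":
        # Splice the two vectors into one longer fused vector.
        return [w1 * x for x in v1] + [w2 * y for y in v2]
    if mode in ("sum", "auto"):
        # Same length: weighted element-wise addition.
        return [w1 * x + w2 * y for x, y in zip(v1, v2)]
    raise ValueError(f"unknown fusion mode: {mode}")
```

The weights `w1` and `w2` let the two feature sources contribute unequally, matching the weighted splicing/adding variants described above.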
Step 760: the fused feature vector sample is input into a third machine learning model to obtain a sample analysis result of the sample lesion.
The fused feature vector carries the sample lesion feature information, the sample medical image data, and the sample lesion characterization data, and the lesion to which it belongs is analyzed from this vector. For example, the type of the lesion, the severity of the lesion, and so on may be analyzed.
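As a minimal, non-authoritative sketch of the third machine learning model's role, a linear layer followed by softmax turns the fused feature vector into class probabilities; the class names and weights below are assumptions for illustration only:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(fused_vector, weight_rows, classes):
    """Linear layer + softmax over the fused feature vector."""
    logits = [sum(w * x for w, x in zip(row, fused_vector)) for row in weight_rows]
    return dict(zip(classes, softmax(logits)))

# Toy demo: two assumed lesion classes, toy weights.
probs = classify([1.0, 0.0],
                 [[2.0, 0.0],   # weights for class 0
                  [0.0, 2.0]],  # weights for class 1
                 ["bacterial pneumonia", "viral pneumonia"])
```

The patent's actual third model may be convolutional or fully connected; this shows only the shape of the mapping from fused vector to analysis result.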
Step 770: network parameters of the first machine learning model, the second machine learning model, the third machine learning model, and the fourth machine learning model are adjusted according to the difference between the sample analysis result and the marker data.
Specifically, the sample analysis result is obtained by processing the medical image data through each machine learning model, while the marker data contained in the medical image data has been annotated in advance, manually or by a computer. The network parameters of the first, second, third, and fourth machine learning models are adjusted by computing a loss function between the sample analysis result and the marker data. When the network parameters no longer change, or fluctuate only within a preset range, training is finished; at this point the first, second, third, and fourth machine learning models are trained.
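The adjust-until-stable loop of step 770 can be illustrated with a one-parameter toy: gradient descent on a squared-error loss, stopping once the update stays within a preset range. This is a didactic stand-in, not the patent's actual optimizer or loss:

```python
def train(param, target, lr=0.1, tolerance=1e-4, max_steps=1000):
    """Minimise loss = (param - target)^2 by gradient descent.

    Stops when the parameter update (the 'change') falls within the
    preset tolerance, mirroring the stopping rule described above.
    """
    for step in range(max_steps):
        grad = 2.0 * (param - target)  # d(loss)/d(param)
        update = lr * grad
        param -= update                # adjust the network parameter
        if abs(update) < tolerance:    # parameters no longer change: trained
            return param, step + 1
    return param, max_steps
```

In the patent's setting, `param` stands in for the jointly adjusted parameters of all four models, and the loss compares the sample analysis result with the marker data.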
Exemplary network model training apparatus
The following is an embodiment of a network model training apparatus of the present application, which may be used to perform the embodiments of the network model training method of the present application. For details not disclosed in the apparatus embodiments, please refer to the network model training method embodiments of the present application.
Fig. 8 is a schematic structural diagram of a network model training device 80 according to an embodiment of the present application. As shown in fig. 8, the network model training apparatus 80 includes:
the first sample extraction module 810 is configured to input sample medical image data into the fourth machine learning model to obtain sample lesion characterization data, wherein the sample medical image data includes marker data.
The second sample extraction module 820 is configured to extract sample lesion feature information based on the sample lesion characterization data.
The first sample lesion feature extraction module 830 is configured to input sample lesion feature information into a first machine learning model to obtain a first lesion feature vector sample.
The second sample lesion feature extraction module 840 is configured to input sample medical image data and sample lesion characterization data into a second machine learning model to obtain a second lesion feature vector sample.
The sample merging module 850 is configured to merge the first lesion feature vector sample and the second lesion feature vector sample to obtain a fused feature vector sample corresponding to the lesion.
The sample analysis module 860 is configured to input the fused feature vector samples into a third machine learning model to obtain a sample analysis result of the lesion.
The parameter adjustment module 870 is configured to adjust network parameters of the first machine learning model, the second machine learning model, the third machine learning model, and the fourth machine learning model according to the difference between the sample analysis result and the marker data.
The specific functions and operations of the respective modules in the above-described network model training apparatus 80 have been described in detail in the network model training method described above with reference to fig. 7, and thus, repetitive descriptions thereof will be omitted here.
It should be noted that the network model training apparatus according to the embodiments of the present application may be integrated into an electronic device as a software module and/or a hardware module; in other words, the electronic device may include the network model training apparatus. For example, the network model training apparatus may be a software module in the operating system of the electronic device, or an application developed for it; equally, it may be one of a number of hardware modules of the electronic device.
In another embodiment of the present application, the network model training apparatus and the electronic device may also be separate devices (e.g., servers), and the network model training apparatus may be connected to the electronic device through a wired and/or wireless network and transmit interaction information in an agreed data format.
Exemplary electronic device
Fig. 9 is a schematic structural diagram of an electronic device 90 according to other exemplary embodiments of the present application. As shown in fig. 9, the electronic device 90 includes: one or more processors 910; memory 920, and computer program instructions stored in memory 920 that, when executed by the one or more processors 910, implement the medical image-based lesion analysis method and the network model training method of any of the embodiments described above.
Processor 910 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in electronic device 90 to perform the desired functions.
Memory 920 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (Cache). Non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 910 may execute these program instructions to implement the steps of the medical image-based lesion analysis method and of the network model training method of the various embodiments of the present application above, and/or other desired functions. Information such as light intensity, compensation light intensity, position of the filter, etc. may also be stored in the computer-readable storage medium.
The electronic device 90 may further include: an input device 930, and an output device 940, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown in fig. 9).
For example, where the electronic device 90 is a stand-alone device, the input means 930 may be a communication network connector for receiving the acquired input signal from an external, removable device. In addition, the input device 930 may also include, for example, a keyboard, a mouse, a microphone, and the like.
The output device 940 may output various information to the outside, and may include, for example, a display, a speaker, a printer, and a communication network and a remote output apparatus connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 90 that are relevant to the present application are shown in fig. 9; components such as buses and input/output interfaces are omitted. In addition, the electronic device 90 may include any other suitable components depending on the particular application.
Exemplary computer-readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the medical image based lesion analysis method and the network model training method of any of the embodiments described above.
The computer program product may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, and conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the medical image-based lesion analysis method and the network model training method of any of the embodiments described above. The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium, which may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown. As will be appreciated by one of skill in the art, these devices, apparatuses, and equipment may be connected, arranged, and configured in any manner. Words such as "including", "comprising", and "having" are open-ended, mean "including but not limited to", and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" refers to the phrase "such as, but not limited to" and is used interchangeably therewith. It should also be understood that the qualifiers "first", "second", "third", etc. mentioned in the embodiments of the present application are used only for clarity in describing the technical solutions of the embodiments, and are not intended to limit the scope of protection of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of modules is merely a logical function division, and other manners of division may be implemented in practice.
In addition, it should be noted that the combinations of the technical features described in the present invention are not limited to those recited in the claims or described in the specific embodiments; all the technical features described in the present invention may be freely combined in any manner unless a contradiction arises between them.
It should be noted that the above list includes only specific embodiments of the present application; obviously, the present application is not limited to these embodiments, and many similar variations exist. All modifications and variations that a person skilled in the art could derive or conceive directly from the disclosure herein are intended to fall within the scope of protection of this application.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is to be construed as including any modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (14)
1. A medical image-based lesion analysis method, comprising:
extracting lesion characterization data based on the medical image data;
extracting lesion characterization information based on the lesion characterization data;
inputting the focus characteristic information into a first machine learning model to obtain a first focus characteristic vector;
Inputting the medical image data and the lesion characterization data into a second machine learning model to obtain a second lesion feature vector;
combining the first focus feature vector and the second focus feature vector to obtain a fusion feature vector corresponding to the focus; and
acquiring an analysis result of the focus according to the fusion feature vector;
wherein the lesion characterization data comprises one or more combinations of: the outline of the focus, the outline of the structural unit of the organ where the focus is located and the whole outline of the organ where the focus is located;
the lesion characterization information includes one or more combinations of the following: the ratio of the focus within the organ where the focus is located, the number of infected structural units of the organ where the focus is located, the maximum infected area among the structural units of the organ where the focus is located, and the HU value distribution of focus areas within different structural units of the organ where the focus is located;
the first machine learning model and the second machine learning model are respectively selected from a convolutional neural network architecture and/or a fully-connected neural network architecture.
2. The method of claim 1, wherein the obtaining the lesion analysis result from the fused feature vector comprises: and inputting the fusion feature vector into a third machine learning model to obtain the focus analysis result, wherein the third machine learning model adopts a convolutional neural network architecture and/or a fully-connected neural network architecture.
3. The method of claim 2, wherein said inputting the fused feature vector into a third machine learning model to obtain the lesion analysis result comprises: the fused feature vector is input into a third machine learning model to obtain the category and/or severity of the lesion.
4. A method according to claim 3, further comprising, after inputting the fused feature vector into a third machine learning model to obtain a category and/or severity of the lesion: and sending out early warning according to the type and/or the severity of the focus.
5. The method of claim 4, wherein extracting lesion characterization data based on medical image data comprises:
inputting the medical image data into a fourth machine learning model to obtain lesion characterization data;
the fourth machine learning model adopts a U-Net and/or full convolution neural network segmentation network.
6. The method of claim 5, wherein the lesion comprises pneumonia;
wherein the types of pneumonia include: bacterial pneumonia, novel coronavirus pneumonia, viral pneumonia other than novel coronavirus pneumonia, mycoplasma pneumonia, chlamydia pneumonia, fungal pneumonia, and protozoal pneumonia; and/or,
Wherein the severity of the pneumonia comprises: mild, moderate and severe.
7. The method of claim 6, wherein the severity of the pneumonia further comprises: probability values for the novel coronavirus pneumonia.
8. The method of claim 7, wherein the lesion comprises pneumonia;
wherein, according to the type and/or severity of the focus, sending out the early warning includes:
when the severity of the pneumonia is moderate, sending out low-level early warning;
when the severity of the pneumonia is severe, sending out a medium-grade early warning;
and when the type of the pneumonia is novel coronavirus pneumonia, sending out advanced early warning.
9. The method of claim 8, wherein the extracting pneumonia lesion characterization data based on lung medical image data comprises: inputting the lung medical image into a fourth machine learning model to obtain pneumonia lesion characterization data; wherein,
the fourth machine learning model comprises a pneumonia segmentation model, a lung segmentation model and a lung lobe segmentation model;
the pneumonia lesion characterization data includes: pneumonia profile, lung lobe profile, lung profile.
10. A medical image-based lesion analysis device, comprising:
A first extraction module configured to extract lesion characterization data based on medical image data;
a second extraction module configured to extract lesion characterization information based on the lesion characterization data;
a first lesion feature extraction module configured to input the lesion feature information into a first machine learning model to obtain a first lesion feature vector;
a second lesion feature extraction module configured to input the medical image data and the lesion characterization data into a second machine learning model to obtain a second lesion feature vector;
a merging module configured to merge the first lesion feature vector and the second lesion feature vector to obtain a merged feature vector corresponding to the lesion; and
the analysis module is configured to acquire an analysis result of the focus according to the fusion feature vector;
wherein the lesion characterization data comprises one or more combinations of: the outline of the focus, the outline of the structural unit of the organ where the focus is located and the whole outline of the organ where the focus is located;
the lesion characterization information includes one or more combinations of the following: the ratio of the focus within the organ where the focus is located, the number of infected structural units of the organ where the focus is located, the maximum infected area among the structural units of the organ where the focus is located, and the HU value distribution of focus areas within different structural units of the organ where the focus is located;
The first machine learning model and the second machine learning model are respectively selected from a convolutional neural network architecture and/or a fully-connected neural network architecture.
11. A method for training a network model, comprising:
inputting sample medical image data into a fourth machine learning model to obtain sample lesion characterization data, wherein the sample medical image data includes marker data;
extracting sample focus characteristic information based on the sample focus characterization data;
inputting the sample focus characteristic information into a first machine learning model to obtain a first focus characteristic vector sample;
inputting the sample medical image data and the sample lesion characterization data into a second machine learning model to obtain a second lesion feature vector sample;
combining the first focus feature vector sample and the second focus feature vector sample to obtain a fusion feature vector sample corresponding to the focus;
inputting the fusion feature vector sample into a third machine learning model to obtain a sample analysis result of the focus; and
according to the difference between the sample analysis result and the marking data, network parameters of the first machine learning model, the second machine learning model, the third machine learning model and the fourth machine learning model are adjusted;
Wherein the sample lesion characterization data comprises a combination of one or more of: the outline of the sample focus, the outline of a structural unit of an organ where the sample focus is located, and the whole outline of the organ where the sample focus is located;
the sample lesion characterization information includes one or more combinations of: the ratio of the sample focus within the organ where the sample focus is located, the number of infected structural units of the organ where the sample focus is located, the maximum infected area among the structural units of the organ where the sample focus is located, and the HU value distribution of focus areas within different structural units of the organ where the sample focus is located;
the first machine learning model, the second machine learning model and the third machine learning model are respectively selected from a convolutional neural network architecture and/or a fully-connected neural network architecture;
and the fourth machine learning model adopts a U-Net and/or full convolution neural network segmentation network.
12. A network model training apparatus, comprising:
a first sample extraction module configured to input sample medical image data into a fourth machine learning model to obtain sample lesion characterization data, wherein the sample medical image data includes marker data;
A second sample extraction module configured to extract sample lesion characterization information based on the sample lesion characterization data;
a first sample lesion feature extraction module configured to input the sample lesion feature information into a first machine learning model to obtain a first lesion feature vector sample;
a second sample lesion feature extraction module configured to input the sample medical image data and the sample lesion characterization data into a second machine learning model to obtain a second lesion feature vector sample;
a sample merging module configured to merge the first lesion feature vector sample and the second lesion feature vector sample to obtain a fused feature vector sample corresponding to the lesion;
a sample analysis module configured to input the fused feature vector sample into a third machine learning model to obtain a sample analysis result of the lesion; and
a parameter adjustment module configured to adjust network parameters of the first machine learning model, the second machine learning model, the third machine learning model, and the fourth machine learning model according to the difference between the sample analysis result and the marker data;
wherein the sample lesion characterization data comprises a combination of one or more of: the outline of the sample focus, the outline of a structural unit of an organ where the sample focus is located, and the whole outline of the organ where the sample focus is located;
The sample lesion characterization information includes one or more combinations of: the ratio of the sample focus within the organ where the sample focus is located, the number of infected structural units of the organ where the sample focus is located, the maximum infected area among the structural units of the organ where the sample focus is located, and the HU value distribution of focus areas within different structural units of the organ where the sample focus is located;
the first machine learning model, the second machine learning model and the third machine learning model are respectively selected from a convolutional neural network architecture and/or a fully-connected neural network architecture;
and the fourth machine learning model adopts a U-Net and/or full convolution neural network segmentation network.
13. A computer readable storage medium, characterized in that the storage medium stores a computer program for performing the medical image-based lesion analysis method according to any of the preceding claims 1-9.
14. An electronic device, comprising:
a processor;
a memory, wherein the memory is configured to store instructions executable by the processor;
the processor, when executing the instructions, implements the medical image-based lesion analysis method according to any of the preceding claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010259844.3A CN111476772B (en) | 2020-04-03 | 2020-04-03 | Focus analysis method and device based on medical image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111476772A CN111476772A (en) | 2020-07-31 |
CN111476772B true CN111476772B (en) | 2023-05-26 |
Family
ID=71749692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010259844.3A Active CN111476772B (en) | 2020-04-03 | 2020-04-03 | Focus analysis method and device based on medical image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111476772B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112071421A (en) * | 2020-09-01 | 2020-12-11 | 深圳高性能医疗器械国家研究院有限公司 | Deep learning estimation method and application thereof |
CN112807008A (en) * | 2021-01-27 | 2021-05-18 | 山东大学齐鲁医院 | Method and system for identifying actual mycoplasma pneumoniae and streptococcus pneumoniae of children based on imaging omics |
CN113052831B (en) * | 2021-04-14 | 2024-04-23 | 清华大学 | Brain medical image anomaly detection method, device, equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108376558A (en) * | 2018-01-24 | 2018-08-07 | 复旦大学 | A kind of multi-modal nuclear magnetic resonance image Case report no automatic generation method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9801597B2 (en) * | 2014-09-24 | 2017-10-31 | General Electric Company | Multi-detector imaging system with x-ray detection |
CN107025369B (en) * | 2016-08-03 | 2020-03-10 | 北京推想科技有限公司 | Method and device for performing conversion learning on medical images |
CN107203995A (en) * | 2017-06-09 | 2017-09-26 | 合肥工业大学 | Endoscopic images intelligent analysis method and system |
CN107633515A (en) * | 2017-09-19 | 2018-01-26 | 西安电子科技大学 | A kind of image doctor visual identity ability quantization method and system |
- 2020-04-03 CN CN202010259844.3A patent/CN111476772B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108376558A (en) * | 2018-01-24 | 2018-08-07 | 复旦大学 | A kind of multi-modal nuclear magnetic resonance image Case report no automatic generation method |
Also Published As
Publication number | Publication date |
---|---|
CN111476772A (en) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Das et al. | Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network | |
Giuste et al. | Explainable artificial intelligence methods in combating pandemics: A systematic review | |
Selvan et al. | Lung segmentation from chest X-rays using variational data imputation | |
CN111476772B (en) | Focus analysis method and device based on medical image | |
CN110838118B (en) | System and method for anomaly detection in medical procedures | |
Rajan et al. | Fog computing employed computer aided cancer classification system using deep neural network in internet of things based healthcare system | |
CN105956386B (en) | Health indicator index classification system and method based on Healthy People rabat | |
CN111080624B (en) | Sperm movement state classification method, device, medium and electronic equipment | |
US20210327055A1 (en) | Systems and methods for detection of infectious respiratory diseases | |
CN111524109A (en) | Head medical image scoring method and device, electronic equipment and storage medium | |
CN116848588A (en) | Automatic labeling of health features in medical images | |
Kavuran et al. | COVID-19 and human development: An approach for classification of HDI with deep CNN | |
Magrelli et al. | Classification of lung disease in children by using lung ultrasound images and deep convolutional neural network | |
CN113554640A (en) | AI model training method, use method, computer device and storage medium | |
KR20220123518A (en) | Method and device for generating improved surgical report using machine learning | |
CN116030063B (en) | Classification diagnosis system, method, electronic device and medium for MRI image | |
Hossain et al. | COVID-19 detection through deep learning algorithms using chest X-ray images | |
CN114283140A (en) | Lung X-Ray image classification method and system based on feature fusion and storage medium | |
CN111080625B (en) | Training method and training device for lung image strip and rope detection model | |
CN111476775B (en) | DR symptom identification device and method | |
CN111275558B (en) | Method and device for determining insurance data | |
Pandey et al. | An analysis of pneumonia prediction approach using deep learning | |
Rifa'i et al. | Analysis for diagnosis of pneumonia symptoms using chest X-Ray based on Resnet-50 models with different epoch | |
Duan et al. | An in-depth discussion of cholesteatoma, middle ear Inflammation, and langerhans cell histiocytosis of the temporal bone, based on diagnostic results | |
Ajad et al. | CV-CXR: A Method for Classification and Visualisation of Covid-19 virus using CNN and Heatmap |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information |
Address after: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085 Applicant after: Tuxiang Medical Technology Co.,Ltd. Address before: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085 Applicant before: INFERVISION |
GR01 | Patent grant | |