CN115100132A - Method and apparatus for analyzing tomosynthesis image, computer device and storage medium


Info

Publication number
CN115100132A
CN115100132A (application CN202210689683.0A)
Authority
CN
China
Prior art keywords
image
maximum density projection
tomosynthesis
analyzing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210689683.0A
Other languages
Chinese (zh)
Inventor
Cheng Zhao (程钊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Nanuoai Medical Technology Co., Ltd.
Original Assignee
Shenzhen Nanuoai Medical Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Nanuoai Medical Technology Co., Ltd.
Priority to CN202210689683.0A
Publication of CN115100132A
Legal status: pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5235Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
    • A61B6/5241Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT combining overlapping images of the same imaging modality, e.g. by stitching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/08Volume rendering
    • G06T5/90
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10112Digital tomosynthesis [DTS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Abstract

The application discloses a method for analyzing a tomosynthesis image, comprising the following steps: acquiring a tomosynthesis image; performing maximum density projection processing on a plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image; and analyzing, with a machine learning model, whether the maximum density projection image contains an abnormal region to obtain an analysis result, wherein the machine learning model is trained on a plurality of maximum density projection image samples in which abnormal regions are marked. The application also discloses a computer device and a computer-readable storage medium. The method aims to improve the efficiency with which a doctor, using a tomosynthesis image, diagnoses whether a lesion exists in the subject's body.

Description

Method and apparatus for analyzing tomosynthesis image, computer device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technology, and in particular, to a method and apparatus for analyzing a tomosynthesis image, a computer device, and a computer-readable storage medium.
Background
Digital tomosynthesis uses an X-ray tube and a digital detector to acquire a set of low-radiation-dose projection images, which are combined to synthesize any plane in the patient's body. Unlike conventional tomography, tomosynthesis is not limited to reconstructing a single plane; it can generate any number of slice images throughout the entire imaged volume. Because the slice images in a tomosynthesis image do not interfere with one another, this avoids one of the main weaknesses of conventional single-projection X-ray imaging: the superposition of organ structures in an X-ray image can blur the region of interest or produce structures that mimic disease, hampering the doctor's diagnosis of the subject.
Tomosynthesis techniques have been applied clinically to image the chest, abdomen, breast, head, and neck. However, because a tomosynthesis image comprises many slice images, a doctor diagnosing the subject from it must page through the slices one by one; and because a single slice image is of relatively low quality, it is hard to read with the naked eye. This makes it difficult for the doctor to quickly locate a lesion region in the subject's body from the tomosynthesis image.
The above is only for the purpose of assisting understanding of the technical solutions of the present application, and does not represent an admission that the above is prior art.
Disclosure of Invention
The application provides a method and apparatus for analyzing a tomosynthesis image, a computer device, and a computer-readable storage medium, aiming to improve the efficiency with which a doctor uses a tomosynthesis image to diagnose whether a lesion exists in the subject's body.
To achieve the above object, the present application provides a method of analyzing a tomosynthesis image, comprising the steps of:
acquiring a tomosynthesis image;
carrying out maximum density projection processing on a plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image;
and analyzing, with a machine learning model, whether the maximum density projection image contains an abnormal region to obtain an analysis result, wherein the machine learning model is trained on a plurality of maximum density projection image samples in which abnormal regions are marked.
Optionally, the method for analyzing a tomosynthesis image further includes:
and if the analysis result is that the maximum density projection image has an abnormal area, marking the abnormal area in the maximum density projection image to generate a medical image.
Optionally, the maximum density projection image sample is further labeled with a region of interest; the method of analyzing a tomosynthesis image further includes:
marking the region of interest in the medical image using a preset first marking mode according to the analysis result; and,
if the maximum density projection image has an abnormal region, marking the abnormal region in the medical image using a preset second marking mode.
Optionally, after the step of marking the abnormal region in the maximum density projection image to generate a medical image when the analysis result indicates that an abnormal region exists, the method further includes:
and converting the medical image into a medical X-ray image.
Optionally, after the step of marking the abnormal region in the maximum density projection image to generate a medical image when the analysis result indicates that an abnormal region exists, the method further includes:
and when a confirmation instruction of the medical image is received, updating the machine learning model based on the medical image.
Optionally, the step of performing maximum density projection processing on the plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image includes:
selecting, from all slice images corresponding to the tomosynthesis image, a plurality of slice images within a preset layer range as target images;
carrying out maximum density projection processing on a plurality of target images to generate a maximum density projection image;
the preset layer range is determined according to a preset numerical value range;
or the preset layer range is determined according to the target organ part associated with the tomosynthesis image.
Optionally, before the step of analyzing whether an abnormal region exists in the maximum density projection image by using a machine learning model to obtain an analysis result, the method further includes:
enhancing the contrast of the maximum density projection image.
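The application does not specify which contrast-enhancement method is used. As one hedged illustration, a percentile-based linear stretch is a common and simple choice (all parameter values below are illustrative, not taken from the patent):

```python
import numpy as np

def stretch_contrast(img, low_pct=1.0, high_pct=99.0):
    # Percentile-based linear stretch: map the [low_pct, high_pct]
    # intensity range to [0, 1] and clip everything outside it.  This
    # is only one plausible enhancement; the patent leaves the method open.
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(out, 0.0, 1.0)

# Toy maximum density projection image with a narrow intensity range.
mip = np.linspace(100, 300, 64 * 64).reshape(64, 64)
enhanced = stretch_contrast(mip)
```

After the stretch the full display range is used, which is the point of the optional step: making subtle density differences in the projection easier for the model (and the doctor) to see.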
In order to achieve the above object, the present application also provides a tomosynthesis image analysis apparatus including:
an acquisition module for acquiring a tomosynthesis image;
the processing module is used for carrying out maximum density projection processing on the plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image;
and the analysis module is used for analyzing, with a machine learning model, whether the maximum density projection image contains an abnormal region to obtain an analysis result, wherein the machine learning model is trained on a plurality of maximum density projection image samples in which abnormal regions are marked.
To achieve the above object, the present application further provides a computer device, including: a memory, a processor and a program for analyzing a tomosynthesis image stored on the memory and executable on the processor, the program for analyzing a tomosynthesis image implementing the steps of the method for analyzing a tomosynthesis image as described above when executed by the processor.
To achieve the above object, the present application also provides a computer-readable storage medium having stored thereon a program for analyzing a tomosynthesis image, which when executed by a processor, implements the steps of the method for analyzing a tomosynthesis image as described above.
According to the method and apparatus for analyzing a tomosynthesis image, the computer device, and the computer-readable storage medium, a maximum density projection image is generated from the subject's tomosynthesis image so that the organ parts of the subject are displayed more clearly, and the maximum density projection image is analyzed by a pre-trained machine learning model to quickly detect whether an abnormal region (such as a lesion region) exists in the image. A doctor can thus use the model's analysis result to quickly diagnose whether a lesion exists in the subject's body.
Drawings
FIG. 1 is a schematic diagram illustrating a method for analyzing a tomosynthesis image according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a method for analyzing a tomosynthesis image according to another embodiment of the present disclosure;
FIG. 3 is a block diagram schematically illustrating a structure of an apparatus for analyzing a tomosynthesis image according to an embodiment of the present application;
fig. 4 is a schematic block diagram of an internal structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present invention and should not be construed as limiting the present invention, and all other embodiments that can be obtained by one skilled in the art based on the embodiments of the present invention without inventive efforts shall fall within the scope of protection of the present invention.
Referring to fig. 1, in an embodiment, the method of analyzing a tomosynthesis image includes:
step S10, acquiring a tomosynthesis image;
step S20, carrying out maximum density projection processing on the plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image;
and step S30, analyzing, with a machine learning model, whether the maximum density projection image contains an abnormal region to obtain an analysis result, wherein the machine learning model is trained on a plurality of maximum density projection image samples in which abnormal regions are marked.
In this embodiment, the terminal may be a computer device (e.g., a medical image generation device, a server, etc.).
As described in step S10, the terminal establishes a communication connection with the tomography apparatus, through which it can acquire a tomosynthesis image of a target portion of the subject's body. The subject may be a human, or an animal such as a cat or dog; the target portion may be any part of the subject's body, such as the chest, abdomen, breast, head, or neck. The following description takes as its example a human subject whose chest is to be examined.
Optionally, during acquisition the terminal controls a motion module that moves the X-ray tube and a large digital flat-panel detector along a preset track, capturing a plurality of low-radiation-dose projection images of the subject's target portion from different angles; these images are combined to synthesize any plane in the subject, generating the tomosynthesis image.
As described in step S20, the tomosynthesis image acquired by the terminal comprises a plurality of slice images; that is, when the tomosynthesis image is generated, a corresponding set of slice images is obtained by tomosynthesis reconstruction from the projection images captured of the subject. For example, when the human chest is imaged, if the large digital flat-panel detector covers a 43 cm × 43 cm detection range and the region of interest is 20 cm thick (roughly the width of the body from one side), the resulting tomosynthesis image may comprise 200 slice images, each 1 mm thick with a slice size of 43 cm × 43 cm; together these 200 slices span the imaged chest volume.
Each slice image has a corresponding layer number (for example, 200 slice images correspond to layers 1 to 200), and the slice images are arranged in layer order.
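The slice-count arithmetic in the chest example above can be checked directly (values follow the 20 cm / 1 mm figures given in the text):

```python
# Sanity check of the example geometry: a 20 cm thick region of
# interest reconstructed at 1 mm slice thickness yields 200 slices,
# numbered as layers 1..200.
roi_thickness_mm = 200        # 20 cm region of interest
slice_thickness_mm = 1        # per-slice thickness
n_slices = roi_thickness_mm // slice_thickness_mm
print(n_slices)               # 200
```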
Optionally, after obtaining a plurality of slice images corresponding to the tomosynthesis image, the terminal may perform Maximum Intensity Projection (MIP) processing on all slice images (or select a slice image in a preset layer range to perform maximum intensity projection processing), so as to generate a maximum intensity projection image.
Optionally, maximum intensity projection integrates a series of consecutive slice images into a two-dimensional image by a perspective method: along parallel rays cast from the viewpoint to the projection plane, as each ray bundle passes through the stack of slice images of a section of tissue, the highest-density pixel along the ray is retained and projected onto the two-dimensional plane, reconstructing the maximum density projection image.
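For rays cast perpendicular to the slices, the projection described above reduces to a maximum over the layer axis of the slice stack. A minimal numpy sketch (the 200 × 64 × 64 shape is illustrative, not the clinical resolution):

```python
import numpy as np

# Toy stack of 200 reconstructed slices, indexed (layer, row, col).
rng = np.random.default_rng(0)
slices = rng.integers(0, 4096, size=(200, 64, 64), dtype=np.int16)

# Along each parallel ray through the stack, keep the densest pixel
# and project it onto the 2-D plane: a max over the layer axis.
mip = slices.max(axis=0)

assert mip.shape == (64, 64)
assert (mip[None, :, :] >= slices).all()   # the MIP dominates every slice
```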
It should be noted that the maximum intensity projection reflects the X-ray attenuation value of each pixel, and even small density changes are visible in the maximum intensity projection image, so it displays the contour information of organ tissues well: for example, stenosis, dilation, or filling defects of blood vessels, and the differentiation between calcification on the vessel wall and contrast agent in the vessel lumen.
As shown in step S30, the terminal is preconfigured, using artificial intelligence techniques, with a machine learning model. The relevant engineers may collect a number of maximum density projection images in advance (for example, thousands of them), mark the abnormal regions in those images, and feed the marked images into the machine learning model as maximum density projection image samples for training, so that the model continually learns the association between the appearance of an abnormal region and the marked region range, i.e., it is trained to identify abnormal regions in maximum density projection images. The maximum density projection images used as training samples may themselves be generated from tomosynthesis images; that is, a tomosynthesis image of a subject is acquired, a maximum density projection image is generated from it, and that image is then labeled to produce a sample. An abnormal region may be a lesion region in the body of the subject to whom the maximum density projection image belongs (e.g., a cancerous region of the lung; the lesion factor is not limited to tumors and may be other conditions such as tuberculosis or pneumothorax).
Training of the machine learning model is complete when repeated iterative training on the plurality of maximum density projection image samples brings the model to convergence. The machine learning model may be built on the EfficientNet-B7 architecture and pre-trained on ImageNet.
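The patent's model is an EfficientNet-B7 CNN; as a deliberately tiny stand-in, the supervised setup (labeled MIP samples in, abnormal/normal prediction out, iterate until convergence) can be illustrated with plain logistic regression on toy flattened features. Everything below is illustrative, not the patent's implementation:

```python
import numpy as np

# Toy supervised-training loop.  Each "sample" stands in for a
# flattened MIP patch; label 1 means an abnormal region was marked.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 64))            # 200 toy samples, 64 features
w_true = rng.normal(size=64)
y = (X @ w_true > 0).astype(float)        # synthetic ground-truth labels

w = np.zeros(64)
for _ in range(500):                      # iterative training rounds
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted abnormality probability
    w -= 0.1 * X.T @ (p - y) / len(y)     # cross-entropy gradient step

preds = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
train_acc = (preds == (y > 0.5)).mean()
```

The loop converges on this separable toy data; in the patent's setting the same role is played by gradient descent on the CNN's weights over the labeled MIP sample set.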
Optionally, after the terminal executes steps S10-S20 to obtain the maximum density projection image of the current subject (for example, once this embodiment is in practical use and a maximum density projection image of the target portion of a subject requiring diagnosis has been obtained), that image may be input into the trained machine learning model for analysis. The model can thus automatically detect and analyze whether an abnormal region exists in the maximum density projection image.
Optionally, the maximum density projection image is analyzed by the machine learning model to detect whether an abnormal region exists and to obtain a corresponding analysis result. For example, if an abnormal region is detected in the maximum density projection image, prompt information that the tomosynthesis image is abnormal is output as the analysis result; if no abnormal region exists, prompt information that the tomosynthesis image is normal is output instead.
Alternatively, if the analysis result is that the maximum density projection image has an abnormal region, the abnormal region is marked in the maximum density projection image and the marked image is output as the medical image; if there is no abnormal region, the maximum density projection image may be output directly as the medical image. The medical image may itself serve as the analysis result (that is, the analysis result may be expressed by generating the corresponding medical image).
It should be appreciated that if there are multiple abnormal regions in the maximum intensity projection image, each abnormal region may be separately marked using a machine learning model.
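The output behavior described above can be sketched as a small dispatch function; the exact prompt wording is illustrative, not taken from the patent:

```python
def analysis_message(abnormal_regions):
    # Map the detector output to prompt information: one message per
    # outcome, with every detected region counted separately, as each
    # abnormal region is marked individually.
    if abnormal_regions:
        return f"tomosynthesis image abnormal: {len(abnormal_regions)} region(s) marked"
    return "tomosynthesis image normal"

print(analysis_message([(20, 20, 30, 30), (5, 5, 10, 10)]))
print(analysis_message([]))
```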
Optionally, when labeling a maximum density projection image sample, the relevant engineers may, in addition to marking the abnormal region, label the disease that the organ part in the abnormal region may have, based on known disease information and the specific pathology of the region. When trained on such samples, the machine learning model can also learn to recognize that an abnormal region may indicate a certain disease.
Optionally, while the terminal executes step S30, in addition to determining whether an abnormal region exists, the machine learning model may, when an abnormal region is detected, further estimate the probability that the organ part corresponding to the abnormal region has a related disease, and output that probability along with the analysis result. Of course, if the analysis result is that the maximum density projection image has no abnormal region, prompt information that the examination is normal (for example, "no bodily abnormality found") may be output instead. In this way, the doctor is assisted in diagnosing the subject's physical condition from the medical image.
In this way, a maximum density projection image generated from the subject's tomosynthesis image displays the subject's organ parts more clearly, and the pre-trained machine learning model analyzes that image to quickly detect whether an abnormal region (such as a lesion region) exists, so that the doctor can quickly diagnose, from the model's analysis result, whether a lesion exists in the subject's body.
In an embodiment, on the basis of the above embodiment, the maximum density projection image sample is further labeled with a region of interest; the method of analyzing a tomosynthesis image further includes:
step S40, according to the analysis result, marking the region of interest in the medical image by adopting a preset first marking mode; if the maximum density projection image has an abnormal area, the abnormal area marked in the medical image is marked by adopting a preset second marking mode.
In this embodiment, when labeling a maximum density projection image sample, the relevant engineers may, in addition to marking the abnormal region, label a corresponding region of interest in the image according to the position of each organ tissue (e.g., the position of the spine or of the lungs). When the machine learning model is trained on these maximum density projection image samples, it also learns to identify regions of interest in the images.
Optionally, while executing step S30, the terminal also automatically identifies the region of interest in the maximum density projection image, and when the medical image is generated the identified region of interest is marked in it; that is, whether or not an abnormal region exists in the maximum density projection image, the generated medical image may be marked with the corresponding region of interest.
The terminal can mark the region of interest in the medical image using a preset first marking mode; if an abnormal region exists in the maximum density projection image, the abnormal region can be marked using a preset second marking mode.
Optionally, the difference between the first marking mode and the second marking mode may be a difference in marking color, for example, the first marking mode uses a yellow line to outline the region of interest in the image, and the second marking mode uses a red line to outline the abnormal region in the image.
In this way, the doctor can conveniently and quickly distinguish the region of interest from the abnormal region in the medical image.
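Following the example above (yellow outline for the region of interest, red outline for the abnormal region), a hedged sketch of the two marking modes on a grayscale MIP image; the function name and box coordinates are illustrative:

```python
import numpy as np

# Hypothetical marking scheme: first marking mode = yellow outline for
# the region of interest, second marking mode = red outline for the
# abnormal region.
YELLOW, RED = (255, 255, 0), (255, 0, 0)

def outline(rgb, box, color):
    # Burn a rectangular outline into an RGB image in place.
    r0, c0, r1, c1 = box
    rgb[r0:r1 + 1, [c0, c1]] = color      # left / right edges
    rgb[[r0, r1], c0:c1 + 1] = color      # top / bottom edges

gray = np.full((64, 64), 128, dtype=np.uint8)   # toy grayscale MIP image
medical = np.stack([gray] * 3, axis=-1)         # promote to RGB
outline(medical, (8, 8, 40, 40), YELLOW)        # region of interest
outline(medical, (20, 20, 30, 30), RED)         # abnormal region
```

Distinguishing the two modes by color is only one possibility; line style or thickness would serve equally well.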
In an embodiment, on the basis of the above embodiment, after the step of marking an abnormal region in the maximum density projection image to generate a medical image if the analysis result indicates that the abnormal region exists in the maximum density projection image, the method further includes:
and step S50, converting the medical image into a medical X-ray image.
In this embodiment, the terminal is preconfigured with an AI (artificial intelligence) algorithm used to convert the image format of the maximum density projection image into that of an X-ray image. The AI algorithm may be an independently callable algorithm model (i.e., itself built with artificial intelligence techniques), or it may be integrated into the machine learning model.
Optionally, after the terminal generates a medical image (whose format matches that of the maximum density projection image), it may convert the medical image into a medical X-ray image with the corresponding AI algorithm and output the X-ray image for the doctor to view. The output medical X-ray image thus combines the advantages of both: the maximum density projection clearly displays and marks the abnormal region in the tomosynthesis image, while the final output is the kind of X-ray image doctors usually read, better assisting diagnosis and analysis.
In an embodiment, on the basis of the above embodiment, after the step of marking an abnormal region in the maximum density projection image to generate a medical image if the analysis result indicates that the abnormal region exists in the maximum density projection image, the method further includes:
and step S60, when receiving the confirmation instruction of the medical image, updating the machine learning model based on the medical image.
In this embodiment, after the medical image is produced by the terminal and output for the doctor's review, and once the doctor confirms that the medical image and the corresponding analysis result of the machine learning model are correct, the doctor may input a confirmation instruction to the terminal through an associated device or through a control panel provided on the terminal.
When the terminal receives the confirmation instruction for the medical image, it determines that the machine learning model's analysis of the maximum density projection image was correct, and adds the medical image, together with its analysis result (which serves as labeling information indicating that the image is abnormal or normal), to the pool of maximum density projection image samples. The medical image thereby becomes a training sample for the machine learning model and is used in a further round of iterative training, updating and optimizing the model.
A medical image used as a training sample may be one in which an abnormal region is marked (i.e., an image found to be abnormal) or one without an abnormal region (i.e., an image found to be normal). Likewise, the maximum intensity projection image samples initially used to train the machine learning model may include, in addition to the maximum intensity projection images labeled with abnormal regions, maximum intensity projection images without abnormal regions (which may be acquired from healthy subjects). Training on the abnormal and normal maximum intensity projection images together, so that the two classes are discriminated against each other, reduces the number of abnormal samples required to train the machine learning model and improves the accuracy with which the model distinguishes normal from abnormal maximum intensity projection images.
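The sample-pool update of step S60, together with the mixed normal/abnormal training set described above, can be sketched as follows. This is an illustrative sketch only: the nearest-centroid "model", the mean-intensity feature, and all function names are stand-ins, since the embodiments do not specify the machine learning model's architecture.

```python
import numpy as np

def retrain(pool):
    # Fit a toy nearest-centroid "model" on (image, label) pairs;
    # label 1 = abnormal region present, label 0 = no abnormal region.
    feats = {0: [], 1: []}
    for img, label in pool:
        feats[label].append(img.mean())  # toy feature: mean intensity
    return {c: float(np.mean(v)) for c, v in feats.items() if v}

def classify(centroids, img):
    # Assign the class whose centroid is closest to the image's feature.
    f = img.mean()
    return min(centroids, key=lambda c: abs(centroids[c] - f))

def on_confirmation(pool, medical_image, label):
    # Step S60: on a doctor's confirmation instruction, add the confirmed
    # medical image (with its abnormal/normal label) and retrain the model.
    pool.append((medical_image, label))
    return retrain(pool)

# initial pool mixes normal (dark) and abnormal (bright) projection samples
pool = [(np.full((8, 8), 10.0), 0), (np.full((8, 8), 200.0), 1)]
model = retrain(pool)
model = on_confirmation(pool, np.full((8, 8), 190.0), 1)  # confirmed abnormal
print(classify(model, np.full((8, 8), 180.0)))  # → 1
```

In this toy setup, a doctor-confirmed abnormal image is appended to the pool and the "model" is refit, mirroring the update-and-retrain cycle of step S60.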
In an embodiment, referring to fig. 2, in addition to the above embodiment, the step of performing maximum intensity projection processing on the plurality of slice images corresponding to the tomosynthesis image to generate a maximum intensity projection image includes:
Step S21: selecting, from all slice images corresponding to the tomosynthesis image, a plurality of slice images within a preset image layer range as target images;
Step S22: performing maximum intensity projection processing on the plurality of target images to generate the maximum intensity projection image.
In this embodiment, after obtaining the plurality of slice images corresponding to the tomosynthesis image, the terminal may select, from them, a plurality of consecutive slice images within a preset image layer range as the target images.
The preset image layer range is determined according to a preset numerical range, or according to the target organ part associated with the tomosynthesis image.
Optionally, the preset numerical range may be set in advance by a relevant engineer or doctor according to actual examination needs. For example, if the preset numerical range is set to layers 20 to 180 out of 200 slice images, the terminal selects, according to that range, the slice images from layer 20 to layer 180 as the target images.
Alternatively, in view of the limited depth resolution of the tomosynthesis image, the preset numerical range may be determined according to the focal position of the scanner when capturing the image; that is, the slice images corresponding to the layers near the focal plane (for example, layer 50 to layer 150) may be selected as the target images.
Alternatively, the preset image layer range is determined according to the target organ part associated with the tomosynthesis image. Because the organ tissues of, for example, the human chest lie at different depths in the body, they appear in different image layers of the tomosynthesis image (for example, in a chest tomosynthesis image of 200 consecutive slice images, the breast may appear between layers 1 and 70 and the heart between layers 70 and 110; these figures are illustrative, not real data). A preset image layer range can therefore be associated with each organ tissue in advance. During execution of step S20, the target organ part currently to be examined (one or more organ parts selected in advance for the subject by the doctor or relevant staff) is determined, the preset image layer range associated with that target organ part is queried, and the slice images within that range are selected from all slice images as the target images.
Optionally, after the terminal selects a plurality of target images, Maximum Intensity Projection (MIP) processing is performed on all the target images, so as to generate a maximum intensity projection image.
In this way, the maximum density projection image is generated by selecting the slice images within the range of the preset image layer from all the slice images, so that the generated maximum density projection image can more clearly display the outline of the organ tissue in the image.
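Steps S21 and S22 amount to a per-pixel maximum taken over a selected sub-range of the slice stack. A minimal NumPy sketch follows; the 1-based inclusive layer convention, the array shapes, and the function name are assumptions made for illustration.

```python
import numpy as np

def mip_from_slices(slices, layer_range=None):
    """Maximum intensity projection over a stack of slice images.

    slices: array of shape (n_layers, height, width).
    layer_range: optional (lo, hi), 1-based inclusive layer numbers
    (e.g. (20, 180)) -- the "preset image layer range" of step S21.
    """
    stack = np.asarray(slices)
    if layer_range is not None:
        lo, hi = layer_range
        stack = stack[max(lo - 1, 0):hi]  # 1-based inclusive -> 0-based slice
    return stack.max(axis=0)  # step S22: per-pixel maximum across layers

# 200 synthetic 4x4 slices; the brightest voxel sits at layer 100
slices = np.zeros((200, 4, 4))
slices[99, 2, 2] = 1.0
mip = mip_from_slices(slices, layer_range=(20, 180))
print(mip[2, 2])  # → 1.0 (the bright voxel at layer 100 survives)
```

A bright voxel at a layer outside the preset range (say, layer 5) would be excluded from the projection, which is exactly the selection behavior of step S21.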
In an embodiment, on the basis of the above embodiment, before the step of analyzing whether there is an abnormal region in the maximum density projection image by using a machine learning model to obtain an analysis result, the method further includes:
Step S70: enhancing the contrast of the maximum intensity projection image.
In this embodiment, after generating the maximum intensity projection image in step S20 and before executing step S30, the terminal may perform contrast enhancement processing on the maximum intensity projection image. With enhanced contrast, the image displays the contours of the organ tissue more clearly, which further improves the accuracy of the analysis result obtained for the image with the machine learning model.
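Step S70 does not fix a particular enhancement method; one common choice is a percentile-based contrast stretch, sketched below. The percentile values and the normalization to [0, 1] are illustrative assumptions, not part of the described method.

```python
import numpy as np

def stretch_contrast(img, p_low=2.0, p_high=98.0):
    # Map the [p_low, p_high] percentile window of the image onto [0, 1],
    # clipping the tails -- a simple contrast enhancement for the MIP image.
    lo, hi = np.percentile(img, [p_low, p_high])
    if hi <= lo:  # flat image: nothing to stretch
        return np.zeros_like(img, dtype=float)
    return np.clip((img.astype(float) - lo) / (hi - lo), 0.0, 1.0)

mip = np.linspace(100, 150, 64).reshape(8, 8)  # a low-contrast projection
enhanced = stretch_contrast(mip)
print(enhanced.min(), enhanced.max())  # → 0.0 1.0 (full range after stretch)
```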
Referring to fig. 3, an embodiment of the present application further provides a tomosynthesis image analysis apparatus Z10, including:
an acquisition module Z11 for acquiring a tomosynthesis image;
a processing module Z12, configured to perform maximum density projection processing on the multiple slice images corresponding to the tomosynthesis image, and generate a maximum density projection image;
and an analysis module Z13, configured to analyze whether the maximum density projection image has an abnormal region by using a machine learning model to obtain an analysis result, wherein the machine learning model is trained based on a plurality of maximum density projection image samples, and the maximum density projection image samples are labeled with abnormal regions.
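The three modules can be read as a single pipeline. The sketch below wires stand-ins for Z11–Z13 together; the injected callable `model` and the method names are assumptions, since the embodiment defines modules rather than an implementation.

```python
import numpy as np

class TomosynthesisAnalyzer:
    """Sketch of apparatus Z10: acquisition (Z11), maximum intensity
    projection processing (Z12), and model-based analysis (Z13)."""

    def __init__(self, model):
        self.model = model  # stand-in for the trained machine learning model

    def acquire(self, source):
        # Z11: obtain the tomosynthesis image (here, from any callable)
        return source()

    def project(self, slices):
        # Z12: per-pixel maximum across the slice stack
        return np.asarray(slices).max(axis=0)

    def analyze(self, mip):
        # Z13: delegate to the model; returns the analysis result
        return self.model(mip)

# usage: a stand-in "model" that flags any projection with a bright pixel
analyzer = TomosynthesisAnalyzer(model=lambda mip: bool(mip.max() > 0.5))
slices = np.zeros((10, 4, 4))
slices[3, 1, 1] = 0.9
result = analyzer.analyze(analyzer.project(analyzer.acquire(lambda: slices)))
print(result)  # → True ("abnormal region" detected by the stand-in model)
```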
Referring to fig. 4, a computer device is further provided in the embodiments of the present application, and its internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, wherein the processor is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device is used to store data for the program for analyzing the tomosynthesis image. The network interface of the computer device is used for communicating with an external terminal through a network connection. The input device of the computer device is used for receiving signals input by external equipment. The computer program, when executed by the processor, implements the method of analyzing a tomosynthesis image described in the above embodiments.
It will be understood by those skilled in the art that the structure shown in fig. 4 is only a block diagram of a part of the structure related to the present application, and does not constitute a limitation to the computer device to which the present application is applied.
Further, the present application also proposes a computer-readable storage medium including a program for analyzing a tomosynthesis image, which when executed by a processor implements the steps of the method for analyzing a tomosynthesis image as described in the above embodiment. It is to be understood that the computer-readable storage medium in the present embodiment may be a volatile-readable storage medium or a non-volatile-readable storage medium.
In summary, in the method for analyzing a tomosynthesis image, the device for analyzing a tomosynthesis image, the computer apparatus, and the computer-readable storage medium provided in the embodiments of the present application, a maximum intensity projection image is generated from the tomosynthesis image of the subject so that the image better displays the subject's organ parts, and the maximum intensity projection image is analyzed with a pre-trained machine learning model to quickly detect whether an abnormal region (e.g., a lesion region) exists in the image, so that the doctor can quickly diagnose whether the subject has a lesion according to the medical image generated from the analysis of the machine learning model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, apparatus, article, or method that includes the element.
The above description is only for the preferred embodiment of the present application and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (10)

1. A method of analyzing a tomosynthesis image, comprising:
acquiring a tomosynthesis image;
carrying out maximum density projection processing on a plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image;
and analyzing whether the maximum density projection image has an abnormal region by using a machine learning model to obtain an analysis result, wherein the machine learning model is trained based on a plurality of maximum density projection image samples, and the maximum density projection image samples are labeled with abnormal regions.
2. The method of analyzing a tomosynthesis image according to claim 1, characterized in that the method of analyzing a tomosynthesis image further comprises:
and if the analysis result is that the maximum density projection image has an abnormal area, marking the abnormal area in the maximum density projection image to generate a medical image.
3. The method of analyzing a tomosynthesis image according to claim 2, wherein the maximum density projection image sample is further labeled with a region of interest; the method of analyzing a tomosynthesis image further includes:
marking the region of interest in the medical image by adopting a preset first marking mode according to the analysis result;
if the maximum density projection image has an abnormal area, the abnormal area marked in the medical image is marked by adopting a preset second marking mode.
4. The method for analyzing a tomosynthesis image according to claim 2 or 3, wherein after the step of, if the analysis result is that there is an abnormal region in the maximum density projection image, marking the abnormal region in the maximum density projection image to generate a medical image, the method further comprises:
and converting the medical image into a medical X-ray image.
5. The method for analyzing a tomosynthesis image according to claim 2 or 3, wherein after the step of, if the analysis result is that there is an abnormal region in the maximum density projection image, marking the abnormal region in the maximum density projection image to generate a medical image, the method further comprises:
and when a confirmation instruction of the medical image is received, updating the machine learning model based on the medical image.
6. The method of analyzing a tomosynthesis image according to claim 1, wherein the step of performing maximum density projection processing on the plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image comprises:
selecting, from all slice images corresponding to the tomosynthesis image, a plurality of slice images within a preset image layer range as target images; and
performing maximum density projection processing on the plurality of target images to generate the maximum density projection image;
wherein the preset image layer range is determined according to a preset numerical range, or according to the target organ part associated with the tomosynthesis image.
7. The method for analyzing a tomosynthesis image according to any one of claims 1 to 3, wherein before the step of analyzing whether the maximum density projection image has an abnormal region by using a machine learning model to obtain an analysis result, the method further comprises:
enhancing the contrast of the maximum density projection image.
8. An apparatus for analyzing a tomosynthesis image, comprising:
an acquisition module for acquiring a tomosynthesis image;
the processing module is used for carrying out maximum density projection processing on the plurality of slice images corresponding to the tomosynthesis image to generate a maximum density projection image;
and the analysis module is used for analyzing whether the maximum density projection image has an abnormal region by using a machine learning model to obtain an analysis result, wherein the machine learning model is trained based on a plurality of maximum density projection image samples, and the maximum density projection image samples are labeled with abnormal regions.
9. A computer device characterized by comprising a memory, a processor, and a program for analyzing a tomosynthesis image stored on the memory and executable on the processor, the program for analyzing a tomosynthesis image when executed by the processor implementing the steps of the method for analyzing a tomosynthesis image according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a program for analyzing a tomosynthesis image is stored thereon, which when executed by a processor implements the steps of the method for analyzing a tomosynthesis image according to any one of claims 1 to 7.
CN202210689683.0A 2022-06-17 2022-06-17 Method and apparatus for analyzing tomosynthesis image, computer device and storage medium Pending CN115100132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210689683.0A CN115100132A (en) 2022-06-17 2022-06-17 Method and apparatus for analyzing tomosynthesis image, computer device and storage medium


Publications (1)

Publication Number Publication Date
CN115100132A true CN115100132A (en) 2022-09-23

Family

ID=83291034


Similar Documents

Publication Publication Date Title
JP6567179B2 (en) Pseudo CT generation from MR data using feature regression model
EP3355273B1 (en) Coarse orientation detection in image data
EP4131160A1 (en) Image obtaining method and system, image quality determination method and system, and medical image acquisition method and system
CN101273919B (en) Sequential image acquisition with updating method and system
JP7051307B2 (en) Medical image diagnostic equipment
US10540764B2 (en) Medical image capturing apparatus and method
CN111915696B (en) Three-dimensional image data-aided low-dose scanning data reconstruction method and electronic medium
US20100260394A1 (en) Image analysis of brain image data
JP7027046B2 (en) Medical image imaging device and method
US20190311228A1 (en) Cross-modality image synthesis
TWI717589B (en) Diagnostic image system
CN110264559B (en) Bone tomographic image reconstruction method and system
CN107115119A (en) The acquisition methods of PET image attenuation coefficient, the method and system of correction for attenuation
CN111199566A (en) Medical image processing method, medical image processing device, storage medium and computer equipment
US7773719B2 (en) Model-based heart reconstruction and navigation
CN111904379A (en) Scanning method and device of multi-modal medical equipment
JP2006102353A (en) Apparatus, method and program for analyzing joint motion
CN110223247B (en) Image attenuation correction method, device, computer equipment and storage medium
CN110738633A (en) organism tissue three-dimensional image processing method and related equipment
CN104217423A (en) Automatic generation of selected image data set
CN111179373B (en) Medical image bone removal model construction method and bone information removal method
CN109350059A (en) For ancon self-aligning combined steering engine and boundary mark engine
CN113520416A (en) Method and system for generating two-dimensional image of object
JP5363962B2 (en) Diagnosis support system, diagnosis support program, and diagnosis support method
CN115300809B (en) Image processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination