CN110969623A - Lung CT multi-symptom automatic detection method, system, terminal and storage medium - Google Patents

Lung CT multi-symptom automatic detection method, system, terminal and storage medium

Info

Publication number
CN110969623A
CN110969623A (application number CN202010128396.3A; granted as CN110969623B)
Authority
CN
China
Prior art keywords
detection
frame
layer
candidate
frames
Prior art date
Legal status
Granted
Application number
CN202010128396.3A
Other languages
Chinese (zh)
Other versions
CN110969623B (en)
Inventor
张树
李梓豪
马杰超
俞益洲
Current Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd, Shenzhen Deepwise Bolian Technology Co Ltd filed Critical Beijing Shenrui Bolian Technology Co Ltd
Priority to CN202010128396.3A
Publication of CN110969623A
Application granted
Publication of CN110969623B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed X-ray tomography [CT]
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30061 - Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides a lung CT multi-sign automatic detection method, system, terminal and storage medium, comprising the following steps: acquiring a lung CT image; determining typical layers of the lung CT image, and labeling each abnormal sign on the typical layers in a sparse labeling manner; inputting the labeled typical-layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model; inputting image data of the layers to be detected in the lung CT image into the trained 2D frame detection model, and predicting 2D detection frames of abnormal signs; setting a first threshold for each 2D detection frame according to its category, and outputting the 2D detection frames whose detection scores exceed the first threshold to obtain 2D candidate frames; merging the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers; setting a second threshold for each 3D candidate frame according to its category, and outputting the 3D candidate frames whose detection scores exceed the second threshold to obtain the model detection result. The simultaneous detection and analysis of multiple signs in lung CT is thereby realized.

Description

Lung CT multi-symptom automatic detection method, system, terminal and storage medium
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a method, a system, a terminal, and a storage medium for automatically detecting multiple lung CT signs.
Background
Deep learning has attracted growing attention in both clinical practice and scientific research, and computer-aided diagnosis (CAD) systems built around deep learning are increasingly used in the clinic. Most current CAD systems, however, target a single sign or disease, such as lung-nodule auxiliary diagnosis systems for lung cancer screening or fracture detection systems for X-ray films. As medical and computer technology advance, more physicians rely on computer-aided tools for lesion detection in images and even for the writing of structured reports.
Object localization in 2D images is the problem most commonly addressed by deep-learning-based detection techniques. With the development of deep learning, a series of methods now achieve accurate object detection, such as the two-stage detection frameworks represented by Faster R-CNN, the single-stage frameworks represented by YOLO and SSD, and the recently developed anchor-free frameworks represented by CornerNet and FCOS. Detection methods for medical images, especially for modalities that describe 3D structures such as CT and MRI, have been studied far less; most of that work targets nodule detection, and very little of it detects multiple categories of findings simultaneously. Because nodules are usually annotated with 3D labels, i.e. the center point and the diameter (or the side lengths in the x, y and z directions) of the target, nodule detectors generally aim to predict a 3D center point and 3D box offsets, and finally output a 3D detection frame. There are also 3D object detection systems whose methods are built on top of 2D detection. However, these methods address only the nodule detection problem, and no solution has been proposed for the multi-sign problem in lung CT. Because current auxiliary diagnosis systems for lung CT can automatically analyze and process only the single sign of nodules, CAD systems cannot play a larger role in clinical diagnosis.
Therefore, there is a need to develop a method and a system that can detect and analyze multiple lesions or signs in lung CT simultaneously, so as to assist clinicians more comprehensively and improve their diagnostic accuracy and efficiency.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present application provides an automatic detection method, system, terminal and storage medium for multiple lung CT signs, which can achieve simultaneous detection and analysis of multiple lesions or signs in lung CT.
In a first aspect, to solve the above technical problem, the present application provides an automatic lung CT multi-sign detection method, including:
acquiring a lung CT image;
determining typical layers of the lung CT image, and labeling each abnormal sign on the typical layers in a sparse labeling manner;
inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model;
inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model, and predicting a 2D detection frame with abnormal signs of the layer to be detected;
respectively setting first threshold values corresponding to the 2D detection frames according to the types of the 2D detection frames, and outputting the 2D detection frames with detection scores exceeding the first threshold values to obtain 2D candidate frames;
merging the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers;
and setting a second threshold corresponding to the 3D candidate frame according to the category of the 3D candidate frame, and outputting the 3D candidate frame with the detection score exceeding the second threshold to obtain a model detection result.
Optionally, the pulmonary CT multiple signs include:
consolidation, ground-glass opacity, streak (strand) shadow, reticular shadow, honeycombing, emphysema, bulla, mass, pleural thickening, pleural indentation, pneumothorax, pleural effusion, other cystic low-density shadows, bronchiectasis, mosaic perfusion, crazy-paving pattern, and the air crescent sign.
Optionally, the acquiring a lung CT image includes:
a plain-scan or contrast-enhanced chest CT image is acquired.
Optionally, the determining a typical layer of the lung CT image, and labeling each abnormal sign of the typical layer by using a sparse labeling method includes:
determining abnormal signs and corresponding typical layers of the lung CT image;
and marking each abnormal sign on the typical layer by drawing a bounding box or an outline, to obtain 2D annotation frames of the abnormal signs on the typical layer.
Optionally, the inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model includes:
using the labeled axial-layer image data as training data;
and inputting the labeled axial-layer image data into an FPN-based Faster R-CNN network model for training to obtain a trained 2D frame detection model.
Optionally, the inputting of image data of a layer to be detected in the lung CT image into the trained 2D frame detection model to predict the 2D detection frames of the abnormal signs present in the layer to be detected includes:
inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model;
and predicting the 2D detection frames of the abnormal signs present in the layer to be detected, to obtain the category, position and detection score information of each 2D detection frame.
Optionally, the merging of the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers includes:
judging whether the 2D detection frames can be merged into one 3D frame according to the size of the overlapping area and the category of the 2D detection frames on two consecutive layers;
if the overlapping area of the detection frames is larger than a certain overlap threshold and the frames belong to the same category, merging them into the same 3D detection frame;
judging all the 2D detection frames one by one according to this rule to obtain the category, position and detection score information of the final 3D candidate frames of the whole CT;
wherein the detection score of a 3D candidate frame may be set to the mean, maximum or median of the scores of the 2D candidate frames used for merging.
In a second aspect, the present invention further provides an automatic pulmonary CT multi-sign detection system, including:
the data acquisition unit is configured for acquiring lung CT images;
the sign labeling unit is configured to determine typical layers of the lung CT image and label each abnormal sign on the typical layers in a sparse labeling manner;
the model training unit is configured and used for inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model;
the model prediction unit is configured to input image data of a layer to be detected in the lung CT image into a trained 2D frame detection model and predict a 2D detection frame of the layer to be detected with abnormal signs;
the first thresholding unit is configured to set first thresholds corresponding to the 2D detection frames according to the types of the 2D detection frames, and output the 2D detection frames with detection scores exceeding the first thresholds to obtain 2D candidate frames;
the detection frame merging unit is configured to merge the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers;
and the second thresholding unit is configured to set a second threshold corresponding to the 3D candidate frame according to the category of the 3D candidate frame, and output the 3D candidate frame with the detection score exceeding the second threshold to obtain a model detection result.
Optionally, the data acquisition unit is specifically configured to:
a plain-scan or contrast-enhanced chest CT image is acquired.
Optionally, the sign labeling unit is specifically configured to:
determining abnormal signs and corresponding typical layers of the lung CT image;
and marking each abnormal sign on the typical layer by drawing a bounding box or an outline, to obtain 2D annotation frames of the abnormal signs on the typical layer.
Optionally, the model training unit specifically includes:
using the labeled axial-layer image data as training data;
and inputting the labeled axial-layer image data into an FPN-based Faster R-CNN network model for training to obtain a trained 2D frame detection model.
Optionally, the model prediction unit specifically includes:
inputting the image data of the layers to be detected (generally all layers) of the lung CT image into the trained 2D frame detection model;
and predicting the 2D detection frames of the abnormal signs present in the layer to be detected, to obtain the category, position and detection score information of each 2D detection frame.
Optionally, the detection frame merging unit specifically includes:
judging whether the 2D detection frames can be merged into one 3D frame according to the size of the overlapping area and the category of the 2D detection frames on two consecutive layers;
if the overlapping area of the detection frames is larger than a certain overlap threshold and the frames belong to the same category, merging them into the same 3D detection frame;
judging all the 2D detection frames one by one according to this rule to obtain the category, position and detection score information of the final 3D candidate frames of the whole CT;
wherein the detection score of a 3D candidate frame may be set to the mean, maximum or median of the scores of the 2D candidate frames used for merging.
In a third aspect, a terminal is provided, including:
a processor, a memory, wherein,
the memory is used for storing a computer program which,
the processor is used for calling and running the computer program from the memory so as to make the terminal execute the method of the terminal.
In a fourth aspect, a computer storage medium is provided having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
Compared with the prior art, the method has the following beneficial effects:
According to the lung CT multi-sign automatic detection method, system, terminal and storage medium of the present application, specific CT layers are selected for efficient sparse labeling, a target detection model that can simultaneously predict 2D detection frames for multiple lesions or abnormal signs is trained on them, and the final multi-lesion or multi-sign detection result is then obtained by a 3D detection frame merging method with a double-threshold mechanism. By designing a new scheme for data annotation, model training and model prediction, more than ten typical abnormal lung signs or diseases can be detected simultaneously, filling the gap left by the lack of multi-sign lung detection algorithms on the current market; on the one hand this helps physicians achieve higher sensitivity in lesion discovery, and on the other hand it provides clinicians with more comprehensive support during diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flow chart of an automatic lung CT multi-sign detection method according to an embodiment of the present application.
Fig. 2 is a schematic block diagram of an automatic lung CT multi-sign detection system according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an automatic lung CT multi-sign detection terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating an automatic lung CT multi-sign detection method according to an embodiment of the present application, the method including:
s101: a data preparation stage: acquiring a lung CT image;
s102: a data annotation stage: determining typical layers of the lung CT image, and labeling each abnormal sign on the typical layers in a sparse labeling manner;
s103: a model training stage: inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model;
s104: a model prediction stage: inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model, and predicting a 2D detection frame with abnormal signs of the layer to be detected;
s105: a model prediction stage: respectively setting first threshold values corresponding to the 2D detection frames according to the types of the 2D detection frames, and outputting the 2D detection frames with detection scores exceeding the first threshold values to obtain 2D candidate frames;
s106: a model prediction stage: merging the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers;
s107: a model prediction stage: and setting a second threshold corresponding to the 3D candidate frame according to the category of the 3D candidate frame, and outputting the 3D candidate frame with the detection score exceeding the second threshold to obtain a model detection result.
Based on the above embodiment, as a preferred embodiment, the pulmonary CT multiple signs include:
consolidation, ground-glass opacity, streak (strand) shadow, reticular shadow, honeycombing, emphysema, bulla, mass, pleural thickening, pleural indentation, pneumothorax, pleural effusion, other cystic low-density shadows, bronchiectasis, mosaic perfusion, crazy-paving pattern, and the air crescent sign.
It should be noted that the signs of pulmonary CT include, but are not limited to, the above categories.
Based on the above embodiment, as a preferred embodiment, the S101 acquires a lung CT image, including:
a plain-scan or contrast-enhanced chest CT image is acquired.
It should be noted that when acquiring plain-scan or contrast-enhanced chest CT images, attention needs to be paid to balancing the distribution of various parameters, such as the proportion of data from outpatient versus physical-examination settings, the distribution of reconstruction convolution kernels and of slice thicknesses used for data reconstruction, and the mix of equipment manufacturers the data come from.
Based on the above embodiment, as a preferred embodiment, the S102 of determining typical layers of the lung CT image and labeling each abnormal sign on the typical layers in a sparse labeling manner includes:
determining abnormal signs and corresponding typical layers of the lung CT image;
and marking each abnormal sign on the typical layer by drawing a bounding box or an outline, to obtain 2D annotation frames of the abnormal signs on the typical layer.
Specifically, the lung CT multi-sign detection problem is annotated in a sparse manner: for each CT to be labeled, only C typical layers (C smaller than the total number of layers in the CT) are fully annotated. On each typical layer, every lesion of every abnormal sign present on that layer is marked with a bounding box or an outline. For example, if a CT contains M abnormal signs and T layers with typical appearance are selected for each sign, then T × M layers are selected for that CT; since several signs may appear on the same layer, the final number of annotated layers is less than or equal to T × M. Because the rectangular box or outline of each sign is not annotated on every layer, this labeling method is a sparse labeling scheme.
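Purely as an illustration of what such a sparse annotation might look like in memory (the patent does not prescribe any storage format, and all field names below are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Box2D:
    """One 2D annotation frame on a typical layer: center (cx, cy), width w, height h."""
    cx: float
    cy: float
    w: float
    h: float
    category: str            # e.g. "ground-glass opacity"

@dataclass
class TypicalLayerAnnotation:
    """Sparse annotation: only selected typical layers of a CT are fully labeled."""
    series_id: str           # identifier of the CT series (hypothetical field)
    layer_index: int         # axial index of the typical layer
    boxes: List[Box2D] = field(default_factory=list)

# Example: one typical layer carrying two labeled signs (values are made up).
annotation = TypicalLayerAnnotation(
    series_id="ct-0001",
    layer_index=57,
    boxes=[
        Box2D(cx=212.0, cy=305.5, w=48.0, h=36.0, category="consolidation"),
        Box2D(cx=140.0, cy=180.0, w=95.0, h=80.0, category="ground-glass opacity"),
    ],
)
```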
It should be noted that in existing single-sign detection, nodules are relatively small, so nodule annotation is a complete 3D annotation, i.e. the contour of the nodule is labeled on every layer. In the multi-sign detection addressed by the present application, the abnormal signs in a CT vary greatly in scale and have complicated adjacency relationships with each other. To improve annotation efficiency while still obtaining enough useful annotation information, the method labels lesions only on typical 2D layers, i.e. it uses the sparse labeling scheme of drawing lesion boxes on typical 2D layers.
Based on the above embodiment, as a preferred embodiment, the step S103 of inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model includes:
using the labeled axial-layer image data as training data;
and inputting the labeled axial-layer image data into an FPN-based Faster R-CNN network model for training to obtain a trained 2D frame detection model.
It should be noted that a typical layer is not required to contain abnormal findings, i.e. a normal layer may also be selected; the typical layer is generally chosen as an axial layer, and the input end allows the N layers above and below, centered on the typical layer, to be used as the model input.
Specifically, assuming axial layers are chosen as typical layers, N axial layers in total are annotated across several CTs, where each layer contains at least V (V ≥ 0) 2D annotation boxes of the signs to be detected. Using these N axial layers as training data, a 2D frame detection model can be trained; the model outputs 2D detection frames for the abnormal signs contained in a layer to be detected. Each detection frame can be represented as (cx, cy, w, h), where cx and cy are the coordinates of the frame center and w and h are its width and height. So that information from multiple layers is taken into account when predicting 2D detection frames, layers other than the predicted layer may also be used as input to the 2D detection model; the number of input layers is one or more. Preferably, S (S ≥ 3) consecutive layer images centered on the predicted layer may be used as the network input.
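As an illustration of the preferred multi-layer input (S consecutive layers centered on the predicted layer), a small helper could assemble the window like this; the edge-clamping behaviour at the top and bottom of the volume is an assumption of the sketch:

```python
import numpy as np

def slice_window(volume: np.ndarray, center: int, s: int = 3) -> np.ndarray:
    """Return s consecutive axial layers centered on `center`, shape (s, H, W).

    `volume` has shape (num_layers, H, W). Indices that fall outside the volume
    are clamped to the first/last layer; zero-padding would be an equally valid
    choice, as the patent does not specify the border handling.
    """
    assert s % 2 == 1, "use an odd window so the predicted layer sits in the middle"
    half = s // 2
    idx = np.clip(np.arange(center - half, center + half + 1), 0, volume.shape[0] - 1)
    return volume[idx]

# Usage: a 9-layer window around layer 57 of a dummy CT volume.
ct_volume = np.zeros((120, 512, 512), dtype=np.float32)
window = slice_window(ct_volume, center=57, s=9)   # shape (9, 512, 512)
```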
To allow the model to take more spatial context into account, the input to the 2D detection frame prediction model is the entire axial layer rather than patches cropped from it. Meanwhile, to better model the large scale variation across the different signs, an FPN containing a feature-pyramid structure is used as the backbone of the detection model instead of the plain backbone of the ordinary Faster R-CNN, and an image pyramid is used as input during training, i.e. multi-scale data augmentation is applied to the input data. The choice of detection model is not limited to FPN-based Faster R-CNN; RetinaNet or other anchor-free detectors using an FPN backbone may also be used. Likewise, to strengthen the detector's ability to exploit multi-layer input, the backbone structure is not restricted: besides ordinary 2D convolution, 3D convolution or RNN-based structures may be used. For example, an FPN-based Faster R-CNN detector using 2D convolution can be built that takes 9 consecutive axial layers as input and predicts the 2D frames of the abnormal signs contained in the middle layer.
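The description does not tie the detector to a particular framework. As one possible sketch, assuming PyTorch with torchvision 0.13 or later, an FPN-based Faster R-CNN can be adapted to accept a 9-layer window as a 9-channel input; replacing the stem convolution and the normalization statistics is an implementation assumption, not something the patent specifies:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_SIGN_CLASSES = 17          # number of sign categories listed above (illustrative)
IN_LAYERS = 9                  # consecutive axial layers fed as input channels

# Standard FPN-based Faster R-CNN; +1 accounts for the background class.
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                num_classes=NUM_SIGN_CLASSES + 1)

# Adapt the stem and the normalization statistics to 9-channel (9-layer) input.
model.backbone.body.conv1 = torch.nn.Conv2d(
    IN_LAYERS, 64, kernel_size=7, stride=2, padding=3, bias=False
)
model.transform.image_mean = [0.0] * IN_LAYERS
model.transform.image_std = [1.0] * IN_LAYERS

# One training step on a dummy 9-layer window with a single box (illustrative values).
images = [torch.rand(IN_LAYERS, 512, 512)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([3]),                             # sign category index
}]
model.train()
losses = model(images, targets)    # dict of classification / box-regression losses
total_loss = sum(losses.values())
```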
Based on the above embodiment, as a preferred embodiment, the S104 inputs image data of a layer to be detected in a lung CT image into a trained 2D frame detection model, and predicts a 2D detection frame of the layer to be detected with abnormal signs, including:
inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model;
and predicting the 2D detection frames of the abnormal signs present in the layer to be detected, to obtain the category, position and detection score information of each 2D detection frame.
Specifically, for a CT to be detected, the trained 2D frame detection model is used to predict, for every layer of the CT, the 2D detection frames of the abnormal signs present in that layer; usually the image data of all layers of the lung CT image are input into the trained 2D frame detection model to obtain the 2D detection frame predictions. The prediction result contains the category, score and position of each 2D detection frame. For each category of 2D detection frames, a first threshold can be selected, and the detection frames whose scores are greater than the first threshold are kept as candidate frames for the subsequent merging.
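A minimal sketch of this per-category first-threshold filtering, using an assumed in-memory layout for the detections (the tuple format and the default threshold are illustrative, not prescribed by the patent):

```python
from typing import Dict, List, Tuple

# One 2D detection: (layer_index, (x1, y1, x2, y2), score) -- an assumed layout.
Det2D = Tuple[int, Tuple[float, float, float, float], float]

def apply_first_threshold(dets_by_class: Dict[str, List[Det2D]],
                          first_thresholds: Dict[str, float],
                          default: float = 0.5) -> Dict[str, List[Det2D]]:
    """Keep, per category, only the 2D detections whose score exceeds that
    category's first threshold; the survivors are the 2D candidate frames."""
    return {
        cls: [d for d in dets if d[2] > first_thresholds.get(cls, default)]
        for cls, dets in dets_by_class.items()
    }
```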
Based on the foregoing embodiment, as a preferred embodiment, the S106 merging of the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers includes:
judging whether the 2D detection frames can be merged into one 3D frame according to the size of the overlapping area and the category of the 2D detection frames on two consecutive layers;
if the overlapping area of the detection frames is larger than a certain overlap threshold and the frames belong to the same category, merging them into the same 3D detection frame;
judging all the 2D detection frames one by one according to this rule to obtain the category, position and detection score information of the final 3D candidate frames of the whole CT;
wherein the detection score of a 3D candidate frame may be set to the mean, maximum or median of the scores of the 2D candidate frames used for merging.
Specifically, the 2D detection frame thresholding of the previous step yields all 2D detection frames of each category in the entire CT, {(b1, s1), (b2, s2), …}1, {(b1, s1), (b2, s2), …}2, …, {(b1, s1), (b2, s2), …}C, where C is the number of categories and (b1, s1) denotes the coordinate information and detection score of one 2D detection frame. For each category, the 2D candidate frames need to be merged into 3D candidate frames according to their positional continuity across consecutive layers, and each merged 3D candidate frame is assigned a detection score. Concretely, whether two 2D detection frames can be merged into one 3D frame is decided by whether same-category 2D detection frames on two consecutive layers have a sufficiently large overlapping area. The 3D frame score may be set to the mean, maximum or median of the scores of the 2D detection frames used for merging, as appropriate.
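As an illustrative sketch only (not part of the claimed method text), the layer-by-layer chaining described above can be written as a greedy merge; the IoU-based overlap test, the 0.3 overlap threshold, the corner-format boxes (x1, y1, x2, y2) and the function names are all assumptions made for the example:

```python
import numpy as np

def iou_2d(a, b) -> float:
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_to_3d(cands, overlap_thr=0.3, reduce=np.mean):
    """Greedily chain same-category 2D candidates on consecutive layers into 3D frames.

    `cands` is a list of (layer, (x1, y1, x2, y2), score) for ONE category,
    already filtered by the first threshold. Returns a list of dicts with a
    3D box (x1, y1, z1, x2, y2, z2) and an aggregated score (mean by default;
    max or median are equally valid per the description).
    """
    cands = sorted(cands, key=lambda d: d[0])    # sort by layer index
    chains = []                                   # each chain: list of (layer, box, score)
    for layer, box, score in cands:
        attached = False
        for chain in chains:
            last_layer, last_box, _ = chain[-1]
            if layer == last_layer + 1 and iou_2d(box, last_box) > overlap_thr:
                chain.append((layer, box, score))
                attached = True
                break
        if not attached:
            chains.append([(layer, box, score)])
    merged = []
    for chain in chains:
        xs1, ys1, xs2, ys2 = zip(*[c[1] for c in chain])
        zs = [c[0] for c in chain]
        merged.append({
            "box3d": (min(xs1), min(ys1), min(zs), max(xs2), max(ys2), max(zs)),
            "score": float(reduce([c[2] for c in chain])),
            "n_layers": len(chain),
        })
    return merged
```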
After the above process, the merged 3D frame positions and corresponding scores are obtained for each category. For each category of 3D detection frames, a second threshold can be selected, and the 3D detection frames whose scores are greater than this threshold are taken as the final detection result; the thresholded 3D frames are the detection result of the model.
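Continuing the sketch above, the second, per-category threshold on the merged 3D frames could be applied as follows; the threshold values and field names are illustrative:

```python
def apply_second_threshold(merged_by_class, second_thresholds, default=0.5):
    """Apply the per-category second threshold to the merged 3D candidate frames;
    the frames that survive are the model's final detection result."""
    return {
        cls: [m for m in frames if m["score"] > second_thresholds.get(cls, default)]
        for cls, frames in merged_by_class.items()
    }

# Usage with the sketches above (threshold values are illustrative):
# candidates = apply_first_threshold(raw_dets, {"consolidation": 0.4})
# merged = {cls: merge_to_3d(dets) for cls, dets in candidates.items()}
# final = apply_second_threshold(merged, {"consolidation": 0.6})
```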
Referring to fig. 2, fig. 2 is a schematic structural diagram of an automatic lung CT multi-sign detection system 200 according to an embodiment of the present application, including:
a data acquisition unit 201 configured to acquire a lung CT image;
the sign labeling unit 202 is configured to determine typical layers of the lung CT image, and label each abnormal sign on the typical layers in a sparse labeling manner;
the model training unit 203 is configured to input the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model;
the model prediction unit 204 is configured to input image data of a layer to be detected in the lung CT image into the trained 2D frame detection model, and predict a 2D detection frame of the layer to be detected with abnormal signs;
a first thresholding unit 205 configured to set first thresholds corresponding to the 2D detection frames according to the types of the 2D detection frames, and output the 2D detection frames with detection scores exceeding the first thresholds to obtain 2D candidate frames;
a detection frame merging unit 206 configured to merge the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers;
and a second thresholding unit 207 configured to set a second threshold corresponding to the 3D candidate frame according to the category of the 3D candidate frame, and output the 3D candidate frame with the detection score exceeding the second threshold to obtain a model detection result.
Based on the above embodiment, as a preferred embodiment, the data acquisition unit 201 is specifically configured to:
a plain-scan or contrast-enhanced chest CT image is acquired.
Based on the above embodiment, as a preferred embodiment, the sign labeling unit 202 is specifically configured to:
determining abnormal signs and corresponding typical layers of the lung CT image;
and marking each abnormal sign on the typical layer by drawing a bounding box or an outline, to obtain 2D annotation frames of the abnormal signs on the typical layer.
Based on the above embodiment, as a preferred embodiment, the model training unit 203 specifically includes:
using the labeled axial-layer image data as training data;
and inputting the labeled axial-layer image data into an FPN-based Faster R-CNN network model for training to obtain a trained 2D frame detection model.
Based on the foregoing embodiment, as a preferred embodiment, the model prediction unit 204 specifically includes:
inputting image data of a layer to be detected outside a typical layer in a lung CT image into a trained 2D frame detection model;
and predicting the 2D detection frames of the abnormal signs present in the layer to be detected, to obtain the category, position and detection score information of each 2D detection frame.
Based on the foregoing embodiment, as a preferred embodiment, the detection frame merging unit 206 specifically includes:
judging whether the 2D detection frames can be merged into one 3D frame according to the size of the overlapping area and the category of the 2D detection frames on two consecutive layers;
if the overlapping area of the detection frames is larger than a certain overlap threshold and the frames belong to the same category, merging them into the same 3D detection frame;
judging all the 2D detection frames one by one according to this rule to obtain the category, position and detection score information of the final 3D candidate frames of the whole CT;
wherein the detection score of a 3D candidate frame may be set to the mean, maximum or median of the scores of the 2D candidate frames used for merging.
Fig. 3 is a schematic structural diagram of a controlled terminal 300 according to an embodiment of the present invention, where the controlled terminal 300 may be used to execute the method for automatically detecting multiple pulmonary CT signs according to the embodiment of the present invention.
Wherein, the controlled terminal 300 may include: a processor 310, a memory 320, and a communication unit 330. The components communicate via one or more buses, and those skilled in the art will appreciate that the architecture of the servers shown in the figures is not intended to be limiting, and may be a bus architecture, a star architecture, a combination of more or less components than those shown, or a different arrangement of components.
The memory 320 may be used for storing instructions executed by the processor 310, and the memory 320 may be implemented by any type of volatile or non-volatile storage terminal or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The executable instructions in the memory 320, when executed by the processor 310, enable the controlled terminal 300 to perform some or all of the steps in the method embodiments described below.
The processor 310 is the control center of the storage terminal: it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the functions of the electronic terminal and/or processes data by running or executing the software programs and/or modules stored in the memory 320 and calling the data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor 310 may include only a central processing unit (CPU). In the embodiment of the present invention, the CPU may have a single computing core or multiple computing cores.
A communication unit 330, configured to establish a communication channel so that the storage terminal can communicate with other terminals. And receiving user data sent by other terminals or sending the user data to other terminals.
The present invention also provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
In summary, the invention performs sparse labeling on selected typical CT layers to obtain 2D detection frames of multiple lesions or abnormal signs, and merges these 2D detection frames into 3D detection frames to obtain the detection result for multiple lesions or abnormal signs. The present application can detect more than ten typical abnormal lung signs or diseases simultaneously; on the one hand it can help physicians achieve higher sensitivity in lesion discovery, and on the other hand it can provide clinicians with more comprehensive support during diagnosis.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product, where the computer software product is stored in a storage medium, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like, and the storage medium can store program codes, and includes instructions for enabling a computer terminal (which may be a personal computer, a server, or a second terminal, a network terminal, and the like) to perform all or part of the steps of the method in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Although the present invention has been described in detail with reference to the drawings and the preferred embodiments, the present invention is not limited thereto. Those skilled in the art can make various equivalent modifications or substitutions to the embodiments of the present invention without departing from the spirit and scope of the present invention, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An automatic detection method for lung CT multiple signs is characterized by comprising the following steps:
acquiring a lung CT image;
determining typical layers of the lung CT image, and labeling each abnormal sign on the typical layers in a sparse labeling manner;
inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model;
inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model, and predicting a 2D detection frame with abnormal signs of the layer to be detected;
respectively setting first threshold values corresponding to the 2D detection frames according to the types of the 2D detection frames, and outputting the 2D detection frames with detection scores exceeding the first threshold values to obtain 2D candidate frames;
merging the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers;
and setting a second threshold corresponding to the 3D candidate frame according to the category of the 3D candidate frame, and outputting the 3D candidate frame with the detection score exceeding the second threshold to obtain a model detection result.
2. The method for automatic detection of pulmonary CT multiple signs according to claim 1, wherein the pulmonary CT multiple signs comprise:
consolidation, ground-glass opacity, streak (strand) shadow, reticular shadow, honeycombing, emphysema, bulla, mass, pleural thickening, pleural indentation, pneumothorax, pleural effusion, other cystic low-density shadows, bronchiectasis, mosaic perfusion, crazy-paving pattern, and the air crescent sign.
3. The method for automatic detection of pulmonary CT multiple signs according to claim 1, wherein the acquiring of pulmonary CT images comprises:
chest scans or enhanced CT images are acquired.
4. The method for automatically detecting the pulmonary CT multiple signs according to claim 1, wherein the determining a typical layer of the pulmonary CT image and labeling each abnormal sign of the typical layer by using a sparse labeling method comprises:
determining abnormal signs and corresponding typical layers of the lung CT image;
and marking each abnormal sign on the typical layer by drawing a bounding box or an outline, to obtain 2D annotation frames of the abnormal signs on the typical layer.
5. The method of claim 1, wherein the inputting of the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model comprises:
using the labeled axial-layer image data as training data;
and inputting the labeled axial-layer image data into an FPN-based Faster R-CNN network model for training to obtain a trained 2D frame detection model.
6. The method according to claim 1, wherein the step of inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model to predict a 2D frame of the layer to be detected with abnormal signs comprises:
inputting image data of a layer to be detected in the lung CT image into a trained 2D frame detection model;
and predicting the 2D detection frames of the abnormal signs present in the layer to be detected, to obtain the category, position and detection score information of each 2D detection frame.
7. The method of claim 1, wherein the merging of the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers comprises:
judging whether the 2D detection frames can be merged into one 3D frame according to the size of the overlapping area and the category of the 2D detection frames on two consecutive layers;
if the overlapping area of the detection frames is larger than a certain threshold and the frames belong to the same category, merging them into the same 3D detection frame;
judging all the 2D detection frames one by one according to this rule to obtain the category, position and detection score information of the final 3D candidate frames of the whole CT;
wherein the detection score of a 3D candidate frame may be set to the mean, maximum or median of the scores of the 2D candidate frames used for merging.
8. An automatic pulmonary CT multi-sign detection system, comprising:
the data acquisition unit is configured for acquiring lung CT images;
the sign labeling unit is configured to determine typical layers of the lung CT image and label each abnormal sign on the typical layers in a sparse labeling manner;
the model training unit is configured and used for inputting the labeled typical layer image data into a preset deep learning network model for training to obtain a trained 2D frame detection model;
the model prediction unit is configured to input image data of a layer to be detected in the lung CT image into a trained 2D frame detection model and predict a 2D detection frame of the layer to be detected with abnormal signs;
the first thresholding unit is configured to set first thresholds corresponding to the 2D detection frames according to the types of the 2D detection frames, and output the 2D detection frames with detection scores exceeding the first thresholds to obtain 2D candidate frames;
the detection frame merging unit is configured to merge the 2D candidate frames into 3D candidate frames according to their positional continuity across consecutive layers;
and the second thresholding unit is configured to set a second threshold corresponding to the 3D candidate frame according to the category of the 3D candidate frame, and output the 3D candidate frame with the detection score exceeding the second threshold to obtain a model detection result.
9. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010128396.3A 2020-02-28 2020-02-28 Lung CT multi-symptom automatic detection method, system, terminal and storage medium Active CN110969623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010128396.3A CN110969623B (en) 2020-02-28 2020-02-28 Lung CT multi-symptom automatic detection method, system, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010128396.3A CN110969623B (en) 2020-02-28 2020-02-28 Lung CT multi-symptom automatic detection method, system, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110969623A true CN110969623A (en) 2020-04-07
CN110969623B CN110969623B (en) 2020-06-26

Family

ID=70038257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010128396.3A Active CN110969623B (en) 2020-02-28 2020-02-28 Lung CT multi-symptom automatic detection method, system, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110969623B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020006216A1 (en) * 2000-01-18 2002-01-17 Arch Development Corporation Method, system and computer readable medium for the two-dimensional and three-dimensional detection of lesions in computed tomography scans
CN107016665A (en) * 2017-02-16 2017-08-04 浙江大学 A kind of CT pulmonary nodule detection methods based on depth convolutional neural networks
US20180365829A1 (en) * 2017-06-20 2018-12-20 Case Western Reserve University Intra-perinodular textural transition (ipris): a three dimenisonal (3d) descriptor for nodule diagnosis on lung computed tomography (ct) images
CN108090903A (en) * 2017-12-29 2018-05-29 苏州体素信息科技有限公司 Lung neoplasm detection model training method and device, pulmonary nodule detection method and device
CN108446730A (en) * 2018-03-16 2018-08-24 北京推想科技有限公司 A kind of CT pulmonary nodule detection methods based on deep learning
CN110059697A (en) * 2019-04-29 2019-07-26 上海理工大学 A kind of Lung neoplasm automatic division method based on deep learning

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862001A (en) * 2020-06-28 2020-10-30 微医云(杭州)控股有限公司 Semi-automatic labeling method and device for CT image, electronic equipment and storage medium
CN111862001B (en) * 2020-06-28 2023-11-28 微医云(杭州)控股有限公司 Semi-automatic labeling method and device for CT images, electronic equipment and storage medium
CN112184684A (en) * 2020-10-09 2021-01-05 桂林电子科技大学 Improved YOLO-v3 algorithm and application thereof in lung nodule detection
CN113160233A (en) * 2021-04-02 2021-07-23 易普森智慧健康科技(深圳)有限公司 Method for training example segmentation neural network model by using sparse labeled data set
CN115994898A (en) * 2023-01-12 2023-04-21 北京医准智能科技有限公司 Mediastinum space-occupying lesion image detection method, device, equipment and storage medium
CN115994898B (en) * 2023-01-12 2023-11-14 浙江医准智能科技有限公司 Mediastinum space-occupying lesion image detection method, device, equipment and storage medium
CN116452579A (en) * 2023-06-01 2023-07-18 中国医学科学院阜外医院 Chest radiography image-based pulmonary artery high pressure intelligent assessment method and system
CN116452579B (en) * 2023-06-01 2023-12-08 中国医学科学院阜外医院 Chest radiography image-based pulmonary artery high pressure intelligent assessment method and system

Also Published As

Publication number Publication date
CN110969623B (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN110969623B (en) Lung CT multi-symptom automatic detection method, system, terminal and storage medium
Wang et al. Automatically discriminating and localizing COVID-19 from community-acquired pneumonia on chest X-rays
CN108615237B (en) Lung image processing method and image processing equipment
CN109035187B (en) Medical image labeling method and device
CN111047591A (en) Focal volume measuring method, system, terminal and storage medium based on deep learning
CN111160367B (en) Image classification method, apparatus, computer device, and readable storage medium
CN111402260A (en) Medical image segmentation method, system, terminal and storage medium based on deep learning
US8811699B2 (en) Detection of landmarks and key-frames in cardiac perfusion MRI using a joint spatial-temporal context model
CN101027692B (en) System and method for object characterization of toboggan-based clusters
CN111080584A (en) Quality control method for medical image, computer device and readable storage medium
CN110838114B (en) Pulmonary nodule detection method, device and computer storage medium
CN111340756A (en) Medical image lesion detection and combination method, system, terminal and storage medium
US10390726B2 (en) System and method for next-generation MRI spine evaluation
CN111047610A (en) Focal region presenting method and device
WO2022110525A1 (en) Comprehensive detection apparatus and method for cancerous region
US20220148727A1 (en) Cad device and method for analysing medical images
CN113935943A (en) Method, device, computer equipment and storage medium for intracranial aneurysm identification detection
CN112308853A (en) Electronic equipment, medical image index generation method and device and storage medium
CN111128348B (en) Medical image processing method, medical image processing device, storage medium and computer equipment
CN110570425B (en) Pulmonary nodule analysis method and device based on deep reinforcement learning algorithm
Zhang et al. An Algorithm for Automatic Rib Fracture Recognition Combined with nnU‐Net and DenseNet
Wang et al. Automatic creation of annotations for chest radiographs based on the positional information extracted from radiographic image reports
Pham et al. Chest x-rays abnormalities localization and classification using an ensemble framework of deep convolutional neural networks
US11416994B2 (en) Method and system for detecting chest x-ray thoracic diseases utilizing multi-view multi-scale learning
EP1889224B1 (en) Automated organ linking for organ model placement

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant