CN114937022A - Novel coronavirus pneumonia disease detection and segmentation method - Google Patents

Novel coronavirus pneumonia disease detection and segmentation method

Info

Publication number
CN114937022A
Authority
CN
China
Prior art keywords
net
image
branch
disease detection
slices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210608148.8A
Other languages
Chinese (zh)
Other versions
CN114937022B (en)
Inventor
郭菲
张南星
李雪健
马世强
唐继军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN202210608148.8A
Publication of CN114937022A
Application granted
Publication of CN114937022B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30061 - Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a novel coronavirus pneumonia (COVID-19) disease detection and segmentation method, which comprises the following steps: S1, acquiring COVID-19 lung CT lesion data and preprocessing it, locating the slices where the lung begins and ends, expanding the slice range outward, and removing slices that contain no lesion, wherein the image is thresholded to the intensity range (-1000, 500); S2, augmenting the data, including random image rotation, horizontal and vertical flipping, histogram equalization and image normalization; S3, constructing an N-Net network structure; S4, optimizing the block of two 3x3 convolutions in each layer of the N-Net, and likewise optimizing the block of two 3x3 convolutions in each layer of the U-Net; and S5, merging the U-Net and N-Net network structures into the final network structure NU-Net. By extracting features from CT images of lungs infected with COVID-19, the novel coronavirus pneumonia disease detection and segmentation method of the invention achieves effective segmentation of lung lesions.

Description

Novel coronavirus pneumonia disease detection and segmentation method
Technical Field
The invention relates to the technical field of deep-learning-based image segmentation, in particular to a novel coronavirus pneumonia (COVID-19) disease detection and segmentation method.
Background
The worldwide spread of coronavirus disease (COVID-19) poses a serious threat to health and survival across the globe. Automated detection of lung infections from computed tomography (CT) images offers great potential for strengthening the traditional healthcare strategy for COVID-19. To control the spread of the disease, it is imperative to screen large numbers of suspected cases for appropriate quarantine and treatment.
In determining the severity of COVID-19, pulmonary abnormalities are a key factor in the clinical management of patients, potentially facilitating more timely and personalized medical intervention. Quantification of lesions may further provide tracking of disease progression and response to treatment strategies. Thus, improved treatment of COVID-19 begins with a clearer understanding of the disease state of the patient, and must include accurate identification, delineation, and quantification of lung lesions and disease phenotypes and patterns.
Currently, disease detection and segmentation for novel coronavirus pneumonia (COVID-19) still faces many challenges. Although pulmonary imaging is crucial for early identification and treatment, segmenting the infected region from CT slices is difficult because of the high variability of infection characteristics and the low intensity contrast between infected and normal tissue. In addition, deep learning requires a large data set to learn effective features, yet collecting a large amount of data in a short time is impractical; a relatively small data set can lead to overfitting of the model and hinder training of a deep model.
Although U-Net has been a very popular network in medical image segmentation in recent years, research has found that it performs poorly in detecting fine tissue structures and cannot accurately segment boundary regions, which is caused by the large receptive field of an under-complete network such as U-Net. As the network depth increases, the receptive field grows, so the network attends more to high-level semantic information and learns fewer low-level features; small tissue structures, however, require a smaller receptive field to capture, and even though U-Net has skip connections, its smallest receptive field is limited to that of the first-level layers.
Disclosure of Invention
The invention aims to provide a novel coronavirus pneumonia (COVID-19) disease detection and segmentation method that addresses the poor performance of the U-Net network in detecting fine tissue structures and its inability to accurately segment boundary regions, and that achieves effective segmentation of lung lesions by extracting features from CT images of lungs infected with COVID-19.
To this end, the invention provides a novel coronavirus pneumonia disease detection and segmentation method comprising the following steps:
S1, acquiring COVID-19 lung CT lesion data and preprocessing it: locating the slices where the lung begins and ends, expanding the slice range outward, and removing slices that contain no lesion, wherein the image is thresholded to the intensity range (-1000, 500);
S2, augmenting the data, including random image rotation, horizontal and vertical flipping, histogram equalization and image normalization;
S3, constructing an N-Net network structure whose sampling order is opposite to that of U-Net, with down-sampling performed after up-sampling so that the detail information of the input picture is magnified;
S4, optimizing the block of two 3x3 convolutions in each layer of the N-Net, and likewise optimizing the block of two 3x3 convolutions in each layer of the U-Net;
and S5, combining the U-Net and the N-Net network structures into a final network structure NU-Net.
Preferably, in step S3, the N-Net encoder performs up-sampling to convert the input to a higher dimensionality, and the decoder then performs down-sampling, wherein the up-sampling is bilinear interpolation and the down-sampling is max pooling.
Preferably, in step S4, the block of two 3x3 convolutions in each layer of the N-Net is optimized as follows: each Block1 has three branches; the first branch up-samples, applies one 3x3 convolution and then down-samples; the second branch applies two 3x3 convolutions; the third branch down-samples, applies one 3x3 convolution and then up-samples.
Preferably, in step S5, the two network structures are merged into a new network structure NU-Net in which the upper branch is the N-Net and the lower branch is the U-Net; the original image is cut, from top to bottom and from left to right, into several equally sized patches that are fed into the N-Net branch and, at the end of that branch, spliced back into the original image in their original order; the input and output sizes of the lower U-Net branch match the original image; the results of the upper and lower branches are then fused, passed through a 1x1 convolution, and classified by a softmax classifier.
Therefore, by adopting the above novel coronavirus pneumonia disease detection and segmentation method, the automatic detection and quantification of chest COVID-19 lesions may play an important role in the monitoring and management of the disease. Infected areas can be automatically identified from chest CT slices and infected patients can be screened, so that they can be treated, cared for and isolated, reducing virus transmission.
The specific technical effects are as follows:
(1) A new Block is added to the basic network, which allows features to be extracted more effectively.
(2) An over-complete network structure is explored; the under-complete and over-complete network structures are combined and the new Block is added, yielding a new network structure, NU-Net, which captures edge and detail features noticeably better than U-Net.
(3) Faster convergence rate, better performance and better generalization are achieved in the segmentation field.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a schematic diagram of the upsampling of an N-Net encoder;
FIG. 2 is a schematic diagram of down-sampling by an N-Net encoder;
FIG. 3 is a schematic diagram of an optimized Block;
FIG. 4 is a schematic diagram of optimized U-Net and N-Net;
FIG. 5 is a schematic diagram of the NU-Net model structure.
Detailed Description
The technical solution of the present invention is further illustrated by the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that although the specification describes embodiments, not every embodiment includes only a single embodiment, and such description is for clarity purposes only, and it will be understood by those skilled in the art that the specification as a whole and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art. These other embodiments are also covered by the scope of the present invention.
It should be understood that the above-mentioned embodiments are only for explaining the present invention, and the protection scope of the present invention is not limited thereto, and any person skilled in the art should be able to cover the technical scope of the present invention and the equivalent replacement or change of the technical solution and the inventive concept thereof in the technical scope of the present invention.
The use of the word "comprising" or "comprises" and the like in the present invention means that the element preceding the word covers the element listed after the word and does not exclude the possibility of also covering other elements. The terms "inner", "outer", "upper", "lower", and the like indicate orientations or positional relationships based on orientations or positional relationships shown in the drawings, merely for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention, and when the absolute position of the described object is changed, the relative positional relationships may be changed accordingly. In the present invention, unless otherwise expressly stated or limited, the terms "attached" and the like are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral part; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations. The term "about" as used herein has the meaning well known to those skilled in the art, and preferably means that the term modifies a value within the range of ± 50%, ± 40%, ± 30%, ± 20%, ± 10%, ± 5% or ± 1% thereof.
All terms (including technical or scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs unless specifically defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
The disclosures of the prior art documents cited in the present description are incorporated by reference in their entirety and are, therefore, part of the present disclosure.
Example one
Data set: the COVID-19 Lung CT Lesion Segmentation Challenge (COVID-19-20) data set is used. The challenge, held in 2020, created a common platform to evaluate emerging artificial intelligence methods for segmenting and quantifying lung lesions caused by SARS-CoV-2 infection from CT images.
The CT images come from multiple institutions in multiple countries and from patients of different ages, sexes and disease severities. The data set includes 199 training cases and 50 validation cases.
The invention discloses a novel coronavirus pneumonia disease detection and segmentation method, which comprises the following specific steps:
step (1): a data set of COVID-19Lung CT Segmentation Challenge-20 Challenge was obtained and pre-processed with a threshold cut of (-1000, 500) for the image, slices at the beginning and end of the Lung were found, slices were dilated outward and processed to remove slices without lesions.
Step (2): to improve the quality of model training, data augmentation is performed to improve the generalization and robustness of the model; the augmentation comprises random image rotation, horizontal and vertical flipping, histogram equalization (to increase image contrast) and image normalization.
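A non-authoritative sketch of such an augmentation pipeline for a single 2D slice and its mask is given below; restricting rotation to multiples of 90 degrees, the flip probabilities and the 256-bin equalization are assumptions, not values taken from the patent:

import numpy as np

def augment_slice(image, mask, rng=None):
    """Apply random rotation, flips, histogram equalization and normalization."""
    rng = np.random.default_rng() if rng is None else rng

    # random rotation by a multiple of 90 degrees, applied to image and mask together
    k = int(rng.integers(0, 4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)

    # random horizontal / vertical flips
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)

    # histogram equalization (256-bin cumulative-distribution mapping) to raise contrast
    hist, bin_edges = np.histogram(image.ravel(), bins=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-8)
    image = np.interp(image.ravel(), bin_edges[:-1], cdf).reshape(image.shape)

    # normalize to zero mean and unit variance
    image = (image - image.mean()) / (image.std() + 1e-8)
    return image.astype(np.float32), np.ascontiguousarray(mask)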
Step (3): the traditional U-Net network performs down-sampling followed by up-sampling; the network shape resembles the letter U, hence the name U-Net. The N-Net network structure constructed by the invention is the opposite: up-sampling is performed before down-sampling, so that the detail information of the input picture is magnified and can be extracted.
In step (3), the N-Net encoder uses up-sampling to convert the input to a higher dimensionality (dimensionality here refers to the spatial size, not the number of channels), and the decoder then performs down-sampling; the up-sampling used is bilinear interpolation (shown in FIG. 1) and the down-sampling is max pooling (shown in FIG. 2), so that fine details and edge features can be captured accurately.
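A schematic PyTorch sketch of this up-then-down sampling order is shown below; the channel widths, the ReLU activations and the single encoder/decoder stage are illustrative assumptions, and the skip connections of the full network are omitted:

import torch.nn as nn

def double_conv(in_ch, out_ch):
    # the plain "two 3x3 convolutions" block used before the Block1 optimization
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class NNetStage(nn.Module):
    """One encoder/decoder stage pair of the over-complete N-Net: the encoder
    enlarges the spatial size by bilinear up-sampling, and the decoder shrinks
    it back to the input size with max pooling."""
    def __init__(self, in_ch, mid_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.enc = double_conv(in_ch, mid_ch)
        self.pool = nn.MaxPool2d(2)
        self.dec = double_conv(mid_ch, in_ch)

    def forward(self, x):
        x = self.enc(self.up(x))    # spatial size doubles: fine detail is spread out
        x = self.dec(self.pool(x))  # spatial size returns to that of the input
        return x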
Step (4): the block of two 3x3 convolutions in each layer of the N-Net is optimized, giving the improved N-Net; the same optimization is applied to the block of two 3x3 convolutions in each layer of the U-Net.
In step (4), the block of two 3x3 convolutions in the original network is optimized; the new block is shown as Block1 in FIG. 3. Each Block1 has three branches: the first branch up-samples, applies a 3x3 convolution and then down-samples, which better extracts large targets and global information; the second branch applies two 3x3 convolutions; the third branch down-samples, applies a 3x3 convolution and then up-samples, which better extracts detail and edge information. Merging the three branches facilitates the extraction of different kinds of targets, and an element-wise addition of the three branch outputs and the original block input makes the captured information more comprehensive and mitigates vanishing gradients. Block2 in the figure serves as a substitute for Block1: it is used on large feature maps in place of Block1 to reduce memory consumption. The U-Net and N-Net network structures optimized with these blocks are shown in FIG. 4; they differ only in the order of up- and down-sampling.
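Under these assumptions, Block1 can be sketched in PyTorch as follows; the batch normalization, the ReLU activations and the 1x1 projection on the residual path are additions made so that the element-wise sum is well defined, and Block2 (the memory-saving substitute used on large feature maps) is not shown:

import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class Block1(nn.Module):
    """Three-branch block replacing the plain two-3x3-convolution block.
    Branch 1: up-sample -> 3x3 conv -> down-sample (large targets, global context)
    Branch 2: two 3x3 convolutions (the original block)
    Branch 3: down-sample -> 3x3 conv -> up-sample (details and edges)
    The branch outputs and the (projected) block input are summed element-wise."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_bn_relu(in_ch, out_ch),
            nn.MaxPool2d(2))
        self.branch2 = nn.Sequential(conv_bn_relu(in_ch, out_ch),
                                     conv_bn_relu(out_ch, out_ch))
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(2),
            conv_bn_relu(in_ch, out_ch),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
        # 1x1 projection so the residual input matches the branch channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return self.branch1(x) + self.branch2(x) + self.branch3(x) + self.skip(x)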
Step (5): the U-Net and N-Net network structures are combined into the final network structure NU-Net, as shown in FIG. 5, which not only preserves the original segmentation performance of U-Net but also segments fine tissue structures and boundary regions.
In step (5), the two network structures are combined into a new network structure, NU-Net, with the N-Net as the upper branch and the U-Net as the lower branch. The original image is of size (512, 512); because this is too large, it is cut into 4 images of size (256, 256), which are input into the network and passed through 3x3 convolutions.
Because up-sampling places a heavy demand on GPU memory, to let the model run smoothly the (256, 256) image fed to the upper N-Net branch is further cut, from top to bottom and from left to right, into 16 patches of size (64, 64); at the end of the upper N-Net branch these patches are spliced back into a (256, 256) image in their original order. The input and output sizes of the lower U-Net branch are (256, 256). The results of the upper and lower branches are then fused by concatenation, passed through a 1x1 convolution and classified with a softmax classifier.
Because of GPU memory limitations, four modules in the network structure replace Block1 with Block2, while the remaining positions still use Block1; none of the 3x3 convolutions changes the image size. In the final network structure, the numbers of convolution kernels in the upper N-Net branch are set to 80, 40, 20, 10 and 5, respectively, and the numbers of convolution kernels in the lower U-Net branch are set to 32, 64, 128, 256 and 512, respectively.
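The patch cutting, stitching and two-branch fusion described above can be sketched as follows; the helper names are illustrative, and only the tiling logic and the final 1x1-convolution-plus-softmax head are shown, not the full NU-Net. With these helpers, a 512x512 slice would first be cut into four 256x256 patches, each 256x256 input to the upper branch would be cut into sixteen 64x64 patches with split_into_patches(x, 64), and the branch output restored with stitch_patches(out, 4, 64):

import torch
import torch.nn.functional as F

def split_into_patches(x, patch):
    """Cut a (B, C, H, W) tensor into non-overlapping patch x patch tiles,
    ordered top to bottom and left to right, stacked along the batch axis."""
    b, c, h, w = x.shape
    tiles = x.unfold(2, patch, patch).unfold(3, patch, patch)   # B, C, H/p, W/p, p, p
    return tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch, patch)

def stitch_patches(tiles, grid, patch):
    """Inverse of split_into_patches for a grid x grid tiling."""
    b = tiles.shape[0] // (grid * grid)
    c = tiles.shape[1]
    tiles = tiles.reshape(b, grid, grid, c, patch, patch)
    return tiles.permute(0, 3, 1, 4, 2, 5).reshape(b, c, grid * patch, grid * patch)

def fuse_branches(n_net_out, u_net_out, head_1x1):
    """Concatenate the two branch outputs and classify each pixel;
    head_1x1 is assumed to be a 1x1 nn.Conv2d mapping to the class count."""
    fused = torch.cat([n_net_out, u_net_out], dim=1)
    return F.softmax(head_1x1(fused), dim=1)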
The over-complete N-Net structure of the invention, combined with the U-Net structure, can obtain high-level features and deep semantic information while better capturing details, and therefore achieves better performance in segmenting lesions.
Tests
Dice, Jaccard, Volume Similarity, Hausdorff 95, Hausdorff 100, Surface Dice at 1 mm, Average Surface Distance GT to Pred, and Average Surface Distance Pred to GT are used as evaluation metrics. Focusing on Dice and Jaccard, the calculations are as follows:
Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|)
Jaccard(X, Y) = |X ∩ Y| / |X ∪ Y|
where X denotes the predicted lesion region and Y the ground-truth lesion region.
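Both coefficients can be computed directly from binary prediction and ground-truth masks; a small numpy sketch (the epsilon guard against empty masks is an added assumption) is:

import numpy as np

def dice_and_jaccard(pred, target, eps=1e-8):
    """Dice and Jaccard coefficients between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    jaccard = intersection / (np.logical_or(pred, target).sum() + eps)
    return float(dice), float(jaccard)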
The results of comparing four network structures, namely U-Net, U-Net B (FIG. 4), Sym-Unet (a dual-branch structure consisting of U-Net and a network symmetric to it: one branch uses U-Net, the other uses an up-sampling-then-down-sampling encoder-decoder, with ordinary two-3x3-convolution blocks) and NU-Net, are shown in Table 1.
Table 1: comparison of the four network structures
[Table 1 appears as an image in the original publication; the Dice and Jaccard values discussed below are taken from it.]
From Table 1 it can be seen that the Dice and Jaccard coefficients of NU-Net are 0.6918 and 0.5556, respectively, while those of U-Net are 0.6829 and 0.5347; NU-Net thus improves on U-Net by 0.0089 in Dice and 0.0209 in Jaccard, a clear improvement indicating a better segmentation effect.
Hausdorff 95 and Hausdorff 100 are clearly reduced, showing improved segmentation of edges and fine details, and the Surface Dice at 1 mm is improved, so the segmentation result is more accurate. Comparing U-Net with U-Net B, and Sym-Unet with NU-Net, shows that adding the new Block does improve the segmentation effect, proving the module effective.
Therefore, by adopting the novel coronavirus pneumonia disease detection and segmentation method, the invention automatically identifies infected areas from chest CT slices and screens infected patients, so that they can be treated, cared for and isolated, reducing virus transmission.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the invention without departing from the spirit and scope of the invention.

Claims (4)

1. A novel coronavirus pneumonia (COVID-19) disease detection and segmentation method, characterized by comprising the following steps:
S1, acquiring COVID-19 lung CT lesion data and preprocessing it: locating the slices where the lung begins and ends, expanding the slice range outward, and removing slices that contain no lesion, wherein the image is thresholded to the intensity range (-1000, 500);
S2, augmenting the data, including random image rotation, horizontal and vertical flipping, histogram equalization and image normalization;
S3, constructing an N-Net network structure whose sampling order is opposite to that of U-Net, with down-sampling performed after up-sampling;
S4, optimizing the block of two 3x3 convolutions in each layer of the N-Net, and likewise optimizing the block of two 3x3 convolutions in each layer of the U-Net;
and S5, combining the U-Net and the N-Net network structures into a final network structure NU-Net.
2. The novel coronavirus pneumonia disease detection and segmentation method according to claim 1, characterized in that: in step S3, the N-Net is constructed as follows: the N-Net encoder performs up-sampling to convert the input to a higher dimensionality, and the decoder then performs down-sampling, wherein the up-sampling is bilinear interpolation and the down-sampling is max pooling.
3. The novel coronavirus pneumonia disease detection and segmentation method according to claim 1, characterized in that: in step S4, the block of two 3x3 convolutions in each layer of the N-Net is optimized as follows: each Block1 has three branches; the first branch up-samples, applies one 3x3 convolution and then down-samples; the second branch applies two 3x3 convolutions; the third branch down-samples, applies one 3x3 convolution and then up-samples.
4. The novel coronavirus pneumonia disease detection and segmentation method according to claim 1, characterized in that: in step S5, the two network structures are merged into a new network structure NU-Net in which the upper branch is the N-Net and the lower branch is the U-Net; the original image is cut, from top to bottom and from left to right, into several equally sized patches that are fed into the N-Net branch and, at the end of that branch, spliced back into the original image in their original order; the input and output sizes of the lower U-Net branch match the original image; the results of the upper and lower branches are then fused, passed through a 1x1 convolution, and classified by a softmax classifier.
CN202210608148.8A 2022-05-31 2022-05-31 Novel coronavirus pneumonia disease detection and segmentation method Active CN114937022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210608148.8A CN114937022B (en) 2022-05-31 2022-05-31 Novel coronavirus pneumonia disease detection and segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210608148.8A CN114937022B (en) 2022-05-31 2022-05-31 Novel coronavirus pneumonia disease detection and segmentation method

Publications (2)

Publication Number Publication Date
CN114937022A true CN114937022A (en) 2022-08-23
CN114937022B CN114937022B (en) 2023-04-07

Family

ID=82866721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210608148.8A Active CN114937022B (en) 2022-05-31 2022-05-31 Novel coronavirus pneumonia disease detection and segmentation method

Country Status (1)

Country Link
CN (1) CN114937022B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375712A (en) * 2022-10-25 2022-11-22 西南科技大学 Lung lesion segmentation method for realizing practicality based on bilateral learning branch

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615632A (en) * 2018-11-09 2019-04-12 广东技术师范学院 A kind of eyeground figure optic disk and optic cup dividing method based on semi-supervised condition production confrontation network
US20210248747A1 (en) * 2020-02-11 2021-08-12 DeepVoxel, Inc. Organs at risk auto-contouring system and methods
CN112634192A (en) * 2020-09-22 2021-04-09 广东工业大学 Cascaded U-N Net brain tumor segmentation method combined with wavelet transformation
CN112819911A (en) * 2021-01-23 2021-05-18 西安交通大学 Four-dimensional cone beam CT reconstruction image enhancement algorithm based on N-net and CycN-net network structures
CN112950643A (en) * 2021-02-26 2021-06-11 东北大学 New coronary pneumonia focus segmentation method based on feature fusion deep supervision U-Net
CN112927237A (en) * 2021-03-10 2021-06-08 太原理工大学 Honeycomb lung focus segmentation method based on improved SCB-Unet network
CN113538363A (en) * 2021-07-13 2021-10-22 南京航空航天大学 Lung medical image segmentation method and device based on improved U-Net
CN114299082A (en) * 2021-12-15 2022-04-08 苏州大学 New coronary pneumonia CT image segmentation method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIQIANG MA et al.: "GEU-Net: Rethinking the information transmission in the skip connection of U-Net architecture", IEEE *

Also Published As

Publication number Publication date
CN114937022B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Murugesan et al. A hybrid deep learning model for effective segmentation and classification of lung nodules from CT images
Xie et al. Skin lesion segmentation using high-resolution convolutional neural network
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
CN110889853A (en) Tumor segmentation method based on residual error-attention deep neural network
CN111798425B (en) Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning
CN112150428A (en) Medical image segmentation method based on deep learning
CN111938569A (en) Eye ground multi-disease classification detection method based on deep learning
WO2020211530A1 (en) Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium
CN108615236A (en) A kind of image processing method and electronic equipment
CN110930416A (en) MRI image prostate segmentation method based on U-shaped network
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
Ashwin et al. Efficient and reliable lung nodule detection using a neural network based computer aided diagnosis system
CN113239755B (en) Medical hyperspectral image classification method based on space-spectrum fusion deep learning
CN109670489B (en) Weak supervision type early senile macular degeneration classification method based on multi-instance learning
CN114093501B (en) Intelligent auxiliary analysis method for child movement epilepsy based on synchronous video and electroencephalogram
CN113012140A (en) Digestive endoscopy video frame effective information region extraction method based on deep learning
CN113420826A (en) Liver focus image processing system and image processing method
CN113643261B (en) Lung disease diagnosis method based on frequency attention network
CN114202545A (en) UNet + + based low-grade glioma image segmentation method
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN114937022B (en) Novel coronary pneumonia disease detection and segmentation method
US20230377147A1 (en) Method and system for detecting fundus image based on dynamic weighted attention mechanism
David et al. Retinal Blood Vessels and Optic Disc Segmentation Using U‐Net
Jian et al. Dual-branch-UNnet: A dual-branch convolutional neural network for medical image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant