CN114049358A - Method and system for rib instance segmentation, counting and positioning

Method and system for rib instance segmentation, counting and positioning

Info

Publication number
CN114049358A
Authority
CN
China
Prior art keywords
rib
segmentation
ribs
mask
neural network
Prior art date
Legal status
Pending
Application number
CN202111364466.6A
Other languages
Chinese (zh)
Inventor
王昊
蒋昌龙
冯奕乐
王子龙
张政
丁晓伟
Current Assignee
Suzhou Voxelcloud Information Technology Co ltd
Original Assignee
Suzhou Voxelcloud Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Voxelcloud Information Technology Co ltd
Priority to CN202111364466.6A
Publication of CN114049358A
Legal status: Pending

Classifications

    • G06T7/10 Segmentation; Edge detection
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30008 Bone
    • G06T2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for rib instance segmentation, counting and positioning, comprising the following steps: performing semantic segmentation on the ribs in chest CT to obtain a rib semantic segmentation binary mask; collecting manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask; processing the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set; establishing a deep convolutional neural network model; training the deep convolutional neural network model with the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib; and inputting the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtaining prediction masks of the topmost rib sequentially from top to bottom, and thereby obtaining the predicted rib instance segmentation mask. The invention explicitly models the rib counting problem and realizes rib counting and positioning by segmenting the ribs one by one from the top of the lung to the bottom of the lung.

Description

Method and system for rib instance segmentation, counting and positioning
Technical Field
The invention relates to the technical field of computers, in particular to a method and system for rib instance segmentation, counting and positioning, and more particularly to a deep-learning-based method for rib instance segmentation, counting and positioning.
Background
Diagnosing, describing and reporting fractures (lesions) in CT is one of the important tasks of the imaging physician. When a fracture (lesion) is found, it must be described according to its anatomical position for follow-up analysis or for reference by other departments. With the spread of thin-slice CT, physicians can find subtle fractures (lesions), but the increased number of slices makes confirming the position of a lesion difficult, especially when describing ribs. A person usually has 12 pairs of ribs, each with its own number, counted from top to bottom as the 1st, ..., 12th ribs. Because there is no reliable reference point, the physician has to trace from the starting slice of the CT down to the slice containing the lesion to determine its position, and must repeat this for every lesion. The process is error-prone and seriously reduces reading efficiency, so an automatic rib counting method is of great importance to physician efficiency and to the quality of diagnosis and treatment.
There are currently two main approaches to automatic rib counting. The first is rule-based: the ribs are segmented with a threshold or a deep learning method, the rib regions are extracted and regularized morphologically, connected components are computed, and each connected component is assigned a category label from top to bottom according to its position. However, this approach does not take the morphology of the ribs into account and gives a wrong count when the CT does not cover the 1st pair of ribs; in cases of severe fracture or segmentation failure, a connected component no longer corresponds to a single rib, so it is difficult to design a reasonable rule that assigns a rib number to each region. The second approach is voxel-wise segmentation: rib counting is treated as a segmentation problem and each rib is predicted as an independent class with a 2D or 3D deep-learning segmentation model. This avoids hand-designed rules by labeling a large amount of rib-counting data and learning from it, but, limited by video memory and computation, the model can only take part of the CT data as input, so the segmentation is inaccurate for lack of sufficient context. At the same time, such a segmentation network has a large number of parameters and needs a large amount of CT data with counting labels; given the many types and locations of fractures and the large individual differences between people, it is difficult in practice to collect training data diverse enough to guarantee model stability. Finally, the segmentation network operates on the raw data, so its computational complexity is extremely high and it incurs a huge resource overhead in actual deployment.
Patent document CN112529849A discloses an automatic counting method and device for CT ribs, wherein the method comprises: segmenting the ribs in CT to obtain a rib mask corresponding to the CT; traversing each slice of the mask, treating each slice as a binary image and extracting rib contours; converting each rib contour of each slice into a point cloud; predicting the rib number of each point cloud with a point-cloud graph neural network; and inverse-mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each contour in each slice, thereby completing rib counting. However, this patent document cannot guarantee the stability and reliability of the rib count.
Patent document CN111915620A discloses a CT rib segmentation method and device, the method comprising: S1, acquiring training data and generating two types of labels from it; S2, training fully convolutional semantic segmentation models for the two tasks according to the two types of labels to obtain a rib segmentation model; S3, acquiring the CT data to be segmented, which comprises all slices of the CT; S4, using the trained rib segmentation model to infer a two-dimensional segmentation result and adjacent-slice relations for each slice of the CT data, and obtaining the rib contours of each slice with a connected-component detection algorithm based on the two-dimensional segmentation; S5, combining the rib contours of all slices according to the adjacent-slice relations to obtain a three-dimensional segmentation result; S6, obtaining the CT rib segmentation result of the CT data with a post-processing algorithm. However, this patent document can only obtain a semantic segmentation of the ribs; it cannot obtain an instance segmentation mask in which each rib is separated, and it still suffers from inaccurate segmentation.
Disclosure of Invention
In view of the shortcomings in the prior art, it is an object of the present invention to provide a method and system for rib instance segmentation, counting and positioning.
The rib instance segmentation, counting and positioning method provided by the invention comprises the following steps:
Step 1: performing semantic segmentation on the ribs in chest CT to obtain a rib semantic segmentation binary mask;
Step 2: collecting manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask;
Step 3: processing the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set;
Step 4: establishing a deep convolutional neural network model;
Step 5: training each level of the neural network architecture in the deep convolutional neural network model using the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib;
Step 6: inputting the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtaining a prediction mask of the topmost remaining rib, erasing that rib from the rib semantic segmentation binary mask, inputting the mask into the trained deep convolutional neural network again, and repeating the operation from top to bottom until the rib semantic segmentation binary mask is empty or all ribs have been obtained, thereby obtaining the predicted rib instance segmentation mask.
Preferably, step 3 specifically comprises: according to the information provided by the rib instance segmentation mask, the rib semantic segmentation binary mask is copied and the copies respectively keep the 1st and later ribs, the 2nd and later ribs, the 3rd and later ribs, the 4th and later ribs, the 5th and later ribs, the 6th and later ribs, the 7th and later ribs, the 8th and later ribs, the 9th and later ribs, the 10th and later ribs, the 11th and later ribs, and the 12th and later ribs; each rib semantic segmentation binary mask thus yields 12 training input images, and the topmost rib of each training input image is labeled to obtain the corresponding training target image.
Preferably, in step 4, the deep neural network model uses a 3D UNet as a backbone network and includes a plurality of layers of 3D neurons.
Preferably, the 3D neurons form 10 layers, namely an input layer, a convolutional layer, dimension-reducing convolutional layers, skip-connected deconvolution layers, a convolutional layer and an output layer.
Preferably, in step 5, online data augmentation is performed on the training input images (and their corresponding target images) during training.
Preferably, the online data augmentation includes random translation, random rotation, and random scaling.
Preferably, in step 5, when the deep convolutional neural network is trained, cross entropy and Dice are used as loss functions, an Adam optimization algorithm is used as a learning algorithm, and neuron parameters of each layer are regularized by using L2 Weight Decay.
Preferably, in the step 5, when the neural network architectures at each stage are trained, the learning rate of each iteration training is less than or equal to the learning rate of the previous iteration.
Preferably, in the step 1, the obtained rib semantic segmentation binary mask is denoised, and small noise points with a volume smaller than a preset value in the semantic segmentation mask are deleted.
The invention also provides a system for rib instance segmentation, counting and positioning, which comprises the following modules:
the rib semantic segmentation binary mask module: performing semantic segmentation on ribs in chest CT to obtain a rib semantic segmentation binary mask;
a rib instance segmentation mask module: collecting manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask;
a training sample set module: processing the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set;
a neural network model module: establishing a deep convolutional neural network model;
a training module: training each level of neural network architecture in the deep convolutional neural network model by adopting the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib;
a prediction module: inputting the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtaining a prediction mask of the topmost remaining rib, erasing that rib from the rib semantic segmentation binary mask, inputting the mask into the trained deep convolutional neural network again, and repeating the operation from top to bottom until the rib semantic segmentation binary mask is empty or all ribs have been obtained, thereby obtaining the predicted rib instance segmentation mask.
Compared with the prior art, the invention has the following beneficial effects:
1. the method exploits the fact that ribs are arranged in sequence; by training a deep convolutional neural network to perform instance segmentation on the ribs one at a time and explicitly modeling the rib counting problem, the applicability and reliability of the rib counting and positioning algorithm in real application scenarios are greatly improved;
2. by explicitly modeling the rib counting problem and segmenting the ribs one by one from the top of the lung to the bottom of the lung, the invention realizes counting and positioning of the ribs;
3. compared with directly segmenting the 12 pairs of ribs of both lungs as separate instance classes, the method can robustly handle cases in which the number of rib pairs is fewer than or more than 12;
4. based on the medical fact that ribs are arranged in sequence, the invention designs a training procedure for a deep convolutional neural network by constructing sequential rib segmentation training data, can segment the ribs one by one, and realizes a rib instance segmentation algorithm.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a partial flow chart of a method for rib segmentation, counting and positioning according to an exemplary embodiment of the present invention;
FIG. 2 is a partial flowchart of a rib segmentation, counting and positioning method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example 1:
the embodiment provides a rib instance segmentation, counting and positioning method, which comprises the following steps:
Step 1: perform semantic segmentation on the ribs in chest CT to obtain a rib semantic segmentation binary mask. The obtained rib semantic segmentation binary mask is denoised, and small noise points whose volume is smaller than a preset value are deleted from it.
Step 2: collect the manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask.
Step 3: process the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set. According to the information provided by the rib instance segmentation mask, copies of the rib semantic segmentation binary mask respectively keep the 1st and later ribs, the 2nd and later ribs, and so on up to the 12th and later ribs; each rib semantic segmentation binary mask thus yields 12 training input images, and the topmost rib of each training input image is labeled to obtain the corresponding training target image.
Step 4: establish a deep convolutional neural network model. The deep neural network model takes a 3D UNet as its backbone network and comprises multiple layers of 3D neurons; the 3D neurons form 10 layers, namely an input layer, a convolutional layer, dimension-reducing convolutional layers, skip-connected deconvolution layers, a convolutional layer and an output layer.
Step 5: train each level of the neural network architecture in the deep convolutional neural network model using the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib; during training, online data augmentation, including random translation, random rotation and random scaling, is performed on the training input images. When training the deep convolutional neural network, cross entropy and Dice are used as loss functions, the Adam optimization algorithm is used as the learning algorithm, and the neuron parameters of each layer are regularized with L2 weight decay. When training each level of the neural network architecture, the learning rate of each training iteration is less than or equal to that of the previous iteration.
Step 6: input the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtain a prediction mask of the topmost remaining rib, erase that rib from the rib semantic segmentation binary mask, input the mask into the trained deep convolutional neural network again, and repeat the operation from top to bottom until the rib semantic segmentation binary mask is empty or all ribs have been obtained, thereby obtaining the predicted rib instance segmentation mask.
Example 2:
This embodiment provides a rib instance segmentation, counting and positioning system comprising the following modules:
the rib semantic segmentation binary mask module: performing semantic segmentation on ribs in chest CT to obtain a rib semantic segmentation binary mask;
a rib instance segmentation mask module: collecting manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask;
a training sample set module: processing the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set;
a neural network model module: establishing a deep convolutional neural network model;
a training module: training each level of neural network architecture in the deep convolutional neural network model by adopting the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib;
a prediction module: inputting the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtaining a prediction mask of the topmost remaining rib, erasing that rib from the rib semantic segmentation binary mask, inputting the mask into the trained deep convolutional neural network again, and repeating the operation from top to bottom until the rib semantic segmentation binary mask is empty or all ribs have been obtained, thereby obtaining the predicted rib instance segmentation mask.
Example 3:
those skilled in the art will understand this embodiment as a more specific description of embodiments 1 and 2.
As shown in fig. 1 and fig. 2, this embodiment provides an effective algorithm for chest CT rib instance segmentation, rib counting and rib positioning. To this end, the technical solution adopted by this embodiment is a deep-learning-based rib instance segmentation, counting and positioning method comprising the following steps:
Step 1: perform semantic segmentation on the ribs in the chest CT, i.e. mark the rib regions in the chest CT as 1 and all other regions as 0, to obtain a rib semantic segmentation binary mask. This step can be implemented with threshold-based segmentation, deep-learning-based segmentation or similar methods, which are not the point of the invention and are not described further.
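The patent leaves the choice of semantic-segmentation method open. As a purely illustrative example, a minimal sketch of the threshold-based variant is given below; the bone threshold of roughly 200 HU, the function name and the use of NumPy/SciPy are assumptions and are not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def rib_semantic_mask(ct_hu: np.ndarray, bone_threshold: float = 200.0) -> np.ndarray:
    """Rough bone foreground mask from a CT volume in Hounsfield units (bone voxels -> 1, rest -> 0).

    Only a sketch of the 'threshold-based segmentation' option mentioned above; a real
    pipeline would also have to exclude the spine, sternum and scapulae.
    """
    mask = ct_hu >= bone_threshold                      # candidate bone voxels
    mask = ndimage.binary_opening(mask, iterations=1)   # drop thin bridges and speckle
    return mask.astype(np.uint8)
```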
Step 2: collect the rib semantic segmentation binary masks obtained in step 1 and label them manually. In particular, during manual labeling, the 12 pairs of ribs of both lungs are labeled 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 from the top of the lung to the bottom. This step yields the rib instance segmentation mask.
Step 3: process the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set. According to the information provided by the rib instance segmentation mask, copies of the rib semantic segmentation binary mask respectively keep the 1st and later ribs, the 2nd and later ribs, the 3rd and later ribs, and so on up to the 12th rib, so that each rib semantic segmentation binary mask yields 12 training input images; correspondingly, the topmost rib of each training input image is labeled 1 and the following ribs are labeled 2, giving the training target images.
Step 4: establish a deep convolutional neural network model; the deep neural network model takes a 3D UNet as its backbone network and comprises multiple layers of 3D neurons.
Step 5: for the deep convolutional neural network, train each level of its neural network architecture multiple times using the training sample set obtained in step 3, adjusting the parameters of the architecture according to a set learning rate, so as to obtain a deep convolutional neural network capable of predicting the topmost rib. During training, online data augmentation, including random translation, random rotation and random scaling, is performed on the training input images. When training the deep convolutional neural network, cross entropy and Dice are used as loss functions, the Adam optimization algorithm is used as the learning algorithm, and the neuron parameters of each layer are regularized with L2 weight decay. When training each level of the neural network architecture, the learning rate of each training iteration is less than or equal to that of the previous iteration.
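For reference, the combined objective mentioned above (cross entropy plus Dice) can be written as follows; the weighting factor λ and the smoothing term ε are assumptions, since the patent does not state how the two terms are balanced.

```latex
L_{\mathrm{total}} = L_{\mathrm{CE}} + \lambda\, L_{\mathrm{Dice}},\qquad
L_{\mathrm{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c} y_{i,c}\,\log p_{i,c},\qquad
L_{\mathrm{Dice}} = 1 - \frac{2\sum_{i} p_{i}\,y_{i} + \varepsilon}{\sum_{i} p_{i} + \sum_{i} y_{i} + \varepsilon}
```

where p denotes the predicted per-voxel class probabilities and y the one-hot training targets.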
Step 6: at prediction time, input the rib semantic segmentation binary mask into the deep convolutional neural network obtained in step 5 to obtain a prediction mask of the topmost remaining rib, then erase that rib from the rib semantic segmentation binary mask and input the mask into the deep convolutional neural network again. Repeat these steps until the rib semantic segmentation binary mask is empty or all ribs have been obtained. The series of topmost-rib prediction masks obtained in this process are marked 1, 2, 3, ..., 12 in order, giving the predicted rib instance segmentation mask.
Example 4:
those skilled in the art will understand this embodiment as a more specific description of embodiments 1 and 2.
This embodiment provides a deep-learning-based rib instance segmentation, counting and positioning method whose core is as follows: a deep convolutional neural network is built and performs instance segmentation on the ribs one at a time; by explicitly modeling the rib counting problem, it can robustly handle cases in which the number of rib pairs is fewer than or more than 12.
The deep-learning-based rib instance segmentation, counting and positioning method provided by this embodiment comprises the following steps:
Step 1: perform semantic segmentation on the ribs in chest CT to obtain a rib semantic segmentation binary mask. In the rib semantic segmentation binary mask, rib regions are marked as 1 and all other regions as 0. In this step, the obtained rib semantic segmentation binary mask is denoised, deleting small noise points whose volume is smaller than 600 mm³.
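A minimal sketch of this denoising step, assuming the mask is a NumPy volume and the voxel spacing in millimetres is known from the CT header; the function and argument names are illustrative, not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def remove_small_components(mask: np.ndarray,
                            voxel_spacing_mm: tuple,
                            min_volume_mm3: float = 600.0) -> np.ndarray:
    """Delete connected components whose physical volume is below min_volume_mm3."""
    labels, num = ndimage.label(mask)                               # label 3D connected components
    if num == 0:
        return mask.copy()
    voxel_volume = float(np.prod(voxel_spacing_mm))                 # mm^3 per voxel
    sizes = ndimage.sum(mask > 0, labels, index=range(1, num + 1))  # voxel count per component
    keep = [i + 1 for i, s in enumerate(sizes) if s * voxel_volume >= min_volume_mm3]
    return np.where(np.isin(labels, keep), mask, 0).astype(mask.dtype)
```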
Step 2: collect the rib semantic segmentation binary masks obtained in step 1 and label them manually. In particular, during manual labeling, the 12 pairs of ribs of both lungs are labeled 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 from the top of the lung to the bottom: the region of the rib semantic segmentation binary mask corresponding to the 1st rib is filled with 1, the region corresponding to the 2nd rib is filled with 2, ..., and the region corresponding to the 12th rib is filled with 12.
Step 3: process the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set. According to the information provided by the rib instance segmentation mask, copies of the rib semantic segmentation binary mask respectively keep the 1st and later ribs, the 2nd and later ribs, the 3rd and later ribs, and so on up to the 12th rib, so that each rib semantic segmentation binary mask yields 12 training input images; correspondingly, the topmost rib of each training input image is labeled 1 and the following ribs are labeled 2, giving the training target images. The specific processing flow is sketched below.
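The flow itself appears only as a figure in the original filing; the sketch below reconstructs it from the surrounding description, assuming the instance mask is a NumPy volume with rib labels 1-12 (array and function names are assumptions).

```python
import numpy as np

def build_sequence_training_pairs(instance_mask: np.ndarray, num_ribs: int = 12):
    """Build the 12 (input, target) pairs described in step 3.

    For k = 1..12 the input keeps ribs k and below as a binary mask; in the target,
    the topmost remaining rib (rib k) is labeled 1 and all later ribs are labeled 2.
    """
    pairs = []
    for k in range(1, num_ribs + 1):
        input_mask = (instance_mask >= k).astype(np.uint8)   # ribs k, k+1, ..., 12
        target = np.zeros_like(instance_mask, dtype=np.uint8)
        target[instance_mask == k] = 1                        # topmost remaining rib
        target[instance_mask > k] = 2                         # every rib below it
        pairs.append((input_mask, target))
    return pairs
```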
Step 4: establish a deep convolutional neural network model. The deep neural network model takes a 3D UNet as its backbone network and comprises 10 layers of 3D neurons in total, in order: an input layer, a convolutional layer, dimension-reducing convolutional layers, skip-connected deconvolution layers, a convolutional layer and an output layer; except for the output layer, all neurons use Leaky ReLU as the activation function. The input layer takes the preprocessed image, i.e. the training input image obtained in step 3. A dimension-reducing convolutional layer is a convolutional layer that reduces the size of the feature map: after it, each spatial dimension of the feature map is halved while the number of channels is doubled. A skip-connected deconvolution layer consists of a deconvolution layer and a skip connection: the deconvolution layer enlarges the feature map, doubling each spatial dimension while halving the number of channels, and the skip connection then concatenates the feature map output by the corresponding dimension-reducing convolutional layer with the feature map output by the deconvolution layer. The output layer is the last layer and uses softmax as the activation function.
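A compact sketch of this kind of backbone is given below in PyTorch: strided 3D convolutions halve the spatial dimensions and double the channels, transposed convolutions with skip concatenation restore them, Leaky ReLU is used everywhere except the softmax output. The framework choice, depth, channel counts and kernel sizes are assumptions and do not necessarily reproduce the ten-layer configuration of the patent.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1),
                         nn.LeakyReLU(0.1, inplace=True))

class TinyUNet3D(nn.Module):
    """Minimal 3D UNet-style network: two down steps, two up steps with skip connections."""

    def __init__(self, in_channels=1, num_classes=3, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.down1 = nn.Sequential(nn.Conv3d(base, base * 2, 3, stride=2, padding=1),
                                   nn.LeakyReLU(0.1, inplace=True))      # spatial dims halved
        self.enc2 = conv_block(base * 2, base * 2)
        self.down2 = nn.Sequential(nn.Conv3d(base * 2, base * 4, 3, stride=2, padding=1),
                                   nn.LeakyReLU(0.1, inplace=True))
        self.bottom = conv_block(base * 4, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)   # spatial dims doubled
        self.dec2 = conv_block(base * 4, base * 2)                        # after skip concat
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv3d(base, num_classes, 1)

    def forward(self, x):                                    # x: (N, 1, D, H, W), D/H/W divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.down1(e1))
        b = self.bottom(self.down2(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip connection from enc1
        return torch.softmax(self.out(d1), dim=1)            # per-voxel class probabilities
```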
Step 5: for the deep convolutional neural network, train each level of its neural network architecture multiple times using the training sample set obtained in step 3, adjusting the parameters of the architecture according to a set learning rate during training. Each training pass over the deep convolutional neural network traverses every training input image in the training sample set together with its corresponding training target image. Each training input image is first rescaled so that the physical voxel size after scaling is 2.5 mm × 1.5 mm; center cropping is then performed, the cropped image size being 160 × 128. The following method is adopted when training the deep convolutional neural network (a training-loop sketch follows this list):
a. in each iteration, the training input image and its corresponding training target image are subjected to random translation, rotation and scaling to increase the diversity of the training data;
b. cross entropy and Dice are used as loss functions;
c. the Adam optimization algorithm is used as the learning algorithm;
d. the deep convolutional neural network is trained for 300 epochs, adjusting its parameters according to a set learning rate in each epoch; the learning rate decreases as training progresses, the learning rate of each epoch being less than or equal to that of the previous one: 0.005 is used for epochs 0-150, 0.0005 for epochs 150-250, and 0.00005 for epochs 250-300;
e. L2 weight decay regularization is applied to the parameters of each layer of the deep convolutional neural network to avoid overfitting.
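A minimal sketch of how items (b)-(e) could be wired together in PyTorch; the data loader, the weight-decay value and the Dice weighting are assumptions, and the augmentation of item (a) is assumed to happen inside the loader. The model is assumed to end in a softmax, as in the backbone sketch above.

```python
import torch
import torch.nn as nn

def dice_loss(probs, target_onehot, eps=1e-5):
    """Soft Dice loss averaged over classes; both tensors are (N, C, D, H, W)."""
    dims = (0, 2, 3, 4)
    inter = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def lr_for_epoch(epoch):
    # Step schedule from item (d): 0.005 (epochs 0-149), 0.0005 (150-249), 0.00005 (250-299).
    return 0.005 if epoch < 150 else (0.0005 if epoch < 250 else 0.00005)

def train(model, loader, num_classes=3, epochs=300, device="cuda"):
    nll = nn.NLLLoss()                                           # cross entropy on log-probabilities
    opt = torch.optim.Adam(model.parameters(), lr=0.005,
                           weight_decay=1e-4)                    # L2 weight decay; the value is an assumption
    model.to(device).train()
    for epoch in range(epochs):
        for group in opt.param_groups:
            group["lr"] = lr_for_epoch(epoch)
        for x, y in loader:                                      # x: (N,1,D,H,W) float, y: (N,D,H,W) in {0,1,2}
            x, y = x.to(device), y.to(device)
            probs = model(x)
            y_onehot = nn.functional.one_hot(y, num_classes).permute(0, 4, 1, 2, 3).float()
            loss = nll(torch.log(probs + 1e-8), y) + dice_loss(probs, y_onehot)
            opt.zero_grad()
            loss.backward()
            opt.step()
```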
Step 6: at prediction time, input the rib semantic segmentation binary mask into the deep convolutional neural network obtained in step 5, obtain the prediction masks of the topmost remaining rib one after another, and so obtain the predicted rib instance segmentation mask. The specific flow of the algorithm is sketched below.
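The prediction flow is likewise given only as a figure in the filing; the sketch below reconstructs the iterative peel-off loop from the text, reusing the conventions of the previous sketches (the rescaling and center cropping of step 5 are omitted, and all names are assumptions).

```python
import numpy as np
import torch

@torch.no_grad()
def predict_rib_instances(model, semantic_mask: np.ndarray, max_ribs: int = 12, device="cuda"):
    """Iteratively segment the topmost remaining rib and number it 1, 2, ..., up to 12."""
    model.to(device).eval()
    remaining = semantic_mask.astype(np.uint8).copy()
    instance_mask = np.zeros_like(remaining, dtype=np.uint8)
    for rib_id in range(1, max_ribs + 1):
        if remaining.sum() == 0:                                  # mask empty: every rib found
            break
        x = torch.from_numpy(remaining[None, None].astype(np.float32)).to(device)
        probs = model(x)[0]                                       # (C, D, H, W) class probabilities
        top_rib = (probs.argmax(0) == 1).cpu().numpy()            # class 1 = topmost remaining rib
        top_rib &= remaining.astype(bool)                         # never label background voxels
        if not top_rib.any():                                     # nothing predicted: stop early
            break
        instance_mask[top_rib] = rib_id                           # record this rib's number
        remaining[top_rib] = 0                                    # erase it and repeat
    return instance_mask
```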
By constructing sequential rib segmentation training data, the invention trains a deep convolutional neural network that can segment the ribs one by one, solving the rib instance segmentation problem. By explicitly modeling the rib counting problem and segmenting the ribs one by one from the top of the lung to the bottom of the lung, the invention solves rib counting and positioning even when there are fewer or more than 12 pairs of ribs.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A method for rib instance segmentation, counting and positioning is characterized by comprising the following steps:
Step 1: performing semantic segmentation on the ribs in chest CT to obtain a rib semantic segmentation binary mask;
Step 2: collecting manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask;
Step 3: processing the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set;
Step 4: establishing a deep convolutional neural network model;
Step 5: training each level of the neural network architecture in the deep convolutional neural network model using the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib;
Step 6: inputting the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtaining a prediction mask of the topmost remaining rib, erasing that rib from the rib semantic segmentation binary mask, inputting the mask into the trained deep convolutional neural network again, and repeating the operation from top to bottom until the rib semantic segmentation binary mask is empty or all ribs have been obtained, thereby obtaining the predicted rib instance segmentation mask.
2. The method for rib instance segmentation, counting and positioning according to claim 1, wherein step 3 specifically comprises: according to the information provided by the rib instance segmentation mask, the rib semantic segmentation binary mask is copied and the copies respectively keep the 1st and later ribs, the 2nd and later ribs, the 3rd and later ribs, the 4th and later ribs, the 5th and later ribs, the 6th and later ribs, the 7th and later ribs, the 8th and later ribs, the 9th and later ribs, the 10th and later ribs, the 11th and later ribs, and the 12th and later ribs; each rib semantic segmentation binary mask thus yields 12 training input images, and the topmost rib of each training input image is labeled to obtain the corresponding training target image.
3. The rib instance segmentation, counting and positioning method according to claim 1, wherein in the step 4, the deep neural network model uses 3D UNet as a backbone network, including multiple layers of 3D neurons.
4. The rib instance segmentation, counting and positioning method of claim 3, wherein the 3D neurons form 10 layers, namely an input layer, a convolutional layer, dimension-reducing convolutional layers, skip-connected deconvolution layers, a convolutional layer and an output layer.
5. The method for rib instance segmentation, counting and positioning according to claim 1, wherein in step 5 the training input images are subjected to online data augmentation during training.
6. The rib instance segmentation, counting and positioning method according to claim 5, wherein the online data augmentation comprises random translation, random rotation and random scaling.
7. The rib instance segmentation, counting and positioning method according to claim 1, wherein in step 5, when training the deep convolutional neural network, cross entropy and Dice are used as loss functions, the Adam optimization algorithm is used as the learning algorithm, and the neuron parameters of each layer are regularized using L2 weight decay.
8. The method for rib instance segmentation, counting and positioning according to claim 1, wherein in step 5, when training each stage of the neural network architecture, the learning rate of each iteration is less than or equal to the learning rate of the previous iteration.
9. The rib instance segmentation, counting and positioning method according to claim 1, wherein in step 1, the obtained rib semantic segmentation binary mask is denoised, and small noise points with a volume smaller than a preset value in the semantic segmentation mask are deleted.
10. A rib instance segmentation, counting and positioning system, characterized by comprising the following modules:
the rib semantic segmentation binary mask module: performing semantic segmentation on ribs in chest CT to obtain a rib semantic segmentation binary mask;
a rib instance segmentation mask module: collecting manual labels corresponding to the rib semantic segmentation binary mask to obtain a rib instance segmentation mask;
a training sample set module: processing the rib semantic segmentation binary mask and the rib instance segmentation mask to produce a sequence segmentation training sample set;
a neural network model module: establishing a deep convolutional neural network model;
a training module: training each level of neural network architecture in the deep convolutional neural network model by adopting the training sample set to obtain a deep convolutional neural network capable of predicting the topmost rib;
a prediction module: inputting the rib semantic segmentation binary mask into the trained deep convolutional neural network, obtaining a prediction mask of the topmost remaining rib, erasing that rib from the rib semantic segmentation binary mask, inputting the mask into the trained deep convolutional neural network again, and repeating the operation from top to bottom until the rib semantic segmentation binary mask is empty or all ribs have been obtained, thereby obtaining the predicted rib instance segmentation mask.
CN202111364466.6A 2021-11-17 2021-11-17 Method and system for rib instance segmentation, counting and positioning Pending CN114049358A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111364466.6A CN114049358A (en) 2021-11-17 2021-11-17 Method and system for rib case segmentation, counting and positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111364466.6A CN114049358A (en) 2021-11-17 2021-11-17 Method and system for rib case segmentation, counting and positioning

Publications (1)

Publication Number Publication Date
CN114049358A true CN114049358A (en) 2022-02-15

Family

ID=80209931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111364466.6A Pending CN114049358A (en) 2021-11-17 2021-11-17 Method and system for rib case segmentation, counting and positioning

Country Status (1)

Country Link
CN (1) CN114049358A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035145A (en) * 2022-05-05 2022-09-09 深圳市铱硙医疗科技有限公司 Blood vessel and bone segmentation method and device, computer device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190066294A1 (en) * 2017-08-31 2019-02-28 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image segmentation
CN111080592A (en) * 2019-12-06 2020-04-28 广州柏视医疗科技有限公司 Rib extraction method and device based on deep learning
KR20200110532A (en) * 2019-03-14 2020-09-24 고려대학교 산학협력단 Machine learning-based thoracic vertebrae detection and rib numbering of CT/MRI images
CN111915620A (en) * 2020-06-19 2020-11-10 杭州深睿博联科技有限公司 CT rib segmentation method and device
CN112529849A (en) * 2020-11-27 2021-03-19 北京深睿博联科技有限责任公司 Automatic counting method and device for CT ribs
CN112950552A (en) * 2021-02-05 2021-06-11 慧影医疗科技(北京)有限公司 Rib segmentation marking method and system based on convolutional neural network
US20210312629A1 (en) * 2020-04-07 2021-10-07 Shanghai United Imaging Intelligence Co., Ltd. Methods, systems and apparatus for processing medical chest images



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination