CN113793357A - Bronchopulmonary segment image segmentation method and system based on deep learning - Google Patents

Bronchopulmonary segment image segmentation method and system based on deep learning Download PDF

Info

Publication number
CN113793357A
CN113793357A
Authority
CN
China
Prior art keywords
image
lung
segmentation
bronchus
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110769339.8A
Other languages
Chinese (zh)
Inventor
袁康
何毅
杨健程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Diannei Shanghai Biotechnology Co ltd
Original Assignee
Diannei Shanghai Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Diannei Shanghai Biotechnology Co ltd filed Critical Diannei Shanghai Biotechnology Co ltd
Priority to CN202110769339.8A priority Critical patent/CN113793357A/en
Publication of CN113793357A publication Critical patent/CN113793357A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a deep-learning-based bronchopulmonary segment image segmentation method and system. The method comprises the following steps: S1: extracting a lung image and performing bronchial segmentation on it to obtain a bronchial segmentation image; S2: classifying and labeling key points of the bronchial segmentation image to obtain a bronchial key point labeled image, and segmenting the labeled image to obtain a lung segment segmentation image. By first extracting the lung image, segmenting the bronchial image and labeling the data, and only then segmenting the lung segments, the method improves the clarity and accuracy of segmentation at bronchial branches and terminal ends, improves the stability of lung segment segmentation, simplifies the workflow, and reduces the dependence on labeled data.

Description

Bronchopulmonary segment image segmentation method and system based on deep learning
Technical Field
The invention relates to the technical field of image segmentation, in particular to a bronchopulmonary segment image segmentation method and system based on deep learning.
Background
Locating a lesion is an important step in the diagnosis and treatment of lung disease, so completing bronchial segmentation and, on that basis, lung segment segmentation and localization is a valuable technical means. In the past, bronchial segmentation and lung segment localization were outlined manually by physicians. Because every patient's lungs differ and bronchial variation is complex, physicians often need to spend a great deal of time, and the segmentation quality varies, which affects the design and execution of surgical plans. In recent years, with the development of machine learning and deep learning technologies, computer-aided techniques can improve the speed, accuracy and stability of bronchial segmentation and lung segment segmentation.
Many current bronchial segmentation methods are based either on threshold segmentation from traditional image algorithms or on direct prediction with a convolutional neural network, and neither can accurately segment bronchial branches and terminal ends. In the lung segment segmentation task, directly predicting with a model can yield unstable results and unclosed segmentation surfaces, while complete lung segment segmentation data is expensive to annotate and of mediocre quality, so a model cannot accurately learn the characteristics of lung segment boundary surfaces.
Disclosure of Invention
The main object of the invention is to overcome the defects of the prior art and to provide a bronchopulmonary segment image segmentation method and system based on deep learning.
According to one aspect of the present invention, the present invention provides a method for segmenting bronchopulmonary segment images based on deep learning, comprising the following steps:
S1: extracting a lung image, and performing bronchial segmentation on the lung image to obtain a bronchial segmentation image;
S2: performing key point classification and labeling on the bronchus segmentation image to obtain a bronchus key point labeling image; and segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
Preferably, the performing bronchial segmentation on the lung image to obtain a bronchial segmentation image includes:
carrying out threshold segmentation, normalization, data enhancement, resampling and oversampling on the image to obtain model input data;
respectively calculating the lung image data with low resolution and high resolution by adopting a two-stage 3D-UNet network, and outputting a segmentation result;
performing maximum connected domain analysis on the segmentation result, and screening out the region with the largest number of voxels together with the connected regions whose volume exceeds a preset number of voxels;
and performing morphological adjustment and connection processing on the branch fracture part of the bronchus on the screened area to obtain a bronchus segmentation image.
Preferably, the bronchus segmentation image comprises a bronchus foreground channel and a background channel; the loss functions of the foreground channel and the background channel are formed by combining cross entropy loss and Dice loss of the two classes according to different weights, and the combined weight is automatically adjusted along with the change of the loss function value; the formula of the cross entropy loss and the Dice loss is as follows:
H(p, q) = -Σ_x p(x) · log q(x)
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein H(p, q) is the cross entropy loss, p(x) is the true probability distribution, q(x) is the predicted probability distribution, X is the predicted binary segmentation image, and Y is the labeled binary segmentation image.
Preferably, the performing key point classification and labeling on the bronchus segmentation image to obtain a bronchus key point labeling image includes:
classifying and labeling the bronchus key points with a support vector machine (a machine learning model) trained with a loss function to obtain lung lobe labeling data, and determining the basic segmentation plane of each lung segment according to the hyperplane parameters learned by the model, wherein the loss function is the Hinge function:
hinge(y)=max(0,1-y·y′)
wherein y is binary labeling data (0 or 1), and y' is a model prediction value.
Preferably, the segmenting the bronchus key point labeling image to obtain a lung segment segmentation image includes:
loading the hyperplane parameters into the corresponding lung lobe regions based on the lung lobe labeling data, collecting the coordinates of all lung lobe voxels, and feeding them in turn into the support vector machine model, so that every lung lobe voxel is assigned to its corresponding lung segment class;
and, if lung lobe labeling data are missing, performing probability statistics on the position and proportion of the lung segments whose labeling data are missing, calculating their position coordinates, supplementing the lung lobe labeling data according to those coordinates, and automatically completing the missing lung segments in the corresponding regions.
According to another aspect of the present invention, there is also provided a deep learning based broncho-pulmonary segment image segmentation system, comprising:
the bronchus segmentation device is used for extracting a lung image and performing bronchus segmentation on the lung image to obtain a bronchus segmentation image;
the lung segment segmentation device is used for classifying and labeling key points of the bronchial segmented image to obtain a bronchial key point labeled image; and segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
Preferably, the bronchial segmentation apparatus comprises:
the image preprocessing module is used for carrying out threshold segmentation, normalization, data enhancement, resampling and oversampling on the image to obtain model input data;
the lung segmentation module is used for calculating the low-resolution and high-resolution lung image data by adopting a two-stage 3D-UNet network and outputting segmentation results;
the screening module is used for performing maximum connected domain analysis on the segmentation result and screening out the region with the largest number of voxels together with the connected regions whose volume exceeds a preset number of voxels;
and the optimization module is used for performing morphological adjustment and connection processing on the branch fracture part of the bronchus on the screened area to obtain a bronchus segmentation image.
Preferably, the bronchus segmentation image comprises a bronchus foreground channel and a background channel; the loss functions of the foreground channel and the background channel are formed by combining cross entropy loss and Dice loss of the two classes according to different weights, and the combined weight is automatically adjusted along with the change of the loss function value; the formula of the cross entropy loss and the Dice loss is as follows:
H(p, q) = -Σ_x p(x) · log q(x)
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein H(p, q) is the cross entropy loss, p(x) is the true probability distribution, q(x) is the predicted probability distribution, X is the predicted binary segmentation image, and Y is the labeled binary segmentation image.
Preferably, the lung segment segmentation apparatus comprises:
the key point classification module is used for classifying and labeling the bronchus key points with a support vector machine (a machine learning model) trained with a loss function to obtain lung lobe labeling data, and for determining the basic segmentation plane of each lung segment according to the hyperplane parameters learned by the model, wherein the loss function is the Hinge function:
hinge(y)=max(0,1-y·y′)
wherein y is binary labeling data (0 or 1), and y' is a model prediction value.
Preferably, the lung segment segmentation apparatus further comprises:
the lung lobe segmentation module is used for loading the hyperplane parameters into the corresponding lung lobe regions based on the lung lobe labeling data, collecting the coordinates of all lung lobe voxels, and feeding them in turn into the support vector machine model, so that every lung lobe voxel is assigned to its corresponding lung segment class;
and the lung segment completion module is used for, if lung lobe labeling data are missing, performing probability statistics on the position and proportion of the lung segments whose labeling data are missing, calculating their position coordinates, supplementing the lung lobe labeling data according to those coordinates, and automatically completing the missing lung segments in the corresponding regions.
Advantageous effects: by extracting the lung image, first segmenting the bronchial image and labeling the data, and only then segmenting the lung segments, the method improves the clarity and accuracy of segmentation at bronchial branches and terminal ends, improves the stability of lung segment segmentation, simplifies the workflow, and reduces the dependence on labeled data.
The features and advantages of the present invention will become apparent by reference to the following drawings and detailed description of specific embodiments of the invention.
Drawings
Fig. 1 is a flowchart of a bronchopulmonary segment image segmentation method based on deep learning;
FIG. 2 is a bronchial segmentation image provided by an embodiment of the present invention;
FIG. 3 is a bronchus key point labeling image according to an embodiment of the present invention;
FIG. 4 is a bronchus keypoint interface image provided by an embodiment of the present invention;
FIG. 5 is a segmented image of a lung segment provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a deep learning based broncho-pulmonary segment image segmentation system;
FIG. 7 is a schematic view of the structure of the bronchial segmentation apparatus;
Fig. 8 is a schematic structural diagram of the lung segment segmentation apparatus.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Fig. 1 is a flowchart of a bronchopulmonary segment image segmentation method based on deep learning. As shown in fig. 1, the present invention provides a method for segmenting bronchopulmonary segment image based on deep learning, which includes the following steps:
S1: extracting a lung image, and performing bronchial segmentation on the lung image to obtain a bronchial segmentation image.
The method comprises the following specific steps:
S11, performing threshold segmentation, normalization, data enhancement, resampling and oversampling on the image to obtain model input data.
The CT image is segmented with a binarization threshold from a traditional image algorithm to extract the lung image, and preprocessing operations are then performed. These include normalizing the CT values and data enhancement, the latter comprising scaling and rotating the image, adding Gaussian noise and blur, adjusting brightness and contrast, low-resolution simulation and mirroring. The resampling operation adjusts the spacing of the lung image: setting the spacing above 1.5 yields a low-resolution image, while setting it below 1 yields a high-resolution image. Linear interpolation and third-order spline interpolation are used during resampling, and the images at the two resolutions are then fed to the two-stage bronchial segmentation model for training and prediction. Based on the volume of the lung image, the image is cut with a slicing operation to reduce the data size of each model input, and an oversampling technique keeps the foreground-to-background ratio in each input sample at roughly 50%.
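As a rough illustration of this preprocessing step, the following Python sketch windows and normalizes the CT values and resamples to a target spacing with third-order spline interpolation; the function name and signature are illustrative, and the numeric defaults follow values quoted in this description (CT window -1000 to 600, spacing 1.5 for the low-resolution input and 0.7 for the high-resolution input).

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(ct_hu, spacing, target_spacing=1.5, hu_min=-1000.0, hu_max=600.0):
    """Minimal preprocessing sketch: window the CT values, z-score normalise them,
    and resample to the target spacing with third-order spline interpolation."""
    ct = np.clip(ct_hu, hu_min, hu_max).astype(np.float32)
    ct = (ct - ct.mean()) / (ct.std() + 1e-8)            # normalise to roughly N(0, 1)
    zoom_factor = [s / target_spacing for s in spacing]  # per-axis resampling factor
    return ndimage.zoom(ct, zoom_factor, order=3)        # third-order spline interpolation

# usage sketch: the two inputs of the two-stage model
# ct_low  = preprocess_ct(ct_hu, spacing, target_spacing=1.5)
# ct_high = preprocess_ct(ct_hu, spacing, target_spacing=0.7)
```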
And S12, adopting a two-stage 3D-UNet network to respectively calculate the lung image data with low resolution and high resolution, and outputting a segmentation result.
In order to improve the performance of a segmentation model at a bronchial thin branch, the invention provides a two-stage deep learning model, wherein a 3D-UNet neural network is adopted, a complete low-resolution lung image CT value is input in the 3D-UNet network in the first stage, a structurally complete rough segmentation result is obtained through 12 convolutional layers, 4 times of downsampling and 4 times of upsampling, the high-resolution lung image is combined to be used as the input of the 3D-UNet network in the second stage, and a fine segmentation result with richer detail information is obtained through 5 times of downsampling and 5 times of upsampling, so that the segmentation effect of the segmentation model at the bronchial thin branch is improved.
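The coarse-to-fine inference described above can be sketched as follows; stage1 and stage2 stand for the trained first- and second-stage 3D-UNet modules, and concatenating the upsampled coarse probability map with the high-resolution image is one plausible way to combine the two stages (the description does not spell out the exact fusion).

```python
import torch
import torch.nn.functional as F

def two_stage_predict(stage1, stage2, ct_low, ct_high):
    """Coarse-to-fine inference sketch: stage1 sees the whole low-resolution volume,
    its foreground probability map is upsampled and concatenated with the
    high-resolution volume as a second input channel for stage2."""
    with torch.no_grad():
        coarse = torch.softmax(stage1(ct_low), dim=1)            # N x 2 x D x H x W
        coarse_up = F.interpolate(coarse[:, 1:2], size=ct_high.shape[2:],
                                  mode="trilinear", align_corners=False)
        fine_in = torch.cat([ct_high, coarse_up], dim=1)          # image + coarse prob
        fine = torch.softmax(stage2(fine_in), dim=1)
    return fine.argmax(dim=1)                                     # binary bronchus mask
```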
The bronchus segmentation image comprises a bronchus foreground channel and a background channel; the loss functions of the foreground channel and the background channel are formed by combining cross entropy loss and Dice loss of the two classes according to different weights, and the combined weight is automatically adjusted along with the change of the loss function value; the formula of the cross entropy loss and the Dice loss is as follows:
H(p, q) = -Σ_x p(x) · log q(x)
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein H(p, q) is the cross entropy loss, p(x) is the true probability distribution, q(x) is the predicted probability distribution, X is the predicted binary segmentation image, and Y is the labeled binary segmentation image.
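A minimal sketch of such a combined loss, with fixed mixing weights (the description says the weights are adjusted automatically as the loss evolves):

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, w_ce=0.5, w_dice=0.5, eps=1e-6):
    """Two-class cross entropy plus a Dice term on the foreground channel.
    logits: N x 2 x D x H x W; target: integer class map (0 background, 1 bronchus)."""
    ce = F.cross_entropy(logits, target)
    prob = torch.softmax(logits, dim=1)[:, 1]        # foreground probability
    tgt = (target == 1).float()
    inter = (prob * tgt).sum()
    dice = (2 * inter + eps) / (prob.sum() + tgt.sum() + eps)
    return w_ce * ce + w_dice * (1 - dice)
```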
And S13, performing maximum connected domain analysis on the segmentation result, and screening out a region with the maximum number of pixels and a connected region with the volume larger than a preset number of voxels.
And marking the most connected region voxels and the connected regions with the number of voxels larger than 200 by adopting a connected region algorithm to obtain a basic result of the bronchial segmentation.
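A possible implementation of this screening step with scikit-image, keeping the largest connected component plus all components above the 200-voxel threshold mentioned above (the function name and the exact connectivity are assumptions):

```python
import numpy as np
from skimage import measure

def keep_main_components(mask, min_voxels=200):
    """Keep the largest connected component (the bronchial tree trunk) plus any
    component larger than min_voxels."""
    labels = measure.label(mask, connectivity=1)
    if labels.max() == 0:
        return mask
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                  # ignore the background label
    keep = set(np.where(sizes > min_voxels)[0]) | {sizes.argmax()}
    return np.isin(labels, list(keep)).astype(mask.dtype)
```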
And S14, performing morphological adjustment and connection processing on the branch fracture of the bronchus on the screened area to obtain a bronchus segmentation image.
As shown in fig. 2, the image is finely adjusted by using basic morphological operations such as erosion, dilation, opening operation and closing operation, and is positioned to the end point position of the fracture, and connected by using line segments, so as to obtain the final bronchus segmentation map.
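The morphological clean-up can be sketched with SciPy's binary closing; the 3x3x3 structuring element and roughly three iterations follow parameters given later in the description, and a sketch of the endpoint reconnection appears later, in Example 2.

```python
from scipy import ndimage

def smooth_bronchus(mask, iterations=3):
    """A few dilation/erosion (closing) passes with a full 3x3x3 structuring element
    to smooth the surface and fill small holes before the broken ends are reconnected."""
    struct = ndimage.generate_binary_structure(3, 3)   # full 3x3x3 neighbourhood
    closed = ndimage.binary_closing(mask, structure=struct, iterations=iterations)
    return closed.astype(mask.dtype)
```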
S2: performing key point classification and labeling on the bronchus segmentation image to obtain a bronchus key point labeling image; and segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
The method comprises the following specific steps:
S21, performing key point classification and labeling on the bronchus segmentation image to obtain a bronchus key point labeling image.
Referring to fig. 3, a support vector machine (a machine learning model) is trained with a loss function to classify and label the bronchial key points, obtaining lung lobe labeling data, and the basic segmentation plane of each lung segment is determined from the hyperplane parameters learned by the model; the resulting segmentation plane is shown in fig. 4. The loss function is the Hinge function:
hinge(y)=max(0,1-y·y′)
wherein y is the binary labeling data (0 or 1), y' is the model prediction value, and the bronchial key points are divided into 18 classes.
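A minimal sketch of fitting such a key point classifier with scikit-learn's SVC, which optimizes the standard hinge-loss SVM objective; the kernel, C=1 and degree=3 follow parameters quoted later in the description, and the arrays here are dummy placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# dummy stand-ins: (x, y, z) coordinates of the 36 labelled bronchial key points
# (start and end point of each of the 18 segment-level branches) and their classes 1..18
keypoint_xyz = np.random.rand(36, 3)
keypoint_cls = np.repeat(np.arange(1, 19), 2)

svm = SVC(kernel="poly", degree=3, C=1.0)   # "linear" is the other kernel option named
svm.fit(keypoint_xyz, keypoint_cls)
pred = svm.predict(keypoint_xyz)            # predicted segment class per key point
```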
And S22, segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
Preferably, the hyper-parameters are loaded to the corresponding lung lobe areas based on the lung lobe labeling data, coordinates of all lung lobe voxels are sorted out and are sequentially input into a model of a support vector machine, and therefore the lung lobe voxels are completely corresponding to the classification of each lung segment.
Specifically, referring to fig. 5, based on the lung lobe labeling data, the parameters of the hyperplane are loaded to the corresponding lung lobe region, coordinates of all lung lobe voxels are sorted out, and the coordinates are sequentially input into the model of the support vector machine, so that 18 classifications corresponding to each lung segment are completed for 5 classes of lung lobe voxels.
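The voxel-wise assignment can be sketched as follows; lobe_svms is assumed to be a dictionary of per-lobe support vector machine models trained as above, and lobe_labels a 3-D array of lobe indices 1 to 5 (both names are placeholders).

```python
import numpy as np

def classify_lobe_voxels(lobe_labels, lobe_svms):
    """Assign every lung lobe voxel to a lung segment class: for each of the 5 lobes,
    collect the voxel coordinates and feed them to that lobe's trained SVM, so that
    the 18 segment classes are distributed over the 5 lobes."""
    segments = np.zeros_like(lobe_labels, dtype=np.int16)
    for lobe_id, svm in lobe_svms.items():               # e.g. {1: svm_lobe1, 2: svm_lobe2, ...}
        coords = np.argwhere(lobe_labels == lobe_id)     # N x 3 voxel coordinates
        if len(coords) == 0:
            continue
        segments[tuple(coords.T)] = svm.predict(coords)  # segment class per voxel
    return segments
```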
Preferably, if the lung lobe labeling data are missing, probability statistics of lung segment positions and proportions are performed on the lung segment positions missing the labeling data, position coordinates of the lung segment positions are calculated, the lung lobe labeling data are supplemented according to the position coordinates, and then automatic completion of the missing lung segments is performed in corresponding areas.
Specifically, for the problem of lung segment incompleteness caused by the missing of key points which may occur, the positions and proportions of the lung segments are counted, the position with the highest probability of occurrence of each lung segment is calculated, and then the missing lung segments are automatically completed in the corresponding region.
Through two-stage model training, the method first predicts the overall structure on the low-resolution image and then performs fine detail segmentation on the high-resolution image, improving the accuracy of thin bronchial branch segmentation.
The machine-learning-based lung segment segmentation method provided by the invention labels and classifies bronchial key points, removing the dependence on lung segment annotation data and realizing lung segment segmentation quickly and at low cost; this helps doctors locate the lesion site of a patient and assists the design and execution of a surgical plan.
The automatic lung segment completion method provided by the invention can, when the labeled data are incomplete, compute statistics over the data set of the relative positions, volume ratios and adjacency relationships of lung segments within the lung lobes and then complete the missing lung segments, ensuring that the segmentation result contains information for all lung segments.
Example 2
Fig. 6 is a schematic diagram of a bronchopulmonary segment image segmentation system based on deep learning. As shown in fig. 6, the present invention further provides a deep learning based broncho-pulmonary segment image segmentation system, which includes:
the bronchus segmentation device is used for extracting a lung image and performing bronchus segmentation on the lung image to obtain a bronchus segmentation image;
the lung segment segmentation device is used for classifying and labeling key points of the bronchial segmented image to obtain a bronchial key point labeled image; and segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
Preferably, the bronchial segmentation apparatus comprises:
and the image preprocessing module is used for carrying out threshold segmentation, normalization, data enhancement, resampling and oversampling on the image to obtain model input data.
Specifically, the image preprocessing module comprises a threshold segmentation module, a normalization module, a data enhancement module, a resampling module and an oversampling module.
In the threshold segmentation module, voxels in the CT image above a binarization threshold of 0.5 are set to 1 and marked as lung voxels, and the remaining voxels are set to 0 and marked as background voxels, giving a binary image. Multiplying the binary image by the original CT image retains the voxel values of the lung region and sets the remaining positions to 0.
In the normalization module, the window level of the CT value is set to be-1000 to 600, and the voxel values of the lung region are normalized to be on a normal distribution with the mean value of 0 and the variance of 1.
In the data enhancement module, the image is rotated and scaled: scaling and rotation occur with a probability of 20%, the scaling ratio is kept between 70% and 140%, and the rotation angle is within plus or minus 30 degrees. Gaussian noise and blur are sampled from a Gaussian distribution with mean 0 and variance 0.1 and applied with a probability of 15%; brightness and contrast are adjusted within plus or minus 30% with a probability of 15%; mirror flipping is applied on each axis with a probability of 50%.
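A sketch of this augmentation schedule using NumPy/SciPy stand-ins for the actual pipeline (Gaussian blur omitted for brevity; the probabilities and ranges follow the values above):

```python
import numpy as np
from scipy import ndimage

def augment(volume, rng=np.random.default_rng()):
    """Random augmentation: 20% scale/rotate (scale 0.7-1.4, rotation +/-30 degrees
    about one axis), 15% Gaussian noise (sigma 0.1), 15% brightness/contrast jitter
    within +/-30%, and a 50% mirror flip per axis."""
    if rng.random() < 0.2:
        volume = ndimage.zoom(volume, rng.uniform(0.7, 1.4), order=1)
        volume = ndimage.rotate(volume, rng.uniform(-30, 30), axes=(1, 2),
                                reshape=False, order=1)
    if rng.random() < 0.15:
        volume = volume + rng.normal(0.0, 0.1, volume.shape)
    if rng.random() < 0.15:
        volume = volume * rng.uniform(0.7, 1.3) + rng.uniform(-0.3, 0.3)
    for axis in range(volume.ndim):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    return volume
```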
In the resampling module, images with different resolutions can be obtained by adjusting the spacing (spacing) of the images, third-order spline interpolation is selected for lung image data, a linear interpolation method is used for labeling data, when low resolution is obtained, the spacing is set to 1.5, an image with 180X160 resolution can be obtained, and when high resolution is obtained, the spacing is set to 0.7, an image with 520X480 resolution can be obtained.
In the oversampling module, because GPU memory is limited, the whole image cannot be fed into the model at once, so the lung image is first randomly cropped with a patch size of 128X128; to ensure that most input samples contain foreground information, the oversampling technique keeps the proportion of samples containing foreground (rather than only background) at no less than 50%.
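The random cropping with foreground oversampling might look like the following sketch; patch_size=128 follows the cutting size quoted above, and re-centering half of the patches on a foreground voxel is one simple way to keep the foreground proportion at about 50%.

```python
import numpy as np

def sample_patch(image, label, patch_size=128, rng=np.random.default_rng()):
    """Crop one training patch; with probability 0.5 the patch is centred on a random
    foreground (bronchus) voxel so that roughly half of the patches contain foreground."""
    shape = np.array(image.shape)
    if rng.random() < 0.5 and label.any():
        centre = rng.choice(np.argwhere(label > 0))           # random foreground voxel
    else:
        centre = rng.integers(0, shape)                       # fully random centre
    lo = np.clip(centre - patch_size // 2, 0, np.maximum(shape - patch_size, 0))
    hi = lo + patch_size
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return image[sl], label[sl]
```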
And the lung segmentation module is used for calculating the low-resolution and high-resolution lung image data by adopting a two-stage 3D-UNet network respectively and outputting segmentation results.
Specifically, the two-stage deep learning network model uses a 3D UNet, in which the encoder learns image features at every scale layer by layer through convolution and downsampling, and the decoder restores object detail and spatial dimensions layer by layer through convolution and upsampling. In this application, a convolution kernel size of 3X3X3 is chosen. The first-stage 3D-UNet takes the CT values of the complete low-resolution lung image as input, first applies 4 downsampling steps (keeping the resulting feature map no smaller than 4X4X4) and then 4 upsampling steps to obtain a coarse segmentation probability map; using the low-resolution image as the first-stage input avoids the slicing operation, so the whole volume is fed in and more structural information is retained. The coarse segmentation result is combined with the high-resolution lung image as the input of the second-stage 3D-UNet, and after 5 downsampling and 5 upsampling steps a fine segmentation result with rich detail is obtained, improving the segmentation at thin bronchial branches.
And the screening module is used for carrying out maximum connected domain analysis on the segmentation result and screening out the region with the maximum number of pixels and the connected region of the voxels with the volume larger than the preset number.
Specifically, a two-pass connected-component algorithm marks the connected region with the most voxels as the main body of the bronchus, and connected regions containing more than 200 and fewer than 800 voxels are retained. The shape attributes of each region are obtained with the RegionProps module in Scikit-Image: columnar regions are kept as broken branches, and the rest are discarded as noise, giving the basic result of the bronchial segmentation.
And the optimization module is used for performing morphological adjustment and connection processing on the branch fracture part of the bronchus on the screened area to obtain a bronchus segmentation image.
Specifically, surface smoothing and hole filling are performed on the image with basic morphological operations. The preliminary bronchial segmentation result has a high probability of leakage and breaks; leakage appears as surface irregularity, i.e. lung interstitium mistakenly segmented as bronchus. Here a dilation operation is performed first: a 512x512x512 binary segmentation image is input, every voxel is traversed, and if a voxel has value 0 but a voxel of value 1 exists in its 3x3x3 neighborhood, its value is set to 1. An erosion operation is then performed: all voxels are traversed again, and if a voxel has value 1 but a voxel of value 0 exists in its 3x3x3 neighborhood, its value is set to 0. By iterating this dilation-erosion (i.e. closing) operation about 3 times, the surface of the bronchus segmentation image is smoothed.
In the optimization module, skeleton extraction and endpoint positioning are used to reconnect the broken parts of the bronchus. A bronchial break usually appears as a missing pixel connection between a thin branch and the main trunk. For this case, noise is filtered out by the screening module 103 and the trunk and the fine branches are retained; the K3M algorithm of Khalid Saeed is then used to extract the bronchial skeleton, eroding the boundary of the foreground of the binary image with a 3x3x3 kernel and iterating several times until a skeleton one pixel wide is obtained, which is taken as the bronchial skeleton. Based on this skeleton, a quadrant judgment algorithm locates the skeleton endpoints: a 5x5x5 convolution kernel traverses the whole image, and voxels whose convolution result is less than or equal to 3 are endpoints. This is done separately for the bronchial trunk and the broken branches to locate all endpoints, the Euclidean distance between each trunk endpoint and each branch endpoint is computed, and the pair with the shortest distance is selected; these positions are regarded as the break, and a pixel connection is made between them. Specifically, a straight line one pixel wide joins the two endpoints, and the branch diameter is calculated with the RegionProps method and used as the parameter of a dilation operation on the line, so that the thickness at the reconnected break is consistent and the transition is natural.
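A sketch of the break-reconnection step; scikit-image's skeletonize stands in for the K3M thinning named above, the endpoint criterion (5x5x5 neighborhood count <= 3) follows the text, and the function and variable names are placeholders.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist
from skimage.morphology import skeletonize

def closest_break_endpoints(trunk, branch):
    """Thin each binary mask to a skeleton, mark endpoints as skeleton voxels whose
    5x5x5 neighbourhood contains at most 3 skeleton voxels, and return the closest
    trunk/branch endpoint pair (to be joined by a 1-voxel-wide line)."""
    def endpoints(mask):
        skel = skeletonize(mask.astype(bool))
        count = ndimage.convolve(skel.astype(np.uint8), np.ones((5, 5, 5)), mode="constant")
        return np.argwhere(skel & (count <= 3))
    ep_trunk, ep_branch = endpoints(trunk), endpoints(branch)
    dist = cdist(ep_trunk, ep_branch)                  # Euclidean distances between endpoints
    i, j = np.unravel_index(dist.argmin(), dist.shape)
    return ep_trunk[i], ep_branch[j]
```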
Preferably, the bronchus segmentation image comprises a bronchus foreground channel and a background channel; the loss functions of the foreground channel and the background channel are formed by combining cross entropy loss and Dice loss of the two classes according to different weights, and the combined weight is automatically adjusted along with the change of the loss function value; the formula of the cross entropy loss and the Dice loss is as follows:
H(p, q) = -Σ_x p(x) · log q(x)
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein H(p, q) is the cross entropy loss, p(x) is the true probability distribution, q(x) is the predicted probability distribution, X is the predicted binary segmentation image, and Y is the labeled binary segmentation image.
Preferably, the lung segment segmentation apparatus comprises:
the key point classification module is used for classifying and labeling the bronchus key points with a support vector machine (a machine learning model) trained with a loss function to obtain lung lobe labeling data, and for determining the basic segmentation plane of each lung segment according to the hyperplane parameters learned by the model, wherein the loss function is the Hinge function:
hinge(y)=max(0,1-y·y′)
wherein y is binary labeling data (0 or 1), and y' is a model prediction value.
Specifically, after the bronchus segmentation device has completed the bronchial segmentation of the lung CT image, the bronchial image is processed with the skeleton extraction algorithm in the optimization module to obtain a skeleton one pixel wide; a 3x3x3 convolution kernel traverses the whole image, and voxels whose convolution result is less than or equal to 5 are retained as key points. The key points of the segment-level bronchial branches are then screened and labeled. According to medical prior knowledge, the terminal branches of the bronchus correspond to 18 lung segments, which are labeled 1 to 18 in order: the apical, posterior, superior lingular and inferior lingular segments of the left upper lobe; the basal, lateral basal, anterior basal and posterior basal segments of the left lower lobe; the apical, posterior and anterior segments of the right upper lobe; the medial and lateral segments of the right middle lobe; and the medial basal, lateral basal, anterior basal, posterior basal and dorsal segments of the right lower lobe. From the key points, the 36 labeled points corresponding to the start and end points of the 18 lung segments are screened out and used as labeling data for the subsequent lung segment segmentation. Compared with complete lung segment annotation, screening bronchial key points is faster and more stable and avoids the labeling differences caused by different annotators. Based on the 18 pairs of start-point and end-point labels of the segment-level bronchial branches, the coordinates and corresponding labels are arranged into the data format [[x1, y1], [x2, y2] … [x18, y18]] as the input of the machine learning model. The support vector machine, a classical machine learning algorithm, can position an interface according to the labeled coordinate data and thereby assist in determining the lung segment segmentation plane.
In this embodiment, a linear kernel or a polynomial kernel (poly kernel) is selected to map the samples, the regularization coefficient is set to 1, the maximum degree of the polynomial is set to 3, and the loss function is the Hinge loss. The labeled data are fed into the machine learning model so that the hyperplanes converge, and the model parameters are recorded as the basic segmentation planes of the lung segments.
Preferably, the lung segment segmentation apparatus further comprises:
the lung lobe segmentation module loads the hyperplane parameters into the corresponding lung lobe regions based on the lung lobe labeling data, collects the coordinates of all lung lobe voxels, and feeds them in turn into the support vector machine model, so that every lung lobe voxel is assigned to its corresponding lung segment class.
Specifically, the coordinates and corresponding labels of the lung lobe voxels are arranged into the data format [[x1, y1], [x2, y2] … [x5, y5]] and, according to prior knowledge of the lobe-segment relationship, are fed into the trained support vector machine model in 5 batches. The hyperplanes classify the voxels of the 5 lung lobes into 3, 2, 5, 4 and 4 lung segment classes respectively (18 classes in total), so that every voxel in the lung image is classified either as background or as one of the 18 lung segment foreground classes.
And the lung segment complementing module is used for carrying out probability statistics on the lung segment position and proportion on the lung segment position missing the labeled data if the lung lobe labeled data is missing, calculating the position coordinate of the lung segment position, complementing the lung lobe labeled data according to the position coordinate and further carrying out automatic complementing on the missing lung segment in a corresponding area.
Specifically, considering that bronchial key point labels may be missing, the lung segment segmentation scheme may produce fewer than 18 classes. This embodiment therefore provides a completion scheme based on statistical information: the positions, volume ratios and adjacency relationships of each lung segment relative to its lung lobe are computed over all segmentation results and recorded as percentage information such as { x1: 20%, y1: 40%, z1: 30% }. When the lung lobe segmentation module 202 outputs an incomplete segmentation result, the completion module performs a rough completion according to the position-ratio information of the missing lung segment. For example, if a set of bronchial labeling data contains only 17 groups of branch labels and the apical segment of the right upper lobe (labeled S9) is missing, the model's segmentation result contains only 17 segmentation planes and does not meet the requirement of 18 standard lung segments. The module then queries the position-ratio data of the right upper lobe apical segment S9 relative to the right upper lobe, for example { x9: 30%, y9: 30%, z9: 50% }, relocates the coordinates of the bounding box of the right upper lobe in the image, and multiplies them by the position ratios to obtain the coordinates where the apical segment S9 is most likely to occur. These coordinates are arranged into the data format [x9, y9] and merged into the complete bronchial key point labels [ … [x9, y9], … [x18, y18] ]. The key point labeling data are now supplemented and are fed into the lung lobe segmentation module 202 again; all coordinate label information is put into the machine learning model for a new round of training to obtain updated hyperplane parameters, and the lung lobe voxels are then classified into the 18 lung segment classes to obtain a complete segmentation result.
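The position estimate used by this completion step can be sketched as follows; the statistics dictionary and the variable names are placeholders for the position-ratio records described above.

```python
import numpy as np

def complete_missing_segment(lobe_mask, ratio_xyz):
    """Estimate the most likely position of a missing lung segment inside its lobe:
    take the lobe's bounding box and scale it by the recorded position ratios
    (e.g. {'x': 0.3, 'y': 0.3, 'z': 0.5}); the result is appended to the key point
    labels before the SVM is retrained."""
    coords = np.argwhere(lobe_mask > 0)
    lo, hi = coords.min(axis=0), coords.max(axis=0)          # lobe bounding box
    ratios = np.array([ratio_xyz["x"], ratio_xyz["y"], ratio_xyz["z"]])
    return lo + (hi - lo) * ratios                           # estimated segment position

# usage sketch:
# estimated = complete_missing_segment(right_upper_lobe_mask, {"x": 0.3, "y": 0.3, "z": 0.5})
```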
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A bronchopulmonary segment image segmentation method based on deep learning is characterized by comprising the following steps:
S1: extracting a lung image, and performing bronchial segmentation on the lung image to obtain a bronchial segmentation image;
S2: performing key point classification and labeling on the bronchus segmentation image to obtain a bronchus key point labeling image; and segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
2. The method of claim 1, wherein the performing bronchial segmentation on the lung image to obtain a bronchial segmentation image comprises:
carrying out threshold segmentation, normalization, data enhancement, resampling and oversampling on the image to obtain model input data;
respectively calculating the lung image data with low resolution and high resolution by adopting a two-stage 3D-UNet network, and outputting a segmentation result;
performing maximum connected domain analysis on the segmentation result, and screening out the region with the largest number of voxels together with the connected regions whose volume exceeds a preset number of voxels;
and performing morphological adjustment and connection processing on the branch fracture part of the bronchus on the screened area to obtain a bronchus segmentation image.
3. The method of claim 2, wherein the bronchial segmentation image comprises a bronchial foreground channel and a background channel; the loss functions of the foreground channel and the background channel are formed by combining cross entropy loss and Dice loss of the two classes according to different weights, and the combined weight is automatically adjusted along with the change of the loss function value; the formula of the cross entropy loss and the Dice loss is as follows:
H(p, q) = -Σ_x p(x) · log q(x)
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein H(p, q) is the cross entropy loss, p(x) is the true probability distribution, q(x) is the predicted probability distribution, X is the predicted binary segmentation image, and Y is the labeled binary segmentation image.
4. The method according to claim 3, wherein the classifying and labeling the bronchus segmentation image with the key points to obtain a bronchus key point labeling image comprises:
classifying and labeling the bronchus key points with a support vector machine (a machine learning model) trained with a loss function to obtain lung lobe labeling data, and determining the basic segmentation plane of each lung segment according to the hyperplane parameters learned by the model, wherein the loss function is the Hinge function:
hinge(y)=max(0,1-y·y′)
wherein y is binary labeling data (0 or 1), and y' is a model prediction value.
5. The method of claim 4, wherein the segmenting the bronchus keypoint labeling image to obtain a lung segment segmentation image comprises:
loading the hyperplane parameters into the corresponding lung lobe regions based on the lung lobe labeling data, collecting the coordinates of all lung lobe voxels, and feeding them in turn into the support vector machine model, so that every lung lobe voxel is assigned to its corresponding lung segment class;
and, if lung lobe labeling data are missing, performing probability statistics on the position and proportion of the lung segments whose labeling data are missing, calculating their position coordinates, supplementing the lung lobe labeling data according to those coordinates, and automatically completing the missing lung segments in the corresponding regions.
6. A deep learning based broncho-pulmonary segment image segmentation system, the system comprising:
the bronchus segmentation device is used for extracting a lung image and performing bronchus segmentation on the lung image to obtain a bronchus segmentation image;
the lung segment segmentation device is used for classifying and labeling key points of the bronchial segmented image to obtain a bronchial key point labeled image; and segmenting the bronchus key point labeling image to obtain a lung segment segmentation image.
7. The system of claim 6, wherein the bronchial segmentation device comprises:
the image preprocessing module is used for carrying out threshold segmentation, normalization, data enhancement, resampling and oversampling on the image to obtain model input data;
the lung segmentation module is used for calculating the low-resolution and high-resolution lung image data by adopting a two-stage 3D-UNet network and outputting segmentation results;
the screening module is used for performing maximum connected domain analysis on the segmentation result and screening out the region with the largest number of voxels together with the connected regions whose volume exceeds a preset number of voxels;
and the optimization module is used for performing morphological adjustment and connection processing on the branch fracture part of the bronchus on the screened area to obtain a bronchus segmentation image.
8. The system of claim 7, wherein the bronchial segmentation image includes a bronchial foreground channel and a background channel; the loss functions of the foreground channel and the background channel are formed by combining cross entropy loss and Dice loss of the two classes according to different weights, and the combined weight is automatically adjusted along with the change of the loss function value; the formula of the cross entropy loss and the Dice loss is as follows:
H(p, q) = -Σ_x p(x) · log q(x)
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein H(p, q) is the cross entropy loss, p(x) is the true probability distribution, q(x) is the predicted probability distribution, X is the predicted binary segmentation image, and Y is the labeled binary segmentation image.
9. The system of claim 8, wherein the lung segment segmentation means comprises:
the key point classification module is used for classifying and labeling the bronchus key points with a support vector machine (a machine learning model) trained with a loss function to obtain lung lobe labeling data, and for determining the basic segmentation plane of each lung segment according to the hyperplane parameters learned by the model, wherein the loss function is the Hinge function:
hinge(y)=max(0,1-y·y′)
wherein y is binary labeling data (0 or 1), and y' is a model prediction value.
10. The system of claim 9, wherein the lung segment segmentation device further comprises:
the lung lobe segmentation module is used for loading the hyperplane parameters into the corresponding lung lobe regions based on the lung lobe labeling data, collecting the coordinates of all lung lobe voxels, and feeding them in turn into the support vector machine model, so that every lung lobe voxel is assigned to its corresponding lung segment class;
and the lung segment completion module is used for, if lung lobe labeling data are missing, performing probability statistics on the position and proportion of the lung segments whose labeling data are missing, calculating their position coordinates, supplementing the lung lobe labeling data according to those coordinates, and automatically completing the missing lung segments in the corresponding regions.
CN202110769339.8A 2021-07-07 2021-07-07 Bronchopulmonary segment image segmentation method and system based on deep learning Pending CN113793357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110769339.8A CN113793357A (en) 2021-07-07 2021-07-07 Bronchopulmonary segment image segmentation method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110769339.8A CN113793357A (en) 2021-07-07 2021-07-07 Bronchopulmonary segment image segmentation method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN113793357A true CN113793357A (en) 2021-12-14

Family

ID=79181018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110769339.8A Pending CN113793357A (en) 2021-07-07 2021-07-07 Bronchopulmonary segment image segmentation method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN113793357A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565761A (en) * 2022-02-25 2022-05-31 无锡市第二人民医院 Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
CN116416414A (en) * 2021-12-31 2023-07-11 杭州堃博生物科技有限公司 Lung bronchoscope navigation method, electronic device and computer readable storage medium
CN117830302A (en) * 2024-03-04 2024-04-05 瀚依科技(杭州)有限公司 Optimization method and device for lung segment segmentation, electronic equipment and storage medium
CN117830302B (en) * 2024-03-04 2024-07-30 瀚依科技(杭州)有限公司 Optimization method and device for lung segment segmentation, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140079306A1 (en) * 2012-09-14 2014-03-20 Fujifilm Corporation Region extraction apparatus, method and program
CN107230204A (en) * 2017-05-24 2017-10-03 东北大学 A kind of method and device that the lobe of the lung is extracted from chest CT image
CN110956635A (en) * 2019-11-15 2020-04-03 上海联影智能医疗科技有限公司 Lung segment segmentation method, device, equipment and storage medium
CN111681247A (en) * 2020-04-29 2020-09-18 杭州深睿博联科技有限公司 Lung lobe and lung segment segmentation model training method and device
CN112070790A (en) * 2020-09-11 2020-12-11 杭州微引科技有限公司 Mixed lung segmentation system based on deep learning and image processing
US20210142485A1 (en) * 2019-11-11 2021-05-13 Ceevra, Inc. Image analysis system for identifying lung features

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140079306A1 (en) * 2012-09-14 2014-03-20 Fujifilm Corporation Region extraction apparatus, method and program
CN107230204A (en) * 2017-05-24 2017-10-03 东北大学 A kind of method and device that the lobe of the lung is extracted from chest CT image
US20210142485A1 (en) * 2019-11-11 2021-05-13 Ceevra, Inc. Image analysis system for identifying lung features
CN110956635A (en) * 2019-11-15 2020-04-03 上海联影智能医疗科技有限公司 Lung segment segmentation method, device, equipment and storage medium
CN111681247A (en) * 2020-04-29 2020-09-18 杭州深睿博联科技有限公司 Lung lobe and lung segment segmentation model training method and device
CN112070790A (en) * 2020-09-11 2020-12-11 杭州微引科技有限公司 Mixed lung segmentation system based on deep learning and image processing

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116416414A (en) * 2021-12-31 2023-07-11 杭州堃博生物科技有限公司 Lung bronchoscope navigation method, electronic device and computer readable storage medium
CN116416414B (en) * 2021-12-31 2023-09-22 杭州堃博生物科技有限公司 Lung bronchoscope navigation method, electronic device and computer readable storage medium
CN114565761A (en) * 2022-02-25 2022-05-31 无锡市第二人民医院 Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
CN117830302A (en) * 2024-03-04 2024-04-05 瀚依科技(杭州)有限公司 Optimization method and device for lung segment segmentation, electronic equipment and storage medium
CN117830302B (en) * 2024-03-04 2024-07-30 瀚依科技(杭州)有限公司 Optimization method and device for lung segment segmentation, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108154192B (en) High-resolution SAR terrain classification method based on multi-scale convolution and feature fusion
CN110310287B (en) Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
CN111582294B (en) Method for constructing convolutional neural network model for surface defect detection and application thereof
Andres et al. Segmentation of SBFSEM volume data of neural tissue by hierarchical classification
CN111563902A (en) Lung lobe segmentation method and system based on three-dimensional convolutional neural network
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
US20130216127A1 (en) Image segmentation using reduced foreground training data
CN110070531B (en) Model training method for detecting fundus picture, and fundus picture detection method and device
US20090252429A1 (en) System and method for displaying results of an image processing system that has multiple results to allow selection for subsequent image processing
CN112233129B (en) Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device
CN106340016A (en) DNA quantitative analysis method based on cell microscope image
CN111126127B (en) High-resolution remote sensing image classification method guided by multi-level spatial context characteristics
CN113793357A (en) Bronchopulmonary segment image segmentation method and system based on deep learning
CN111986125A (en) Method for multi-target task instance segmentation
US11037299B2 (en) Region merging image segmentation algorithm based on boundary extraction
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN114140465B (en) Self-adaptive learning method and system based on cervical cell slice image
CN115775226B (en) Medical image classification method based on transducer
Asheghi et al. A comprehensive review on content-aware image retargeting: From classical to state-of-the-art methods
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
CN113609984A (en) Pointer instrument reading identification method and device and electronic equipment
CN114581434A (en) Pathological image processing method based on deep learning segmentation model and electronic equipment
CN111353987A (en) Cell nucleus segmentation method and device
CN113160185A (en) Method for guiding cervical cell segmentation by using generated boundary position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination