CN110874860A - Target extraction method of symmetric supervision model based on mixed loss function - Google Patents

Target extraction method of symmetric supervision model based on mixed loss function Download PDF

Info

Publication number
CN110874860A
CN110874860A (application number CN201911151141.2A)
Authority
CN
China
Prior art keywords
loss function
symmetric
magnetic resonance
image data
nuclear magnetic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911151141.2A
Other languages
Chinese (zh)
Other versions
CN110874860B (en)
Inventor
王宽全 (Wang Kuanquan)
刘亚淑 (Liu Yashu)
骆功宁 (Luo Gongning)
王玮 (Wang Wei)
张恒贵 (Zhang Henggui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201911151141.2A priority Critical patent/CN110874860B/en
Publication of CN110874860A publication Critical patent/CN110874860A/en
Application granted granted Critical
Publication of CN110874860B publication Critical patent/CN110874860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 15/00 3D [Three Dimensional] image rendering
          • G06T 7/00 Image analysis
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/12 Edge-based segmentation
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10004 Still image; Photographic image
                • G06T 2207/10012 Stereo images
              • G06T 2207/10072 Tomographic images
                • G06T 2207/10088 Magnetic resonance imaging [MRI]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30048 Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a target extraction method for a symmetric supervision model based on a mixed loss function, and relates to target extraction. The invention aims to solve the problems that existing left atrium extraction methods consume large amounts of manpower and material resources, suffer from operator-dependent variability and are inefficient. The process is as follows: 1. acquire and label cardiac nuclear magnetic resonance image data; 2. preprocess the labeled cardiac nuclear magnetic resonance image data acquired in step 1; 3. obtain a trained symmetric supervision model and store it; 4. predict the preprocessed cardiac nuclear magnetic resonance image data with the symmetric supervision model trained in step 3, output the probability that each pixel belongs to the left atrium, and binarize the result with a set probability threshold; 5. reconstruct the binarized slice results obtained in step 4 into three-dimensional volume data by the inverse of the operations in step 2, completing the extraction of the left atrium. The invention is used in the field of target extraction.

Description

Target extraction method of symmetric supervision model based on mixed loss function
Technical Field
The invention relates to a target extraction method.
Background
Medical image processing is an emerging discipline that has developed rapidly with the maturation of computer technology and advances in clinical diagnosis, and it is increasingly used in the clinic as an assistive tool for physicians. The left atrium is an important component of the human heart, and its structure, size and shape are important indicators of physiological state. Clinically, left atrium extraction is still performed manually by physicians using software (such as CVI42, Circle Cardiovascular Imaging). This approach suffers from large subjective differences, requires personnel with specialist knowledge, and is inefficient, which greatly increases the burden on physicians and the waiting time of patients. Left atrium extraction is a preprocessing stage for examining the body; it is not a disease diagnosis method, i.e. it does not directly yield a diagnosis or an assessment of health, but is only an intermediate image-processing step. To obtain a final diagnosis, a series of further operations would have to be carried out on top of the left atrium extraction; disease diagnosis and evaluation of a patient's health state are not involved here. Automatic extraction therefore reduces the repetitive labor of medical workers, improves work efficiency and relieves medical pressure, and it has the advantages of objectivity and repeatability.
Disclosure of Invention
The invention aims to solve the problems that existing left atrium extraction methods consume large amounts of manpower and material resources, suffer from operator-dependent variability and are inefficient, and provides a target extraction method for a symmetric supervision model based on a mixed loss function.
The target extraction method of the symmetric supervision model based on the mixed loss function comprises the following specific processes:
step 1, acquiring and labeling cardiac nuclear magnetic resonance image data;
step 2, preprocessing the labeled cardiac nuclear magnetic resonance image data obtained in step 1, wherein the preprocessing comprises two-dimensional slice extraction, cropping, scaling and data normalization;
step 3, taking the preprocessed cardiac nuclear magnetic resonance image data in the step 2 as the input of a symmetrical supervision model;
the symmetric supervision model consists of an encoder and a decoder, wherein the encoder comprises an input layer, convolution layers, normalization layers, a max-pooling layer and an output layer, and the decoder comprises convolution layers, deconvolution (transposed convolution) layers, normalization layers, a Dropout layer and an output layer;
the decoder and the encoder are joined by skip (layer-jump) connections implemented through concatenation;
the final output of the symmetric supervision model is formed by the weighted sum of the respective output layers of the encoder and the decoder;
the symmetric supervision model further comprises a channel weighting module; the channel weighting module consists of a global pooling layer and a full-connection layer;
the loss function of the symmetric supervision model consists of three independent loss functions, namely a target segmentation loss function, an edge detection loss function and an uncertainty loss function;
the symmetric supervision model is trained with stochastic gradient descent based on the back-propagation algorithm to learn the network parameters; training stops when the loss function no longer decreases or a specified number of iterations is reached, yielding a trained symmetric supervision model, which is then stored;
step 4, preprocessing the cardiac nuclear magnetic resonance image data to be processed, predicting the preprocessed cardiac nuclear magnetic resonance image data by adopting the trained symmetric supervision model in the step 3, outputting the probability that each pixel is distinguished as the left atrium, and setting a probability threshold value to binarize the result;
and step 5, reconstructing the binarized slice results obtained in step 4 into three-dimensional volume data by the inverse of the operations in step 2, completing the extraction of the left atrium.
The invention has the beneficial effects that:
the invention provides a symmetric supervised model method using a hybrid loss function for automatically extracting the left atrium. The main innovation of the method is the construction of a symmetrical supervision model, the design of an attention mechanism and the design of a mixed loss function.
The invention uses magnetic resonance imaging to obtain cardiac image data. During image preprocessing, to avoid interference from human factors, the image information is preserved as far as possible and only max-min normalization is applied to the images. The clinically obtained data are three-dimensional volume data (Volume) reconstructed from scanned slices into a solid; because the amount of data is small, the time cost of a three-dimensional model is high, and physicians annotate the data on slices (Slices), the model is trained on slices. Slices are extracted from the three-dimensional volume data along the long axis of the heart, and data augmentation is then performed on the slices by operations such as rotation and scaling.
In terms of model construction, the invention applies a symmetric deep supervision technique to both sides of a U-Net-style encoder-decoder structure. Deep supervision preserves both the shallow and the deep features of the data and injects the loss directly into intermediate layers, so the network converges better and faster. The symmetric supervision of the invention lies in the symmetric outputs of the encoder and the decoder: a conventional model has one or more outputs only at the decoder to supervise the training of the whole network, while supervision of the encoder is missing. To supervise every layer of the network more comprehensively, one output is defined at the encoder and one at the decoder, and the final output of the model is determined by a weighted average of the two. Before output, the features at different levels are weighted using an attention mechanism so that the model focuses on the features with large weights. Because the feature weights of the attention mechanism are learned, the input features and the weighted output features are added residually to prevent poorly learned weights from degrading network performance. Finally, the skip connections between the encoder and the decoder help the model learn better and preserve the integrity of the data information.
After the final prediction probability is obtained, the result is thresholded to form a mask. The mask is applied to the input data to extract the corresponding data information; the similarity between this information and the information obtained by applying the label to the input data is computed and used as the uncertainty of the result. The smaller the uncertainty, the better the prediction of the model, and optimizing this uncertainty further improves the model. In addition to the conventional target segmentation loss, the invention also integrates a boundary detection loss function at the decoder end to optimize the boundary.
The cardiac nuclear magnetic resonance data are predicted with the trained model to obtain the left atrium extraction result. In extensive sample-analysis experiments, manually annotating the data of one subject takes about 20 minutes, and the experience and judgement of different physicians are subjective, which easily leads to misjudgement and inconsistency. The invention completes data preprocessing and left atrium extraction automatically and, while guaranteeing extraction accuracy, reduces the extraction time to within 10 seconds, greatly improving data-processing efficiency. The technique can be applied not only to cardiac nuclear magnetic resonance data but also to cardiac computed tomography data, and it can likewise be used to extract and analyze other parts of the heart (e.g. the left and right ventricles). Compared with the traditional approach, the symmetric-supervision deep-learning left atrium extraction method based on a mixed loss function extracts the left atrium quickly, accurately and automatically, and provides important data-processing support for subsequent analysis of left atrial function; it solves the problems that the existing left atrium extraction method consumes a large amount of manpower and material resources, suffers from operator-dependent variability and is inefficient.
It should be noted that the present invention does not pertain to a method for diagnosing a disease, and does not directly aim to obtain a disease diagnosis or a health condition, but pertains to an intermediate image processing procedure. If the final diagnosis result needs to be obtained, a series of professional diagnosis operations are carried out by a doctor on the basis of the invention.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a symmetrically supervised deep convolutional neural network of the present invention;
FIG. 3 is an MRI slice image of the bottom end of the left atrium and its corresponding border map according to an embodiment of the present invention;
FIG. 4 is a magnetic resonance slice image of the middle left atrium and its corresponding border map according to an embodiment of the present invention;
FIG. 5 is a magnetic resonance slice image of the left atrial tip and its corresponding boundary map in an embodiment of the invention;
FIG. 6 is a three-dimensional schematic view of a binary mask of the left atrium in accordance with an embodiment of the invention.
Detailed Description
The first embodiment is as follows: the concrete process of the target extraction method of the symmetric supervision model based on the mixed loss function in the embodiment is as follows:
step 1, acquiring and labeling cardiac nuclear magnetic resonance image data;
step 2, preprocessing the labeled cardiac nuclear magnetic resonance image data obtained in step 1, wherein the preprocessing comprises operations such as two-dimensional slice extraction, cropping, scaling and data normalization;
step 3, taking the preprocessed cardiac nuclear magnetic resonance image data in the step 2 as the input of a symmetrical supervision model;
the symmetric supervision model consists of an encoder and a decoder, wherein the encoder comprises an input layer, convolution layers, normalization layers, a max-pooling layer and an output layer, and the decoder comprises convolution layers, deconvolution (transposed convolution) layers, normalization layers, a Dropout layer and an output layer (a code sketch of this architecture is given after step 5 below);
the decoder and the encoder are joined by skip (layer-jump) connections implemented through concatenation;
the final output of the symmetric supervision model is formed by the weighted sum of the respective output layers of the encoder and the decoder;
the symmetric supervision model also comprises a channel weighting module for ranking the features produced by the convolution layers; the channel weighting module is composed of a Global Average Pooling (GA) layer and a Fully Connected (FC) layer;
the loss function of the symmetric supervision model consists of three independent loss functions, namely a target segmentation loss function, an edge detection loss function and an uncertainty loss function;
the symmetric supervision model is trained with stochastic gradient descent based on the back-propagation algorithm to learn the network parameters; training stops when the loss function no longer decreases or a specified number of iterations is reached, yielding a trained symmetric supervision model, which is then stored;
step 4, preprocessing the cardiac nuclear magnetic resonance image data to be processed, predicting the preprocessed cardiac nuclear magnetic resonance image data by adopting the trained symmetric supervision model in the step 3, outputting the probability that each pixel is distinguished as the left atrium, and setting a probability threshold value to binarize the result;
and step 5, reconstructing the binarized slice results obtained in step 4 into three-dimensional data by the inverse of the operations in step 2, completing the extraction of the left atrium (step 2 scales and crops the images, so to restore the original size the results are first padded and enlarged, and the slices are then reassembled into 3D data).
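The following is a minimal PyTorch sketch of a symmetric supervision model of the kind described in step 3, written from this description only: the layer counts, channel widths, the bilinear upsampling of the encoder-side output, and the names SymmetricSupervisionNet and ChannelWeighting are illustrative assumptions, not the configuration fixed by the patent.

```python
# Illustrative sketch of a symmetric supervision model (assumptions noted in comments).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelWeighting(nn.Module):
    """Channel attention: Global Average Pooling + Fully Connected layer, residual add."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                                   # x: (B, C, H, W)
        w = F.adaptive_avg_pool2d(x, 1).flatten(1)          # GA(X): (B, C)
        w = torch.sigmoid(self.fc(w)).view(x.size(0), -1, 1, 1)
        return w * x + x                                    # Atten = FC(GA(X)) * X + X

def conv_block(cin, cout):
    # convolution + normalization, as named in the encoder/decoder description
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class SymmetricSupervisionNet(nn.Module):
    def __init__(self, alpha=0.5, beta=0.5):
        super().__init__()
        self.alpha, self.beta = alpha, beta                 # encoder/decoder output weights
        # Encoder: conv + normalization + max pooling, with its own output head
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.enc_head = nn.Conv2d(64, 1, 1)
        # Decoder: deconvolution + Dropout, skip connection by concatenation
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)                      # 64 = 32 (upsampled) + 32 (skip)
        self.drop = nn.Dropout2d(0.5)
        self.attn = ChannelWeighting(32)
        self.dec_head = nn.Conv2d(32, 1, 1)                 # segmentation head
        self.edge_head = nn.Conv2d(32, 1, 1)                # boundary head (decoder only)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        # Encoder-side prediction, upsampled to input resolution (an assumption)
        enc_out = torch.sigmoid(F.interpolate(self.enc_head(e2), size=x.shape[2:],
                                              mode="bilinear", align_corners=False))
        d1 = self.up(e2)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))          # layer-jump (skip) connection
        d1 = self.attn(self.drop(d1))
        dec_out = torch.sigmoid(self.dec_head(d1))
        edge_out = torch.sigmoid(self.edge_head(d1))
        fused = self.alpha * enc_out + self.beta * dec_out  # weighted symmetric output
        return enc_out, dec_out, edge_out, fused
```

Here alpha and beta play the role of the loss weights assigned to the two output ends; in practice they would be set empirically.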
The second embodiment: this embodiment differs from the first embodiment in that, in step 1, the cardiac nuclear magnetic resonance image data are acquired and labeled; the specific process is as follows:
step 11, manually label the left atrium region and the background region, where the left atrium region is marked 1 and the background region is marked 0, and store the marks as the labels for the extraction model;
step 12, use a Canny operator to extract boundaries on the binary images of the left atrium region and background region obtained in step 11, mark the boundaries 1 and the other regions 0, and store this type of mark as the labels for boundary detection.
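A possible implementation of steps 11 and 12 with OpenCV is sketched below; the scaling of the binary mask to 0-255 and the Canny threshold values are assumptions, not parameters given in the description.

```python
# Sketch of boundary-label generation from a binary left-atrium mask (values 0/1).
import cv2
import numpy as np

def make_edge_label(mask01: np.ndarray) -> np.ndarray:
    mask_u8 = mask01.astype(np.uint8) * 255       # 0/1 -> 0/255 (cv2.Canny needs 8-bit input)
    edges = cv2.Canny(mask_u8, 100, 200)          # boundary pixels become 255
    return (edges > 0).astype(np.uint8)           # boundary = 1, all other regions = 0
```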
Other steps and parameters are the same as those in the first embodiment.
The third embodiment: this embodiment differs from the first or second embodiment in that, in step 2, the labeled cardiac nuclear magnetic resonance image data obtained in step 1 are preprocessed, where the preprocessing includes operations such as two-dimensional slice extraction, cropping, scaling and data normalization; the specific process is as follows:
in the training stage, the labeled cardiac nuclear magnetic resonance image data obtained in step 1 are randomly rotated within a specified range and flipped horizontally and vertically to augment the data and avoid overfitting;
two-dimensional slices are extracted along the long axis from the labeled cardiac nuclear magnetic resonance image data obtained in step 1 (during acquisition the scanner sweeps from head to foot, one slice per instant, and the slices are reconstructed and stored as 3D data; here the stored 3D data are converted back into 2D slices along the long-axis direction in the stored order). Center cropping is applied to the acquired slices to remove most of the background that does not contain the heart. Finally, the data (the slices after background removal) and the labels (the annotations obtained in step 1, cropped in the same way) are normalized to [0, 1] by max-min normalization, and the labels, whose values are changed by the scaling, are re-binarized with a set threshold.
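A minimal sketch of this slice preprocessing is given below; the crop size, output size and the 0.5 re-binarization threshold are illustrative assumptions.

```python
# Sketch of step-2 preprocessing: center crop, resize, max-min normalization,
# re-binarization of the resized label.
import numpy as np
import cv2

def preprocess_slice(img, label, crop=240, out_size=144, thr=0.5):
    h, w = img.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    img = img[top:top + crop, left:left + crop]            # center crop, drop background
    label = label[top:top + crop, left:left + crop]
    img = cv2.resize(img.astype(np.float32), (out_size, out_size),
                     interpolation=cv2.INTER_LINEAR)
    label = cv2.resize(label.astype(np.float32), (out_size, out_size),
                       interpolation=cv2.INTER_LINEAR)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # max-min normalization
    label = (label >= thr).astype(np.float32)              # re-binarize the scaled label
    return img, label
```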
Other steps and parameters are the same as those in the first or second embodiment.
The fourth embodiment: this embodiment differs from one of the first to third embodiments in that the symmetric supervision model in step 3 comprises three loss functions: a target segmentation loss function L_seg, a boundary detection loss function L_edge and an uncertainty estimation loss function L_un.
The target segmentation loss function and the uncertainty estimation loss function act on the outputs of the encoder and the decoder, and the boundary detection loss function only acts on the output of the decoder;
The target segmentation loss function L_seg is expressed as:

L_seg = -(1/N) Σ_{i=1..N} [ α·( y_i·log ŷ_i^E + (1-y_i)·log(1-ŷ_i^E) ) + β·( y_i·log ŷ_i^D + (1-y_i)·log(1-ŷ_i^D) ) ]

where N is the total number of pixels on the slice; y_i is the label value of pixel i after the preprocessing of step 2, with 0 the background and 1 the target, i.e. the left atrium; ŷ_i^E is the prediction for pixel i output at the encoder end and ŷ_i^D is the prediction for pixel i output at the decoder end; α and β are the (manually set) loss weights of the encoder-side and decoder-side outputs, respectively.

By minimizing the target segmentation loss function L_seg, the distribution of the prediction result gradually approaches the distribution of the label, achieving accurate extraction of the left atrium.
boundary detection loss function LedgeThe expression is as follows:
Figure BDA0002283581160000061
wherein w is a weight value of the boundary; y isi edgeThe boundary label value of the pixel i after the preprocessing in the step 2 is 0 as the background and 1 as the left atrial boundary;
Figure BDA0002283581160000062
the boundary detection result is output by a decoder end;
detecting loss function L by minimizing boundaryedgeThe aim of accurately detecting the left atrial boundary is fulfilled;
The uncertainty estimation loss function L_un is expressed as:

L_un = (1/N) Σ_{i=1..N} ( ŷ_i^b ⊙ I_i - y_i ⊙ I_i )²

where y_i is the label value of pixel i after the preprocessing of step 2, with 0 the background and 1 the left atrium; ŷ_i^b is the binarized prediction result obtained in step 4; I_i is the pixel value of the input data preprocessed in step 2; and ⊙ denotes mask mapping, an operation that keeps the pixel values judged to be left atrium and sets those judged to be background to 0, thereby extracting the left atrial data information.

The uncertainty estimation loss function L_un maps the binarized prediction result of the symmetric supervision model of step 3 onto the cardiac nuclear magnetic resonance image data preprocessed in step 2 through the mask (the prediction result is binary, with the atrial part 1 and the rest 0; mask mapping multiplies the prediction result with the input data, so the original pixel values of the atrial part are kept and the parts predicted as 0 do not take part in the calculation), obtaining the predicted left atrial data information. The mean square error between this information and the data information of the real atrial part is then computed as their similarity, so that the uncertainty of the segmentation result is evaluated from the data perspective.

The data information of the real atrial part is obtained by mask-mapping the manually labeled binary image of step 1 onto the cardiac nuclear magnetic resonance image data preprocessed in step 2.

By minimizing the uncertainty estimation loss function L_un, the similarity between the predicted left atrial data information and the data information of the real atrial part is maximized, which improves the accuracy of left atrium extraction.
the overall loss function of the symmetric supervised model is as follows:
L = L_seg + L_edge + L_un,
the symmetric supervised model achieves the purpose of optimizing the left atrial extraction model by minimizing the loss function L.
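A PyTorch sketch consistent with the three terms above follows. The binary cross-entropy form of L_seg and L_edge and the use of the soft fused output in L_un (to keep the term differentiable during training) are assumptions drawn from the surrounding description rather than quotations of the filed formulas.

```python
# Sketch of the mixed loss L = L_seg + L_edge + L_un (assumptions noted in comments).
import torch
import torch.nn.functional as F

def mixed_loss(enc_out, dec_out, edge_out, fused, image, y, y_edge,
               alpha=0.5, beta=0.5):
    """All tensors are float and share shape (B, 1, H, W)."""
    # L_seg: weighted per-pixel segmentation losses of the encoder and decoder outputs
    l_seg = alpha * F.binary_cross_entropy(enc_out, y) \
          + beta * F.binary_cross_entropy(dec_out, y)

    # L_edge: class-balanced boundary loss at the decoder end;
    # boundary pixels get weight w = (N - N_edge) / N_edge, other pixels weight 1
    n = y_edge.numel()
    n_edge = y_edge.sum().clamp(min=1.0)
    w = (n - n_edge) / n_edge
    weight = y_edge * w + (1.0 - y_edge)
    l_edge = F.binary_cross_entropy(edge_out, y_edge, weight=weight)

    # L_un: mean squared error between the mask-mapped prediction and the
    # mask-mapped label (the soft fused output stands in for the binarized result)
    l_un = F.mse_loss(fused * image, y * image)

    return l_seg + l_edge + l_un
```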
Other steps and parameters are the same as those in one of the first to third embodiments.
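Before turning to the remaining embodiments, a minimal training loop consistent with the procedure of step 3 (stochastic gradient descent with back-propagation, stopping when the loss stops decreasing or a maximum iteration count is reached) might look as follows; the optimizer settings, the patience value, the train_loader interface and the model/loss signatures from the earlier sketches are assumptions.

```python
# Hypothetical training loop: SGD + back-propagation with a simple plateau criterion.
import torch

def train(model, train_loader, loss_fn, max_iters=20000, lr=0.01, patience=10):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    best, stale, it = float("inf"), 0, 0
    model.train()
    while it < max_iters and stale < patience:
        epoch_loss = 0.0
        for image, y, y_edge in train_loader:               # preprocessed slices and labels
            enc_out, dec_out, edge_out, fused = model(image)
            loss = loss_fn(enc_out, dec_out, edge_out, fused, image, y, y_edge)
            opt.zero_grad()
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
            it += 1
        if epoch_loss < best - 1e-4:                        # loss still decreasing
            best, stale = epoch_loss, 0
        else:
            stale += 1
    torch.save(model.state_dict(), "symmetric_supervision_model.pt")  # store trained model
```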
The fifth embodiment: this embodiment differs from one of the first to fourth embodiments in that the weight w of the boundary detection loss balances the positive and negative samples: because boundary pixels occupy a very small proportion of the whole image, the model is easily dominated by the negative samples (i.e. background pixels) if nothing is done. The weight value w of the boundary is calculated as:

w = (N - N_edge) / N_edge

where N_edge is the number of boundary pixels and N is the total number of pixels on the slice.
Other steps and parameters are the same as in one of the first to fourth embodiments.
The sixth embodiment: this embodiment differs from one of the first to fifth embodiments in the specific computation, in step 3, of the weighted sum of the encoder and decoder outputs:

the weighted-sum output for each pixel and its binarization are calculated as

ŷ_i = α·ŷ_i^E + β·ŷ_i^D,
ŷ_i^b = 1 if ŷ_i ≥ th, and ŷ_i^b = 0 if ŷ_i < th,

where α and β are the loss weights of the encoder-side and decoder-side outputs, respectively, set manually from experience; ŷ_i^E is the prediction output by the encoder and ŷ_i^D is the prediction output by the decoder; ŷ_i is the weighted prediction of the symmetric supervision model for pixel i; ŷ_i^b is the prediction after binarization; and th is the set probability threshold: predictions below the threshold are set to 0 and judged to be background, and predictions not below the threshold are set to 1 and judged to be the target.
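A small sketch of this fusion and thresholding step is given below; alpha, beta and th are empirically chosen values, as the description states.

```python
# Sketch of the weighted fusion of encoder- and decoder-side predictions
# followed by probability thresholding.
import numpy as np

def fuse_and_binarize(y_enc, y_dec, alpha=0.5, beta=0.5, th=0.5):
    y_hat = alpha * y_enc + beta * y_dec       # weighted prediction
    return (y_hat >= th).astype(np.uint8)      # 1 = left atrium (target), 0 = background
```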
Other steps and parameters are the same as those in one of the first to fifth embodiments.
The seventh embodiment: this embodiment differs from one of the first to sixth embodiments in the channel weighting module, i.e. the attention mechanism, of step 3. The features are fed into the channel weighting module, which computes a weight for each channel through a Global Average Pooling layer, applies a Fully Connected layer to realize a nonlinear mapping, maps the computed weights back to the input size and multiplies them with the input features to assign the weights, and finally adds the result to the input features to realize a residual connection. The mathematical expression is:

Atten = FC(GA(X)) × X + X,

where X is the feature obtained by convolution, GA denotes the global pooling layer of the channel weighting module, FC denotes its fully connected layer, and Atten is the channel-weighted feature.
By weighting the channels of the features, the model can pay attention to the channels with large weights, and the mode of residual connection avoids the phenomenon that the learned poor weights reduce the performance of the model.
Other steps and parameters are the same as those in one of the first to sixth embodiments.
The eighth embodiment: this embodiment differs from one of the first to seventh embodiments in the preprocessing, in step 4, of the cardiac nuclear magnetic resonance image data to be processed; the specific process is as follows:
two-dimensional slices are extracted along the long axis from the cardiac nuclear magnetic resonance image data to be processed (during acquisition the scanner sweeps from head to foot, one slice per instant, and the slices are reconstructed and stored as a 3D array; here the stored 3D data are converted back into 2D slices along the long-axis direction in the stored order); center cropping is applied to the obtained slices to remove most of the background that does not contain the heart; finally, the data (the slices after background removal) and the labels (the annotations obtained in step 1) are normalized to [0, 1] by max-min normalization.
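A sketch of the inference-time pipeline (steps 4 and 5 together) is given below, assuming the model interface and the preprocessing parameters from the earlier sketches; the pad/resize bookkeeping used to restore the original slice size is an assumption.

```python
# Sketch of inference: preprocess each long-axis slice, predict with the trained model,
# binarize, undo cropping/scaling, and stack the slices back into a 3-D binary volume.
import numpy as np
import cv2
import torch

def extract_left_atrium(volume, model, crop=240, out_size=144, th=0.5):
    """volume: (num_slices, H, W) long-axis MRI slices; returns a binary 3-D mask."""
    n, h, w = volume.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    mask_volume = np.zeros(volume.shape, dtype=np.uint8)
    model.eval()
    with torch.no_grad():
        for k in range(n):
            sl = volume[k, top:top + crop, left:left + crop].astype(np.float32)
            sl = (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)       # max-min normalization
            sl = cv2.resize(sl, (out_size, out_size), interpolation=cv2.INTER_LINEAR)
            x = torch.from_numpy(sl)[None, None]                      # (1, 1, out_size, out_size)
            _, _, _, fused = model(x)                                 # fused probability map
            prob = cv2.resize(fused[0, 0].numpy(), (crop, crop),
                              interpolation=cv2.INTER_LINEAR)         # undo scaling
            mask_volume[k, top:top + crop, left:left + crop] = (prob >= th).astype(np.uint8)
    return mask_volume                                                # 3-D left-atrium mask
```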
Other steps and parameters are the same as those in one of the first to seventh embodiments.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (8)

1. A target extraction method of a symmetric supervision model based on a mixed loss function, characterized in that the method comprises the following specific process:
step 1, acquiring and labeling cardiac nuclear magnetic resonance image data;
step 2, preprocessing the labeled cardiac nuclear magnetic resonance image data obtained in step 1, wherein the preprocessing comprises two-dimensional slice extraction, cropping, scaling and data normalization;
step 3, taking the preprocessed cardiac nuclear magnetic resonance image data in the step 2 as the input of a symmetrical supervision model;
the symmetric supervision model consists of an encoder and a decoder, wherein the encoder comprises an input layer, convolution layers, normalization layers, a max-pooling layer and an output layer, and the decoder comprises convolution layers, deconvolution (transposed convolution) layers, normalization layers, a Dropout layer and an output layer;
the decoder and the encoder are joined by skip (layer-jump) connections implemented through concatenation;
the final output of the symmetric supervision model is formed by the weighted sum of the respective output layers of the encoder and the decoder;
the symmetric supervision model further comprises a channel weighting module; the channel weighting module consists of a global pooling layer and a full-connection layer;
the loss function of the symmetric supervision model consists of three independent loss functions, namely a target segmentation loss function, an edge detection loss function and an uncertainty loss function;
the symmetric supervision model is trained with stochastic gradient descent based on the back-propagation algorithm to learn the network parameters; training stops when the loss function no longer decreases or a specified number of iterations is reached, yielding a trained symmetric supervision model, which is then stored;
step 4, preprocessing the cardiac nuclear magnetic resonance image data to be processed, predicting the preprocessed cardiac nuclear magnetic resonance image data by adopting the trained symmetric supervision model in the step 3, outputting the probability that each pixel is distinguished as the left atrium, and setting a probability threshold value to binarize the result;
and step 5, reconstructing the binarized slice results obtained in step 4 into three-dimensional volume data by the inverse of the operations in step 2, completing the extraction of the left atrium.
2. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 1, wherein: the cardiac nuclear magnetic resonance image data are acquired and labeled in step 1; the specific process is as follows:
step 11, manually labeling the left atrium region and the background region, where the left atrium region is marked 1 and the background region is marked 0, and storing the marks;
step 12, using a Canny operator to extract boundaries on the binary images of the left atrium region and background region obtained in step 11, marking the boundaries 1 and the other regions 0, and storing this type of mark.
3. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 1 or 2, wherein: in step 2, the labeled cardiac nuclear magnetic resonance image data acquired in step 1 are preprocessed, the preprocessing comprising two-dimensional slice extraction, cropping, scaling and data normalization; the specific process is as follows:
rotating and flipping horizontally and vertically the labeled cardiac nuclear magnetic resonance image data obtained in step 1 to augment the data;
extracting two-dimensional slices along the long axis from the labeled cardiac nuclear magnetic resonance image data acquired in step 1, removing the background region that does not contain the heart from the acquired slices by center cropping, and finally normalizing the data and the labels to [0, 1] by max-min normalization.
4. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 3, wherein: the symmetric supervision model in step 3 comprises three loss functions: a target segmentation loss function L_seg, a boundary detection loss function L_edge and an uncertainty estimation loss function L_un;
The target segmentation loss function and the uncertainty estimation loss function act on the outputs of the encoder and the decoder, and the boundary detection loss function only acts on the output of the decoder;
target segmentation loss function LsegThe expression is as follows:
Figure FDA0002283581150000021
where N is the total number of pixels on the slice, yiExtracting a label value of the pixel i after the pretreatment in the step 2, wherein 0 is a background and 1 is a target, namely a left atrium;
Figure FDA0002283581150000022
for the prediction result of the pixel i output by the encoder,
Figure FDA0002283581150000023
for pixel i output at decoder endα and β are loss weights at the encoder side output and decoder side output, respectively;
by minimizing the target segmentation loss function Lseg
the boundary detection loss function L_edge is expressed as:

L_edge = -(1/N) Σ_{i=1..N} [ w·y_i^edge·log ŷ_i^edge + (1-y_i^edge)·log(1-ŷ_i^edge) ]

where w is the weight value of the boundary; y_i^edge is the boundary label value of pixel i after the preprocessing of step 2, with 0 the background and 1 the left atrial boundary; ŷ_i^edge is the boundary detection result output at the decoder end;

the left atrial boundary is detected by minimizing the boundary detection loss function L_edge;
the uncertainty estimation loss function L_un is expressed as:

L_un = (1/N) Σ_{i=1..N} ( ŷ_i^b ⊙ I_i - y_i ⊙ I_i )²

where y_i is the label value of pixel i after the preprocessing of step 2, with 0 the background and 1 the left atrium; ŷ_i^b is the binarized prediction result obtained in step 4; I_i is the pixel value of the input data preprocessed in step 2; and ⊙ denotes mask mapping;

the uncertainty estimation loss function L_un maps the binarized prediction result of the symmetric supervision model of step 3 onto the cardiac nuclear magnetic resonance image data preprocessed in step 2 through the mask to obtain the predicted left atrial data information, and then calculates the mean square error between the predicted left atrial data information and the data information of the real atrial part to obtain their similarity;

the data information of the real atrial part is obtained by mask-mapping the manually labeled binary image of step 1 onto the cardiac nuclear magnetic resonance image data preprocessed in step 2;

the similarity between the predicted left atrial data information and the data information of the real atrial part is maximized by minimizing the uncertainty estimation loss function L_un;
the overall loss function of the symmetric supervised model is as follows:
L = L_seg + L_edge + L_un,
the symmetric supervised model achieves left atrial extraction by minimizing the loss function L.
5. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 4, wherein: the weight value w of the boundary is calculated as:

w = (N - N_edge) / N_edge

where N_edge is the number of boundary pixels and N is the total number of pixels on the slice.
6. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 5, wherein: the weighted sum of the encoder and decoder outputs in step 3 is computed as follows:

the weighted-sum output for each pixel and its binarization are calculated as

ŷ_i = α·ŷ_i^E + β·ŷ_i^D,
ŷ_i^b = 1 if ŷ_i ≥ th, and ŷ_i^b = 0 if ŷ_i < th,

where α and β are the loss weights of the encoder-side and decoder-side outputs, respectively; ŷ_i^E is the prediction output by the encoder and ŷ_i^D is the prediction output by the decoder; ŷ_i is the weighted prediction of the symmetric supervision model for pixel i; ŷ_i^b is the prediction after binarization; and th is the set probability threshold: predictions below the threshold are set to 0 and judged to be background, and predictions not below the threshold are set to 1 and judged to be the target.
7. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 6, wherein: the mathematical expression of the channel weighting module in step 3 is:

Atten = FC(GA(X)) × X + X,

where X is the feature obtained by convolution, GA denotes the global pooling layer of the channel weighting module, FC denotes its fully connected layer, and Atten is the channel-weighted feature.
8. The target extraction method of the symmetric supervision model based on the mixed loss function as recited in claim 7, wherein: the cardiac nuclear magnetic resonance image data to be processed are preprocessed in step 4; the specific process is as follows:
extracting two-dimensional slices along the long axis from the cardiac nuclear magnetic resonance image data to be processed, removing the background regions that do not contain the heart from the obtained slices by center cropping, and finally normalizing the data and the labels to [0, 1] by max-min normalization.
CN201911151141.2A 2019-11-21 2019-11-21 Target extraction method of symmetrical supervision model based on mixed loss function Active CN110874860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911151141.2A CN110874860B (en) 2019-11-21 2019-11-21 Target extraction method of symmetrical supervision model based on mixed loss function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911151141.2A CN110874860B (en) 2019-11-21 2019-11-21 Target extraction method of symmetrical supervision model based on mixed loss function

Publications (2)

Publication Number Publication Date
CN110874860A (en) 2020-03-10
CN110874860B (en) 2023-04-25

Family

ID=69718234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911151141.2A Active CN110874860B (en) 2019-11-21 2019-11-21 Target extraction method of symmetrical supervision model based on mixed loss function

Country Status (1)

Country Link
CN (1) CN110874860B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203692A (en) * 2017-05-09 2017-09-26 哈尔滨工业大学(威海) The implementation method of atrial fibrillation detection based on depth convolutional neural networks
US20190201269A1 (en) * 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sleep stage detecting feature
US20190244357A1 (en) * 2018-02-07 2019-08-08 International Business Machines Corporation System for Segmentation of Anatomical Structures in Cardiac CTA Using Fully Convolutional Neural Networks
US20190261945A1 (en) * 2018-02-26 2019-08-29 Siemens Medical Solutions Usa, Inc. Three-Dimensional Segmentation from Two-Dimensional Intracardiac Echocardiography Imaging
CN109166130A (en) * 2018-08-06 2019-01-08 北京市商汤科技开发有限公司 A kind of image processing method and image processing apparatus
CN109872333A (en) * 2019-02-20 2019-06-11 腾讯科技(深圳)有限公司 Medical image dividing method, device, computer equipment and storage medium
CN110428427A (en) * 2019-07-16 2019-11-08 东南大学 Semi-supervised arteria renalis dividing method based on intensive biasing networks and self-encoding encoder

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHEN-SCARABELLI: "Oral supplementation with mixed essential amino acids protects the human heart from the iatrogenic ischemia/reperfusion injury associated with cardioplegic arrest, reducing myocyte cell loss and exerting favorable metabolic and hemodynamic effects", 《JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY》 *
YASHU LIU: "An Automatic Cardiac Segmentation Framework based on Multi-sequence MR Image", 《JOURNAL PAPER》 *
代洋洋 (Dai Yangyang): "Research on Left Atrium Segmentation Algorithms Based on Deep Learning" (基于深度学习的左心房分割算法研究), 《中国优秀硕士论文辑》 (China Excellent Master's Theses Collection) *
尚岩峰 et al. (Shang Yanfeng et al.): "Medical Object Extraction Based on Regional Energy Minimization and Active Contour Model" (基于区域能量最小和主动轮廓模型的医学目标提取), 《计算机应用研究》 (Application Research of Computers) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402218A (en) * 2020-03-11 2020-07-10 北京深睿博联科技有限责任公司 Cerebral hemorrhage detection method and device
CN112017152A (en) * 2020-07-02 2020-12-01 杭州市第一人民医院 Processing method of two-dimensional image of atrial impression
CN112364831A (en) * 2020-11-30 2021-02-12 姜培生 Face recognition method and online education system
CN112651453A (en) * 2020-12-30 2021-04-13 北京百度网讯科技有限公司 Loss function adaptive method, device, equipment and storage medium
CN112651453B (en) * 2020-12-30 2023-10-13 北京百度网讯科技有限公司 Self-adapting method, device, equipment and storage medium of loss function
CN113312978A (en) * 2021-04-30 2021-08-27 中国科学院计算技术研究所 Method and system for accurately identifying and segmenting target under microscopic image

Also Published As

Publication number Publication date
CN110874860B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN110874860B (en) Target extraction method of symmetrical supervision model based on mixed loss function
JP5601378B2 (en) Medical image processing device
CN107784647A (en) Liver and its lesion segmentation approach and system based on multitask depth convolutional network
CN107203989A (en) End-to-end chest CT image dividing method based on full convolutional neural networks
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN108062749B (en) Identification method and device for levator ani fissure hole and electronic equipment
CN109919254B (en) Breast density classification method, system, readable storage medium and computer device
CN102496156A (en) Medical image segmentation method based on quantum-behaved particle swarm cooperative optimization
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
CN110599499A (en) MRI image heart structure segmentation method based on multipath convolutional neural network
CN112270667B (en) TI-RADS-based integrated deep learning multi-tag identification method
CN112508884A (en) Comprehensive detection device and method for cancerous region
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
CN114693671A (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN117727441A (en) Method for predicting lung cancer immune curative effect based on clinical-fusion image computer model
CN113989231A (en) Method and device for determining kinetic parameters, computer equipment and storage medium
CN110738649A (en) training method of Faster RCNN network for automatic identification of stomach cancer enhanced CT images
CN113409447B (en) Coronary artery segmentation method and device based on multi-slice combination
CN114511564A (en) Image analysis method for breast cancer residual tumor load based on DCE-MRI
CN115471512A (en) Medical image segmentation method based on self-supervision contrast learning
CN115472258A (en) Method for generating MRI (magnetic resonance imaging) image and predicting curative effect after breast cancer neoadjuvant chemotherapy
Shao et al. Semantic segmentation method of 3D liver image based on contextual attention model
CN113838020A (en) Molybdenum target image-based lesion area quantification method
CN115861600B (en) ROI (region of interest) area identification method and system for SPECT (Single photon emission computed tomography) image
CN112766333B (en) Medical image processing model training method, medical image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant