CN113379691B - Breast lesion deep learning segmentation method based on prior guidance - Google Patents

Breast lesion deep learning segmentation method based on prior guidance

Info

Publication number
CN113379691B
CN113379691B (application CN202110605271.XA)
Authority
CN
China
Prior art keywords
foreground
image
feature
background
prior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110605271.XA
Other languages
Chinese (zh)
Other versions
CN113379691A (en)
Inventor
张煜
宁振源
钟升洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southern Medical University
Original Assignee
Southern Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southern Medical University filed Critical Southern Medical University
Priority to CN202110605271.XA priority Critical patent/CN113379691B/en
Publication of CN113379691A publication Critical patent/CN113379691A/en
Application granted granted Critical
Publication of CN113379691B publication Critical patent/CN113379691B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

A breast lesion deep learning segmentation method based on prior guidance comprises the following steps: S1, reading breast ultrasound image data; S2, acquiring at least three marker points in the lesion area; S3, processing the image with a linear spectral clustering superpixel method and a multi-scale combined grouping method; S4, obtaining a foreground image containing lesion prior information by weighted summation; S5, inverting the foreground prior image to obtain a background prior image; S6, extracting foreground and background features using the foreground and background prior images; and S7, fusing the complementary foreground and background features and outputting the lesion segmentation result. According to the invention, after an image containing breast tumor prior information is obtained through preprocessing, a U-Net network framework is used to extract the features of the lesion area. Guided by the foreground and background prior information, the method improves the segmentation accuracy of ultrasound breast lesions. The lesion image obtained by the method is rich in texture detail, has clear edges, and suffers little image loss.

Description

Breast lesion deep learning segmentation method based on prior guidance
Technical Field
The invention relates to the technical field of medical image analysis, in particular to a breast lesion deep learning segmentation method based on prior guidance.
Background
The International Agency for Research on Cancer (IARC) of the World Health Organization published the latest data on the worldwide cancer burden in 2020. The data show that new breast cancer cases reached 2.26 million, exceeding the 2.2 million new lung cancer cases, so breast cancer has overtaken lung cancer as the most commonly diagnosed cancer in the world. Breast cancer is also the leading cause of cancer death among women. Early diagnosis and treatment of breast cancer are important means of reducing breast cancer mortality. At present, ultrasound imaging is one of the most widely used breast examination methods in clinical practice because it is non-invasive, highly sensitive and inexpensive. However, drawing accurate diagnostic conclusions from breast ultrasound images places high demands on radiologists, and the conclusions of different physicians may differ considerably. Computer-aided diagnosis systems that assist doctors in clinical diagnosis have therefore attracted much attention since their introduction.
Whether the lesion can be segmented effectively is an important link in the auxiliary judgment of a computer-aided diagnosis system. Existing deep learning methods, especially convolutional neural networks, have been applied successfully to lesion segmentation in breast ultrasound images. However, lesion segmentation remains challenging because of the pattern complexity and the intensity similarity between the surrounding tissue (i.e., the background) and the lesion area (i.e., the foreground). Other characteristics of breast ultrasound images add to the difficulty of accurate segmentation: 1) low apparent contrast, with fuzzy boundaries between the lesion and the surrounding tissue that are hard to distinguish; and 2) large variation in lesion shape and position. Therefore, it is necessary to provide a breast lesion deep learning segmentation method based on prior guidance to overcome the deficiencies of the prior art.
Disclosure of Invention
The invention aims to avoid the defects of the prior art and provides a breast lesion deep learning segmentation method based on prior guidance. The method comprises the steps of obtaining an image containing breast tumor prior information through preprocessing, and then utilizing a U-Net network framework to extract features of a focus area for accurate segmentation. The method improves the segmentation precision of the ultrasonic breast lesion by using the guidance of the prior information of the foreground and the background.
The above object of the present invention is achieved by the following technical measures.
The method for the deep learning segmentation of the breast lesion based on the prior guidance comprises the following steps:
S1, reading the breast lesion ultrasound original image.
S2, randomly selecting at least three marker points of the target lesion area on the breast lesion ultrasound original image.
S3, processing the marked breast lesion ultrasound original image from step S2 with a linear spectral clustering superpixel method to obtain a low-level representation.
Processing the marked breast lesion ultrasound original image from step S2 with a multi-scale combined grouping method to obtain a high-level representation.
S4, selecting the target-area images of the low-level representation and the high-level representation from step S3 using the marker points obtained in step S2, and performing a weighted summation of the two selected target-area images to obtain a foreground prior image.
S5, inverting the foreground prior image obtained in step S4 to obtain a background prior image.
S6, inputting the foreground prior image obtained in step S4 together with the breast lesion ultrasound original image from step S1 into a first learning network branch, and extracting the foreground prior image features.
Inputting the background prior image obtained in step S5 together with the breast lesion ultrasound original image from step S1 into a second learning network branch, and extracting the background prior image features.
S7, in the feature aggregation guide module for the foreground prior image and the background prior image, using the complementarity of the foreground and background prior image features so that the background prior image features guide the network to extract foreground features, and outputting the final breast lesion segmentation result.
Specifically, in step S1, the breast lesion ultrasound original image is a single-channel two-dimensional image.
Preferably, in step S2, the region enclosed by the at least three marker points on the breast lesion ultrasound original image is a target region containing lesion information.
Specifically, in step S3, three different numbers of superpixel blocks, n_1 = 8, n_2 = 15 and n_3 = 50, are set. The breast lesion ultrasound original image I is processed to obtain superpixel images f(I, n_i) at three different scales; the three marker points {p_1, p_2, p_3} are then used to select the target area in each superpixel image, and the three target-area images are summed with weights 1:1:1 to obtain the low-level representation y_l according to formula (1):

y_l = Σ_{i=1}^{3} f(I, n_i) ☉ {p_1, p_2, p_3}    (1)

where ☉ denotes the operation of selecting the target area with the marker points; i denotes the i-th processing with n_i superpixel blocks, the three values being n_1 = 8, n_2 = 15, n_3 = 50, i = 1, 2, 3; and p_j denotes the three coordinate points p_1, p_2, p_3 selected in step S2, j = 1, 2, 3.
Specifically, in step S3, the multi-scale combined grouping method first generates a set of object proposal maps {g(I, m_i)}, i = 1, 2, ..., T, where T denotes the number of scales of the object proposal maps. After being restored to the same scale, the proposal maps are integrated into a complete multi-scale clustering image; the target area A is then selected and fused using the same three marker points, and the high-level representation y_h is obtained according to formula (2):

y_h = Σ_{i=1}^{T} g(I, m_i) ☉ {p_1, p_2, p_3}    (2)

where m_i denotes the i-th superpixel block of the object proposal map obtained by the multi-scale combined grouping processing, i = 1, 2, 3, ..., T; and p_j denotes the three coordinate points p_1, p_2, p_3 selected in step S2, j = 1, 2, 3.
Specifically, in step S4, the foreground prior image is a foreground image containing prior information of the breast lesion.
Further, in step S4, the high-level representation y_h and the low-level representation y_l are weighted and summed with a weight ratio of 1:2 according to formula (3) to obtain the foreground prior image y_f:

y_f = ω_1 · y_l + ω_2 · y_h    (3)

where ω_1 and ω_2 denote the weights of the low-level representation y_l and the high-level representation y_h, respectively, in the foreground prior image y_f.
Specifically, in step S5, the foreground prior image y_f is inverted according to formula (4) to obtain the background prior image y_b:

y_b = ¬ y_f    (4)

where ¬ denotes the inversion operation.
Preferably, in step S6, the first learning network branch is a U-net network, the second learning network branch is a U-net network, and the two learning network branches have the same network structure.
Preferably, in step S7, in the foreground prior image and background prior image feature aggregation guiding module, a specific process of extracting foreground features through a background prior image feature guiding network is as follows:
A1, the feature aggregation guide module first receives the foreground feature map and the background feature map from the corresponding convolution units of the foreground and background feature extraction network branches; the received feature maps are each strengthened by a convolution operation with a 1x1 kernel, and the foreground and background feature maps are then concatenated along the channel dimension.
A2, after the output map of the previous module is up-sampled by 2x2, it is passed through a 1x1 convolution with a dilation rate of 2 and summed pixel-wise with the strengthened foreground feature map and background feature map from step A1; the resulting foreground and background feature maps are input into different branches to further strengthen the foreground and background features.
A3, the background feature map obtained in step A2 is passed through three successive 1x1 and 3x3 convolution operations; one data path is taken as the background feature output and serves as the background feature input sample of the next feature aggregation guide module.
Before the other data path is fused with the foreground feature map obtained in step A2, the background feature map B_j is subjected once to the negation operation Θ according to formula (5):

Θ(B_j^i) = max{B_j^i} - B_j^i,  i = 1, 2, 3, ..., C    (5)

where j denotes the j-th feature aggregation guide module, i denotes the i-th channel of the feature block, i = 1, 2, 3, ..., C, and max{·} denotes taking the maximum value of the i-th channel feature map.
A4, 1x1 and 3x3 convolution operations are performed on the foreground feature map obtained in step A2.
A5, the inverted background feature map ~B_j obtained in step A3 is concatenated with the foreground feature map obtained in step A4 along the channel dimension, then blended into the foreground feature extraction branch through a 3x3 convolution operation and used as the foreground feature input sample of the next feature aggregation guide module.
In the method, at least three marker points are placed at random within the lesion area on the breast lesion ultrasound original image, and the original image is processed with a linear spectral clustering superpixel method and a multi-scale combined grouping method to obtain a low-level representation and a high-level representation, respectively. The target-area images of the low-level and high-level representations are then selected through the marker points and summed with a weight ratio of 2:1 to obtain a foreground prior image. Because the foreground image and the background image are highly complementary, a background prior image is obtained through a negation operation. This preprocessing performs an initial separation of foreground and background in the breast lesion ultrasound original image and yields good foreground and background prior images.
The U-net network is based on an encoding and decoding structure, achieves image feature fusion through concatenation, and has a simple and stable network structure. Compared with other convolutional neural network models, U-net is simpler to operate and more convenient to use. The invention uses two independent U-net networks to separately extract the features of the preprocessed foreground prior image and background prior image, whose characteristics are already distinct; this further improves the U-net network's feature extraction on the foreground image and benefits lesion segmentation of the original image.
The invention also designs a foreground and background image feature aggregation guide module. It exploits the high complementarity of the foreground and background image features and the fact that the background image (surrounding tissue) features carry richer texture information than the foreground image (lesion) features, makes full use of the information outside the lesion, assists foreground segmentation through the salient background representation, and guides the network to extract lesion features better, achieving a good segmentation effect.
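For illustration only, the overall flow of steps S1 to S7 described above can be sketched in Python as follows; segment_lesion, low_level_fn, high_level_fn and model are hypothetical placeholders, and the normalisation of the 2:1 weighting and the [0, 1] inversion convention are assumptions of this sketch rather than details fixed by the invention.

# Hypothetical end-to-end sketch of steps S1-S7; names are illustrative only.
def segment_lesion(image, marker_points, low_level_fn, high_level_fn, model):
    # S1/S2: image is the single-channel ultrasound array already read from disk,
    #        marker_points are three (row, col) points chosen inside the lesion.
    # S3: low-level (superpixel) and high-level (multi-scale grouping) saliency maps,
    #     both selected with the same marker points; the callables are supplied by the user.
    y_l = low_level_fn(image, marker_points)
    y_h = high_level_fn(image, marker_points)
    # S4: foreground prior, low- and high-level maps weighted 2:1 (normalised here).
    y_f = (2.0 * y_l + 1.0 * y_h) / 3.0
    # S5: background prior by inversion, assuming the maps are scaled to [0, 1].
    y_b = 1.0 - y_f
    # S6/S7: the two-branch prior-guided network produces the final segmentation.
    return model(image, y_f, y_b)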
Drawings
The invention is further illustrated by means of the attached drawings, the content of which is not in any way limiting.
FIG. 1 is a flow chart of an ultrasound breast lesion deep learning segmentation method based on prior guidance according to the present invention;
FIG. 2 is a diagram of the U-net network framework of the present invention;
FIG. 3 is a workflow diagram of the feature aggregation bootstrap module of the present invention;
FIG. 4 is a partial foreground and background saliency map processed by step S3 of the present invention;
FIG. 5 is a graph showing the comparison between the partial segmentation results processed by the prior-guided ultrasound breast lesion deep learning segmentation method of the present invention and the breast lesion segmentation effects of the labeling method and the U-Net method.
Detailed Description
The invention is further illustrated by reference to the following examples.
Example 1.
A breast lesion deep learning segmentation method based on prior guidance is disclosed, and fig. 1 shows a specific process of the method, which comprises the following steps:
S1, reading the breast lesion ultrasound original image I. The image data is acquired by dedicated ultrasound imaging equipment and is a single-channel two-dimensional image.
S2, randomly selecting at least three marker points of the target lesion area on the breast lesion ultrasound original image I. The number of marker points is not strictly limited; in general, the more marker points there are, the better the enclosed region supports accurate segmentation of the lesion. However, increasing the number of marker points also increases computational complexity and prolongs computation time. Three marker points are the minimum number that defines an area, and in practice selecting three marker points gives satisfactory results at little time cost. The marker points should be selected by an experienced medical image analysis professional, ensuring that the region they enclose on the breast lesion ultrasound original image I is a target region containing lesion information.
S3, processing the marked breast lesion ultrasound original image I from step S2 with the linear spectral clustering superpixel method to obtain the low-level representation y_l. The specific process is as follows. Because breast lesion sizes vary greatly, three different numbers of superpixel blocks, n_1 = 8, n_2 = 15 and n_3 = 50, are set so that the saliency-map generation adapts to the lesion size. The breast lesion ultrasound original image I is processed to obtain superpixel images f(I, n_i) at three different scales; the three marker points {p_1, p_2, p_3} are then used to select the target area in each of them, and the three target-area images are summed with weights 1:1:1 to obtain the low-level representation y_l according to formula (1); the effect of the processing is shown in FIG. 4.

y_l = Σ_{i=1}^{3} f(I, n_i) ☉ {p_1, p_2, p_3}    (1)

where ☉ denotes the operation of selecting the target area with the marker points; i denotes the i-th processing with n_i superpixel blocks, the three values being n_1 = 8, n_2 = 15, n_3 = 50, i = 1, 2, 3; and p_j denotes the three coordinate points p_1, p_2, p_3 selected in step S2, j = 1, 2, 3.
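As an illustration of how the three superpixel maps f(I, n_i) and the marker-point selection might be computed in practice, the following Python sketch uses the LSC superpixel implementation from opencv-contrib (cv2.ximgproc); the conversion from a desired block count n_i to a region size, the number of iterations, and the simple averaging of the three binary masks are assumptions of this sketch, not details fixed by the invention.

# Sketch: low-level representation y_l from LSC superpixels at three scales (formula (1)).
# Assumes opencv-contrib-python is installed so that cv2.ximgproc is available.
import cv2
import numpy as np

def lsc_low_level(image, marker_points, block_counts=(8, 15, 50)):
    # image: uint8 single-channel ultrasound image; marker_points: three (row, col) tuples.
    h, w = image.shape[:2]
    bgr = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    maps = []
    for n in block_counts:
        # Derive a region size that yields roughly n superpixels (assumed conversion).
        region_size = max(8, int(np.sqrt(h * w / float(n))))
        lsc = cv2.ximgproc.createSuperpixelLSC(bgr, region_size=region_size)
        lsc.iterate(10)
        labels = lsc.getLabels()
        # Marker-point selection (the "☉" operation): keep superpixels hit by a marker.
        picked = {int(labels[r, c]) for r, c in marker_points}
        maps.append(np.isin(labels, list(picked)).astype(np.float32))
    # Weighted summation with weights 1:1:1, normalised to [0, 1].
    return sum(maps) / len(maps)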
A prior image y_l obtained with the above method alone often fails to cover the lesion area well and loses high-level information such as the underlying texture of the image. The multi-scale combined grouping method is used here to make up for this deficiency: the target area containing the marker points is selected from the object proposal maps generated by the multi-scale combined grouping method to obtain a high-level representation y_h that contains rich texture information. That is, the marked breast lesion ultrasound original image from step S2 is processed with the multi-scale combined grouping method to obtain the high-level representation y_h. The specific process is as follows. The multi-scale combined grouping method first generates a set of object proposal maps {g(I, m_i)}, i = 1, 2, ..., T, where T denotes the number of scales of the object proposal maps. After being restored to the same scale, the proposal maps are integrated into a complete multi-scale clustering image; the target area A is then selected and fused using the same three marker points, and the high-level representation y_h is obtained according to formula (2); the effect of the processing is shown in FIG. 4.

y_h = Σ_{i=1}^{T} g(I, m_i) ☉ {p_1, p_2, p_3}    (2)

where m_i denotes the i-th superpixel block of the object proposal map obtained by the multi-scale combined grouping processing, i = 1, 2, 3, ..., T; and p_j denotes the three coordinate points p_1, p_2, p_3 selected in step S2, j = 1, 2, 3.
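Since implementations of the multi-scale combined grouping step vary, the sketch below simply assumes that the object proposal maps g(I, m_i) are already available as integer label maps resized to the original image size; it only illustrates the marker-point selection and fusion behind formula (2), with the averaging over scales being an assumption.

# Sketch: high-level representation y_h from precomputed object proposal label maps.
import numpy as np

def fuse_proposals(label_maps, marker_points):
    # label_maps: list of T integer label maps g(I, m_i), all at the image resolution;
    # marker_points: the same three (row, col) points used for the low-level map.
    fused = np.zeros(label_maps[0].shape, dtype=np.float32)
    for labels in label_maps:
        picked = {int(labels[r, c]) for r, c in marker_points}   # marker-point selection
        fused += np.isin(labels, list(picked)).astype(np.float32)
    # Average over the T scales so that y_h stays in [0, 1].
    return fused / len(label_maps)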
S4, the target-area images of the low-level representation y_l and the high-level representation y_h from step S3 are selected using the marker points obtained in step S2. The selected high-level representation y_h and low-level representation y_l are weighted and summed with a weight ratio of 1:2 according to formula (3) to obtain the foreground prior image y_f; the effect of the processing is shown in FIG. 4. The foreground prior image y_f is a foreground image containing prior information of the breast lesion.

y_f = ω_1 · y_l + ω_2 · y_h    (3)

where ω_1 and ω_2 denote the weights of the low-level representation y_l and the high-level representation y_h, respectively, in the foreground prior image y_f.
S5, the foreground prior image y_f obtained in step S4 is inverted according to formula (4) to obtain the background prior image y_b; the effect of the processing is shown in FIG. 4.

y_b = ¬ y_f    (4)

where ¬ denotes the inversion operation.
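A minimal numeric sketch of formulas (3) and (4) follows, assuming y_l and y_h are saliency maps scaled to [0, 1], that the 1:2 weighting is normalised, and that the inversion of formula (4) is taken as the complement in [0, 1]; these conventions are assumptions of the sketch.

# Sketch: foreground prior (formula (3)) and background prior (formula (4)).
import numpy as np

def make_priors(y_l, y_h, w_l=2.0, w_h=1.0):
    # y_l, y_h: low-/high-level saliency maps in [0, 1] with identical shapes.
    y_f = (w_l * y_l + w_h * y_h) / (w_l + w_h)   # weighted sum, low:high = 2:1
    y_b = 1.0 - y_f                               # inversion taken as the complement
    return y_f.astype(np.float32), y_b.astype(np.float32)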
S6, the foreground prior image y_f obtained in step S4 and the breast lesion ultrasound original image I from step S1 are input together into the first learning network branch to extract the foreground prior image features.
The background prior image y_b obtained in step S5 and the breast lesion ultrasound original image I from step S1 are input together into the second learning network branch to extract the background prior image features.
The first learning network branch is a U-net network, the second learning network branch is a U-net network, and the two branches have the same network structure. The U-Net network framework consists of two parts: an encoding part that extracts high-dimensional image features, and a decoding part that restores the image resolution to generate the segmentation result. Each part consists of four convolution units, and each convolution unit contains three two-dimensional convolution operations with identical convolution kernels, each followed by a normalization and an activation operation. After the three convolution operations, a pooling (dimensionality reduction) or up-sampling (resolution restoration) operation is performed, forming a complete network framework for feature extraction and resolution restoration.
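Read concretely, one such convolution unit might look like the PyTorch sketch below; the 3x3 kernel size, the use of batch normalisation and ReLU, and the channel widths are assumptions made for illustration, since the exact parameter configuration is given only in Table 1 of the original publication.

# Sketch of one U-Net convolution unit: three convolutions, each followed by
# normalisation and activation (kernel size and layer choices are assumed).
import torch.nn as nn

class ConvUnit(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        layers = []
        ch = in_ch
        for _ in range(3):                      # three convolutions with identical kernels
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                       nn.BatchNorm2d(out_ch),  # normalisation after each convolution
                       nn.ReLU(inplace=True)]   # activation after each convolution
            ch = out_ch
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# In the encoder a pooling layer (e.g. nn.MaxPool2d(2)) follows each unit; in the
# decoder an up-sampling layer (e.g. nn.Upsample(scale_factor=2)) restores resolution.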
The basic network framework for foreground feature extraction and background feature extraction is U-Net, and the parameter configuration of a U-Net network model is shown in Table 1. The two U-Net network framework structures of the present invention are shown in FIG. 2.
TABLE 1 U-Net model parameter configuration table for extracting foreground and background features
S7, in the feature aggregation guide module for the foreground prior image and the background prior image, the complementarity of the foreground and background prior image features is used so that the background prior image features guide the network to extract foreground features, and the final breast lesion segmentation result is output. As shown in FIG. 3, the specific process of extracting foreground features through background prior image feature guidance in the feature aggregation guide module is as follows:
A1, the feature aggregation guide module first receives the foreground feature map and the background feature map from the corresponding convolution units of the foreground and background feature extraction network branches; the received feature maps are each strengthened by a convolution operation with a 1x1 kernel, and the foreground and background feature maps are then concatenated along the channel dimension.
A2, after the output map of the previous module is up-sampled by 2x2, it is passed through a 1x1 convolution with a dilation rate of 2 and summed pixel-wise with the strengthened foreground feature map and background feature map from step A1; the resulting foreground and background feature maps are input into different branches to further strengthen the foreground and background features.
A3, the background feature map obtained in step A2 is passed through three successive 1x1 and 3x3 convolution operations; one data path is taken as the background feature output and serves as the background feature input sample of the next feature aggregation guide module.
Before the other data path is fused with the foreground feature map obtained in step A2, the background feature map B_j is subjected once to the negation operation Θ according to formula (5):

Θ(B_j^i) = max{B_j^i} - B_j^i,  i = 1, 2, 3, ..., C    (5)

where j denotes the j-th feature aggregation guide module, i denotes the i-th channel of the feature block, i = 1, 2, 3, ..., C, and max{·} denotes taking the maximum value of the i-th channel feature map.
The negation process operates channel by channel over the C channels of the background feature map B_j: each channel feature map B_j^i is subtracted pixel-wise from the maximum pixel value of that channel, and after this operation has been completed for all C feature maps the inverted feature map ~B_j is obtained.
A4, 1x1 and 3x3 convolution operations are performed on the foreground feature map obtained in step A2.
A5, the inverted background feature map ~B_j obtained in step A3 is concatenated with the foreground feature map obtained in step A4 along the channel dimension, then blended into the foreground feature extraction branch through a 3x3 convolution operation and used as the foreground feature input sample of the next feature aggregation guide module. The remaining two connection operations between adjacent outputs in FIG. 3 serve to prevent the features from vanishing after multiple layers of non-linear feature mapping.
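Steps A1 to A5 could be realised, for example, by the PyTorch module sketched below. The channel widths, the bilinear up-sampling, the placement of the activations and the assumption that the previous module's output is the channel-concatenated pair of its foreground and background outputs are all choices made for illustration; the patent fixes only the operations named in steps A1-A5 and formula (5).

# Illustrative sketch of one feature aggregation guide module (steps A1-A5).
import torch
import torch.nn as nn
import torch.nn.functional as F

def negate_channels(feat):
    # Formula (5): per-channel spatial maximum minus the feature map.
    return feat.amax(dim=(-2, -1), keepdim=True) - feat

class FeatureAggregationGuide(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # A1: 1x1 convolutions that strengthen the incoming foreground/background maps.
        self.fg_in = nn.Conv2d(ch, ch, kernel_size=1)
        self.bg_in = nn.Conv2d(ch, ch, kernel_size=1)
        # A2: 1x1 convolution with dilation rate 2 applied to the up-sampled previous output
        #     (dilation on a 1x1 kernel leaves its receptive field unchanged; it is kept
        #     here only because the step description names it).
        self.prev = nn.Conv2d(2 * ch, 2 * ch, kernel_size=1, dilation=2)
        # A3: three successive 1x1 + 3x3 convolution pairs on the background branch.
        self.bg_branch = nn.Sequential(*[nn.Sequential(
            nn.Conv2d(ch, ch, 1), nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(3)])
        # A4: 1x1 + 3x3 convolutions on the foreground branch.
        self.fg_branch = nn.Sequential(
            nn.Conv2d(ch, ch, 1), nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        # A5: 3x3 convolution that blends the concatenated maps back into the foreground branch.
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, fg_feat, bg_feat, prev_out):
        # fg_feat, bg_feat: (N, ch, H, W) maps from the matching convolution units;
        # prev_out: (N, 2*ch, H/2, W/2), assumed to be the previous module's two outputs
        # concatenated along the channel dimension (an assumption of this sketch).
        ch = fg_feat.shape[1]
        # A1: strengthen and concatenate along the channel dimension.
        cat = torch.cat([self.fg_in(fg_feat), self.bg_in(bg_feat)], dim=1)
        # A2: 2x2 up-sample the previous output, apply the dilated 1x1 convolution,
        #     sum pixel-wise with the concatenated maps, then split into two branches.
        prev = F.interpolate(prev_out, scale_factor=2, mode='bilinear', align_corners=False)
        merged = self.prev(prev) + cat
        fg_x, bg_x = merged[:, :ch], merged[:, ch:]
        # A3: background branch; its output feeds the next module as the background input.
        bg_out = self.bg_branch(bg_x)
        # A4: refine the foreground branch.
        fg_x = self.fg_branch(fg_x)
        # A5: negate the background output, concatenate with the foreground branch along
        #     the channel dimension, and blend with a 3x3 convolution.
        fg_out = self.fuse(torch.cat([fg_x, negate_channels(bg_out)], dim=1))
        return fg_out, bg_out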
The invention sets the feature aggregation guide module to connect two independent U-net networks, fully utilizes the extracted background features, enhances the capability of the U-net networks for extracting the foreground features, and guides the U-net networks to better learn the foreground features.
In the method, at least three marker points are placed at random within the lesion area on the breast lesion ultrasound original image, and the original image is processed with a linear spectral clustering superpixel method and a multi-scale combined grouping method to obtain a low-level representation and a high-level representation, respectively. The target-area images of the low-level and high-level representations are then selected through the marker points and summed with a weight ratio of 2:1 to obtain a foreground prior image. Because the foreground image and the background image are highly complementary, a background prior image is obtained through a negation operation. This preprocessing performs an initial separation of foreground and background in the breast lesion ultrasound original image and yields good foreground and background prior images.
The U-net network is based on an encoding and decoding structure, achieves image feature fusion through concatenation, and has a simple and stable network structure. Compared with other convolutional neural network models, U-net is simpler to operate and more convenient to use. The invention uses two independent U-net networks to separately extract the features of the preprocessed foreground prior image and background prior image, whose characteristics are already distinct; this further improves the U-net network's feature extraction on the foreground image and benefits lesion segmentation of the original image.
The invention also designs a foreground and background image feature aggregation guide module. It exploits the high complementarity of the foreground and background image features and the fact that the background image (surrounding tissue) features carry richer texture information than the foreground image (lesion) features, makes full use of the information outside the lesion, assists foreground segmentation through the salient background representation, and guides the network to extract lesion features better, achieving a good segmentation effect.
In the method, after an image containing breast tumor prior information is obtained through preprocessing, the features of the lesion area are extracted with the U-Net network framework. Guided by the foreground and background prior information, the method improves the segmentation accuracy of ultrasound breast lesions. The lesion image obtained by the method is rich in texture detail, has clear edges, and suffers little image loss.
Example 2.
Lesion segmentation was performed on three data sets using the prior-guided breast lesion deep learning segmentation method of Example 1, the labeling method, and the U-net method; the results are shown in FIG. 5. Compared with the labeling method, the lesion image obtained by the present segmentation is more accurate in extent, has clearer edges, and contains richer texture information. Compared with the conventional approach of segmenting the image with a single U-net, the segmented lesion region is more coherent and less image data is lost. By connecting two U-net networks through the feature aggregation guide module, the invention extracts image features more effectively than a single U-net network.
According to the method, after the image containing the breast tumor prior information is obtained through preprocessing, the U-Net network framework is utilized to extract the characteristics of the focus area for accurate segmentation. The method improves the segmentation precision of the ultrasonic breast lesion by using the guidance of the prior information of the foreground and the background.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the protection scope of the present invention, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (9)

1. A breast lesion deep learning segmentation method based on prior guidance is characterized by comprising the following steps:
S1, reading the breast lesion ultrasound original image;
S2, randomly selecting at least three marker points of the target lesion area on the breast lesion ultrasound original image;
S3, processing the marked breast lesion ultrasound original image from step S2 with a linear spectral clustering superpixel method to obtain a low-level representation;
processing the marked breast lesion ultrasound original image from step S2 with a multi-scale combined grouping method to obtain a high-level representation;
S4, selecting the target-area images of the low-level representation and the high-level representation from step S3 using the marker points obtained in step S2, and performing a weighted summation of the two selected target-area images to obtain a foreground prior image;
S5, inverting the foreground prior image obtained in step S4 to obtain a background prior image;
S6, inputting the foreground prior image obtained in step S4 together with the breast lesion ultrasound original image from step S1 into a first learning network branch, and extracting the foreground prior image features;
inputting the background prior image obtained in step S5 together with the breast lesion ultrasound original image from step S1 into a second learning network branch, and extracting the background prior image features;
S7, in a feature aggregation guide module for the foreground prior image and the background prior image, using the complementarity of the foreground and background prior image features so that the background prior image features guide the network to extract foreground features, and outputting the final breast lesion segmentation result;
in step S7, in the foreground prior image and background prior image feature aggregation guide module, the specific process of extracting foreground features through the background prior image feature guide network is as follows:
A1, the feature aggregation guide module first receives the foreground feature maps and the background feature maps from the corresponding convolution units of the foreground and background feature extraction network branches; the received feature maps are each strengthened by a convolution operation with a 1x1 kernel, and the foreground and background feature maps are then concatenated along the channel dimension;
A2, after the output map of the previous module is up-sampled by 2x2, it is passed through a 1x1 convolution with a dilation rate of 2 and summed pixel-wise with the strengthened foreground feature map and background feature map from step A1; the resulting foreground and background feature maps are input into different branches to further strengthen the foreground and background features;
A3, the background feature map obtained in step A2 is passed through three successive 1x1 and 3x3 convolution operations; one data path is taken as the background feature output and serves as the background feature input sample of the next feature aggregation guide module;
before the other data path is fused with the foreground feature map obtained in step A2, the background feature map B_j is subjected once to the negation operation Θ according to formula (5):

Θ(B_j^i) = max{B_j^i} - B_j^i,  i = 1, 2, 3, ..., C    (5)

where j denotes the j-th feature aggregation guide module, i denotes the i-th channel of the feature block, i = 1, 2, 3, ..., C, and max{·} denotes taking the maximum value of the i-th channel feature map;
A4, performing 1x1 and 3x3 convolution operations on the foreground feature map obtained in step A2;
A5, concatenating the inverted background feature map ~B_j obtained in step A3 with the foreground feature map obtained in step A4 along the channel dimension, blending the result into the foreground feature extraction branch through a 3x3 convolution operation, and using it as the foreground feature input sample of the next feature aggregation guide module.
2. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 1, wherein in step S1, the breast lesion ultrasound original image is a single-channel two-dimensional image.
3. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 1, wherein in step S2, the region enclosed by the at least three marker points on the breast lesion ultrasound original image is a target region containing lesion information.
4. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 1, wherein in step S3, three different numbers of superpixel blocks, n_1 = 8, n_2 = 15 and n_3 = 50, are set; the breast lesion ultrasound original image I is processed to obtain superpixel images f(I, n_i) at three different scales; the three marker points {p_1, p_2, p_3} are then used to select the target areas respectively, and the three target-area images are summed with weights 1:1:1 to obtain the low-level representation y_l according to formula (1):

y_l = Σ_{i=1}^{3} f(I, n_i) ☉ {p_1, p_2, p_3}    (1)

where ☉ denotes the operation of selecting the target area with the marker points; i denotes the i-th processing with n_i superpixel blocks, the three values being n_1 = 8, n_2 = 15, n_3 = 50, i = 1, 2, 3; and p_j denotes the three coordinate points p_1, p_2, p_3 selected in step S2, j = 1, 2, 3.
5. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 4, wherein in step S3, the multi-scale combined grouping method first generates a set of object proposal maps {g(I, m_i)}, i = 1, 2, ..., T, where T denotes the number of scales of the object proposal maps; after being restored to the same scale, the proposal maps are integrated into a complete multi-scale clustering image; the target area A is then selected and fused using the same three marker points, and the high-level representation y_h is obtained according to formula (2):

y_h = Σ_{i=1}^{T} g(I, m_i) ☉ {p_1, p_2, p_3}    (2)

where m_i denotes the i-th superpixel block of the object proposal map obtained by the multi-scale combined grouping processing, i = 1, 2, 3, ..., T; and p_j denotes the three coordinate points p_1, p_2, p_3 selected in step S2, j = 1, 2, 3.
6. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 1, wherein in step S4, the foreground prior image is a foreground image containing prior information of the breast lesion.
7. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 6, wherein in step S4, the high-level representation y_h and the low-level representation y_l are weighted and summed with a weight ratio of 1:2 according to formula (3) to obtain the foreground prior image y_f:

y_f = ω_1 · y_l + ω_2 · y_h    (3)

where ω_1 and ω_2 denote the weights of the low-level representation y_l and the high-level representation y_h, respectively, in the foreground prior image y_f.
8. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 1, wherein in step S5, the foreground prior image y_f is inverted according to formula (4) to obtain the background prior image y_b:

y_b = ¬ y_f    (4)

where ¬ denotes the inversion operation.
9. The breast lesion deep learning segmentation method based on prior guidance as claimed in claim 1, wherein in step S6, the first learning network branch is a U-net network, the second learning network branch is a U-net network, and the network structures of the first learning network branch and the second learning network branch are the same.
CN202110605271.XA 2021-05-31 2021-05-31 Breast lesion deep learning segmentation method based on prior guidance Active CN113379691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110605271.XA CN113379691B (en) 2021-05-31 2021-05-31 Breast lesion deep learning segmentation method based on prior guidance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110605271.XA CN113379691B (en) 2021-05-31 2021-05-31 Breast lesion deep learning segmentation method based on prior guidance

Publications (2)

Publication Number Publication Date
CN113379691A CN113379691A (en) 2021-09-10
CN113379691B true CN113379691B (en) 2022-06-24

Family

ID=77575291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110605271.XA Active CN113379691B (en) 2021-05-31 2021-05-31 Breast lesion deep learning segmentation method based on prior guidance

Country Status (1)

Country Link
CN (1) CN113379691B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332572B (en) * 2021-12-15 2024-03-26 南方医科大学 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3551908B2 (en) * 1999-09-24 2004-08-11 日本電信電話株式会社 Method and apparatus for separating background sprite and foreground object
CN110163188B (en) * 2019-06-10 2023-08-08 腾讯科技(深圳)有限公司 Video processing and method, device and equipment for embedding target object in video
CN111369582B (en) * 2020-03-06 2023-04-07 腾讯科技(深圳)有限公司 Image segmentation method, background replacement method, device, equipment and storage medium
CN111931787A (en) * 2020-07-22 2020-11-13 杭州电子科技大学 RGBD significance detection method based on feature polymerization

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203430A (en) * 2016-07-07 2016-12-07 北京航空航天大学 A kind of significance object detecting method based on foreground focused degree and background priori
CN110246141A (en) * 2019-06-13 2019-09-17 大连海事大学 It is a kind of based on joint angle point pond vehicles in complex traffic scene under vehicle image partition method
CN111815582A (en) * 2020-06-28 2020-10-23 江苏科技大学 Two-dimensional code area detection method for improving background prior and foreground prior
CN112785603A (en) * 2021-01-15 2021-05-11 沈阳建筑大学 Brain tissue segmentation method based on Unet and superpixel

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CF2-Net: Coarse-to-Fine Fusion Convolutional Network for; Zhenyuan Ning, Yu Zhang et al.; https://arxiv.org/abs/2003.10144; 2020-12-31; pp. 2-6 *
Interactive image segmentation algorithm based on multi-scale superpixels and graph cuts; Ding Chenmei et al.; Computer and Digital Engineering; 2019-12-31; Vol. 47, No. 12; pp. 3161-3163 *
Research on breast ultrasound image segmentation algorithms based on superpixel classification and a multi-scale attention mechanism; Huang Yonghao; China Master's Theses Full-text Database, Medicine & Health Sciences; 2021-02-15, No. 2; pp. 12-71 *

Also Published As

Publication number Publication date
CN113379691A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN109523521B (en) Pulmonary nodule classification and lesion positioning method and system based on multi-slice CT image
CN110111313B (en) Medical image detection method based on deep learning and related equipment
Cai et al. A review of the application of deep learning in medical image classification and segmentation
Tang et al. E 2 Net: An edge enhanced network for accurate liver and tumor segmentation on CT scans
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
Tang et al. High-resolution 3D abdominal segmentation with random patch network fusion
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN109087703B (en) Peritoneal transfer marking method of abdominal cavity CT image based on deep convolutional neural network
CN107451615A (en) Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN
CN112348082B (en) Deep learning model construction method, image processing method and readable storage medium
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
EP4141790A1 (en) Method, device and system for automated segmentation of prostate in medical images for tumor detection
CN114693933A (en) Medical image segmentation device based on generation of confrontation network and multi-scale feature fusion
CN117078692B (en) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN117078930A (en) Medical image segmentation method based on boundary sensing and attention mechanism
Khan et al. PMED-net: Pyramid based multi-scale encoder-decoder network for medical image segmentation
Feng et al. Deep learning for chest radiology: a review
Lama et al. ChimeraNet: U-Net for hair detection in dermoscopic skin lesion images
Tummala et al. Liver tumor segmentation from computed tomography images using multiscale residual dilated encoder‐decoder network
Kousalya et al. Improved the detection and classification of breast cancer using hyper parameter tuning
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
CN113379691B (en) Breast lesion deep learning segmentation method based on prior guidance
Razali et al. Enhancement technique based on the breast density level for mammogram for computer-aided diagnosis
CN117876690A (en) Ultrasonic image multi-tissue segmentation method and system based on heterogeneous UNet
CN114283406A (en) Cell image recognition method, device, equipment, medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant