CN116485791B - Automatic detection method and system for double-view breast tumor lesion area based on absorbance - Google Patents

Automatic detection method and system for double-view breast tumor lesion area based on absorbance

Info

Publication number
CN116485791B
CN116485791B CN202310715680.4A CN202310715680A
Authority
CN
China
Prior art keywords
image
tumor
breast
double
absorbance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310715680.4A
Other languages
Chinese (zh)
Other versions
CN116485791A (en)
Inventor
吴晓琳
杜永兆
陈海信
刘博
傅玉青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University filed Critical Huaqiao University
Priority to CN202310715680.4A priority Critical patent/CN116485791B/en
Publication of CN116485791A publication Critical patent/CN116485791A/en
Application granted granted Critical
Publication of CN116485791B publication Critical patent/CN116485791B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an absorbance-based double-view breast tumor lesion area automatic detection method and system, relating to the field of medical image processing and comprising the following steps: S1, acquiring a breast ultrasound tumor gray-level image dataset, labeling the data and performing image preprocessing; S2, performing an absorbance transformation on the preprocessed images to obtain ultrasound absorbance images; S3, taking the breast ultrasound tumor gray-level image and the corresponding ultrasound absorbance image as a double view and inputting them into a double-view detection model; S4, the double-view detection model performs feature extraction on the two views separately so as to effectively reflect the tumor region of interest in both views. Step S4 comprises: embedding feature maps of the double views at different scales into a DFT unit for feature fusion. The invention combines the breast ultrasound tumor gray-level image with the absorbance image to compensate for the insufficient information of the gray-level image alone, and uses the DFT unit to dynamically learn the binary relation between the gray-level image and the absorbance image and to fuse and interact the two, thereby strengthening their relevance and complementarity.

Description

Automatic detection method and system for double-view breast tumor lesion area based on absorbance
Technical Field
The invention relates to the technical field of medical image processing, in particular to an absorbance-based double-view breast tumor lesion area automatic detection method and system.
Background
Correct interpretation of ultrasound images requires long accumulation of clinical experience, which is time-consuming and subjective; meanwhile, breast ultrasound tumor images suffer from speckle noise, low contrast and similar problems, so misjudgments easily occur when images are read by eye alone. To improve the accuracy of tumor detection, computer-aided diagnosis (Computer Aided Diagnosis, CAD) techniques for breast tumors should be applied to breast ultrasound tumor image analysis, in keeping with the development of the digital age. Detection of the breast tumor lesion area is one of the most important steps in CAD, and realizing efficient and accurate detection of the breast tumor lesion area has important significance and application value.
In recent years, research on deep-learning-based breast CAD has advanced. Zhang et al. introduced multi-scale and multi-resolution extraction of candidate boundaries on the basis of the Faster RCNN network to improve detection of smaller breast tumors, reaching an accuracy of 91.30% (Zhang Z, Zhang X, Lin X, et al. Ultrasonic Diagnosis of Breast Nodules Using Modified Faster R-CNN [J]. Ultrasonic Imaging, 2019, 41(6): 353-367.). Xu Lifang et al. constructed an SE-Res2Net network on the basis of YOLOv3 and designed a novel downsampling module to alleviate the blurred boundaries, heavy noise and low contrast of breast ultrasound tumor images, which make feature extraction difficult and false and missed detections likely, improving performance by 4.56 percentage points over the base network (Xu Lifang, Fu Zhijie, Mo Hongwei. Breast ultrasound tumor recognition based on improved YOLOv3 algorithm [J]. CAAI Transactions on Intelligent Systems, 2021, 16(01): 21-29.). In summary, deep learning has made good progress in the field of breast tumor lesion area detection. However, most studies are limited to the single-view ultrasound imaging mode, in which the gray values of the background and the lesion area differ little: small tumors are easily overlooked, and tissues of similar gray level and overlapping glands are hard to distinguish, making detection of breast ultrasound tumor images inaccurate.
Disclosure of Invention
The invention aims to solve the problem of inaccurate breast ultrasonic tumor image detection in the prior art.
The technical scheme adopted for solving the technical problems is as follows: the method for automatically detecting the lesion area of the double-view breast tumor based on the absorbance comprises the following steps:
s1, acquiring a breast ultrasonic tumor gray image dataset, marking the position of a breast tumor in the dataset, preprocessing the dataset, and generating a preprocessed breast ultrasonic tumor image dataset;
s2, carrying out absorbance transformation on the preprocessed image according to an ultrasonic transmission principle to obtain an ultrasonic absorbance image;
s3, taking the preprocessed breast ultrasonic tumor gray level image and the corresponding ultrasonic absorbance image as double views, and inputting a double-view detection model;
S4, the double-view detection model performs feature extraction on the double views separately through a dual-stream Backbone network, and effectively reflects the tumor regions of interest in the double views through multi-layer convolution;
The step S4 comprises the following steps: embedding the breast ultrasound tumor gray-level image features $F_g^i$ and the absorbance image features $F_a^i$ at different scales into a DFT unit for feature fusion, where $i = 2, 3, 4$; adding the fused feature maps $\hat{F}_g^i$ and $\hat{F}_a^i$ back onto the original feature maps $F_g^i$ and $F_a^i$; and sequentially outputting the feature maps P2, P3 and P4 of the different-scale features, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally, the feature maps P2, P3 and P4 are fused and output as a predicted image.
Preferably, in the step S1, the preprocessing mainly includes removing labeling information around the ultrasound image and enhancing contrast, and the calculation formula of enhancing contrast is as follows:
$$s = C \cdot \log(1 + r)$$
where $r$ is the pixel value of the original image, $s$ is the corresponding pixel value after enhancement, $\log$ denotes the logarithmic function, and the constant $C$ is used to satisfy the gray dynamic range of the transformed image.
The contrast enhancement mainly expands low gray values and compresses high gray values through the logarithmic transformation, making details of the breast ultrasound tumor image easier to discern and improving the accuracy of breast ultrasound tumor image detection.
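A minimal sketch of this log transform in Python (the function name and the specific choice of $C$ are illustrative assumptions; the embodiment only requires that $C$ match the gray dynamic range):

```python
import numpy as np

def log_contrast_enhance(image: np.ndarray) -> np.ndarray:
    """Log-transform contrast enhancement, s = C * log(1 + r):
    expands low gray values and compresses high ones."""
    r = image.astype(np.float64)
    # Choose C so the output spans the full 8-bit gray dynamic range.
    c = 255.0 / np.log(1.0 + max(float(r.max()), 1.0))
    s = c * np.log(1.0 + r)
    return s.astype(np.uint8)
```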
Preferably, the step S2 includes the following steps:
s21, converting the gray value of each pixel in the preprocessed image into an absorbance value, wherein the calculation formula is as follows:
$$A = \log_{10}\!\left(\frac{I_0}{I}\right)$$
where $A$ is the absorbance value, $I$ is the gray value of the pixel, and $I_0$ is the average gray value of the background;
S22, mapping the absorbance values $A$ to the range $[0, 255]$ by a linear transformation, thereby converting the absorbance values into a depth-information image and obtaining the ultrasound absorbance image.
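A sketch of steps S21-S22, assuming the Beer-Lambert-style definition $A = \log_{10}(I_0/I)$ given above and approximating the background mean $I_0$ by the global image mean (how the background is delimited is not specified in the text):

```python
import numpy as np

def to_absorbance_image(gray: np.ndarray) -> np.ndarray:
    """Convert a preprocessed gray image to an ultrasound absorbance image:
    A = log10(I0 / I) per pixel, then a linear map of A onto [0, 255]."""
    i = gray.astype(np.float64) + 1.0                # +1 avoids log10(0) on black pixels
    i0 = i.mean()                                    # stand-in for the background mean I0
    a = np.log10(i0 / i)                             # per-pixel absorbance value
    a = (a - a.min()) / (a.max() - a.min() + 1e-12)  # linear transform to [0, 1]
    return (a * 255.0).astype(np.uint8)              # depth-information image
```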
Preferably, in the step S3, mosaic enhancement and adaptive image scaling are performed on the double views before they are input into the double-view detection model; the mosaic enhancement randomly selects 4 pictures, randomly crops and rotates them, and then splices them into one composite image.
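A sketch of the mosaic enhancement (rotating by multiples of 90 degrees and omitting the remapping of box labels are simplifying assumptions; in the double-view setting, the same crops and rotations would have to be applied to a gray-level image and its absorbance counterpart to keep the views aligned):

```python
import random
import numpy as np

def mosaic4(images, size=640):
    """Randomly crop and rotate 4 images, then splice them into one composite."""
    canvas = np.zeros((size, size), dtype=np.uint8)
    half = size // 2
    corners = [(0, 0), (0, half), (half, 0), (half, half)]
    for img, (y, x) in zip(random.sample(images, 4), corners):
        img = np.rot90(img, k=random.randint(0, 3))       # random rotation
        h, w = img.shape[:2]
        top = random.randint(0, max(h - half, 0))
        left = random.randint(0, max(w - half, 0))
        crop = img[top:top + half, left:left + half]      # random crop
        canvas[y:y + crop.shape[0], x:x + crop.shape[1]] = crop
    return canvas
```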
Preferably, in the step S4, a YOLOv5 backbone feature-extraction network is used to extract features of the breast ultrasound tumor gray-level image and of the absorbance image respectively, so as to effectively reflect the lesion area; the dual-stream Backbone network adopts the CSPDarknet network.
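For concreteness, a minimal sketch of the building blocks named below (reading CBM as Conv + BatchNorm + Mish, and using one simplified cross-stage partial block to stand in for CSP1/CSP2, are assumptions based on the usual CSPDarknet terminology):

```python
import torch
import torch.nn as nn

class CBM(nn.Module):
    """Conv + BatchNorm + Mish, the block assumed behind the 'CBM structure'."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Mish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class CSP(nn.Module):
    """Minimal cross-stage partial block: split the channels, process one
    branch through stacked CBMs, and concatenate with the shortcut branch."""
    def __init__(self, c, n=1):
        super().__init__()
        self.part1 = CBM(c, c // 2, k=1)
        self.part2 = nn.Sequential(CBM(c, c // 2, k=1),
                                   *[CBM(c // 2, c // 2) for _ in range(n)])
        self.merge = CBM(c, c, k=1)

    def forward(self, x):
        return self.merge(torch.cat([self.part1(x), self.part2(x)], dim=1))
```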
Preferably, the step S4 includes the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and using a CSP1 structure to strengthen feature extraction through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area with CBM and CSP2 structures in each stream to obtain the feature maps $F_g^2$ and $F_a^2$ and inputting them into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the breast tumor lesion area, outputting the first fused feature maps $\hat{F}_g^2$ and $\hat{F}_a^2$; adding $\hat{F}_g^2$ and $\hat{F}_a^2$ onto the original feature maps $F_g^2$ and $F_a^2$, and outputting P2;
S43, using a CBM structure and a CSP2 structure to further extract feature information of the breast tumor lesion area from the feature maps, obtaining $F_g^3$ and $F_a^3$ and inputting them into a DFT unit, which further suppresses the influence of noise and enhances detection of the lesion area, outputting the second fused feature maps $\hat{F}_g^3$ and $\hat{F}_a^3$; adding $\hat{F}_g^3$ and $\hat{F}_a^3$ onto the original feature maps $F_g^3$ and $F_a^3$, and outputting P3;
S44, using a CBM structure to further extract the twice-fused feature information of the breast tumor lesion area; using an SPP structure to extract the twice-fused feature information at different sizes; using a CBM structure to extract the lesion-area features and obtain the feature maps $F_g^4$ and $F_a^4$; realizing, in the DFT unit, the enhancement and compensation of the lesion-area features by the double views, and outputting the third fused feature maps $\hat{F}_g^4$ and $\hat{F}_a^4$; adding $\hat{F}_g^4$ and $\hat{F}_a^4$ onto the original feature maps $F_g^4$ and $F_a^4$, and outputting P4;
S45, fusing the feature maps P2, P3 and P4 and outputting the result as the prediction (the overall dual-stream flow of steps S41-S45 is sketched below).
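A structural sketch of this dual-stream flow, under stated assumptions: `CBM`/`CSP` stand for the blocks sketched above, `DFTUnit` is the fusion unit detailed later, and the way each $P_i$ is assembled from the two recalibrated streams (channel concatenation here) is not fixed by the text:

```python
import torch
import torch.nn as nn

class DualStreamBackbone(nn.Module):
    """Dual-stream Backbone with DFT fusion at scales i = 2, 3, 4."""
    def __init__(self, stages_g, stages_a, dft_units):
        super().__init__()
        self.stages_g = nn.ModuleList(stages_g)  # gray-level stream, one stage per scale
        self.stages_a = nn.ModuleList(stages_a)  # absorbance stream, one stage per scale
        self.dft = nn.ModuleList(dft_units)      # one DFT unit per scale i = 2, 3, 4

    def forward(self, x_g, x_a):
        outs = []
        for stage_g, stage_a, dft in zip(self.stages_g, self.stages_a, self.dft):
            x_g, x_a = stage_g(x_g), stage_a(x_a)      # F_g^i and F_a^i
            f_g, f_a = dft(x_g, x_a)                   # fused maps \hat{F}_g^i, \hat{F}_a^i
            x_g, x_a = x_g + f_g, x_a + f_a            # add fused maps back (step S4)
            outs.append(torch.cat([x_g, x_a], dim=1))  # P_i (assembly form assumed)
        return outs                                    # [P2, P3, P4]
```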
Preferably, in the step S4, the process of embedding the different-scale feature maps $F_g^i$ and $F_a^i$ of the breast ultrasound tumor gray-level image and the absorbance image into the DFT unit for fusion comprises: inputting the different-scale feature maps $F_g^i$ and $F_a^i$ of the double views into the DFT unit; mutually fusing and interacting the information between the ultrasound absorbance image and the gray-level image based on the reflection mechanism and the transmission mechanism; adding the fused feature maps back onto the original feature maps; and sequentially outputting P2, P3 and P4. Using $\mathcal{F}$ to denote the fusion, this is expressed as:
$$[\hat{F}_g^i, \hat{F}_a^i] = \mathcal{F}(F_g^i, F_a^i)$$
where $F_g^i$ is the breast ultrasound tumor gray-level image feature map, $F_a^i$ is the absorbance image feature map, $\mathcal{F}$ is the fusion function, and $\hat{F}_g^i$ and $\hat{F}_a^i$ are the feature maps output after the gray-level image feature map $F_g^i$ and the absorbance image feature map $F_a^i$ are fused.
preferably, the feature fusion in the DFT unit includes the following steps:
S51, inputting the $i$-th layer feature maps $F_g^i, F_a^i \in \mathbb{R}^{C \times H \times W}$ of the breast ultrasound tumor gray-level image features and of the absorbance image features respectively, where $i = 2, 3, 4$, $C$ is the number of channels, $H$ the height and $W$ the width; flattening each feature map and rearranging the matrix order to obtain the corresponding sequences $S_g^i, S_a^i \in \mathbb{R}^{HW \times C}$;
S52, concatenating the sequences and adding a learnable position embedding to obtain the input sequence $I = [S_g^i; S_a^i] + E_{pos} \in \mathbb{R}^{2HW \times C}$; the position embedding $E_{pos}$ is a trainable parameter of dimension $2HW \times C$, so that during training the model can distinguish the spatial information between the different tokens;
S53, applying a Layer Norm normalization operation to keep the distribution of the data features stable;
S54, encapsulating multiple complex relations from different representation subspaces at different positions through a multi-head attention mechanism;
S55, applying a Layer Norm normalization operation again to keep the distribution of the data features stable;
S56, computing the output sequence O with a nonlinear transformation, where O has the same shape as the input sequence I:
$$O = \mathrm{MLP}(\mathrm{LN}(Z')) + Z', \quad O \in \mathbb{R}^{2HW \times C}$$
where $\mathrm{MLP}(\cdot)$ denotes the nonlinear transformation and $Z' = \mathrm{MHA}(\mathrm{LN}(I)) + I$ denotes the result of adding the input sequence $I$ to the multi-head attention output that encodes the complex relations between different positions;
S57, converting the output sequence O into the recalibration results $\hat{F}_g^i$ and $\hat{F}_a^i$ by the inverse of the operation in step S51, and adding them as supplementary information onto the original feature maps $F_g^i$ and $F_a^i$ as the input of layer $i + 1$ (steps S51-S57 are sketched below).
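A PyTorch sketch of the DFT unit, assuming a standard pre-norm Transformer block over the concatenated token sequences (the fixed spatial size `hw` per scale, the number of heads and the MLP expansion factor are assumptions not fixed by the text):

```python
import torch
import torch.nn as nn

class DFTUnit(nn.Module):
    """DFT fusion unit following steps S51-S57."""
    def __init__(self, channels: int, hw: int, heads: int = 8):
        super().__init__()
        # S52: learnable position embedding E_pos of dimension 2HW x C
        self.pos = nn.Parameter(torch.zeros(1, 2 * hw, channels))
        self.norm1 = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(nn.Linear(channels, 4 * channels), nn.GELU(),
                                 nn.Linear(4 * channels, channels))

    def forward(self, f_g, f_a):
        b, c, h, w = f_g.shape
        # S51: flatten each C x H x W map into an HW x C token sequence
        seq_g = f_g.flatten(2).transpose(1, 2)
        seq_a = f_a.flatten(2).transpose(1, 2)
        # S52: concatenate both sequences and add the position embedding
        i_seq = torch.cat([seq_g, seq_a], dim=1) + self.pos
        # S53-S54: Layer Norm, then multi-head self-attention, plus residual
        x = self.norm1(i_seq)
        z, _ = self.attn(x, x, x)
        z = z + i_seq
        # S55-S56: Layer Norm, then the nonlinear transform; O keeps I's shape
        o = self.mlp(self.norm2(z)) + z
        # S57: split and reshape back into two C x H x W recalibration maps
        o_g, o_a = o.split(h * w, dim=1)
        f_g_hat = o_g.transpose(1, 2).reshape(b, c, h, w)
        f_a_hat = o_a.transpose(1, 2).reshape(b, c, h, w)
        return f_g_hat, f_a_hat
```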
Preferably, in the step S54, the input sequence I is first projected onto three weight matrices to compute a set of queries Q, keys K and values V:
$$Q = I W^Q, \quad K = I W^K, \quad V = I W^V$$
where $W^Q$, $W^K$ and $W^V \in \mathbb{R}^{C \times d}$ are weight matrices and $Q, K, V \in \mathbb{R}^{2HW \times d}$;
next, the self-attention layer computes the attention-weighted output Z using the scaled dot product between Q and K:
$$Z = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V$$
where $\sqrt{d}$ is a scale factor that prevents the normalized exponential (softmax) function from converging to regions of extremely small gradient as the dot products grow;
finally, a multi-head attention mechanism $\mathrm{MultiHead}(\cdot)$ computes multiple complex relations expressing different positions:
$$\mathrm{MultiHead}(I) = \mathrm{Concat}(Z_1, \ldots, Z_h) W^O, \quad Z_j = \mathrm{Attention}(I W_j^Q, I W_j^K, I W_j^V)$$
where $h$ denotes the number of heads, $Z_j$ denotes the attention output of the $j$-th head, the $\mathrm{Concat}$ function cascades the features, $W^O$ denotes the output projection matrix, and $W_j^Q$, $W_j^K$ and $W_j^V$ denote the weight matrices of the query Q, key K and value V for the $j$-th head.
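A direct transcription of these formulas (slicing the stacked projection matrices into per-head blocks is an implementation assumption):

```python
import torch

def sdp_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V; the 1/sqrt(d) scale
    keeps the softmax away from its small-gradient regime for large dot products."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def multi_head_attention(i_seq, w_q, w_k, w_v, w_o, h):
    """MultiHead(I) = Concat(Z_1, ..., Z_h) W^O over an input of shape (2HW, C)."""
    _, c = i_seq.shape
    q, k, v = i_seq @ w_q, i_seq @ w_k, i_seq @ w_v   # project I onto W^Q, W^K, W^V
    d = c // h                                        # per-head dimension
    heads = [sdp_attention(q[:, j * d:(j + 1) * d],
                           k[:, j * d:(j + 1) * d],
                           v[:, j * d:(j + 1) * d]) for j in range(h)]
    return torch.cat(heads, dim=-1) @ w_o             # cascade the heads, then project
```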
The invention also provides an absorbance-based dual-view breast tumor lesion area automatic detection system, which comprises:
the image acquisition module is used for acquiring a breast ultrasonic tumor gray image data set, marking the position of the breast tumor in the data set, preprocessing the data set and generating a preprocessed breast ultrasonic tumor image data set;
the absorbance conversion module is used for carrying out absorbance conversion on the preprocessed image according to the ultrasonic transmission principle to obtain an ultrasonic absorbance image;
the double-view module takes the preprocessed breast ultrasonic tumor gray level image and the preprocessed breast ultrasonic tumor absorbance image as double views, and inputs the images into the detection module;
the double-view detection model is used for performing feature extraction on the double views separately through a dual-stream Backbone network, and for effectively reflecting the tumor regions of interest in the double views through multi-layer convolution;
wherein the detection module comprises a DFT unit; the breast ultrasound tumor gray-level image features $F_g^i$ and the absorbance image features $F_a^i$ at different scales are embedded into the DFT unit for feature fusion, where $i = 2, 3, 4$; the fused feature maps $\hat{F}_g^i$ and $\hat{F}_a^i$ are added back onto the original feature maps $F_g^i$ and $F_a^i$; the feature maps P2, P3 and P4 of the different-scale features are output in sequence, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally, the feature maps P2, P3 and P4 are fused and output as a predicted image.
The invention has the following beneficial effects:
the reflection mechanism and the transmission mechanism based on the gray level image and the absorbance image are utilized, and the two are combined to play a good complementary role on the characteristic information of the lesion area, so that the ultrasonic absorbance image is incorporated into the ultrasonic gray level image to serve as the basis of a network, the defect of insufficient information of the single-view ultrasonic gray level image is overcome, and the enhancement and the supplementation of the characteristics of the lesion area by the double views are realized.
During feature extraction of the double views by the dual-stream Backbone, the DFT module dynamically learns the binary relation between the gray-level image and the absorbance image and mutually fuses and interacts the information between the two views, which strengthens the relevance and complementarity between the information of different views, robustly captures the potential interaction between the gray-level image and the absorbance image, reduces the possibility that the breast tumor lesion area is treated as noise or other tissue, effectively reflects the lesion area, and improves the accuracy of breast ultrasound tumor image detection.
The present invention will be described in further detail with reference to the drawings and examples, but the present invention is not limited to the examples.
Drawings
FIG. 1 is a step diagram of an absorbance-based dual-view breast tumor lesion automatic detection method according to an embodiment of the present invention;
FIG. 2 is a diagram of a network structure of a dual view detection model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a feature fusion flow of a DFT unit according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an absorbance-based dual-view breast tumor lesion automatic detection system according to an embodiment of the invention.
Detailed Description
Referring to fig. 1, a step diagram of an absorbance-based dual-view breast tumor lesion area automatic detection method according to an embodiment of the invention includes the following steps:
s1, acquiring a breast ultrasonic tumor gray image dataset, marking the position of a breast tumor in the dataset, preprocessing the dataset, and generating a preprocessed breast ultrasonic tumor image dataset;
s2, carrying out absorbance transformation on the preprocessed image according to an ultrasonic transmission principle to obtain an ultrasonic absorbance image;
s3, taking the preprocessed breast ultrasonic tumor gray level image and the corresponding ultrasonic absorbance image as double views, and inputting a double-view detection model;
S4, the double-view detection model performs feature extraction on the double views separately through a dual-stream Backbone network, and effectively reflects the tumor regions of interest in the double views through multi-layer convolution;
The step S4 comprises the following steps: embedding the breast ultrasound tumor gray-level image features $F_g^i$ and the absorbance image features $F_a^i$ at different scales into a DFT unit for feature fusion, where $i = 2, 3, 4$; adding the fused feature maps $\hat{F}_g^i$ and $\hat{F}_a^i$ back onto the original feature maps $F_g^i$ and $F_a^i$; and sequentially outputting the feature maps P2, P3 and P4 of the different-scale features, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally, the feature maps P2, P3 and P4 are fused and output as a predicted image.
Referring to fig. 2, which is a network structure diagram of a dual-view detection model according to an embodiment of the present invention, the step S4 includes the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and using a CSP1 structure to strengthen feature extraction through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area with CBM and CSP2 structures in each stream to obtain the feature maps $F_g^2$ and $F_a^2$ and inputting them into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the breast tumor lesion area, outputting the first fused feature maps $\hat{F}_g^2$ and $\hat{F}_a^2$; adding $\hat{F}_g^2$ and $\hat{F}_a^2$ onto the original feature maps $F_g^2$ and $F_a^2$, and outputting P2;
S43, using a CBM structure and a CSP2 structure to further extract feature information of the breast tumor lesion area from the feature maps, obtaining $F_g^3$ and $F_a^3$ and inputting them into a DFT unit, which further suppresses the influence of noise and enhances detection of the lesion area, outputting the second fused feature maps $\hat{F}_g^3$ and $\hat{F}_a^3$; adding $\hat{F}_g^3$ and $\hat{F}_a^3$ onto the original feature maps $F_g^3$ and $F_a^3$, and outputting P3;
S44, using a CBM structure to further extract the twice-fused feature information of the breast tumor lesion area; using an SPP structure to extract the twice-fused feature information at different sizes; using a CBM structure to extract the lesion-area features and obtain the feature maps $F_g^4$ and $F_a^4$; realizing, in the DFT unit, the enhancement and compensation of the lesion-area features by the double views, and outputting the third fused feature maps $\hat{F}_g^4$ and $\hat{F}_a^4$; adding $\hat{F}_g^4$ and $\hat{F}_a^4$ onto the original feature maps $F_g^4$ and $F_a^4$, and outputting P4;
s45, fusing the feature maps P2, P3 and P4, and outputting the fused feature maps as a prediction result.
Referring to fig. 3, a schematic diagram of a feature fusion flow of a DFT unit according to an embodiment of the invention is shown, including the following steps:
S51, inputting the $i$-th layer feature maps $F_g^i, F_a^i \in \mathbb{R}^{C \times H \times W}$ of the breast ultrasound tumor gray-level image features and of the absorbance image features respectively, where $i = 2, 3, 4$, $C$ is the number of channels, $H$ the height and $W$ the width; flattening each feature map and rearranging the matrix order to obtain the corresponding sequences $S_g^i, S_a^i \in \mathbb{R}^{HW \times C}$;
S52, concatenating the sequences and adding a learnable position embedding to obtain the input sequence $I = [S_g^i; S_a^i] + E_{pos} \in \mathbb{R}^{2HW \times C}$; the position embedding $E_{pos}$ is a trainable parameter of dimension $2HW \times C$, so that during training the model can distinguish the spatial information between the different tokens;
S53, applying a Layer Norm normalization operation to keep the distribution of the data features stable;
S54, encapsulating multiple complex relations from different representation subspaces at different positions through a multi-head attention mechanism;
S55, applying a Layer Norm normalization operation again to keep the distribution of the data features stable;
S56, computing the output sequence O with a nonlinear transformation, where O has the same shape as the input sequence I:
$$O = \mathrm{MLP}(\mathrm{LN}(Z')) + Z', \quad O \in \mathbb{R}^{2HW \times C}$$
where $\mathrm{MLP}(\cdot)$ denotes the nonlinear transformation and $Z' = \mathrm{MHA}(\mathrm{LN}(I)) + I$ denotes the result of adding the input sequence $I$ to the multi-head attention output that encodes the complex relations between different positions;
S57, converting the output sequence O into the recalibration results $\hat{F}_g^i$ and $\hat{F}_a^i$ by the inverse of the operation in step S51, and adding them as supplementary information onto the original feature maps $F_g^i$ and $F_a^i$ as the input of layer $i + 1$.
Specifically, in S54, the input sequence I is projected onto three weight matrices to compute a set of queries Q, keys K and values V:
$$Q = I W^Q, \quad K = I W^K, \quad V = I W^V$$
where $W^Q$, $W^K$ and $W^V \in \mathbb{R}^{C \times d}$ are weight matrices and $Q, K, V \in \mathbb{R}^{2HW \times d}$;
next, the self-attention layer computes the attention-weighted output Z using the scaled dot product between Q and K:
$$Z = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V$$
where $\sqrt{d}$ is a scale factor that prevents the normalized exponential (softmax) function from converging to regions of extremely small gradient as the dot products grow;
finally, a multi-head attention mechanism $\mathrm{MultiHead}(\cdot)$ computes multiple complex relations expressing different positions:
$$\mathrm{MultiHead}(I) = \mathrm{Concat}(Z_1, \ldots, Z_h) W^O, \quad Z_j = \mathrm{Attention}(I W_j^Q, I W_j^K, I W_j^V)$$
where $h$ denotes the number of heads, $Z_j$ denotes the attention output of the $j$-th head, the $\mathrm{Concat}$ function cascades the features, $W^O$ denotes the output projection matrix, and $W_j^Q$, $W_j^K$ and $W_j^V$ denote the weight matrices of the query Q, key K and value V for the $j$-th head.
Referring to fig. 4, a schematic structural diagram of an absorbance-based dual-view breast tumor lesion area automatic detection system according to an embodiment of the invention includes:
the image acquisition module is used for acquiring a breast ultrasonic tumor gray image data set, marking the position of the breast tumor in the data set, preprocessing the data set and generating a preprocessed breast ultrasonic tumor image data set;
the absorbance conversion module is used for carrying out absorbance conversion on the preprocessed image according to the ultrasonic transmission principle to obtain an ultrasonic absorbance image;
the double-view module takes the preprocessed breast ultrasonic tumor gray level image and the preprocessed breast ultrasonic tumor absorbance image as double views, and inputs the images into the detection module;
the double-view detection model is used for performing feature extraction on the double views separately through a dual-stream Backbone network, and for effectively reflecting the tumor regions of interest in the double views through multi-layer convolution;
wherein the detection module comprises a DFT unit; the breast ultrasound tumor gray-level image features $F_g^i$ and the absorbance image features $F_a^i$ at different scales are embedded into the DFT unit for feature fusion, where $i = 2, 3, 4$; the fused feature maps $\hat{F}_g^i$ and $\hat{F}_a^i$ are added back onto the original feature maps $F_g^i$ and $F_a^i$; the feature maps P2, P3 and P4 of the different-scale features are output in sequence, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally, the feature maps P2, P3 and P4 are fused and output as a predicted image.
Thus, in the absorbance-based double-view breast tumor lesion area automatic detection method and system of the present invention, combining the gray-level image with the absorbance image provides a good complementary effect on the feature information of the lesion area, better compensates for the insufficient information of the ultrasound gray-level image, and realizes the enhancement and supplementation of the lesion-area features by the double views; during feature extraction of the double views by the dual-stream Backbone, the DFT module dynamically learns the binary relation between the gray-level image and the absorbance image and mutually fuses and interacts the information between the two views, which strengthens the relevance and complementarity between the information of different views, robustly captures the potential interaction between the gray-level image and the absorbance image, reduces the possibility that the breast tumor lesion area is treated as noise or other tissue, effectively reflects the lesion area, and improves the accuracy of breast ultrasound tumor image detection.
The foregoing is only illustrative of the present invention and is not to be construed as limiting thereof, but rather as various modifications, equivalent arrangements, improvements, etc., within the spirit and principles of the present invention.

Claims (8)

1. An automatic detection method for a double-view breast tumor lesion area based on absorbance, characterized by comprising the following steps:
s1, acquiring a breast ultrasonic tumor gray image dataset, marking the position of a breast tumor in the dataset, preprocessing the dataset, and generating a preprocessed breast ultrasonic tumor image dataset;
s2, carrying out absorbance transformation on the preprocessed image according to an ultrasonic transmission principle to obtain an ultrasonic absorbance image;
s3, taking the preprocessed breast ultrasonic tumor gray level image and the corresponding ultrasonic absorbance image as double views, and inputting a double-view detection model;
S4, the double-view detection model performs feature extraction on the double views separately through a dual-stream Backbone network, and effectively reflects the tumor regions of interest in the double views through multi-layer convolution;
the step S4 comprises the following steps: embedding the different-scale breast ultrasound tumor gray-level feature maps $F_g^i$ and absorbance feature maps $F_a^i$ into a DFT unit for feature fusion, where $i = 2, 3, 4$; adding the fused feature maps $\hat{F}_g^i$ and $\hat{F}_a^i$ back onto the original gray-level feature maps $F_g^i$ and absorbance feature maps $F_a^i$; sequentially outputting the feature maps P2, P3 and P4 of the different-scale features, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; and finally fusing the feature maps P2, P3 and P4 and outputting the result as a predicted image;
in the step S4, a YOLOv5 backbone feature-extraction network is used to extract features of the breast ultrasound tumor gray-level image and of the absorbance image respectively, so as to effectively reflect the lesion area; the dual-stream Backbone network adopts the CSPDarknet network;
the step S4 comprises the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and using a CSP1 structure to strengthen feature extraction through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area with CBM and CSP2 structures in each stream to obtain the feature maps $F_g^2$ and $F_a^2$ and inputting them into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the breast tumor lesion area, outputting the first fused feature maps $\hat{F}_g^2$ and $\hat{F}_a^2$; adding $\hat{F}_g^2$ and $\hat{F}_a^2$ onto the original feature maps $F_g^2$ and $F_a^2$, and outputting P2;
S43, using a CBM structure and a CSP2 structure to further extract feature information of the breast tumor lesion area from the feature maps, obtaining $F_g^3$ and $F_a^3$ and inputting them into a DFT unit, which further suppresses the influence of noise and enhances detection of the lesion area, outputting the second fused feature maps $\hat{F}_g^3$ and $\hat{F}_a^3$; adding $\hat{F}_g^3$ and $\hat{F}_a^3$ onto the original feature maps $F_g^3$ and $F_a^3$, and outputting P3;
S44, using a CBM structure to further extract the twice-fused feature information of the breast tumor lesion area; using an SPP structure to extract the twice-fused feature information at different sizes; using a CBM structure to extract the lesion-area features and obtain the feature maps $F_g^4$ and $F_a^4$; realizing, in the DFT unit, the enhancement and compensation of the lesion-area features by the double views, and outputting the third fused feature maps $\hat{F}_g^4$ and $\hat{F}_a^4$; adding $\hat{F}_g^4$ and $\hat{F}_a^4$ onto the original feature maps $F_g^4$ and $F_a^4$, and outputting P4;
s45, fusing the feature maps P2, P3 and P4, and outputting the fused feature maps as a prediction result.
2. The method for automatically detecting a double-view breast tumor lesion area based on absorbance according to claim 1, wherein in the step S1, the preprocessing mainly comprises removing labeling information around the ultrasound image and enhancing contrast, the contrast enhancement being calculated as:
$$s = C \cdot \log(1 + r)$$
where $r$ is the pixel value of the original image, $s$ is the corresponding pixel value after enhancement, $\log$ denotes the logarithmic function, and the constant $C$ is used to satisfy the gray dynamic range of the transformed image.
3. The method for automatically detecting a double-view breast tumor lesion area based on absorbance according to claim 1, wherein the step S2 comprises the following steps:
s21, converting the gray value of each pixel in the preprocessed image into an absorbance value, wherein the calculation formula is as follows:
$$A = \log_{10}\!\left(\frac{I_0}{I}\right)$$
where $A$ is the absorbance value, $I$ is the gray value of the pixel, and $I_0$ is the average gray value of the background;
S22, mapping the absorbance values $A$ to the range $[0, 255]$ by a linear transformation, thereby converting the absorbance values into a depth-information image and obtaining the ultrasound absorbance image.
4. The method for automatically detecting a double-view breast tumor lesion area based on absorbance according to claim 1, wherein in the step S3, mosaic enhancement and adaptive image scaling are performed on the double views before they are input into the double-view detection model; the mosaic enhancement randomly selects 4 pictures, randomly crops and rotates them, and then splices them into one composite image.
5. The method for automatically detecting a double-view breast tumor lesion area based on absorbance according to claim 1, wherein in the step S4, the process of embedding the different-scale feature maps $F_g^i$ and $F_a^i$ of the breast ultrasound tumor gray-level image and the absorbance image into the DFT unit for fusion comprises: inputting the different-scale feature maps $F_g^i$ and $F_a^i$ of the double views into the DFT unit; mutually fusing and interacting the information between the ultrasound absorbance image and the gray-level image based on the reflection mechanism and the transmission mechanism; adding the fused feature maps back onto the original feature maps; and sequentially outputting P2, P3 and P4; using $\mathcal{F}$ to denote the fusion, this is expressed as:
$$[\hat{F}_g^i, \hat{F}_a^i] = \mathcal{F}(F_g^i, F_a^i)$$
where $F_g^i$ is the breast ultrasound tumor gray-level image feature map, $F_a^i$ is the absorbance image feature map, $\mathcal{F}$ is the fusion function, and $\hat{F}_g^i$ and $\hat{F}_a^i$ are the feature maps output after fusing $F_g^i$ and $F_a^i$.
6. The method for automatically detecting a double-view breast tumor lesion area based on absorbance according to claim 5, wherein the feature fusion in the DFT unit comprises the following steps:
S51, inputting the $i$-th layer feature maps $F_g^i, F_a^i \in \mathbb{R}^{C \times H \times W}$ of the breast ultrasound tumor gray-level image features and of the absorbance image features respectively, where $i = 2, 3, 4$, $C$ is the number of channels, $H$ the height and $W$ the width; flattening each feature map and rearranging the matrix order to obtain the corresponding sequences $S_g^i, S_a^i \in \mathbb{R}^{HW \times C}$;
S52, concatenating the sequences and adding a learnable position embedding to obtain the input sequence $I = [S_g^i; S_a^i] + E_{pos} \in \mathbb{R}^{2HW \times C}$; the position embedding $E_{pos}$ is a trainable parameter of dimension $2HW \times C$, so that during training the model can distinguish the spatial information between the different tokens;
S53, applying a Layer Norm normalization operation to keep the distribution of the data features stable;
S54, encapsulating multiple complex relations from different representation subspaces at different positions through a multi-head attention mechanism;
S55, applying a Layer Norm normalization operation again to keep the distribution of the data features stable;
S56, computing the output sequence O with a nonlinear transformation, where O has the same shape as the input sequence I:
$$O = \mathrm{MLP}(\mathrm{LN}(Z')) + Z', \quad O \in \mathbb{R}^{2HW \times C}$$
where $\mathrm{MLP}(\cdot)$ denotes the nonlinear transformation and $Z' = \mathrm{MHA}(\mathrm{LN}(I)) + I$ denotes the result of adding the input sequence $I$ to the multi-head attention output that encodes the complex relations between different positions;
S57, converting the output sequence O into the recalibration results $\hat{F}_g^i$ and $\hat{F}_a^i$ by the inverse of the operation in step S51, and adding them as supplementary information onto the original feature maps $F_g^i$ and $F_a^i$ as the input of layer $i + 1$.
7. The method for automatically detecting a double-view breast tumor lesion area based on absorbance according to claim 6, wherein in S54, the input sequence I is projected onto three weight matrices to compute a set of queries Q, keys K and values V:
$$Q = I W^Q, \quad K = I W^K, \quad V = I W^V$$
where $W^Q$, $W^K$ and $W^V \in \mathbb{R}^{C \times d}$ are weight matrices and $Q, K, V \in \mathbb{R}^{2HW \times d}$;
next, the self-attention layer computes the attention-weighted output Z using the scaled dot product between Q and K:
$$Z = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V$$
where $\sqrt{d}$ is a scale factor that prevents the normalized exponential (softmax) function from converging to regions of extremely small gradient as the dot products grow;
finally, a multi-head attention mechanism $\mathrm{MultiHead}(\cdot)$ computes multiple complex relations expressing different positions:
$$\mathrm{MultiHead}(I) = \mathrm{Concat}(Z_1, \ldots, Z_h) W^O, \quad Z_j = \mathrm{Attention}(I W_j^Q, I W_j^K, I W_j^V)$$
where $h$ denotes the number of heads, $Z_j$ denotes the attention output of the $j$-th head, the $\mathrm{Concat}$ function cascades the features, $W^O$ denotes the output projection matrix, and $W_j^Q$, $W_j^K$ and $W_j^V$ denote the weight matrices of the query Q, key K and value V for the $j$-th head.
8. An automatic detection system for a double-view breast tumor lesion area based on absorbance, characterized by comprising:
the image acquisition module is used for acquiring a breast ultrasonic tumor gray image data set, marking the position of the breast tumor in the data set, preprocessing the data set and generating a preprocessed breast ultrasonic tumor image data set;
the absorbance conversion module is used for carrying out absorbance conversion on the preprocessed image according to the ultrasonic transmission principle to obtain an ultrasonic absorbance image;
the double-view module takes the preprocessed breast ultrasonic tumor gray level image and the preprocessed breast ultrasonic tumor absorbance image as double views, and inputs the images into the detection module;
the double-view detection model is used for performing feature extraction on the double views separately through a dual-stream Backbone network, and for effectively reflecting the tumor regions of interest in the double views through multi-layer convolution;
wherein the detection module comprises a DFT unit; the different-scale breast ultrasound tumor gray-level feature maps $F_g^i$ and absorbance feature maps $F_a^i$ are embedded into the DFT unit for feature fusion, where $i = 2, 3, 4$; the fused feature maps $\hat{F}_g^i$ and $\hat{F}_a^i$ are added back onto the original gray-level feature maps $F_g^i$ and absorbance feature maps $F_a^i$; the feature maps P2, P3 and P4 of the different-scale features are output in sequence, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; and finally the feature maps P2, P3 and P4 are fused and output as a predicted image;
in the detection module, a YOLOv5 backbone feature-extraction network is used to extract features of the breast ultrasound tumor gray-level image and of the absorbance image respectively, so as to effectively reflect the lesion area; the dual-stream Backbone network adopts the CSPDarknet network;
in the detection module, the double-view detection model performs feature extraction on the double views separately through the dual-stream Backbone network and effectively reflects the tumor regions of interest in the double views through multi-layer convolution, comprising the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and using a CSP1 structure to strengthen feature extraction through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area with CBM and CSP2 structures in each stream to obtain the feature maps $F_g^2$ and $F_a^2$ and inputting them into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the breast tumor lesion area, outputting the first fused feature maps $\hat{F}_g^2$ and $\hat{F}_a^2$; adding $\hat{F}_g^2$ and $\hat{F}_a^2$ onto the original feature maps $F_g^2$ and $F_a^2$, and outputting P2;
S43, using a CBM structure and a CSP2 structure to further extract feature information of the breast tumor lesion area from the feature maps, obtaining $F_g^3$ and $F_a^3$ and inputting them into a DFT unit, which further suppresses the influence of noise and enhances detection of the lesion area, outputting the second fused feature maps $\hat{F}_g^3$ and $\hat{F}_a^3$; adding $\hat{F}_g^3$ and $\hat{F}_a^3$ onto the original feature maps $F_g^3$ and $F_a^3$, and outputting P3;
S44, using a CBM structure to further extract the twice-fused feature information of the breast tumor lesion area; using an SPP structure to extract the twice-fused feature information at different sizes; using a CBM structure to extract the lesion-area features and obtain the feature maps $F_g^4$ and $F_a^4$; realizing, in the DFT unit, the enhancement and compensation of the lesion-area features by the double views, and outputting the third fused feature maps $\hat{F}_g^4$ and $\hat{F}_a^4$; adding $\hat{F}_g^4$ and $\hat{F}_a^4$ onto the original feature maps $F_g^4$ and $F_a^4$, and outputting P4;
s45, fusing the feature maps P2, P3 and P4, and outputting the fused feature maps as a prediction result.
CN202310715680.4A 2023-06-16 2023-06-16 Automatic detection method and system for double-view breast tumor lesion area based on absorbance Active CN116485791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310715680.4A CN116485791B (en) 2023-06-16 2023-06-16 Automatic detection method and system for double-view breast tumor lesion area based on absorbance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310715680.4A CN116485791B (en) 2023-06-16 2023-06-16 Automatic detection method and system for double-view breast tumor lesion area based on absorbance

Publications (2)

Publication Number Publication Date
CN116485791A CN116485791A (en) 2023-07-25
CN116485791B 2023-09-29

Family

ID=87227132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310715680.4A Active CN116485791B (en) 2023-06-16 2023-06-16 Automatic detection method and system for double-view breast tumor lesion area based on absorbance

Country Status (1)

Country Link
CN (1) CN116485791B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392119B (en) * 2023-12-07 2024-03-12 华侨大学 Tumor lesion area detection method and device based on position priori and feature perception


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143101A (en) * 2014-07-01 2014-11-12 华南理工大学 Method for automatically identifying breast tumor area based on ultrasound image
WO2018180386A1 (en) * 2017-03-30 2018-10-04 国立研究開発法人産業技術総合研究所 Ultrasound imaging diagnosis assistance method and system
JPWO2018180386A1 (en) * 2017-03-30 2019-11-07 国立研究開発法人産業技術総合研究所 Ultrasound image diagnosis support method and system
CN110264461A (en) * 2019-06-25 2019-09-20 南京工程学院 Microcalciffcation point automatic testing method based on ultrasonic tumor of breast image
CN110264462A (en) * 2019-06-25 2019-09-20 电子科技大学 A kind of breast ultrasound tumour recognition methods based on deep learning
CN111832563A (en) * 2020-07-17 2020-10-27 江苏大学附属医院 Intelligent breast tumor identification method based on ultrasonic image
CN112529878A (en) * 2020-12-15 2021-03-19 西安交通大学 Multi-view semi-supervised lymph node classification method, system and equipment
CN113870194A (en) * 2021-09-07 2021-12-31 燕山大学 Deep layer characteristic and superficial layer LBP characteristic fused breast tumor ultrasonic image processing device
CN115409832A (en) * 2022-10-28 2022-11-29 新疆畅森数据科技有限公司 Triple negative breast cancer classification method based on ultrasound image and omics big data
CN116109610A (en) * 2023-02-23 2023-05-12 四川大学华西医院 Method and system for segmenting breast tumor in ultrasonic examination report image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research Progress of Deep Learning in Breast Ultrasound; Bao Lingyun et al.; Zhejiang Medicine (No. 8): 785-790, 813 *

Also Published As

Publication number Publication date
CN116485791A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN111652321B (en) Marine ship detection method based on improved YOLOV3 algorithm
CN111627019B (en) Liver tumor segmentation method and system based on convolutional neural network
GB2560218A (en) Editing digital images utilizing a neural network with an in-network rendering layer
US20220239844A1 (en) Neural 3D Video Synthesis
CN111667459B (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
US20230377097A1 (en) Laparoscopic image smoke removal method based on generative adversarial network
CN116485791B (en) Automatic detection method and system for double-view breast tumor lesion area based on absorbance
Cheng et al. DDU-Net: A dual dense U-structure network for medical image segmentation
Fang et al. GroupTransNet: Group transformer network for RGB-D salient object detection
Zhou et al. A superior image inpainting scheme using Transformer-based self-supervised attention GAN model
Gao et al. 3D interacting hand pose and shape estimation from a single RGB image
Guo et al. 3D semantic segmentation based on spatial-aware convolution and shape completion for augmented reality applications
Correia et al. 3D reconstruction of human bodies from single-view and multi-view images: A systematic review
CN117934824A (en) Target region segmentation method and system for ultrasonic image and electronic equipment
Luo et al. Infrared and visible image fusion based on VPDE model and VGG network
CN117253277A (en) Method for detecting key points of face in complex environment by combining real and synthetic data
CN117253034A (en) Image semantic segmentation method and system based on differentiated context
Zhang et al. Multi-scale aggregation networks with flexible receptive fields for melanoma segmentation
Buck et al. Ignorance is bliss: flawed assumptions in simulated ground truth
Chen et al. Contrastive learning with feature fusion for unpaired thermal infrared image colorization
Yuan et al. A full-set tooth segmentation model based on improved PointNet++
CN115688234A (en) Building layout generation method, device and medium based on conditional convolution
Gunasekaran et al. An efficient technique for three-dimensional image visualization through two-dimensional images for medical data
Li et al. 3D colored object reconstruction from a single view image through diffusion
Zhang et al. Stereo Depth Estimation with Echoes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant