CN113902945A - Multi-modal breast magnetic resonance image classification method and system - Google Patents

Multi-modal breast magnetic resonance image classification method and system Download PDF

Info

Publication number
CN113902945A
CN113902945A CN202111159748.2A CN202111159748A CN113902945A CN 113902945 A CN113902945 A CN 113902945A CN 202111159748 A CN202111159748 A CN 202111159748A CN 113902945 A CN113902945 A CN 113902945A
Authority
CN
China
Prior art keywords
attention
network
image
module
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111159748.2A
Other languages
Chinese (zh)
Inventor
毛宁
张海程
谢海柱
林凡
高婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yantai Yuhuangding Hospital
Original Assignee
Yantai Yuhuangding Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yantai Yuhuangding Hospital filed Critical Yantai Yuhuangding Hospital
Priority to CN202111159748.2A priority Critical patent/CN113902945A/en
Publication of CN113902945A publication Critical patent/CN113902945A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a multi-modal breast magnetic resonance image classification method and a multi-modal breast magnetic resonance image classification system, which relate to the technical field of medical treatment and image processing, and are characterized in that the method outputs an acquired magnetic resonance image to be classified and a parameter map to be classified into a target classification network to acquire the prediction probability of a focus category, and determines the focus category according to the prediction probability of the focus category; the target classification network is obtained by training according to the training sample and the calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the structure of the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network. According to the invention, the attention mechanism, the multi-scale features and the multi-mode information are combined, so that the neural network learns more effective information which is beneficial to feature extraction, and the accuracy of lesion classification judgment is improved.

Description

Multi-modal breast magnetic resonance image classification method and system
Technical Field
The invention relates to the technical field of medical treatment and image processing, in particular to a multi-modal breast magnetic resonance image classification method and system.
Background
In recent years, deep learning methods, particularly Convolutional Neural Networks (CNNs), have been widely used in the field of medical images, and have played an important role in medical image classification, reconstruction, segmentation, and the like.
The attention mechanism is added into the CNN, so that the network can pay more attention to useful information and ignore useless information in the learning process, and the classification performance of the model is effectively improved. The deepening process of the CNN layers is a semantic feature extraction process from a low layer to a high layer in feature extraction, but as the network deepens, each layer loses some information, and more information is lost in the last layer. In order to solve the problem, a concept of multi-scale feature fusion is introduced, and the basic idea is to fuse features of different scales extracted from the features together, fuse high-level and low-level information and better complete classification tasks. Common fusion methods include average value fusion of multiple scale prediction results and splicing of multiple scale features. However, poor features in these methods may affect the final prediction.
Disclosure of Invention
The invention aims to provide a multi-modal mammary gland magnetic resonance image classification method and system, which combines an attention mechanism, multi-scale features and multi-modal information to enable a neural network to learn more effective information suitable for feature extraction, and improve the accuracy of target detection.
In order to achieve the purpose, the invention provides the following scheme:
a multi-modality breast magnetic resonance image classification method, comprising:
acquiring a magnetic resonance image to be classified;
acquiring a parameter map corresponding to the magnetic resonance image to be classified to obtain the parameter map to be classified;
outputting the magnetic resonance image to be classified and the parameter map to be classified to a target classification network to obtain the prediction probability of the focus category;
determining a lesion category according to the prediction probability of the lesion category;
the target classification network is obtained by training according to a training sample and a calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the label is a lesion category of a primary lesion area in the first image; the calibration convolutional neural network has a double-input single-output network structure; the structure of the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network;
an input of the first attention network for inputting the first image and the label;
a first output of the first attention network for outputting a first profile; the first feature map is determined after feature extraction is carried out on the first image by using an attention mechanism;
a second output of the first attention network is used for outputting a third feature map; the third characteristic diagram is obtained by processing the first image by using an attention mechanism and a multi-scale fusion algorithm;
an input of the second attention network is used for inputting the first parameter image and the label;
the first output end of the second attention network is used for outputting a second feature map; the second feature map is determined after feature extraction is carried out on the first parameter image by using an attention mechanism;
a second output of the second attention network is used for outputting a fourth feature map; the fourth characteristic diagram is obtained by processing the first parameter image by using an attention mechanism and a multi-scale fusion algorithm;
the first input end of the fusion network is used for inputting a fifth feature map; the fifth characteristic diagram is obtained by splicing the first characteristic diagram and the second characteristic diagram;
the second input end of the fusion network is used for inputting the third feature map;
a third input end of the fusion network is used for inputting the fourth feature map;
and the output end of the fusion network is used for outputting the prediction probability of the focus category.
Optionally, the first attention network comprises a first fusion module and a plurality of first residual attention modules;
the plurality of first residual attention modules are connected in series; the output end of the tail end first residual attention module is a first output end of the first attention network, and the input end of the head end first residual attention module is an input end of the first attention network;
the input end of the first fusion module is connected with the first calibration residual error attention module; the first calibration residual attention module is any one of the first calibration residual attention modules, and the number of the first calibration residual attention modules is greater than or equal to 2; the output end of the first fusion module is a second output end of the first attention network.
Optionally, the second attention network comprises a second fusion module and a plurality of second residual attention modules;
the plurality of second residual attention modules are connected in series; the output end of the second residual attention module at the tail end is a first output end of the second attention network, and the input end of the second residual attention module at the head end is an input end of the second attention network;
the input end of the second fusion module is connected with the second calibration residual error attention module; the second calibration residual attention module is any second residual attention module, and the number of the second calibration residual attention modules is greater than or equal to 2; the output end of the second fusion module is a second output end of the second attention network.
Optionally, a plurality of the first residual attention modules are identical in structure;
wherein, the data processing process of the first residual attention module is as follows:
carrying out feature extraction on the input image sequentially through a 3 × 3 convolution layer, a GN layer, a Dropout layer and a rectification linear unit to obtain a first feature subgraph;
sequentially carrying out feature extraction on the input image through the convolution layer 1 x 1, the GN layer and the Dropout layer to obtain a second feature subgraph;
carrying out feature extraction on the first feature sub-graph sequentially through the 3 × 3 convolution layer, the GN layer and the Dropout layer to obtain a third feature sub-graph;
performing feature extraction on the third feature subgraph through an attention module to obtain a fourth feature subgraph;
and performing jumping connection and addition on the second characteristic subgraph and the fourth characteristic subgraph to obtain an output image.
Optionally, the data processing procedure of the attention module is as follows:
performing feature extraction on the third feature subgraph through a channel attention module to obtain a channel attention feature graph;
performing element-wise multiplication operation on the channel attention feature graph and the third feature subgraph to obtain an intermediate feature subgraph;
performing feature extraction on the intermediate feature subgraph through a spatial attention module to obtain a spatial attention feature graph;
and carrying out element-wise multiplication operation on the spatial attention feature graph and the intermediate feature subgraph to obtain a fourth feature subgraph.
Optionally, the determining process of the fifth feature map specifically includes:
and splicing the first characteristic diagram and the second characteristic diagram in a channel series connection mode to obtain a fifth characteristic diagram.
Optionally, the training sample construction process specifically includes:
acquiring a plurality of original images; the original images comprise original three-dimensional dynamic enhanced magnetic resonance images and original parameter images corresponding to the original three-dimensional dynamic enhanced magnetic resonance images;
preprocessing a plurality of original three-dimensional dynamic enhanced magnetic resonance images to obtain a first image; the preprocessing comprises resampling processing, cutting processing, standardization processing and data enhancement processing;
preprocessing a plurality of original parameter images to obtain a first parameter image;
tag information is determined from the first image.
In order to achieve the purpose, the invention also provides the following technical scheme:
a multi-modality breast magnetic resonance image classification system, comprising:
a first data acquisition unit for acquiring a magnetic resonance image to be classified;
the second data acquisition part is used for acquiring a parameter map corresponding to the magnetic resonance image to be classified so as to obtain the parameter map to be classified;
the prediction probability determination part is used for outputting the magnetic resonance image to be classified and the parameter map to be classified into a target classification network so as to obtain the prediction probability of the focus category;
a lesion category determination section for determining a lesion category based on a prediction probability of the lesion category;
the target classification network is obtained by training according to a training sample and a calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the label is a lesion category of a primary lesion area in the first image; the calibration convolutional neural network has a double-input single-output network structure; the structure of the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network;
an input of the first attention network for inputting the first image and the label;
a first output of the first attention network for outputting a first profile; the first feature map is determined after feature extraction is carried out on the first image by using an attention mechanism;
a second output of the first attention network is used for outputting a third feature map; the third characteristic diagram is obtained by processing the first image by using an attention mechanism and a multi-scale fusion algorithm;
an input of the second attention network is used for inputting the first parameter image and the label;
the first output end of the second attention network is used for outputting a second feature map; the second feature map is determined after feature extraction is carried out on the first parameter image by using an attention mechanism;
a second output of the second attention network is used for outputting a fourth feature map; the fourth characteristic diagram is obtained by processing the first parameter image by using an attention mechanism and a multi-scale fusion algorithm;
the first input end of the fusion network is used for inputting a fifth feature map; the fifth feature map is determined after the first feature map and the second feature map are spliced;
the second input end of the fusion network is used for inputting the third feature map;
a third input end of the fusion network is used for inputting the fourth feature map;
and the output end of the fusion network is used for outputting the prediction probability of the focus category.
Optionally, the first attention network comprises a first fusion module and a plurality of first residual attention modules;
the plurality of first residual attention modules are connected in series; the output end of the tail end first residual attention module is a first output end of the first attention network, and the input end of the head end first residual attention module is an input end of the first attention network;
the input end of the first fusion module is connected with the first calibration residual error attention module; the first calibration residual attention module is any one of the first calibration residual attention modules, and the number of the first calibration residual attention modules is greater than or equal to 2; the output end of the first fusion module is a second output end of the first attention network.
Optionally, the second attention network comprises a second fusion module and a plurality of second residual attention modules;
the plurality of second residual attention modules are connected in series; the output end of the second residual attention module at the tail end is a first output end of the second attention network, and the input end of the second residual attention module at the head end is an input end of the second attention network;
the input end of the second fusion module is connected with the second calibration residual error attention module; the second calibration residual attention module is any second residual attention module, and the number of the second calibration residual attention modules is greater than or equal to 2; the output end of the second fusion module is a second output end of the second attention network.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
training through a training sample and a calibration convolutional neural network to obtain a target classification network, wherein the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network; the first attention network outputs a first feature map determined after feature extraction is carried out on the first image by using an attention mechanism, and also outputs a third feature map obtained after the first image is processed by using the attention mechanism and a multi-scale fusion algorithm; the second attention network outputs a second feature map determined after feature extraction is carried out on the first parameter image by using an attention mechanism, and also outputs a fourth feature map obtained after the first parameter image is processed by using the attention mechanism and a multi-scale fusion algorithm; and inputting a fifth feature map obtained by multi-modal fusion of the first feature map and the second feature map into a fusion network, fusing the fifth feature map, the third feature map and the fourth feature map in the fusion network to finally obtain the prediction probability of the lesion category, and determining the lesion category according to the prediction probability of the lesion category. According to the invention, the attention mechanism, the multi-scale features and the multi-mode information are combined, so that the neural network learns more effective information which is beneficial to feature extraction, and the accuracy of judging the focus category of the primary focus area is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow chart of the multi-modal breast magnetic resonance image classification method of the present invention;
FIG. 2 is a schematic structural diagram of a first residual attention module of the multi-modal breast magnetic resonance image classification method according to the present invention;
FIG. 3 is a schematic structural diagram of an attention module of the multi-modal breast magnetic resonance image classification method according to the present invention;
FIG. 4 is a schematic structural diagram of a multi-scale fusion module of the multi-modal breast magnetic resonance image classification method of the present invention;
FIG. 5 is a schematic structural diagram of a multi-modal breast magnetic resonance image classification system according to the present invention;
fig. 6 is a schematic structural diagram of a convolutional neural network in the multi-modal breast magnetic resonance image classification system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a multi-modal mammary gland magnetic resonance image classification method and a multi-modal mammary gland magnetic resonance image classification system, which are characterized in that a multi-scale characteristic is fused by using an attention mechanism, and information of multiple modalities of a Dynamic Enhanced scanning and imaging (DCE-MRI) image and a parameter image is fused, so that a target classification network learns more effective information suitable for tasks, the accuracy of classification prediction is improved, and a doctor is assisted to better judge the quality and the malignancy of a mammary gland focus.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example one
As shown in fig. 1, the present embodiment provides a method for classifying a multi-modal breast magnetic resonance image, including: step 100, acquiring a magnetic resonance image to be classified; step 101, acquiring a parameter map corresponding to the magnetic resonance image to be classified to obtain the parameter map to be classified; 102, outputting the magnetic resonance image to be classified and the parameter map to be classified to a target classification network to obtain the prediction probability of the focus category; and 103, determining the focus category according to the prediction probability of the focus category.
The target classification network is obtained by training according to a training sample and a calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the label is a lesion category of a primary lesion area in the first image.
The calibration convolutional neural network has a double-input single-output network structure; the structure of the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network.
An input of the first attention network for inputting the first image and the label; a first output of the first attention network for outputting a first profile; the first feature map is determined after feature extraction is carried out on the first image by using an attention mechanism; a second output of the first attention network is used for outputting a third feature map; the third characteristic diagram is obtained by processing the first image by using an attention mechanism and a multi-scale fusion algorithm.
An input of the second attention network is used for inputting the first parameter image and the label; the first output end of the second attention network is used for outputting a second feature map; the second feature map is determined after feature extraction is carried out on the first parameter image by using an attention mechanism; a second output of the second attention network is used for outputting a fourth feature map; the fourth characteristic diagram is obtained by processing the first parameter image by using an attention mechanism and a multi-scale fusion algorithm.
The first input end of the fusion network is used for inputting a fifth feature map; the fifth characteristic diagram is obtained by splicing the first characteristic diagram and the second characteristic diagram; specifically, the first characteristic diagram and the second characteristic diagram are spliced in a channel series connection mode to obtain a fifth characteristic diagram. The second input end of the fusion network is used for inputting the third feature map; a third input end of the fusion network is used for inputting the fourth feature map; the output end of the fusion network is used for outputting the prediction probability of the focus category, specifically outputting the prediction probability value of 0-1.
Preferably, the first attention network comprises a first fusion module and a plurality of first residual attention modules; the plurality of first residual attention modules are connected in series; the output end of the tail end first residual attention module is a first output end of the first attention network, and the input end of the head end first residual attention module is an input end of the first attention network; the input end of the first fusion module is connected with the first calibration residual error attention module; the first calibration residual attention module is any one of the first calibration residual attention modules, and the number of the first calibration residual attention modules is greater than or equal to 2; the output end of the first fusion module is a second output end of the first attention network.
In one example: the plurality of first residual attention modules are respectively a first residual attention sub-module BL1, a second residual attention sub-module BL2, a third residual attention sub-module BL3 and a fourth residual attention sub-module BL 4; the first residual attention sub-module BL1 is the first residual attention module at the head end, and the fourth residual attention sub-module BL4 is the first residual attention module at the tail end.
Specifically, the structures of a plurality of the first residual attention modules are the same, that is, the structures of the first residual attention sub-module BL1, the second residual attention sub-module BL2, the third residual attention sub-module BL3 and the fourth residual attention sub-module BL4 are completely the same. However, the feature maps generated by the first residual attention sub-module BL1, the second residual attention sub-module BL2, the third residual attention sub-module BL3 and the fourth residual attention sub-module BL4 have different channel numbers, which are 32, 64, 128 and 256, respectively.
Further, as shown in fig. 2, the data processing procedure of the first residual attention module is as follows:
A) and performing feature extraction on the input image sequentially through the convolution layer 3 × 3, the GN layer, the Dropout layer and the rectification linear unit to obtain a first feature subgraph. Specifically, normalization of the data is achieved by the GN layer, overfitting of the data is prevented by the Dropout layer, and nonlinear mapping is performed by a post-added rectifying linear unit (ReLU).
If the first residual attention module is the first residual attention sub-module BL1, the input image is the first image; if the first residual attention module is the second residual attention sub-module BL2, the input image is the image output by the first residual attention sub-module BL 1; if the first residual attention module is the third residual attention sub-module BL3 in the data processing process, the input image is the image output by the second residual attention sub-module BL2, and so on.
B) And sequentially carrying out feature extraction on the input image through the convolution layer 1 x 1, the GN layer and the Dropout layer to obtain a second feature subgraph.
C) And sequentially carrying out feature extraction on the first feature sub-graph through the 3 × 3 convolution layer, the GN layer and the Dropout layer to obtain a third feature sub-graph. And step C) and step A) are the same as the convolution operation in channel number, and the size of the generated feature map is the same as the size of the input image, namely the size of the first feature sub-map is the same as the size of the third feature sub-map, and the size of the first feature sub-map is the same as the size of the input image.
D) Performing feature extraction on the third feature subgraph through an attention module to obtain a fourth feature subgraph; among them, the introduction of a Centralized Block Addressing Module (CBAM) enables the network to focus more on useful information. And the attention module includes a channel attention module and a spatial attention module.
E) And performing jumping connection and addition on the second characteristic subgraph and the fourth characteristic subgraph to obtain an output image. Specifically, the output image is an input image of the second residual attention module; and in the data processing process of the fourth residual attention module, the output image is the first feature map.
As shown in fig. 3, the data processing procedure of the attention module in step D) is as follows:
D1) and performing feature extraction on the third feature subgraph through a channel attention module to obtain a channel attention feature graph. Specifically, the third feature subgraph is respectively subjected to maximum pooling and average pooling, then enters a shared multilayer perceptron, is subjected to maximum pooling and average pooling after passing through the feature graph of the shared multilayer perceptron, and is subjected to Element-wise addition operation and sigmoid activation operation sequentially on the maximum pooling result and the average pooling result, so that a channel attention feature graph is obtained.
D2) And performing element-wise multiplication operation on the channel attention feature graph and the third feature subgraph to obtain an intermediate feature subgraph.
D3) And performing feature extraction on the intermediate feature subgraph through a spatial attention module to obtain a spatial attention feature graph. Specifically, the intermediate feature subgraphs are spliced through maximum pooling and average pooling in sequence, the feature graph is reduced to one dimension through convolution operation of 1 × 1 × 1, and then the spatial attention feature graph is obtained through sigmoid operation.
D4) And carrying out element-wise multiplication operation on the spatial attention feature graph and the intermediate feature subgraph to obtain a fourth feature subgraph.
Further, the second attention network comprises a second fusion module and a plurality of second residual attention modules; the plurality of second residual attention modules are connected in series; the output end of the second residual attention module at the tail end is a first output end of the second attention network, and the input end of the second residual attention module at the head end is an input end of the second attention network; the input end of the second fusion module is connected with the second calibration residual error attention module; the second calibration residual attention module is any second residual attention module, and the number of the second calibration residual attention modules is greater than or equal to 2; the output end of the second fusion module is a second output end of the second attention network.
In this embodiment of the present invention, since the second attention network has the same structure as the first attention network, the processing steps of the second attention network on the first parameter image are also the same as the processing steps of the first parameter image in the first attention network, and are not described herein again.
Preferably, in an embodiment of the present invention, the constructing process of the training sample specifically includes:
1) acquiring a plurality of original images; the original images comprise original three-dimensional dynamic enhanced magnetic resonance images and original parameter images corresponding to the original three-dimensional dynamic enhanced magnetic resonance images, namely the original three-dimensional dynamic enhanced magnetic resonance images and the original parameter images corresponding to the original three-dimensional dynamic enhanced magnetic resonance images belong to the same patient. In particular, the original three-dimensional dynamically enhanced magnetic resonance image is composed of a plurality of two-dimensional slices. One two-dimensional slice shows pathological conditions of a certain section, a plurality of two-dimensional slices show a plurality of positions, and the two-dimensional slices are combined together to obtain an original three-dimensional dynamic enhanced magnetic resonance image. In the embodiment of the invention, 1230 patients who have undergone the mammary gland DCE-MRI examination in a certain hospital from 5 months 2017 to 4 months 2020 are collected. The most significant enhancement phase in the DCE-MRI image of each patient was selected as the original three-dimensional dynamically enhanced magnetic resonance image.
Wherein, the type of the original parameter image is 8 types; the original parameter image comprises a 4-class semi-quantitative parameter image and a 4-class quantitative parameter image; wherein, a semi-quantitative parameter map is obtained by describing the shape and structure of a tissue signal intensity-time curve in a primary lesion region, and the semi-quantitative parameters comprise an initial area under the curve (AUC), a Time To Peak (TTP), a maximum slope (MaxSlope) and a contrast agent maximum concentration value (MaxConc). Quantitative parameter maps, which require selection of a pharmacokinetic model matched to the tissue blood supply status, include the volume transfer constant (Ktrans), the efflux rate constant (Kep), the extravascular extracellular space volume fraction (Ve), and the plasma space volume fraction (Vp). These parameters reveal the hemodynamic characteristics of the lesion, evaluate the angiogenesis of the tumor, and play an important role in the diagnosis of breast cancer. And, since the original parametric image includes 8 classes, the number of channels when feature extraction is performed in the second attention network is also 8.
2) Preprocessing a plurality of original three-dimensional dynamic enhanced magnetic resonance images to obtain a first image; the preprocessing comprises resampling processing, clipping processing, standardization processing and data enhancement processing. The method specifically comprises the following steps: a) resampling each acquired original three-dimensional dynamically enhanced magnetic resonance image so that pixel intervals of images generated by different scanning devices are kept consistent. b) Marking a focus area of each two-dimensional slice of the resampled original three-dimensional dynamic enhanced magnetic resonance image by a radiologist through a two-dimensional rectangular frame to obtain a plurality of two-dimensional rectangular frames; and combining the two-dimensional rectangular frames into a three-dimensional rectangular block containing the lesion area. Specifically, the three-dimensional rectangular block containing the lesion area is a mask (mask) containing only 0 and 1. And cutting the primary focus area of the resampled primary three-dimensional dynamic enhanced magnetic resonance image according to the size of the three-dimensional rectangular block (mask) to obtain a three-dimensional image block. The size of the cut is the smallest rectangular block which can cover the primary focus area, the primary focus area is located in the center of the rectangular block, and the size of each primary focus area is different, so that the size of a three-dimensional rectangular block (mask) obtained by labeling each original three-dimensional dynamic enhanced magnetic resonance image is also different, the size of the cut image needs to be unified before the cut image is input into a convolutional neural network, and the sizes of all three-dimensional image blocks are unified to be (8, 80, 80); specifically, the size unified for each three-dimensional image block is an average of all mask sizes. c) Sequentially carrying out standardization processing and data enhancement processing on the three-dimensional image blocks with the uniform sizes to obtain a first image; the data enhancement processing comprises operations of turning, rotating, translating and scaling so as to prevent an overfitting phenomenon caused by a data volume problem.
3) And preprocessing the original parameter images to obtain a first parameter image. The method specifically comprises the following steps: and (4) resampling 8 parameter maps which are correspondingly generated in each original DCE-MRI image, so that the pixel intervals of the images generated by different scanning devices are kept consistent. And then cutting the resampled 8 parameter images according to the size of a three-dimensional rectangular block obtained after the original DCE-MRI image is labeled to generate 8 three-dimensional parameter image blocks. And sequentially carrying out standardization processing and data enhancement processing on the 8 three-dimensional parameter image blocks to obtain a first parameter image.
4) Tag information is determined from the first image. Specifically, the label information is obtained by histopathological detection of the first image.
Further, in the specific embodiment of the present invention, the data processing process of obtaining the third feature map after processing the first image by using an attention mechanism and a multi-scale fusion algorithm specifically includes: and inputting the feature map F1 generated by the second residual attention submodule BL2 and the feature map F2 generated by the third residual attention submodule BL3 into a multi-scale fusion module to perform a multi-scale fusion algorithm so as to obtain a third feature map.
As shown in fig. 4, specifically, the scale of the feature map F1 and the scale of the feature map F2 are matched and spliced by down-sampling to obtain a merged feature map; processing the merged feature map sequentially through the 1 × 1 convolutional layer and the two classification models to obtain a first spatial weight factor S2 and a second spatial weight factor S3; carrying out sigmoid operation on the feature map F1 to obtain a first soft attention feature map; carrying out sigmoid operation on the feature map F2 to obtain a second soft attention feature map; performing element-wise multiplication on the first spatial weight factor S2 and the first soft attention feature map to obtain a first fine-grained feature map; performing element-wise multiplication on the second spatial weight factor S3 and the second soft attention feature map to obtain a second fine-grained feature map; and carrying out element-wise addition on the first fine-grained characteristic diagram and the second fine-grained characteristic diagram to obtain a third characteristic diagram F3.
In the specific embodiment of the present invention, a process of processing the first parameter image by using the attention mechanism and the multi-scale fusion algorithm to obtain the fourth feature map is the same as a process of processing the first parameter image by using the attention mechanism and the multi-scale fusion algorithm to obtain the third feature map, and details are not repeated here.
Further, in the specific embodiment of the present invention, the data processing process of the converged network is as follows:
inputting the third feature map into a first full-connection module to obtain a first result; the first result is a predicted probability of a lesion category of a primary lesion area in the first image.
Inputting the fourth feature map into a second full-connection module to obtain a second result; the second result is a predicted probability of a lesion category of the primary lesion area in the first parametric image.
Inputting the fifth feature map into a third full-connection module to obtain a third result; the third result is the prediction probability of the focus category of the primary focus area in the fifth feature map; and the fifth feature map is obtained by splicing the first feature map obtained according to the first image and the second feature map obtained according to the first parameter image.
Averaging the first, second and third results to obtain a predicted probability of a final output lesion category.
The first full-connection module, the second full-connection module and the third full-connection module have the same structure; the first full-connection module comprises a global average pooling layer, a full-connection layer and a softmax classification layer which are sequentially connected. And the global average pooling can avoid overfitting, and the lightweight design of the network is realized.
In a specific embodiment of the present invention, the specific operation steps of the multi-modal breast magnetic resonance image classification method are as follows:
step 1, obtaining an original three-dimensional DCE-MRI image of a certain patient; step 2, obtaining 8 parameter graphs corresponding to the original three-dimensional DCE-MRI image; step 3, sequentially carrying out resampling processing, cutting processing and standardization processing on an original three-dimensional DCE-MRI image of a certain patient to obtain a first image; step 4, sequentially carrying out resampling processing, cutting processing and standardization processing on 8 parameter images of a certain patient to obtain a first parameter image; step 5, inputting the first image and the first parameter image into a trained target classification network to obtain the prediction probability of the focus category; and 6, determining the focus category according to the prediction probability of the focus category.
Example two
As shown in fig. 5, the present embodiment provides a multi-modality breast magnetic resonance image classification system, which includes:
a first data acquisition part 200 for acquiring a magnetic resonance image to be classified.
And a second data acquisition unit 201, configured to acquire a parameter map corresponding to the magnetic resonance image to be classified, so as to obtain a parameter map to be classified.
A prediction probability determination part 202, configured to output the magnetic resonance image to be classified and the parameter map to be classified into a target classification network to obtain a prediction probability of a lesion category.
A lesion classification determination unit 203 for determining a lesion classification based on the prediction probability of the lesion classification.
The target classification network is obtained by training according to a training sample and a calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the label is a lesion category of a primary lesion area in the first image; the calibration convolutional neural network structure is a double-input single-output network structure.
As shown in fig. 6, the structure of the scaled convolutional neural network includes a first attention network, a second attention network, and a fusion network.
An input of the first attention network for inputting the first image and the label; a first output of the first attention network for outputting a first profile; the first feature map is determined after feature extraction is carried out on the first image by using an attention mechanism; a second output of the first attention network is used for outputting a third feature map; the third characteristic diagram is obtained by processing the first image by using an attention mechanism and a multi-scale fusion algorithm.
An input of the second attention network is used for inputting the first parameter image and the label; the first output end of the second attention network is used for outputting a second feature map; the second feature map is determined after feature extraction is carried out on the first parameter image by using an attention mechanism; a second output of the second attention network is used for outputting a fourth feature map; the fourth characteristic diagram is obtained by processing the first parameter image by using an attention mechanism and a multi-scale fusion algorithm.
The first input end of the fusion network is used for inputting a fifth feature map; the fifth feature map is determined after the first feature map and the second feature map are spliced; the second input end of the fusion network is used for inputting the third feature map; a third input end of the fusion network is used for inputting the fourth feature map; and the output end of the fusion network is used for outputting the prediction probability of the focus category.
Preferably, the first attention network comprises a first fusion module and a plurality of first residual attention modules; the plurality of first residual attention modules are connected in series; the output end of the tail end first residual attention module is a first output end of the first attention network, and the input end of the head end first residual attention module is an input end of the first attention network; the input end of the first fusion module is connected with the first calibration residual error attention module; the first calibration residual attention module is any one of the first calibration residual attention modules, and the number of the first calibration residual attention modules is greater than or equal to 2; the output end of the first fusion module is a second output end of the first attention network.
The second attention network comprises a second fusion module and a plurality of second residual attention modules; the plurality of second residual attention modules are connected in series; the output end of the second residual attention module at the tail end is a first output end of the second attention network, and the input end of the second residual attention module at the head end is an input end of the second attention network; the input end of the second fusion module is connected with the second calibration residual error attention module; the second calibration residual attention module is any second residual attention module, and the number of the second calibration residual attention modules is greater than or equal to 2; the output end of the second fusion module is a second output end of the second attention network.
Compared with the prior art, the invention also has the following advantages:
(1) the invention carries out multi-mode fusion on the parameter graphs of the quantitative parameters and the semi-quantitative parameters and the original DCE-MRI image, so that richer characteristics can be learned by a network, and the characteristics can be mutually complemented.
(2) According to the invention, an attention mechanism is added on the basis of the proposed residual error network model, so that the network focuses more on useful information, and the training efficiency is improved.
(3) The invention fuses the multi-scale features simultaneously, and the multi-scale attention mechanism enables the network to adaptively learn more refined features in space, so that the network obtains more accurate classification results.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A multi-modal breast magnetic resonance image classification method is characterized by comprising the following steps:
acquiring a magnetic resonance image to be classified;
acquiring a parameter map corresponding to the magnetic resonance image to be classified to obtain the parameter map to be classified;
outputting the magnetic resonance image to be classified and the parameter map to be classified to a target classification network to obtain the prediction probability of the focus category;
determining a lesion category according to the prediction probability of the lesion category;
the target classification network is obtained by training according to a training sample and a calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the label is a lesion category of a primary lesion area in the first image; the calibration convolutional neural network has a double-input single-output network structure; the structure of the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network;
an input of the first attention network for inputting the first image and the label;
a first output of the first attention network for outputting a first profile; the first feature map is determined after feature extraction is carried out on the first image by using an attention mechanism;
a second output of the first attention network is used for outputting a third feature map; the third characteristic diagram is obtained by processing the first image by using an attention mechanism and a multi-scale fusion algorithm;
an input of the second attention network is used for inputting the first parameter image and the label;
the first output end of the second attention network is used for outputting a second feature map; the second feature map is determined after feature extraction is carried out on the first parameter image by using an attention mechanism;
a second output of the second attention network is used for outputting a fourth feature map; the fourth characteristic diagram is obtained by processing the first parameter image by using an attention mechanism and a multi-scale fusion algorithm;
the first input end of the fusion network is used for inputting a fifth feature map; the fifth characteristic diagram is obtained by splicing the first characteristic diagram and the second characteristic diagram;
the second input end of the fusion network is used for inputting the third feature map;
a third input end of the fusion network is used for inputting the fourth feature map;
and the output end of the fusion network is used for outputting the prediction probability of the focus category.
2. The method of multi-modal breast magnetic resonance image classification method of claim 1, wherein the first attention network comprises a first fusion module and a plurality of first residual attention modules;
the plurality of first residual attention modules are connected in series; the output end of the tail end first residual attention module is a first output end of the first attention network, and the input end of the head end first residual attention module is an input end of the first attention network;
the input end of the first fusion module is connected with the first calibration residual error attention module; the first calibration residual attention module is any one of the first calibration residual attention modules, and the number of the first calibration residual attention modules is greater than or equal to 2; the output end of the first fusion module is a second output end of the first attention network.
3. The method of multi-modal breast magnetic resonance image classification method of claim 1, wherein the second attention network comprises a second fusion module and a plurality of second residual attention modules;
the plurality of second residual attention modules are connected in series; the output end of the second residual attention module at the tail end is a first output end of the second attention network, and the input end of the second residual attention module at the head end is an input end of the second attention network;
the input end of the second fusion module is connected with the second calibration residual error attention module; the second calibration residual attention module is any second residual attention module, and the number of the second calibration residual attention modules is greater than or equal to 2; the output end of the second fusion module is a second output end of the second attention network.
4. The method of claim 2 wherein the plurality of first residual attention modules are identical in structure;
wherein, the data processing process of the first residual attention module is as follows:
carrying out feature extraction on the input image sequentially through a 3 × 3 convolution layer, a GN layer, a Dropout layer and a rectification linear unit to obtain a first feature subgraph;
sequentially carrying out feature extraction on the input image through the convolution layer 1 x 1, the GN layer and the Dropout layer to obtain a second feature subgraph;
carrying out feature extraction on the first feature sub-graph sequentially through the 3 × 3 convolution layer, the GN layer and the Dropout layer to obtain a third feature sub-graph;
performing feature extraction on the third feature subgraph through an attention module to obtain a fourth feature subgraph;
and performing jumping connection and addition on the second characteristic subgraph and the fourth characteristic subgraph to obtain an output image.
5. The method of claim 4, wherein the data processing procedure of the attention module is as follows:
performing feature extraction on the third feature subgraph through a channel attention module to obtain a channel attention feature graph;
performing element-wise multiplication operation on the channel attention feature graph and the third feature subgraph to obtain an intermediate feature subgraph;
performing feature extraction on the intermediate feature subgraph through a spatial attention module to obtain a spatial attention feature graph;
and carrying out element-wise multiplication operation on the spatial attention feature graph and the intermediate feature subgraph to obtain a fourth feature subgraph.
6. The method according to claim 1, wherein the determining of the fifth feature map specifically comprises:
and splicing the first characteristic diagram and the second characteristic diagram in a channel series connection mode to obtain a fifth characteristic diagram.
7. The method according to claim 1, wherein the training sample construction process specifically comprises:
acquiring a plurality of original images; the original images comprise original three-dimensional dynamic enhanced magnetic resonance images and original parameter images corresponding to the original three-dimensional dynamic enhanced magnetic resonance images;
preprocessing a plurality of original three-dimensional dynamic enhanced magnetic resonance images to obtain a first image; the preprocessing comprises resampling processing, cutting processing, standardization processing and data enhancement processing;
preprocessing a plurality of original parameter images to obtain a first parameter image;
tag information is determined from the first image.
8. A multi-modality breast magnetic resonance image classification system, characterized in that the multi-modality breast magnetic resonance image classification system comprises:
a first data acquisition unit for acquiring a magnetic resonance image to be classified;
the second data acquisition part is used for acquiring a parameter map corresponding to the magnetic resonance image to be classified so as to obtain the parameter map to be classified;
the prediction probability determination part is used for outputting the magnetic resonance image to be classified and the parameter map to be classified into a target classification network so as to obtain the prediction probability of the focus category;
a lesion category determination section for determining a lesion category based on a prediction probability of the lesion category;
the target classification network is obtained by training according to a training sample and a calibration convolutional neural network; the training sample comprises input data and a label; the input data comprises a first image and a first parameter image corresponding to the first image; the first image is a three-dimensional dynamic enhanced magnetic resonance image with a primary focus area; the label is a lesion category of a primary lesion area in the first image; the calibration convolutional neural network has a double-input single-output network structure; the structure of the calibration convolutional neural network comprises a first attention network, a second attention network and a fusion network;
an input of the first attention network for inputting the first image and the label;
a first output of the first attention network for outputting a first profile; the first feature map is determined after feature extraction is carried out on the first image by using an attention mechanism;
a second output of the first attention network is used for outputting a third feature map; the third characteristic diagram is obtained by processing the first image by using an attention mechanism and a multi-scale fusion algorithm;
an input of the second attention network is used for inputting the first parameter image and the label;
the first output end of the second attention network is used for outputting a second feature map; the second feature map is determined after feature extraction is carried out on the first parameter image by using an attention mechanism;
a second output of the second attention network is used for outputting a fourth feature map; the fourth characteristic diagram is obtained by processing the first parameter image by using an attention mechanism and a multi-scale fusion algorithm;
the first input end of the fusion network is used for inputting a fifth feature map; the fifth feature map is determined after the first feature map and the second feature map are spliced;
the second input end of the fusion network is used for inputting the third feature map;
a third input end of the fusion network is used for inputting the fourth feature map;
and the output end of the fusion network is used for outputting the prediction probability of the focus category.
9. The multi-modality breast magnetic resonance image classification system of claim 8, wherein the first attention network includes a first fusion module and a plurality of first residual attention modules;
the plurality of first residual attention modules are connected in series; the output end of the tail end first residual attention module is a first output end of the first attention network, and the input end of the head end first residual attention module is an input end of the first attention network;
the input end of the first fusion module is connected with the first calibration residual error attention module; the first calibration residual attention module is any one of the first calibration residual attention modules, and the number of the first calibration residual attention modules is greater than or equal to 2; the output end of the first fusion module is a second output end of the first attention network.
10. The multi-modality breast magnetic resonance image classification system of claim 8, wherein the second attention network includes a second fusion module and a plurality of second residual attention modules;
the plurality of second residual attention modules are connected in series; the output end of the second residual attention module at the tail end is a first output end of the second attention network, and the input end of the second residual attention module at the head end is an input end of the second attention network;
the input end of the second fusion module is connected with the second calibration residual error attention module; the second calibration residual attention module is any second residual attention module, and the number of the second calibration residual attention modules is greater than or equal to 2; the output end of the second fusion module is a second output end of the second attention network.
CN202111159748.2A 2021-09-30 2021-09-30 Multi-modal breast magnetic resonance image classification method and system Pending CN113902945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111159748.2A CN113902945A (en) 2021-09-30 2021-09-30 Multi-modal breast magnetic resonance image classification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111159748.2A CN113902945A (en) 2021-09-30 2021-09-30 Multi-modal breast magnetic resonance image classification method and system

Publications (1)

Publication Number Publication Date
CN113902945A true CN113902945A (en) 2022-01-07

Family

ID=79189726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111159748.2A Pending CN113902945A (en) 2021-09-30 2021-09-30 Multi-modal breast magnetic resonance image classification method and system

Country Status (1)

Country Link
CN (1) CN113902945A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375665A (en) * 2022-08-31 2022-11-22 河南大学 Early Alzheimer disease development prediction method based on deep learning strategy
CN117456289A (en) * 2023-12-25 2024-01-26 四川大学 Jaw bone disease variable segmentation classification system based on deep learning
CN117636074A (en) * 2024-01-25 2024-03-01 山东建筑大学 Multi-mode image classification method and system based on feature interaction fusion

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375665A (en) * 2022-08-31 2022-11-22 河南大学 Early Alzheimer disease development prediction method based on deep learning strategy
CN115375665B (en) * 2022-08-31 2024-04-16 河南大学 Advanced learning strategy-based early Alzheimer disease development prediction method
CN117456289A (en) * 2023-12-25 2024-01-26 四川大学 Jaw bone disease variable segmentation classification system based on deep learning
CN117456289B (en) * 2023-12-25 2024-03-08 四川大学 Jaw bone disease variable segmentation classification system based on deep learning
CN117636074A (en) * 2024-01-25 2024-03-01 山东建筑大学 Multi-mode image classification method and system based on feature interaction fusion
CN117636074B (en) * 2024-01-25 2024-04-26 山东建筑大学 Multi-mode image classification method and system based on feature interaction fusion

Similar Documents

Publication Publication Date Title
Usman et al. Volumetric lung nodule segmentation using adaptive roi with multi-view residual learning
CN110428432B (en) Deep neural network algorithm for automatically segmenting colon gland image
CN112651978B (en) Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium
TWI777092B (en) Image processing method, electronic device, and storage medium
JP7026826B2 (en) Image processing methods, electronic devices and storage media
EP4044115A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
CN111369565B (en) Digital pathological image segmentation and classification method based on graph convolution network
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
CN113902945A (en) Multi-modal breast magnetic resonance image classification method and system
An et al. Medical image segmentation algorithm based on multilayer boundary perception-self attention deep learning model
CN111951288A (en) Skin cancer lesion segmentation method based on deep learning
Wazir et al. HistoSeg: Quick attention with multi-loss function for multi-structure segmentation in digital histology images
CN111444844A (en) Liquid-based cell artificial intelligence detection method based on variational self-encoder
CN110827236A (en) Neural network-based brain tissue layering method and device, and computer equipment
CN110648331A (en) Detection method for medical image segmentation, medical image segmentation method and device
CN115330669A (en) Computer-implemented method, system, and storage medium for predicting disease quantification parameters of an anatomical structure
Wen et al. Review of research on the instance segmentation of cell images
CN116258937A (en) Small sample segmentation method, device, terminal and medium based on attention mechanism
Wang et al. Automatic consecutive context perceived transformer GAN for serial sectioning image blind inpainting
CN114283406A (en) Cell image recognition method, device, equipment, medium and computer program product
WO2021159778A1 (en) Image processing method and apparatus, smart microscope, readable storage medium and device
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement
Hao et al. MFUnetr: A transformer-based multi-task learning network for multi-organ segmentation from partially labeled datasets
Sri et al. Detection Of MRI Brain Tumor Using Customized Deep Learning Method Via Web App
CN112750124B (en) Model generation method, image segmentation method, model generation device, image segmentation device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination