CN113160240A - Cyclic hopping deep learning network - Google Patents
- Publication number
- CN113160240A (application CN202110255801.2A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- network
- deep learning
- segmentation
- convolutional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications (CPC, all under G—PHYSICS, G06—COMPUTING)
- G06T7/11—Region-based segmentation
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06T9/002—Image coding using neural networks
- G06T2207/10024—Color image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30041—Eye; Retina; Ophthalmic
- G06T2207/30204—Marker
Abstract
A cyclic hopping deep learning network is provided, mainly for accurately extracting different targets of interest from medical images. Building on the existing BiO-Net segmentation network, it introduces a new reverse short-skip connection and an attention-guided convolution module, thereby constructing the cyclic hopping deep learning network; local OCT image data and public fundus image data are then used to verify its segmentation performance. The network can effectively extract different targets of interest from an image, with segmentation performance superior to existing networks such as U-Net, AU-Net, and BiO-Net.
Description
Technical Field
The invention relates to the technical field of image segmentation, in particular to a cyclic hopping deep learning network.
Background
Image segmentation divides an image into regions with distinct imaging characteristics (such as pixel gray-level distribution, tissue contrast, and anatomical morphology), reducing the difficulty of analyzing and measuring the region of interest and providing key guidance for lesion localization and morphological quantification, disease analysis, clinical diagnosis, and prognosis monitoring; it therefore has substantial research value. To accurately extract the desired region of interest, many image segmentation algorithms have been developed, such as threshold-based, active-contour-based, and atlas-based methods. These algorithms are commonly divided into unsupervised and supervised categories. (a) Unsupervised segmentation algorithms generally distinguish regions according to imaging characteristics such as the pixel gray-level distribution, the positional relations among anatomical structures, tissue contrast, and morphological features, and extract the region of interest while excluding the irrelevant background by means of strategies such as morphological operations and thresholding. Such algorithms are simple and fast, and can process high-quality medical images with reasonable segmentation performance; however, they are sensitive to tissue contrast, imaging noise, and various artifacts, and their performance degrades severely under such interference.
In addition, such algorithms often involve many empirically assigned parameters, so their design demands considerable image-processing knowledge and clinical background, making them hard to adapt to large-scale clinical image processing. (b) Supervised segmentation algorithms consider not only the imaging characteristics of the image itself but also manual intervention (such as manually designed or selected features and labels); by exploiting several different types of image information they can resolve and extract multiple regions of interest, and they generally outperform unsupervised algorithms. Because they use more image information, they can to some extent mitigate the interference of image artifacts and noise, making them better suited to large-scale clinical data. However, acquiring the manually designed features or labels requires sufficient multidisciplinary knowledge (image processing, clinical background, and so on) and depends heavily on the personal experience of the algorithm designer or image annotator, so the acquired features or labels may carry large errors that limit the segmentation performance.
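As a toy illustration of the unsupervised, threshold-based approach described above (not taken from the patent; the image and threshold value are invented for the example), a single empirically chosen gray-level threshold separates a bright region of interest from a darker background:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "medical image": a bright square region of interest
# on a darker, noisy background.
img = rng.normal(0.2, 0.05, size=(32, 32))   # background, mean gray 0.2
img[12:20, 12:20] += 0.6                     # region of interest, mean gray 0.8

# Threshold-based segmentation: one empirically assigned parameter.
# This hand-tuned value is exactly the kind of parameter the text criticizes:
# it works here, but fails when contrast, noise, or artifacts change.
threshold = 0.5
mask = img > threshold
```

With clean, high-contrast data the mask recovers the square almost perfectly; lowering the contrast or raising the noise level breaks it, which is the sensitivity the paragraph above describes.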
In recent years, supervised segmentation algorithms based on deep learning have received extensive attention and intensive study. By stacking many basic deep learning operations (convolution, batch normalization, activation functions, and so on), such an algorithm learns diverse convolution features from the input image with the aid of labeling information, integrates this feature information, classifies each pixel accurately, and thus achieves high-quality image segmentation. Among these algorithms, U-Net is a classical deep learning network with a simple structure and excellent performance, but it has the following shortcomings: (a) U-Net repeatedly downsamples the image to accelerate the extraction of convolution features, reducing the image dimensions and thereby blurring and losing information; (b) the encoded convolution features are transferred through only a single kind of skip connection, which is insufficient to reconstruct the large amount of lost image information; (c) simply stacking many convolutional layers lets the network extract the central region of a target of interest effectively but handle its boundary region poorly, causing large boundary errors. To achieve accurate image segmentation, many improved variants of U-Net have been developed, such as CE-Net, AU-Net, and BiO-Net. Although these networks process images better, they still show large segmentation errors in target boundary regions.
Disclosure of Invention
To remedy these technical defects in the prior art, the invention provides a cyclic hopping deep learning network, mainly for accurately extracting different targets of interest from medical images. It can effectively reduce the influence of weak tissue contrast, severe imaging artifacts, and noise on image segmentation, assist the localization and detection of lesions in medical images, and lay a solid theoretical foundation for the accurate segmentation and morphological quantification of lesion regions.
The technical solution adopted by the invention is as follows: a cyclic hopping deep learning network constructed through the following steps:
(1) Design of the reverse short-skip connection: to alleviate the information loss caused by repeated image downsampling, existing segmentation networks (such as U-Net and BiO-Net) generally use skip connections to concatenate the encoding and decoding convolution features, ensuring that each decoding convolution module has enough input from which to learn and capture the required target features. However, these skip connections transmit in only one direction, and the transmitted features must share the same image dimensions, so their capacity to preserve information is limited and the integration of multi-level features is restricted. To reduce the information loss caused by downsampling, the invention introduces a new reverse short-skip connection that concatenates the input and output variables of a convolution module and feeds the concatenation back to the module as its input, so that the module can cyclically learn and detect the most relevant feature information;
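A minimal NumPy sketch of the reverse short-skip idea (concatenate a module's input with its output along the channel axis, then feed the result back through the module). The 1 × 1 "module" below is a simplified stand-in for the real encoding/decoding convolution module, and the shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_module(x, w):
    """Stand-in convolution module: 1x1 convolution (per-pixel channel
    mixing) followed by ReLU. x: (H, W, C_in), w: (C_in, C_out)."""
    return np.maximum(x @ w, 0.0)

H, W, C = 8, 8, 4
x = rng.standard_normal((H, W, C))                # module input
y = conv_module(x, rng.standard_normal((C, C)))   # module output

# Reverse short-skip connection: concatenate the module's input and
# output along the channel axis, then feed the result back into the
# module (with a correspondingly wider kernel) so it can re-learn
# from both the raw and the processed features.
z = np.concatenate([x, y], axis=-1)                    # (H, W, 2C)
y2 = conv_module(z, rng.standard_normal((2 * C, C)))   # (H, W, C)
```

Note the design consequence: the second pass sees the original input alongside the first output, which is what lets the module "cyclically" refine its features instead of relying only on a one-way pass.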
(2) Design of the attention-guided convolution module: when existing segmentation networks apply skip connections, they generally transmit the information as-is without any processing, so a large amount of redundant information is fed into the convolution modules repeatedly, seriously interfering with their learning of target features and reducing the network's detection sensitivity to key regions of interest. The invention therefore introduces an attention-guided convolution module that processes the image information delivered by the skip connections in a targeted way, highlighting potentially important regions and reducing the influence of the irrelevant background on segmentation;
(3) The cyclic hopping deep learning network: the skip connections and the attention-guided convolution module above are integrated into the BiO-Net segmentation network, constructing the cyclic hopping deep learning network, which alleviates the shortcomings of BiO-Net to some extent and segments images more accurately. To verify the performance of the designed network, segmentation experiments were carried out on corneal OCT images collected by the Eye Hospital of Wenzhou Medical University and on public color fundus images, and the results were compared with those of existing segmentation networks. The network contains three kinds of skip connections (forward skip, reverse skip, and reverse short-skip connections) and three kinds of convolution modules (the encoding, decoding, and attention-guided convolution modules). The skip connections transmit convolution features of different levels and positions, enable the cyclic reuse of feature information within and between the encoding and decoding convolution modules so that these modules can detect the required target features, and alleviate the information loss caused by repeated image downsampling. The encoding and decoding convolution modules have very similar structures and mainly integrate the various convolution features to extract deeper image information from them; each consists of two channel-wise concatenation operations and two identical convolutional layers.
In step (3), each of the two identical convolutional layers in the encoding and decoding convolution modules comprises three basic operations: a 3 × 3 convolution, batch normalization, and a rectified linear unit (ReLU) activation; that is, the layer can be written as Conv3×3 → BN → ReLU.
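The Conv3×3 → BN → ReLU layer can be sketched in plain NumPy. This is a simplified illustration under stated assumptions: "same" zero padding, and batch normalization computed over spatial positions of a single image rather than over a training batch as in the real layer:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3x3_bn_relu(x, kernels, gamma=1.0, beta=0.0, eps=1e-5):
    """One Conv3x3 -> BN -> ReLU layer as a NumPy sketch.
    x: (H, W, C_in); kernels: (3, 3, C_in, C_out); 'same' zero padding.
    BN here normalizes over spatial positions (single-image stand-in)."""
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))              # zero padding
    win = sliding_window_view(xp, (3, 3), axis=(0, 1))    # (H, W, C_in, 3, 3)
    y = np.einsum('hwcij,ijco->hwo', win, kernels)        # 3x3 convolution
    mu, var = y.mean(axis=(0, 1)), y.var(axis=(0, 1))
    y = gamma * (y - mu) / np.sqrt(var + eps) + beta      # batch norm
    return np.maximum(y, 0.0)                             # ReLU

rng = np.random.default_rng(0)
out = conv3x3_bn_relu(rng.standard_normal((16, 16, 2)),
                      rng.standard_normal((3, 3, 2, 8)))
```

The patent's experiments used Keras, where the same layer would be a `Conv2D(filters, 3, padding='same')` followed by `BatchNormalization()` and a ReLU activation.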
The attention-guided convolution module in step (3) consists of three convolutional layers with 1 × 1 windows; it processes the convolution features delivered by the skip connections and reduces the redundant information in them.
In step (2), the attention-guided convolution module is designed as follows:
First, the feature information delivered by a skip connection is processed by three convolutional layers with 1 × 1 windows. The first and third layers share the same structure, Conv1×1 → BN → ReLU, whose main role is to obtain the required image dimensions so that different convolution features can undergo pixel-wise arithmetic operations; the second layer has the structure Conv1×1 → BN → Sigmoid, where Sigmoid is an activation function distinct from ReLU. Second, the processed convolution features and their original version are integrated using pixel-wise multiplication and addition, highlighting certain key regions of interest while essentially keeping the original features unchanged. The feature integration can be expressed as:
X1 = X0 · (1 + S(X0))
where X0 and X1 denote, respectively, the image features delivered by the skip connection and their version processed by the attention-guided convolution module, and S(·) denotes the sigmoid activation function.
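Under the assumption that the sigmoid gate is computed from the first ReLU layer's output (the text leaves the exact wiring of the three 1 × 1 layers to fig. 2), the module and the integration X1 = X0 · (1 + S(·)) can be sketched as follows, with batch normalization omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def conv1x1(x, w, act):
    """1x1 convolution = per-pixel channel mixing, plus an activation.
    Batch normalization is omitted for brevity. x: (H, W, C), w: (C, C)."""
    return act(x @ w)

def attention_guided(x0, w1, w2, w3):
    """Attention-guided convolution module (sketch): two 1x1 layers build a
    sigmoid gate S in (0, 1), the original features are re-weighted by
    X1 = X0 * (1 + S), and a final 1x1 ReLU layer integrates the result."""
    a = conv1x1(x0, w1, relu)       # Conv1x1 -> ReLU
    s = conv1x1(a, w2, sigmoid)     # Conv1x1 -> Sigmoid, gate in (0, 1)
    x1 = x0 * (1.0 + s)             # pixel-wise multiply-and-add
    return conv1x1(x1, w3, relu)    # Conv1x1 -> ReLU

C = 4
x0 = rng.standard_normal((8, 8, C))
w1, w2, w3 = (rng.standard_normal((C, C)) for _ in range(3))
out = attention_guided(x0, w1, w2, w3)
```

Because the gate lies in (0, 1), the factor (1 + S) lies in (1, 2): the original features are never suppressed below their input magnitude, only selectively amplified, which matches the text's "keep the original features while highlighting key regions".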
The beneficial effects of the invention are as follows. The invention provides a cyclic hopping deep learning network with a newly designed reverse short-skip connection and attention-guided convolution module. The former concatenates the input and output features of each convolution module (mainly the encoding and decoding convolution modules), so that different kinds of convolution features can be aggregated in diverse ways and the modules can cyclically and progressively detect the most important and most relevant image features, achieving fast and accurate target segmentation; the latter processes the features delivered by the skip connections to reduce their redundant information. Integrating the two structures into the BiO-Net network yields the cyclic hopping deep learning network, which can accurately extract several target regions of an image at once. The method effectively extracts different targets of interest and achieves segmentation performance superior to existing networks such as U-Net, AU-Net, and BiO-Net.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 shows the cyclic hopping deep learning network designed in the invention, composed of three convolution modules (the encoding, decoding, and attention-guided convolution modules) and three skip connections (forward skip, reverse skip, and reverse short-skip connections).
fig. 3 shows the two experimental datasets used in the invention and the corresponding manual labels: in the OCT images, the epithelial layer, Bowman's layer, and stromal layer in the central region of the cornea are labeled; in the color fundus images, the optic disc (OD) and optic cup (OC) regions are labeled.
FIG. 4 shows the results of segmenting the corneal OCT images: columns 1-2 are the original images and the corresponding manual labels, and columns 3-6 are the segmentation results of U-Net, AU-Net, BiO-Net, and the proposed network.
FIG. 5 shows the segmentation results on the color fundus images: columns 1-2 are the original images and the corresponding manual labels, and columns 3-6 are the segmentation results of U-Net, AU-Net, BiO-Net, and the proposed network.
Detailed Description
The cyclic hopping deep learning network and its application are described below with reference to the accompanying drawings.
Referring to fig. 1, the cyclic hopping deep learning network of the invention and its application comprise the following steps:
step 1, analyzing the effect and the defects of jump links in the existing segmentation networks (such as U-Net and BiO-Net), and then pertinently designing appropriate jump links and convolution modules to relieve the loss of image information and improve the segmentation performance of the network. Specifically, the existing segmentation network mainly uses forward and reverse skip links to transfer feature information between encoding and decoding convolution modules, however, these skip links can only transfer a single kind of features with the same image dimension to a specified convolution module, thereby limiting the diversified integration of image information, resulting in difficulty in reconstructing lost massive information. Therefore, the invention is inspired by reverse jump linkage, the input and output characteristics of each convolution module are connected in series, and the series connection result is used as the input characteristic of the convolution module, thereby constructing a new reverse short jump linkage, and enabling the convolution module to circularly learn various required characteristic information.
Step 2: design of the attention-guided convolution module
Beyond the shortcomings above, skip connections also cause a large amount of redundant information to appear repeatedly, lowering the convolution modules' detection sensitivity to important features and seriously interfering with the network's extraction of the target of interest. To reduce the influence of the irrelevant background, the invention introduces an attention-guided convolution module that processes the feature information delivered by the skip connections in a targeted way, mitigating the interference of the irrelevant background with image segmentation and improving the network's target-detection accuracy.
Step 3: the cyclic hopping deep learning network
Introducing the reverse short-skip connection and the attention-guided convolution module designed above into the BiO-Net network yields the cyclic hopping segmentation network of the invention. While keeping the main structure of BiO-Net unchanged, the network concatenates the input and output of each convolution module (mainly the encoding and decoding convolution modules) as new input, realizing the cyclic reuse of convolution features; the attention-guided convolution module processes the image information delivered by all the skip connections; and finally the related information with the same image dimensions is concatenated, realizing the flexible aggregation of multi-level features and facilitating accurate segmentation of the image.
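Putting the pieces of step 3 together, one decoder stage might be wired as follows. This is a schematic sketch only: the 1 × 1 "modules" stand in for the real convolution modules, the shapes are illustrative assumptions, and the exact BiO-Net wiring is defined by fig. 2:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda t: np.maximum(t, 0.0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def module(x, c_out):
    """Stand-in for a decoding convolution module (1x1 mixing + ReLU)."""
    w = rng.standard_normal((x.shape[-1], c_out))
    return relu(x @ w)

def attention(x):
    """Stand-in attention gate: X * (1 + sigmoid(X)), applied per pixel."""
    return x * (1.0 + sigmoid(x))

H, W, C = 8, 8, 4
enc_feat = rng.standard_normal((H, W, C))   # features arriving on a forward skip
dec_in = rng.standard_normal((H, W, C))     # upsampled decoder input

gated = attention(enc_feat)                        # 1) attention-guided processing
fused = np.concatenate([dec_in, gated], axis=-1)   # 2) forward skip: concatenate
out = module(fused, C)                             #    decoding convolution module
# 3) reverse short-skip: concatenate the module's input and output,
#    then run the module again on the concatenation
out2 = module(np.concatenate([fused, out], axis=-1), C)
```

Steps 1-3 in the comments mirror the three design elements of the network: attention-guided processing of skip features, forward concatenation into the decoder, and the reverse short-skip reuse of the module's own input and output.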
1. Simulation conditions:
the invention carries out segmentation experiments on Keras depth learning software on two platforms, namely Windows 1064 bit Intel (R) Xeon (R) Gold 5120CPU @2.20GHz 2.19GHz RAM 64GB and Windows 1064 bit Intel (R) core (TM) i9-10920X CPU @3.50GHz 3.50GHz RAM 32GB, wherein the experimental data are cornea OCT images and public color fundus image data (REFUSE) collected and manually marked by an eye vision hospital affiliated to Wenzhou medical university.
2. Simulation content and results
The simulation experiments train and independently validate the proposed cyclic hopping deep learning network on the corneal OCT images and the color fundus images, evaluating the feasibility and effectiveness of the algorithm, and then compare its performance with the three existing segmentation networks U-Net, AU-Net, and BiO-Net. The experimental results are shown in figures 4 and 5:
In FIG. 4, columns 1-2 are the original images and the corresponding manual labels, and columns 3-6 are the segmentation results of U-Net, AU-Net, BiO-Net, and the proposed network. The results show that the proposed network has better segmentation precision than the other networks and effectively excludes irrelevant background information.
In FIG. 5, columns 1-2 are the original images and the corresponding manual labels, and columns 3-6 are the segmentation results of U-Net, AU-Net, BiO-Net, and the proposed network. The results show that the designed network can accurately extract two different image regions at once and outperforms the existing networks when extracting relatively small targets.
Comparing the results of the four networks shows that the designed segmentation network can simultaneously and accurately detect targets of interest of different sizes and shapes, and that its overall segmentation performance is superior to the other networks.
In the description of the present invention, it should be noted that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The skilled person should understand that: although the invention has been described in terms of the above specific embodiments, the inventive concept is not limited thereto and any modification applying the inventive concept is intended to be included within the scope of the patent claims.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.
Claims (4)
1. A cycle hopping deep learning network, comprising the steps of:
(1) Design of the reverse short-skip connection: a reverse short-skip connection is introduced to concatenate the input and output features of each convolution module, realizing the cyclic use of convolution features, finding the most important feature information, and assisting the accurate segmentation of targets of interest;
(2) Design of the attention-guided convolution module: an attention-guided convolution module is introduced to process, in a targeted way, the image information delivered by the skip connections, improving the segmentation network's target-detection sensitivity and effectiveness;
(3) The cyclic hopping deep learning network: the skip connections and the attention-guided convolution module are integrated into a BiO-Net segmentation network to construct the cyclic hopping deep learning network, which contains three kinds of skip connections (forward skip, reverse skip, and reverse short-skip connections) and three kinds of convolution modules (the encoding, decoding, and attention-guided convolution modules); the encoding and decoding convolution modules each consist of two channel-wise concatenation operations and two identical convolutional layers.
2. The cyclic hopping deep learning network of claim 1, wherein each of the two identical convolutional layers in the encoding and decoding convolution modules in step (3) comprises three basic operations, namely a 3 × 3 convolution, batch normalization, and a rectified linear unit activation; that is, the layer can be represented as Conv3×3 → BN → ReLU.
3. The cyclic hopping deep learning network of claim 1, wherein the attention-guided convolution module in step (3) consists of three convolutional layers with 1 × 1 windows, and processes the convolution features delivered by the skip connections to reduce the redundant information in the features.
4. The cyclic hopping deep learning network of claim 1, wherein in step (2) the attention-guided convolution module is designed as follows:
the feature information delivered by a skip connection is processed by three convolutional layers with 1 × 1 windows; the first and third layers share the same structure, Conv1×1 → BN → ReLU, whose main role is to obtain the required image dimensions so that different convolution features can undergo pixel-wise arithmetic operations; the second layer has the structure Conv1×1 → BN → Sigmoid, where Sigmoid is an activation function distinct from ReLU; then the processed convolution features and their original version are integrated using pixel-wise multiplication and addition, highlighting certain key regions of interest while essentially keeping the original features unchanged; the feature integration can be expressed as:
X1 = X0 · (1 + S(X0))
where X0 and X1 respectively denote the image features transferred by the skip link and their processed version output by the attention-guided convolution module, and S(·) denotes the sigmoid activation function.
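The feature-integration step X1 = X0 · (1 + S(X0)) can be sketched directly in NumPy. This sketch deliberately omits the three 1 × 1 convolutional layers of the full module and shows only the multiply-and-add integration; the function name is an assumption for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_integrate(x0):
    """Feature integration of the attention-guided module: X1 = X0 * (1 + S(X0)).

    The additive '1' keeps the original feature X0 intact, while the sigmoid
    term amplifies responses (up to 2x) in regions the attention highlights.
    """
    return x0 * (1.0 + sigmoid(x0))
```

Because S(X0) lies in (0, 1), the output stays between X0 and 2·X0, which is why the claim says the original features are "basically kept unchanged" while key regions are emphasized.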
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110255801.2A CN113160240A (en) | 2021-03-09 | 2021-03-09 | Cyclic hopping deep learning network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113160240A true CN113160240A (en) | 2021-07-23 |
Family
ID=76886684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110255801.2A Pending CN113160240A (en) | 2021-03-09 | 2021-03-09 | Cyclic hopping deep learning network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113160240A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170262995A1 (en) * | 2016-03-11 | 2017-09-14 | Qualcomm Incorporated | Video analysis with convolutional attention recurrent neural networks |
CN111127490A (en) * | 2019-12-31 | 2020-05-08 | 杭州电子科技大学 | Medical image segmentation method based on cyclic residual U-Net network |
CN111612790A (en) * | 2020-04-29 | 2020-09-01 | 杭州电子科技大学 | Medical image segmentation method based on T-shaped attention structure |
CN111627019A (en) * | 2020-06-03 | 2020-09-04 | 西安理工大学 | Liver tumor segmentation method and system based on convolutional neural network |
CN112116064A (en) * | 2020-08-11 | 2020-12-22 | 西安电子科技大学 | Deep network data processing method for spectrum super-resolution self-adaptive weighted attention machine |
CN112150476A (en) * | 2019-06-27 | 2020-12-29 | 上海交通大学 | Coronary artery sequence vessel segmentation method based on space-time discriminant feature learning |
CN112215844A (en) * | 2020-11-26 | 2021-01-12 | 南京信息工程大学 | MRI (magnetic resonance imaging) multi-mode image segmentation method and system based on ACU-Net |
Non-Patent Citations (1)
Title |
---|
TIANGE XIANG ET AL: "BiO-Net: Learning Recurrent Bi-directional Connections for Encoder-Decoder Architecture", 《ARXIV》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Alyoubi et al. | Diabetic retinopathy detection through deep learning techniques: A review | |
CN110060774B (en) | Thyroid nodule identification method based on generative confrontation network | |
Liu et al. | A framework of wound segmentation based on deep convolutional networks | |
CN110934606A (en) | Cerebral apoplexy early-stage flat-scan CT image evaluation system and method and readable storage medium | |
CN108629768B (en) | Method for segmenting epithelial tissue in esophageal pathology image | |
CN113344951B (en) | Boundary-aware dual-attention-guided liver segment segmentation method | |
CN112348785B (en) | Epileptic focus positioning method and system | |
CN112102259A (en) | Image segmentation algorithm based on boundary guide depth learning | |
CN111079901A (en) | Acute stroke lesion segmentation method based on small sample learning | |
Sun et al. | MSCA-Net: Multi-scale contextual attention network for skin lesion segmentation | |
KR20190087681A (en) | A method for determining whether a subject has an onset of cervical cancer | |
Lei et al. | Automated detection of retinopathy of prematurity by deep attention network | |
Elayaraja et al. | An efficient approach for detection and classification of cancer regions in cervical images using optimization based CNN classification approach | |
CN115035127A (en) | Retinal vessel segmentation method based on generative confrontation network | |
CN112634231A (en) | Image classification method and device, terminal equipment and storage medium | |
Abbasi et al. | Automatic brain ischemic stroke segmentation with deep learning: A review | |
Liu et al. | Weakly-supervised localization and classification of biomarkers in OCT images with integrated reconstruction and attention | |
CN114140437A (en) | Fundus hard exudate segmentation method based on deep learning | |
CN113538363A (en) | Lung medical image segmentation method and device based on improved U-Net | |
CN113160261B (en) | Boundary enhancement convolution neural network for OCT image corneal layer segmentation | |
Upadhyay et al. | Characteristic patch-based deep and handcrafted feature learning for red lesion segmentation in fundus images | |
CN112950555A (en) | Deep learning-based type 2 diabetes cardiovascular disease image classification method | |
CN113160240A (en) | Cyclic hopping deep learning network | |
CN116188479A (en) | Hip joint image segmentation method and system based on deep learning | |
CN113192089B (en) | Bidirectional cross-connection convolutional neural network for image segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210723 |