CN115205300A - Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion - Google Patents

Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion

Info

Publication number
CN115205300A
CN115205300A
Authority
CN
China
Prior art keywords
image
blood vessel
feature
fundus blood
vessel image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211134660.XA
Other languages
Chinese (zh)
Other versions
CN115205300B (en)
Inventor
张红斌
钟翔
李志杰
胡朗
袁梦
李广丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Jiaotong University
Original Assignee
East China Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Jiaotong University filed Critical East China Jiaotong University
Priority to CN202211134660.XA priority Critical patent/CN115205300B/en
Publication of CN115205300A publication Critical patent/CN115205300A/en
Application granted granted Critical
Publication of CN115205300B publication Critical patent/CN115205300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0012: Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06T 3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 3/4046: Scaling of whole images or parts thereof using neural networks
    • G06T 3/60: Rotation of whole images or parts thereof
    • G06T 7/11: Region-based segmentation (under G06T 7/10 Segmentation; Edge detection)
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30041: Eye; Retina; Ophthalmic
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular


Abstract

The invention provides a fundus blood vessel image segmentation method and system based on cavity convolution (dilated convolution) and semantic fusion, comprising the following steps: acquiring a fundus blood vessel image dataset, taking fundus blood vessel images from the dataset and preprocessing them; performing an improved design based on the U-Net model to obtain an improved neural network; designing a multilayer multi-scale cavity convolution structure at the skip connections of the improved neural network; designing a semantic fusion structure in the decoding part of the improved neural network; performing three consecutive convolution operations to obtain the image to be segmented, and applying binary discrimination to its pixels to segment the fundus blood vessel image and obtain the improved neural network model; and training and testing the improved neural network model with the aid of a first loss function. The invention can segment fundus blood vessel images accurately and effectively, assisting doctors' clinical diagnosis and thereby enabling high-quality medical service.

Description

Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion
Technical Field
The invention relates to the technical field of computer image processing, and in particular to a fundus blood vessel image segmentation method and system based on cavity convolution (also known as dilated or atrous convolution) and semantic fusion.
Background
In the human body, the eye is the only organ whose blood vessels and nerves can be observed directly, and the retinal circulation shares anatomical and physiological characteristics with the cerebral and coronary circulations. The fundus has therefore become a very important window for observing related diseases such as cardiovascular and cerebrovascular diseases and eyeball diseases. Because manual diagnosis is time-consuming, labor-intensive and inefficient, computer-aided diagnosis (CAD) has become an important means of improving doctors' working efficiency and diagnostic accuracy. Fine and accurate fundus blood vessel image segmentation can help a doctor observe lesions better and then make correct diagnostic decisions. The fundus blood vessel image segmentation technique therefore has high clinical application value: it can practically improve the level of medical service and promote the deep integration of medicine and information technology.
Most existing fundus blood vessel image segmentation methods are based on the U-Net network; they achieve fairly good segmentation performance and have effectively advanced CAD-based intelligent diagnosis, but existing work has the following shortcomings: (1) the extracted image features are limited by a restricted receptive field, so local features in the fundus blood vessel image are insufficiently exploited; (2) only convolution operations are used, so little contextual information in the fundus blood vessel image is captured and the target vessels cannot be segmented accurately and completely; (3) the successive upsampling in the decoder inevitably loses some vessel detail information. To address these issues, the vessel details in the image should be preserved as much as possible, thereby providing the doctor with intuitive clinical diagnostic information.
Therefore, an advanced and efficient fundus blood vessel image segmentation method needs to be designed that accounts for both global context information and local features from different receptive fields, reduces the loss of detail information as much as possible, and ultimately improves fundus blood vessel segmentation precision, providing doctors with more accurate and complete segmentation results.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion, so as to solve the above technical problems.
An embodiment of the invention provides a fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, the method comprising the following steps:
step one, acquiring a fundus blood vessel image dataset, taking a fundus blood vessel image from the dataset, and preprocessing the fundus blood vessel image;
step two, performing an improved design based on the U-Net model to obtain an improved neural network;
step three, during the improved design, designing a multilayer multi-scale cavity convolution structure at the skip connections of the improved neural network, the structure spanning the encoding and decoding parts of the improved neural network so as to preserve detail information of the fundus blood vessels;
step four, designing a semantic fusion structure in the decoding part of the improved neural network, the semantic fusion structure splicing the decoded multi-scale image features and constructing a pair of squeeze-and-excitation modules, which screen the spliced multi-scale image features for key information, finally obtaining a multi-scale image feature fusion result;
step five, performing three consecutive convolution operations on the multi-scale image feature fusion result to obtain the image to be segmented, and applying binary discrimination to the pixels of the image to be segmented so as to segment the fundus blood vessel image, obtaining the improved neural network model;
step six, training the improved neural network model with the aid of a first loss function and testing the trained neural network model, thereby finally completing the segmentation and verification of the fundus blood vessel image;
the expression of the first loss function is:

$$L = -\sum_{k=1}^{C}\,\sum_{i:\,y_i = k}\bigl(1 - R_k\bigr)\log p\bigl(y_i = k\bigr) = -\sum_{k=1}^{C}\bigl(1 - R_k\bigr)\,N_k \log G_k$$

where $L$ denotes the first loss function; $C$ denotes the total number of classes in the data; $TP_k$ denotes the true positives of class $k$; $FN_k$ denotes the false negatives of class $k$; $N_k$ denotes the number of pixels belonging to class $k$; $R_k = TP_k/(TP_k + FN_k)$ denotes the recall of class $k$; $G_k = \bigl(\prod_{i:\,y_i = k} p(y_i = k)\bigr)^{1/N_k}$ denotes the geometric mean confidence of class $k$; $p(\cdot)$ denotes the predicted maximum distribution of the input training data over all classes; $\{\,i : y_i = k\,\}$ denotes the set of samples whose label is class $k$; $y$ denotes the label data of the training data; and $i$ denotes the index of a training sample.
In the fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, in step one, the fundus blood vessel image dataset comprises the STARE dataset, the DRIVE dataset and the CHASEDB1 dataset;
the method of preprocessing the fundus blood vessel image comprises the following steps:
uniformly cropping the fundus blood vessel images to a size of 512 × 512, and then performing image flipping, image rotation and Gaussian blurring operations to complete the preprocessing.
In the fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, in step three, the multilayer multi-scale cavity convolution structure comprises upper-layer, middle-layer and lower-layer image features;
cascaded cavity convolutions with dilation rates of different scales are performed on the upper-layer, middle-layer and lower-layer image features.
The method of performing these cascaded cavity convolutions comprises the following steps:
performing cascaded cavity convolutions with progressively enlarged receptive fields on the upper-layer image features, obtaining the corresponding cavity convolution features by convolution, and applying a max-pooling operation to the cavity convolution features to obtain the local features of the large-size image;
dividing the fundus blood vessel image corresponding to the middle-layer image features into image blocks of fixed size, vectorizing the blocks by a flattening operation, and applying a linear mapping to convert the vectorized blocks into low-dimensional linear embedded features; feeding the embedded features into the 12 consecutive Transformer layers of a Transformer module, which repeatedly apply linear mappings and self-attention weighting; and adding position encodings to the embedded features so that long-range dependencies in the fundus blood vessel image are modeled and global image features are extracted;
performing cascaded cavity convolutions with progressively enlarged receptive fields on the lower-layer image features, followed by bilinear interpolation to bring them up to the same size as the middle-layer image features, thereby obtaining the local features of the small-size image;
adding the local features of the large-size image, the global image features and the local features of the small-size image to complete the feature fusion.
In the fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, the step of processing the middle-layer image features uses the following formula:

$$z_0 = \left[\,x_p^1 E;\; x_p^2 E;\; \dots;\; x_p^N E\,\right] + E_{pos}$$

where $z_0$ denotes the low-dimensional linear embedded features, $E$ denotes the matrix performing the linear mapping, $E_{pos}$ denotes the position encoding, $x_p^i$ denotes the $i$-th image block, and $N$ denotes the number of image blocks. Each Transformer layer comprises a normalization layer, multi-head self-attention and a multi-layer perceptron; the feature transformation of the $l$-th Transformer layer is expressed as:

$$z'_l = MSA\bigl(LN(z_{l-1})\bigr) + z_{l-1}$$

$$z_l = MLP\bigl(LN(z'_l)\bigr) + z'_l$$

where $z_l$ denotes the image features obtained after encoding by the $l$-th Transformer layer, $z'_l$ denotes the image features weighted by multi-head self-attention, $z_{l-1}$ denotes the linear embedded features of the input image sequence, $MLP(\cdot)$ denotes the multi-layer perceptron operation, $MSA(\cdot)$ denotes the multi-head self-attention operation, and $LN(\cdot)$ denotes the normalization layer operation.
In step four, the method by which the semantic fusion structure splices the decoded multi-scale image features comprises the following steps:
the semantic fusion structure upsamples the image features output by each layer of the decoding part of the improved neural network and restores them to the size of the original input image, obtaining a first upsampled feature map $d_1$, a second upsampled feature map $d_2$, a third upsampled feature map $d_3$ and a fourth upsampled feature map $d_4$;
the feature map $e$ of the encoding part of the improved neural network is concatenated with the upsampled feature maps $d_1$, $d_2$, $d_3$ and $d_4$ along the channel dimension, yielding a new feature map $X = [\,e;\, d_1;\, d_2;\, d_3;\, d_4\,]$.
In the fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, after the new feature map $X$ is obtained, the method further comprises:
feeding the new feature map $X$ into the squeeze-and-excitation module and performing two consecutive SE operations that hierarchically screen the key information in $X$;
wherein each SE operation comprises a global average pooling operation, a nonlinear activation operation and a feature-channel weighting operation;
the global average pooling operation comprises the following steps:
performing a global average pooling operation on each channel of the new feature map $X$, where $X \in \mathbb{R}^{H \times W \times C}$, to obtain the feature vector $m$, expressed as:

$$m_c = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} X_c(i, j), \qquad c = 1, 2, \dots, C$$

where $H$ denotes the height of the new feature map $X$, $W$ denotes its width, $C$ denotes its number of channels, $m_c$ denotes the compressed representation of the global information of the $c$-th channel of $X$, $i$ indexes the height of $X$, and $j$ indexes its width.
the fundus blood vessel image segmentation method based on the cavity convolution and the semantic fusion is characterized in that the nonlinear activation operation comprises the following steps:
modeling the correlation of new characteristic diagrams between different channels by utilizing two full connection layers; wherein the first fully-connected layer converts the feature vector after nonlinear activationmIs reduced to 1/r, the second fully connected layer reduces the feature vectormAdding dimensionality to original dimensionality, and using sigmoid function to make feature vectormThe feature weight of (A) is normalized to [0,1 ]];
The formula for the nonlinear activation operation is expressed as:
Figure 79481DEST_PATH_IMAGE046
wherein the content of the first and second substances,
Figure 963123DEST_PATH_IMAGE047
the weight vector of the feature is represented,
Figure 46617DEST_PATH_IMAGE048
a sigmoid function is represented as a function,
Figure 15710DEST_PATH_IMAGE049
a parameter representing the first fully connected layer,
Figure 357698DEST_PATH_IMAGE050
a parameter representing the second fully-connected layer,
Figure 299109DEST_PATH_IMAGE051
representing the feature vector after global pooling,
Figure 615821DEST_PATH_IMAGE052
representing the ReLU nonlinear activation function.
In the fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, the feature-channel weighting operation comprises the following steps:
using the feature weight vector $s$ to apply a multiplicative weight to each feature channel of the new feature map $X$, with the corresponding formula:

$$\tilde{X}_c = s_c \cdot X_c, \qquad c = 1, 2, \dots, C$$

where $\tilde{X}_c$ denotes the weighted feature channel and $s_c$ denotes the feature weight corresponding to the $c$-th channel of the new feature map $X$.
The invention also provides a fundus blood vessel image segmentation system based on cavity convolution and semantic fusion, the system comprising:
an image acquisition module configured to:
acquire a fundus blood vessel image dataset, take a fundus blood vessel image from the dataset, and preprocess the fundus blood vessel image;
a model improvement module configured to:
perform an improved design based on the U-Net model to obtain an improved neural network;
a first design module configured to:
design, during the improved design, a multilayer multi-scale cavity convolution structure at the skip connections of the improved neural network, the structure spanning the encoding and decoding parts of the network so as to preserve detail information of the fundus blood vessels;
a second design module configured to:
design a semantic fusion structure in the decoding part of the improved neural network, the structure splicing the decoded multi-scale image features and constructing a pair of squeeze-and-excitation modules, which screen the spliced multi-scale features for key information, finally obtaining a multi-scale image feature fusion result;
an image segmentation module configured to:
perform three consecutive convolution operations on the multi-scale image feature fusion result to obtain the image to be segmented, and apply binary discrimination to its pixels so as to segment the fundus blood vessel image and obtain the improved neural network model;
and an auxiliary training module configured to train the improved neural network model with the aid of a first loss function and to test the trained neural network model, finally completing the segmentation and verification of the fundus blood vessel image;
wherein the expression of the first loss function is:

$$L = -\sum_{k=1}^{C}\,\sum_{i:\,y_i = k}\bigl(1 - R_k\bigr)\log p\bigl(y_i = k\bigr) = -\sum_{k=1}^{C}\bigl(1 - R_k\bigr)\,N_k \log G_k$$

where $L$ denotes the first loss function; $C$ denotes the total number of classes in the data; $TP_k$ denotes the true positives of class $k$; $FN_k$ denotes the false negatives of class $k$; $N_k$ denotes the number of pixels belonging to class $k$; $R_k = TP_k/(TP_k + FN_k)$ denotes the recall of class $k$; $G_k = \bigl(\prod_{i:\,y_i = k} p(y_i = k)\bigr)^{1/N_k}$ denotes the geometric mean confidence of class $k$; $p(\cdot)$ denotes the predicted maximum distribution of the input training data over all classes; $\{\,i : y_i = k\,\}$ denotes the set of samples whose label is class $k$; $y$ denotes the label data of the training data; and $i$ denotes the index of a training sample.
The fundus blood vessel image segmentation method based on cavity convolution and semantic fusion provided by the invention has the following beneficial effects:
(1) the invention can segment fundus blood vessel images accurately and effectively, assisting doctors' clinical diagnosis and thereby enabling high-quality medical service;
(2) the multilayer multi-scale cavity convolution structure incorporating the Transformer module is efficient, lightweight and highly transferable, and can be migrated to other visual analysis tasks, such as object detection and region localization, that require progressively enlarged receptive fields or a combination of global and local features, where it can play an even greater role;
(3) the semantic fusion structure is likewise efficient, lightweight and highly transferable, and can be migrated to other visual analysis tasks requiring multi-scale feature fusion or feature selection, such as tumor image recognition and image sentiment analysis, where it can play an even greater role;
(4) from the patient's perspective, accurate medical diagnosis and treatment shorten the time spent seeking care, create favorable conditions for improving cure rates, help improve quality of life, and create good social benefits.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of a fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to the present invention;
FIG. 2 is a detailed structure diagram of each module in the fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to the present invention;
FIG. 3 is a model diagram of a fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to the present invention;
fig. 4 is a schematic structural diagram of a fundus blood vessel image segmentation system based on cavity convolution and semantic fusion, which is provided by the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the ways in which the principles of the embodiments of the invention may be practiced, but it is understood that the scope of the embodiments of the invention is not limited correspondingly. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Referring to FIG. 1 to FIG. 3, the present invention provides a fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, the method comprising the following steps:
s101, acquiring a fundus blood vessel image data set, acquiring a fundus blood vessel image from the fundus blood vessel image data set, and preprocessing the fundus blood vessel image.
In step S101, the fundus blood vessel image data set includes a STARE data set, a DRIVE data set, and a CHASEDB1 data set.
Specifically, the method for preprocessing the blood vessel image of the fundus comprises the following steps:
the size of the fundus blood vessel image is uniformly cut to 512 x 512, then image inversion, image rotation and image Gaussian blur operation are carried out to complete preprocessing, and finally the original fundus blood vessel image data set is expanded to 18 times of the original data set.
S102, performing improved design based on the U-Net model to obtain an improved neural network.
In a specific implementation of the U-Net model, the encoding part retains its prototype: it comprises four successive downsampling operations, each of which performs two successive 3 × 3 convolutions. The invention redesigns the other parts of the U-Net model, as described in steps S103, S104 and S105.
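For reference, a minimal sketch of the retained encoder prototype follows. The patent fixes only the four downsampling stages and the two successive 3 × 3 convolutions per stage; the channel widths (64 doubled up to 512, as in the original U-Net) and the ReLU activations are assumptions.

```python
import torch.nn as nn

def double_conv(cin, cout):
    """Two successive 3x3 convolutions, the building block of each encoder stage."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetEncoder(nn.Module):
    """U-Net encoder prototype: four stages, each followed by 2x2 max-pooling."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        widths = [base, base * 2, base * 4, base * 8]   # 64, 128, 256, 512 (assumed)
        self.stages = nn.ModuleList(
            [double_conv(in_ch, widths[0])] +
            [double_conv(widths[i], widths[i + 1]) for i in range(3)])
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skips = []
        for stage in self.stages:
            x = stage(x)
            skips.append(x)      # features handed to the skip connections
            x = self.pool(x)     # one of the four downsampling operations
        return x, skips
```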
S103, during the improved design, a multilayer multi-scale cavity convolution structure is designed at the skip connections of the improved neural network; the structure spans the encoding and decoding parts of the network so as to preserve detail information of the fundus blood vessels.
In step S103, the multilayer multi-scale cavity convolution structure operates on upper-layer, middle-layer and lower-layer image features, performing cascaded cavity convolutions with dilation rates of different scales on each.
Specifically, the method of performing these cascaded cavity convolutions on the upper-layer, middle-layer and lower-layer image features comprises the following steps:
and S1031, performing cascade type cavity convolution with gradually expanded receptive field on the upper-layer image characteristics, obtaining cavity convolution characteristics corresponding to the upper-layer image characteristics through convolution, and performing maximum pooling operation on the cavity convolution characteristics to obtain local characteristics of the large-size image.
As shown in FIG. 2 (a) Partially shown, the upper layer image features execute a cascading void convolution process, the upper layer image features are subjected to 3 × 3 void convolutions with 3 different scales, and the void convolution rates are 1, (1,3) and (1,3,5), respectively. And adopting the hole convolutions with the hole rates of 1,3 and 5 to cascade step by step so as to obtain the receptive fields with the hole rates of 3, 9 and 19. Therefore, the receptive field of the upper image is continuously enlarged, and an important foundation is laid for accurately and completely segmenting the target blood vessel. In addition, the cascading type cavity convolution is executed to obtain 3 image characteristics capable of displaying different receptive fields, each image characteristic is respectively convoluted with the image characteristic by 1 × 1, and the 1 × 1 convolution results are fused and added to obtain the cavity convolution characteristics corresponding to the upper-layer image characteristics. Finally, a maximum pooling operation is performed on the hollow convolution features, which are reduced to the same size as the middle layer image features, thereby obtaining local features from the large size image.
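The receptive-field figures can be verified with the standard recursion RF_n = RF_{n-1} + (k - 1) r_n for kernel size k = 3: starting from RF_0 = 1, the rates 1, 3 and 5 give 1 + 2 = 3, then 3 + 6 = 9, then 9 + 10 = 19, matching the values above. The sketch below implements one cascaded branch; the patent fixes the kernel sizes, dilation rates and the 1 × 1 fusion by addition, while the class name, ReLU activations and channel handling are assumptions.

```python
import torch
import torch.nn as nn

class CascadedDilatedBranch(nn.Module):
    """Cascaded 3x3 cavity (dilated) convolutions with rates 1, 3 and 5.

    The receptive field grows 3 -> 9 -> 19 along the cascade; each stage is
    projected by a 1x1 convolution and the three projections are added.
    """
    def __init__(self, channels):
        super().__init__()
        self.dilated = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
             for r in (1, 3, 5)])
        self.project = nn.ModuleList(
            [nn.Conv2d(channels, channels, 1) for _ in range(3)])

    def forward(self, x):
        fused, feat = 0, x
        for conv, proj in zip(self.dilated, self.project):
            feat = torch.relu(conv(feat))   # next cascade stage, wider receptive field
            fused = fused + proj(feat)      # fuse the 1x1-projected scales by addition
        return fused

# The upper branch follows this with nn.MaxPool2d(2) to reach the middle
# layer's size; the lower branch instead uses bilinear upsampling (see S1033 below).
```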
S1032, dividing the fundus blood vessel image corresponding to the middle-layer image features into image blocks of fixed size, vectorizing the blocks by a flattening operation, and applying a linear mapping to convert them into low-dimensional linear embedded features; feeding the embedded features into the 12 consecutive Transformer layers of a Transformer module, which repeatedly apply linear mappings and self-attention weighting; and adding position encodings to the embedded features so that long-range dependencies in the fundus blood vessel image are modeled and global image features are extracted.
In the step of processing the middle-layer image features, the following formula is used:

$$z_0 = \left[\,x_p^1 E;\; x_p^2 E;\; \dots;\; x_p^N E\,\right] + E_{pos}$$

where $z_0$ denotes the low-dimensional linear embedded features, $E$ denotes the matrix performing the linear mapping, $E_{pos}$ denotes the position encoding, $x_p^i$ denotes the $i$-th image block, and $N$ denotes the number of image blocks. As shown in part (b) of FIG. 2, each Transformer layer comprises a normalization layer, multi-head self-attention and a multi-layer perceptron; the feature transformation of the $l$-th Transformer layer is expressed as:

$$z'_l = MSA\bigl(LN(z_{l-1})\bigr) + z_{l-1}$$

$$z_l = MLP\bigl(LN(z'_l)\bigr) + z'_l$$

where $z_l$ denotes the image features obtained after encoding by the $l$-th Transformer layer, $z'_l$ denotes the image features weighted by multi-head self-attention, $z_{l-1}$ denotes the linear embedded features of the input image sequence, $MLP(\cdot)$ denotes the multi-layer perceptron operation, $MSA(\cdot)$ denotes the multi-head self-attention operation, and $LN(\cdot)$ denotes the normalization layer operation.
It can be understood that, by using MSA and MLP, the Transformer module can model long-range dependencies in the fundus blood vessel image, further extracting image details and capturing global image features that better serve fundus vessel segmentation. Finally, the image features are reshaped to their original size by a 3 × 3 convolution operation in the image recovery layer.
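The two residual equations above are those of a standard Vision Transformer layer, so a compact PyTorch sketch can mirror them directly; the embedding dimension, the 8 attention heads and the 4x MLP width are assumptions the patent does not fix.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Flatten fixed-size image blocks, map them linearly (the matrix E), add E_pos."""
    def __init__(self, in_ch, patch, dim, n_patches):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(in_ch * patch * patch, dim)        # E
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # E_pos

    def forward(self, x):                      # x: (B, C, H, W), H and W multiples of patch
        b, c, _, _ = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)  # cut into p x p blocks
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        return self.proj(x) + self.pos         # z_0 = [x_p^i E] + E_pos

class TransformerLayer(nn.Module):
    """One of the 12 layers: z' = MSA(LN(z)) + z, then z = MLP(LN(z')) + z'."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, z):
        h = self.ln1(z)
        z = z + self.msa(h, h, h, need_weights=False)[0]  # z' = MSA(LN(z)) + z
        return z + self.mlp(self.ln2(z))                  # z  = MLP(LN(z')) + z'
```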
S1033, performing cascaded cavity convolutions with progressively enlarged receptive fields on the lower-layer image features, followed by bilinear interpolation to enlarge them to the same size as the middle-layer image features, thereby obtaining the local features of the small-size image.
As shown in part (a) of FIG. 2, the lower-layer image features pass through the same cascaded cavity convolution process: 3 × 3 cavity convolutions at 3 different scales, with cumulative dilation rates 1, (1,3) and (1,3,5). Cascading the convolutions with dilation rates 1, 3 and 5 stage by stage yields receptive fields of 3, 9 and 19, so the receptive field over the lower-layer image is continuously enlarged, laying an important foundation for segmenting the target vessels accurately and completely.
In addition, the cascaded cavity convolution yields 3 image features exhibiting different receptive fields; each is passed through a 1 × 1 convolution, and the 1 × 1 convolution results are fused by addition to obtain the cavity convolution features corresponding to the lower-layer image features. Finally, a bilinear interpolation operation enlarges the cavity convolution features to the same size as the middle-layer image features, thereby obtaining the local features of the small-size image.
S1034, adding the local features of the large-size image, the global image features and the local features of the small-size image to complete the feature fusion.
In this step, the physical meaning of fusing the upper-layer, middle-layer and lower-layer image features is as follows: both the global image features output by the Transformer module and the local image features output by the cascaded cavity convolution modules are taken into account; the two are complementary and together describe the details of the fundus vessels more comprehensively. The fused image features have the same size as the original image, i.e., the input and output at the two ends of the skip connection have the same size.
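A sketch of the three-branch fusion at one skip connection, assuming the branches have already been reduced to matching channel counts; the function name is illustrative, and the interpolation call simply enforces the common spatial size (a no-op if S1033 already resized the lower branch).

```python
import torch.nn.functional as F

def multiscale_skip(upper_local, middle_global, lower_local):
    """Element-wise addition of the large-size local, global and small-size
    local features, all at the middle layer's spatial size."""
    lower_local = F.interpolate(lower_local, size=middle_global.shape[2:],
                                mode="bilinear", align_corners=False)
    return upper_local + middle_global + lower_local
```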
S104, designing a semantic fusion structure in the decoding part of the improved neural network; the semantic fusion structure splices the decoded multi-scale image features and constructs a pair of squeeze-and-excitation modules, which screen the spliced multi-scale image features for key information, finally obtaining a multi-scale image feature fusion result.
In step S104, the method by which the semantic fusion structure splices the decoded multi-scale image features comprises the following steps:
the semantic fusion structure upsamples the image features output by each layer of the decoding part of the improved neural network and restores them to the size of the original input image, obtaining a first upsampled feature map $d_1$, a second upsampled feature map $d_2$, a third upsampled feature map $d_3$ and a fourth upsampled feature map $d_4$;
the feature map $e$ of the encoding part of the improved neural network is concatenated with the upsampled feature maps $d_1$, $d_2$, $d_3$ and $d_4$ along the channel dimension, yielding a new feature map $X = [\,e;\, d_1;\, d_2;\, d_3;\, d_4\,]$.
After the new feature map $X$ is obtained, $X$ takes into account the complementary information between image features from different layers, helping to characterize the vessel details in the image more comprehensively.
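A minimal sketch of the splicing step follows. The 320-channel total mentioned below is consistent with one encoder map plus four decoder maps of 64 channels each; that per-map width is an assumption.

```python
import torch
import torch.nn.functional as F

def semantic_fusion_concat(enc_map, dec_maps, out_size):
    """Upsample each decoder output d1..d4 to the input resolution and
    concatenate with the encoder map e: X = [e; d1; d2; d3; d4]."""
    ups = [F.interpolate(d, size=out_size, mode="bilinear", align_corners=False)
           for d in dec_maps]
    return torch.cat([enc_map] + ups, dim=1)   # e.g. 5 maps x 64 ch = 320 ch
```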
Further, the new feature map $X$ is fed into the squeeze-and-excitation module, and two consecutive SE operations are performed that hierarchically screen the key information in $X$. Each SE operation comprises a global average pooling operation, a nonlinear activation operation and a feature-channel weighting operation.
The new feature map $X$ contains 320 feature channels. After the first SE operation, the key channel information of $X$ is screened, the noise in the image features is preliminarily suppressed, and important detail information is recovered; after the second SE operation, the feature channels undergo further key-information screening, accurately characterizing the detail information in the fundus blood vessel image for the subsequent segmentation task.
S1041, the global average pooling operation comprises the following steps:
performing a global average pooling operation on each channel of the new feature map $X$, where $X \in \mathbb{R}^{H \times W \times C}$, to obtain the feature vector $m$, expressed as:

$$m_c = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} X_c(i, j), \qquad c = 1, 2, \dots, C$$

where $H$ denotes the height of the new feature map $X$, $W$ denotes its width, $C$ denotes its number of channels, $m_c$ denotes the compressed representation of the global information of the $c$-th channel of $X$, $i$ indexes the height of $X$, and $j$ indexes its width.
S1042, the nonlinear activation operation comprises the following steps:
modeling the correlations of the new feature map between different channels with two fully connected layers, where the first fully connected layer reduces the dimensionality of the feature vector $m$ to $1/r$ of the original after nonlinear activation, the second fully connected layer restores the dimensionality of the feature vector $m$ to the original dimensionality, and a sigmoid function normalizes the feature weights of $m$ into $[0, 1]$;
the formula of the nonlinear activation operation is expressed as:

$$s = \sigma\bigl(W_2\,\delta(W_1 m)\bigr)$$

where $s$ denotes the feature weight vector, $\sigma$ denotes the sigmoid function, $W_1$ denotes the parameters of the first fully connected layer, $W_2$ denotes the parameters of the second fully connected layer, $m$ denotes the feature vector after global pooling, and $\delta$ denotes the ReLU nonlinear activation function.
S1043, the feature-channel weighting operation comprises the following steps:
using the feature weight vector $s$ to apply a multiplicative weight to each feature channel of the new feature map $X$, with the corresponding formula:

$$\tilde{X}_c = s_c \cdot X_c, \qquad c = 1, 2, \dots, C$$

where $\tilde{X}_c$ denotes the weighted feature channel and $s_c$ denotes the feature weight corresponding to the $c$-th channel of the new feature map $X$.
Human perception of the outside world is hierarchical, retaining the most critical information through continuous filtering and screening; the new feature map $X$, by contrast, has a large number of channels with non-zero weights and contains considerable noise. A hierarchical structure is therefore designed in the DSE module (double squeeze-and-excitation module) to screen the new feature map $X$, as shown in the DSE module of part (c) of FIG. 2:
after the new feature map $X$, comprising 320 channels, undergoes the first SE operation, each feature channel is assigned a weight according to the importance of its information; a weight of 0 means the corresponding channel contributes nothing to segmentation, and the noise information in $X$ is suppressed to a certain extent. When the second SE operation is performed on the screened features, channels whose weight was 0 keep that weight, while channels with non-zero weights are assigned new ones: the weights of channels carrying important information grow, highlighting their importance; the weights of channels carrying secondary information shrink; and the number of channels with non-zero weights decreases. The noise in $X$ is thus further suppressed, facilitating the subsequent convolution operations and binary pixel classification, and laying an important foundation for high-quality fundus blood vessel image segmentation.
In conclusion, the multi-scale image features spliced by the decoding part of the improved neural network pass in turn through the DSE module, which comprises the three operations of global average pooling, nonlinear activation and feature-channel weighting; the key information in the features is screened hierarchically, completing the multi-scale image feature fusion in preparation for fundus image segmentation.
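A sketch of one SE operation and the double-SE (DSE) wrapper follows. The reduction ratio r = 16 is a conventional default and an assumption here; the patent states only that the first fully connected layer reduces the dimensionality to 1/r of the original.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """One SE operation: global average pooling, two FC layers, channel weighting."""
    def __init__(self, channels, r=16):                 # reduction ratio r is assumed
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # reduce to C/r dimensions
        self.fc2 = nn.Linear(channels // r, channels)   # restore to C dimensions

    def forward(self, x):                   # x: (B, C, H, W)
        m = x.mean(dim=(2, 3))              # m_c: global average over H x W
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(m))))  # s = sigma(W2 delta(W1 m))
        return x * s[:, :, None, None]      # weighted channels: s_c * X_c

class DSEModule(nn.Module):
    """Hierarchical screening: two consecutive SE operations on the 320-channel map."""
    def __init__(self, channels=320, r=16):
        super().__init__()
        self.se1, self.se2 = SEBlock(channels, r), SEBlock(channels, r)

    def forward(self, x):
        return self.se2(self.se1(x))
```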
S105, performing three consecutive convolution operations on the multi-scale image feature fusion result to obtain the image to be segmented, and applying binary discrimination to the pixels of the image to be segmented so as to segment the fundus blood vessel image and obtain the improved neural network model.
In a specific implementation, the new feature map containing 320 feature channels output by the semantic fusion structure passes through three consecutive convolution layers with 64 filters, which extract the key features of vessels and background and generate a segmentation image in which every pixel carries a probability value. A binary discriminator then distinguishes vessels from background in the segmentation image. The three convolution layers are 1 × 1, 3 × 3 and 1 × 1 convolutions, respectively.
The binary discriminator treats pixels whose probability value is greater than 0.5 as fundus vessel pixels and marks their value as 1, and treats pixels whose probability value is less than 0.5 as background and marks their value as 0; all pixels with value 1 are set to white and all pixels with value 0 to black. This completes the segmentation of the fundus blood vessel image, clearly separating vessels from background, presenting the segmentation result to the doctor intuitively, and accurately and efficiently assisting clinical diagnosis.
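A sketch of this segmentation head: three consecutive convolutions (1 × 1, 3 × 3, 1 × 1) over 64 filters, a per-pixel probability, and the 0.5 threshold. The single-channel sigmoid output and the ReLU activations between the layers are assumptions.

```python
import torch
import torch.nn as nn

head = nn.Sequential(
    nn.Conv2d(320, 64, kernel_size=1), nn.ReLU(inplace=True),            # 1x1
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),  # 3x3
    nn.Conv2d(64, 1, kernel_size=1),                                     # 1x1
)

def segment(fused):
    """Binary discrimination: probability > 0.5 is vessel (1), else background (0)."""
    prob = torch.sigmoid(head(fused))
    return (prob > 0.5).float()   # 1 rendered white, 0 rendered black
```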
And S106, performing auxiliary training on the improved neural network model according to the first loss function so as to test the trained neural network model, thereby finally completing segmentation and verification of the fundus blood vessel image.
Wherein the expression of the first loss function is:

$$L = -\sum_{k=1}^{C}\,\sum_{i:\,y_i = k}\bigl(1 - R_k\bigr)\log p\bigl(y_i = k\bigr) = -\sum_{k=1}^{C}\bigl(1 - R_k\bigr)\,N_k \log G_k$$

where $L$ denotes the first loss function; $C$ denotes the total number of classes in the data; $TP_k$ denotes the true positives of class $k$; $FN_k$ denotes the false negatives of class $k$; $N_k$ denotes the number of pixels belonging to class $k$; $R_k = TP_k/(TP_k + FN_k)$ denotes the recall of class $k$; $G_k = \bigl(\prod_{i:\,y_i = k} p(y_i = k)\bigr)^{1/N_k}$ denotes the geometric mean confidence of class $k$; $p(\cdot)$ denotes the predicted maximum distribution of the input training data over all classes; $\{\,i : y_i = k\,\}$ denotes the set of samples whose label is class $k$; $y$ denotes the label data of the training data; and $i$ denotes the index of a training sample.
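Under the reconstruction above, the first loss function is a recall-weighted cross-entropy: each pixel's term -log p(y_i = k) is scaled by 1 - R_k, so classes with poor recall (typically the thin vessels) are emphasized during the auxiliary training. A sketch follows, assuming the recall statistics are recomputed from the current batch predictions:

```python
import torch
import torch.nn.functional as F

def recall_weighted_ce(logits, target, num_classes=2, eps=1e-6):
    """L = -sum_k (1 - R_k) * sum_{i: y_i = k} log p(y_i = k)."""
    pred = logits.argmax(dim=1)                             # current predictions
    ce = F.cross_entropy(logits, target, reduction="none")  # -log p(y_i) per pixel
    weights = torch.ones(num_classes, device=logits.device)
    for k in range(num_classes):
        labeled_k = target == k
        tp = (labeled_k & (pred == k)).sum().float()        # true positives TP_k
        fn = (labeled_k & (pred != k)).sum().float()        # false negatives FN_k
        weights[k] = 1.0 - tp / (tp + fn + eps)             # 1 - recall R_k
    return (weights[target] * ce).mean()
```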
The fundus blood vessel image segmentation method based on cavity convolution and semantic fusion provided by the invention has the following beneficial effects:
(1) the invention can segment fundus blood vessel images accurately and effectively, assisting doctors' clinical diagnosis and realizing high-quality medical service;
(2) the multilayer multi-scale cavity convolution structure incorporating the Transformer module is efficient, lightweight and highly transferable, and can be migrated to other visual analysis tasks, such as object detection and region localization, that require progressively enlarged receptive fields or a combination of global and local features, where it can play an even greater role;
(3) the semantic fusion structure proposed by the invention is likewise efficient, lightweight and highly transferable, and can be migrated to other visual analysis tasks requiring multi-scale feature fusion or feature selection, such as tumor image recognition and image sentiment analysis, where it can play an even greater role;
(4) from the patient's perspective, accurate medical diagnosis and treatment shorten the time spent seeking care, create favorable conditions for improving cure rates, help improve quality of life, and create good social benefits.
Referring to FIG. 4, the present invention further provides a fundus blood vessel image segmentation system based on cavity convolution and semantic fusion, the system comprising:
an image acquisition module configured to:
acquire a fundus blood vessel image dataset, take a fundus blood vessel image from the dataset, and preprocess the fundus blood vessel image;
a model improvement module configured to:
perform an improved design based on the U-Net model to obtain an improved neural network;
a first design module configured to:
design, during the improved design, a multilayer multi-scale cavity convolution structure at the skip connections of the improved neural network, the structure spanning the encoding and decoding parts of the network so as to preserve detail information of the fundus blood vessels;
a second design module configured to:
design a semantic fusion structure in the decoding part of the improved neural network, the structure splicing the decoded multi-scale image features and constructing a pair of squeeze-and-excitation modules, which screen the spliced multi-scale features for key information, finally obtaining a multi-scale image feature fusion result;
an image segmentation module configured to:
perform three consecutive convolution operations on the multi-scale image feature fusion result to obtain the image to be segmented, and apply binary discrimination to its pixels so as to segment the fundus blood vessel image and obtain the improved neural network model;
and an auxiliary training module configured to train the improved neural network model with the aid of a first loss function and to test the trained neural network model, finally completing the segmentation and verification of the fundus blood vessel image;
wherein the expression of the first loss function is:

$$L = -\sum_{k=1}^{C}\,\sum_{i:\,y_i = k}\bigl(1 - R_k\bigr)\log p\bigl(y_i = k\bigr) = -\sum_{k=1}^{C}\bigl(1 - R_k\bigr)\,N_k \log G_k$$

where $L$ denotes the first loss function; $C$ denotes the total number of classes in the data; $TP_k$ denotes the true positives of class $k$; $FN_k$ denotes the false negatives of class $k$; $N_k$ denotes the number of pixels belonging to class $k$; $R_k = TP_k/(TP_k + FN_k)$ denotes the recall of class $k$; $G_k = \bigl(\prod_{i:\,y_i = k} p(y_i = k)\bigr)^{1/N_k}$ denotes the geometric mean confidence of class $k$; $p(\cdot)$ denotes the predicted maximum distribution of the input training data over all classes; $\{\,i : y_i = k\,\}$ denotes the set of samples whose label is class $k$; $y$ denotes the label data of the training data; and $i$ denotes the index of a training sample.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and these changes and modifications are all within the scope of the invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion, characterized by comprising the following steps:
step one, acquiring a fundus blood vessel image data set, acquiring a fundus blood vessel image from the fundus blood vessel image data set, and preprocessing the fundus blood vessel image;
step two, performing an improved design based on the U-Net model to obtain an improved neural network;
step three, during the improved design, designing a multilayer multi-scale cavity convolution structure at the skip connections of the improved neural network, wherein the multilayer multi-scale cavity convolution structure spans the encoding part and the decoding part of the improved neural network so as to preserve the detail information of the fundus blood vessels;
step four, designing a semantic fusion structure at the decoding part of the improved neural network, wherein the semantic fusion structure is used for splicing the decoded multi-scale image features and constructing a pair of squeeze-and-excitation modules, and screening the key information of the spliced multi-scale image features according to the squeeze-and-excitation modules to finally obtain a multi-scale image feature fusion result;
step five, performing three consecutive convolution operations on the multi-scale image feature fusion result to obtain an image to be segmented, and performing binary classification on the pixels of the image to be segmented to segment the fundus blood vessel image and obtain an improved neural network model;
step six, performing auxiliary training on the improved neural network model according to a first loss function, testing the trained neural network model, and finally completing the segmentation and verification of the fundus blood vessel image;
the expression of the first loss function is:

$$L_{1} = -\sum_{k=1}^{K} \left(1 - R_{k}\right) n_{k} \log G_{k}$$

wherein $L_{1}$ represents the first loss function, $K$ represents the total number of categories of the data, $TP_{k}$ represents the true-positive count of category $k$, $FN_{k}$ represents the false-negative count of category $k$, $n_{k}$ represents the number of pixels belonging to category $k$, $G_{k} = \big(\prod_{i \in S_{k}} p_{i}\big)^{1/n_{k}}$ represents the geometric mean confidence of category $k$, $R_{k} = TP_{k}/(TP_{k} + FN_{k})$ represents the recall of category $k$, $p_{i}$ represents the predicted maximum distribution of the input training data over all classes, $S_{k}$ represents the set of samples whose label is category $k$, $y$ represents the label data of the training data, and $i$ represents the sequence number of the training data.
2. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 1, characterized in that in the first step, the fundus blood vessel image data set comprises the STARE data set, the DRIVE data set and the CHASEDB1 data set;
the method for preprocessing the fundus blood vessel image comprises the following steps:
uniformly cropping the fundus blood vessel image to a size of 512 × 512, and then performing image flipping, image rotation and Gaussian blurring operations to complete the preprocessing.
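A minimal torchvision sketch of this preprocessing is given below; the flip probabilities, rotation range and blur kernel are illustrative assumptions, since the claim names the operations but not their parameters, and for segmentation the same geometric transforms would also have to be applied to the label mask:

```python
from torchvision import transforms

# Sketch of the claimed preprocessing: unify the image size to 512 x 512,
# then apply flipping, rotation and Gaussian blur. All numeric parameters
# below are illustrative assumptions.
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),                  # unify size to 512 x 512
    transforms.RandomHorizontalFlip(p=0.5),         # image flipping
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=30),          # image rotation
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # Gaussian blur
    transforms.ToTensor(),
])
```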
3. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 2, characterized in that in the third step, the multilayer multi-scale cavity convolution structure comprises upper-layer image features, middle-layer image features and lower-layer image features;
and cascade cavity convolutions with dilation rates of different scales are performed on the upper-layer image features, the middle-layer image features and the lower-layer image features.
4. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 3, characterized in that the method of performing cascade cavity convolution with dilation rates of different scales on the upper-layer image features, the middle-layer image features and the lower-layer image features comprises the following steps:
performing cascade cavity convolution with a progressively enlarged receptive field on the upper-layer image features, obtaining the cavity convolution features corresponding to the upper-layer image features, and performing a max-pooling operation on the cavity convolution features to obtain the local features of the large-size image;
dividing the fundus blood vessel image corresponding to the middle-layer image features into image blocks of fixed size, vectorizing the image blocks through a flattening operation, and applying a linear mapping to convert the vectorized image blocks into low-dimensional linear embedded features; inputting the low-dimensional linear embedded features into 12 consecutive Transformer layers of a Transformer module for successive linear mapping and self-attention weighting; adding position codes to the low-dimensional linear embedded features so as to model the long-distance dependencies in the fundus blood vessel image and extract the global image features;
performing cascade cavity convolution with a progressively enlarged receptive field on the lower-layer image features, and then applying bilinear interpolation to enlarge the lower-layer image features to the same size as the middle-layer image features, so as to obtain the local features of the small-size image;
and adding the local features of the large-size image, the global image features and the local features of the small-size image to complete the feature fusion.
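The following PyTorch sketch illustrates this three-branch fusion, assuming the upper-layer features have twice, and the lower-layer features half, the spatial resolution of the middle-layer features, and that all branches share one channel width; the dilation rates (1, 2, 4), class names and layer widths are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeDilatedConv(nn.Module):
    """Cascaded 3x3 convolutions whose dilation rate grows layer by layer,
    so the receptive field is progressively enlarged."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates)

    def forward(self, x):
        for conv in self.convs:
            x = F.relu(conv(x))
        return x

class MultiScaleFusion(nn.Module):
    """Upper branch: cascade dilated conv + max pooling (large-size local
    features); lower branch: cascade dilated conv + bilinear upsampling
    (small-size local features); the middle branch supplies the global
    features; the three same-sized maps are summed element-wise."""
    def __init__(self, channels):
        super().__init__()
        self.upper = CascadeDilatedConv(channels)
        self.lower = CascadeDilatedConv(channels)

    def forward(self, upper_feat, global_feat, lower_feat):
        up = F.max_pool2d(self.upper(upper_feat), kernel_size=2)
        low = F.interpolate(self.lower(lower_feat),
                            size=global_feat.shape[2:],
                            mode="bilinear", align_corners=False)
        return up + global_feat + low
```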
5. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 4, characterized in that the step of processing the middle-layer image features has the following formula:

$$z_{0} = [\,x_{p}^{1}E;\; x_{p}^{2}E;\; \cdots;\; x_{p}^{N}E\,] + E_{pos}$$

wherein $z_{0}$ represents the low-dimensional linear embedded features, $E$ represents the matrix performing the linear mapping, $E_{pos}$ represents the position code, $x_{p}^{i}$ represents the $i$-th image block, and $N$ represents the number of image blocks;
the Transformer layer comprises a normalization layer, multi-head self-attention and a multi-layer perceptron, and the formula corresponding to the feature transformation of the $l$-th Transformer layer is expressed as follows:

$$z_{l}' = MSA(LN(z_{l-1})) + z_{l-1}$$

$$z_{l} = MLP(LN(z_{l}')) + z_{l}'$$

wherein $z_{l}$ represents the image features obtained after encoding by $l$ Transformer layers, $z_{l}'$ represents the image features weighted by multi-head self-attention, $z_{l-1}$ represents the linear embedded features of the input image sequence, $MLP(\cdot)$ represents the multi-layer perceptron operation, $MSA(\cdot)$ represents the multi-head self-attention operation, and $LN(\cdot)$ represents the normalization layer operation.
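These two formulas are the standard pre-norm Transformer update, so the layer can be sketched directly in PyTorch; the embedding width, head count and MLP ratio below are illustrative assumptions:

```python
import torch.nn as nn

class TransformerLayer(nn.Module):
    """One Transformer layer implementing z'_l = MSA(LN(z_{l-1})) + z_{l-1}
    and z_l = MLP(LN(z'_l)) + z'_l."""
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))

    def forward(self, z):
        h = self.norm1(z)
        z = self.attn(h, h, h, need_weights=False)[0] + z   # z'_l
        return self.mlp(self.norm2(z)) + z                  # z_l

# Claim 4 recites 12 consecutive Transformer layers for the middle branch.
encoder = nn.Sequential(*(TransformerLayer() for _ in range(12)))
```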
6. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 5, characterized in that in the fourth step, the method for splicing the decoded multi-scale image features by the semantic fusion structure comprises the following steps:
the semantic fusion structure upsamples the image features output by each layer of the decoding part of the improved neural network and restores them to the same size as the original input image, obtaining a first upsampled feature map $F_{1}$, a second upsampled feature map $F_{2}$, a third upsampled feature map $F_{3}$ and a fourth upsampled feature map $F_{4}$;
splicing the feature map $F_{e}$ of the encoding part of the improved neural network with the first upsampled feature map $F_{1}$, the second upsampled feature map $F_{2}$, the third upsampled feature map $F_{3}$ and the fourth upsampled feature map $F_{4}$ to obtain a new feature map $F$, wherein the new feature map $F = [\,F_{e};\, F_{1};\, F_{2};\, F_{3};\, F_{4}\,]$.
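A compact sketch of this splicing step, assuming bilinear upsampling back to the input size and concatenation along the channel axis (the function and argument names are hypothetical):

```python
import torch
import torch.nn.functional as F

def semantic_fusion_concat(encoder_map, decoder_maps, out_size):
    """Upsample each decoder output F_1..F_4 to the original input size and
    splice them with the encoder feature map F_e along the channel axis,
    giving the new feature map F = [F_e; F_1; F_2; F_3; F_4]."""
    ups = [F.interpolate(d, size=out_size, mode="bilinear",
                         align_corners=False) for d in decoder_maps]
    return torch.cat([encoder_map] + ups, dim=1)
```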
7. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 6, characterized in that after the new feature map $F$ is obtained, the method further comprises:
inputting the new feature map $F$ into a squeeze-and-excitation module and successively performing two SE operations, so as to hierarchically screen the key information in the new feature map $F$;
wherein the SE operation comprises a global average pooling operation, a non-linear activation operation and a feature channel weighting operation;
the global average pooling operation comprises the following steps:
performing a global average pooling operation on each channel of the new feature map $F$ to obtain a feature vector $m$, wherein the new feature map $F$ has the attribute $F \in \mathbb{R}^{H \times W \times C}$, and the feature vector $m$ obtained by the global average pooling operation is expressed as:

$$m_{c} = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} F_{c}(i, j)$$

wherein $H$ represents the height of the new feature map $F$, $W$ represents the width of the new feature map $F$, $C$ represents the number of channels of the new feature map $F$, $m_{c}$ represents the compressed representation of the global information of the $c$-th channel of the new feature map $F$, $i$ represents the height index into the new feature map $F$, and $j$ represents the width index into the new feature map $F$, with $1 \le i \le H$ and $1 \le j \le W$.
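In the usual (N, C, H, W) tensor layout this squeeze step is a single reduction; a minimal sketch:

```python
def squeeze(feature_map):
    """Global average pooling of claim 7: m_c is the mean of channel c of
    the new feature map F over all H x W spatial positions."""
    return feature_map.mean(dim=(2, 3))   # (N, C, H, W) -> (N, C)
```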
8. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 7, characterized in that the non-linear activation operation comprises the following steps:
modeling the correlation of the new feature map between different channels by means of two fully-connected layers, wherein the first fully-connected layer reduces the dimensionality of the feature vector $m$ to $1/r$ of the original after non-linear activation, the second fully-connected layer restores the feature vector $m$ to its original dimensionality, and a sigmoid function is used to normalize the feature weights of the feature vector $m$ to $[0, 1]$;
the formula of the non-linear activation operation is expressed as:

$$s = \sigma\left( W_{2}\, \delta(W_{1} m) \right)$$

wherein $s$ represents the feature weight vector, $\sigma$ represents the sigmoid function, $W_{1}$ represents the parameters of the first fully-connected layer, $W_{2}$ represents the parameters of the second fully-connected layer, $m$ represents the feature vector after global pooling, and $\delta$ represents the ReLU non-linear activation function.
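A minimal sketch of this excitation step; the reduction ratio r = 16 is an illustrative assumption:

```python
import torch
import torch.nn as nn

class Excitation(nn.Module):
    """s = sigmoid(W2 * relu(W1 * m)): the first fully-connected layer W1
    reduces the C-dimensional vector m to C/r, the second layer W2 restores
    it, and the sigmoid normalizes the weights into [0, 1]."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # W1
        self.fc2 = nn.Linear(channels // r, channels)   # W2

    def forward(self, m):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(m))))
```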
9. A fundus blood vessel image segmentation method based on cavity convolution and semantic fusion according to claim 8, characterized in that the feature channel weighting operation comprises the following steps:
using the feature weight vector $s$ to perform multiplicative weighting on each feature channel of the new feature map $F$, with the corresponding formula:

$$\tilde{F}_{c} = s_{c} \cdot F_{c}$$

wherein $\tilde{F}_{c}$ represents the weighted feature channel, and $s_{c}$ represents the feature weight corresponding to the $c$-th channel of the new feature map $F$.
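Putting claims 7 to 9 together yields one squeeze-and-excitation block, sketched below; applying two such blocks in sequence corresponds to the two consecutive SE operations of claim 7 (the channel width 256 is an illustrative assumption):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """One SE operation: squeeze (global average pooling), excitation (two
    fully-connected layers) and channel weighting F~_c = s_c * F_c."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)
        self.fc2 = nn.Linear(channels // r, channels)

    def forward(self, x):                                     # x: (N, C, H, W)
        m = x.mean(dim=(2, 3))                                # squeeze
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(m))))  # excitation
        return x * s[:, :, None, None]                        # channel weighting

# Hierarchical key-information screening: two SE operations in sequence.
screen = nn.Sequential(SEBlock(256), SEBlock(256))
```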
10. A fundus blood vessel image segmentation system based on cavity convolution and semantic fusion, characterized in that the system comprises:
an image acquisition module, configured to:
acquire a fundus blood vessel image data set, acquire a fundus blood vessel image from the fundus blood vessel image data set, and preprocess the fundus blood vessel image;
a model improvement module, configured to:
perform an improved design based on the U-Net model to obtain an improved neural network;
a first design module, configured to:
during the improved design, design a multilayer multi-scale cavity convolution structure at the skip connections of the improved neural network, the multilayer multi-scale cavity convolution structure spanning the encoding part and the decoding part of the improved neural network so as to preserve the detail information of the fundus blood vessels;
a second design module, configured to:
design a semantic fusion structure at the decoding part of the improved neural network, the semantic fusion structure being used for splicing the decoded multi-scale image features and constructing a pair of squeeze-and-excitation modules, and screen the key information of the spliced multi-scale image features according to the squeeze-and-excitation modules to finally obtain a multi-scale image feature fusion result;
an image segmentation module, configured to:
perform three consecutive convolution operations on the multi-scale image feature fusion result to obtain an image to be segmented, and perform binary classification on the pixels of the image to be segmented to segment the fundus blood vessel image and obtain an improved neural network model;
an auxiliary training module, configured to perform auxiliary training on the improved neural network model according to a first loss function, test the trained neural network model, and finally complete the segmentation and verification of the fundus blood vessel image;
wherein the expression of the first loss function is:

$$L_{1} = -\sum_{k=1}^{K} \left(1 - R_{k}\right) n_{k} \log G_{k}$$

wherein $L_{1}$ represents the first loss function, $K$ represents the total number of categories of the data, $TP_{k}$ represents the true-positive count of category $k$, $FN_{k}$ represents the false-negative count of category $k$, $n_{k}$ represents the number of pixels belonging to category $k$, $G_{k} = \big(\prod_{i \in S_{k}} p_{i}\big)^{1/n_{k}}$ represents the geometric mean confidence of category $k$, $R_{k} = TP_{k}/(TP_{k} + FN_{k})$ represents the recall of category $k$, $p_{i}$ represents the predicted maximum distribution of the input training data over all classes, $S_{k}$ represents the set of samples whose label is category $k$, $y$ represents the label data of the training data, and $i$ represents the sequence number of the training data.
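As a non-limiting illustration of the image segmentation module, the three consecutive convolutions and the per-pixel binary decision can be sketched as follows; the intermediate channel widths are assumptions of this sketch:

```python
import torch.nn as nn

class SegmentationHead(nn.Module):
    """Three consecutive convolutions over the fused feature map, ending in
    a 2-channel map on which every pixel is classified as vessel or
    background."""
    def __init__(self, in_channels, mid_channels=64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(mid_channels, 2, 1))      # third convolution: 2 classes

    def forward(self, fused):
        return self.convs(fused)                # (N, 2, H, W) logits

# Binary discrimination of each pixel: mask = head(fused).argmax(dim=1)
```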
CN202211134660.XA 2022-09-19 2022-09-19 Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion Active CN115205300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211134660.XA CN115205300B (en) 2022-09-19 2022-09-19 Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion

Publications (2)

Publication Number Publication Date
CN115205300A 2022-10-18
CN115205300B CN115205300B (en) 2022-12-09

Family

ID=83573686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211134660.XA Active CN115205300B (en) 2022-09-19 2022-09-19 Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion

Country Status (1)

Country Link
CN (1) CN115205300B (en)

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232394A (en) * 2018-03-06 2019-09-13 华南理工大学 A kind of multi-scale image semantic segmentation method
US20210035304A1 (en) * 2018-04-10 2021-02-04 Tencent Technology (Shenzhen) Company Limited Training method for image semantic segmentation model and server
KR20190119261A (en) * 2018-04-12 2019-10-22 가천대학교 산학협력단 Apparatus and method for segmenting of semantic image using fully convolutional neural network based on multi scale image and multi scale dilated convolution
CN108986124A (en) * 2018-06-20 2018-12-11 天津大学 In conjunction with Analysis On Multi-scale Features convolutional neural networks retinal vascular images dividing method
US20210049397A1 (en) * 2018-10-16 2021-02-18 Tencent Technology (Shenzhen) Company Limited Semantic segmentation method and apparatus for three-dimensional image, terminal, and storage medium
US20210272246A1 (en) * 2018-11-26 2021-09-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, system, and computer-readable medium for improving quality of low-light images
WO2020215236A1 (en) * 2019-04-24 2020-10-29 哈尔滨工业大学(深圳) Image semantic segmentation method and system
CN110059772A (en) * 2019-05-14 2019-07-26 温州大学 Remote sensing images semantic segmentation method based on migration VGG network
CN110781895A (en) * 2019-10-10 2020-02-11 湖北工业大学 Image semantic segmentation method based on convolutional neural network
CN112949673A (en) * 2019-12-11 2021-06-11 四川大学 Feature fusion target detection and identification method based on global attention
CN111210435A (en) * 2019-12-24 2020-05-29 重庆邮电大学 Image semantic segmentation method based on local and global feature enhancement module
CN110992382A (en) * 2019-12-30 2020-04-10 四川大学 Fundus image optic cup optic disc segmentation method and system for assisting glaucoma screening
CN111291789A (en) * 2020-01-19 2020-06-16 华东交通大学 Breast cancer image identification method and system based on multi-stage multi-feature deep fusion
US20210248761A1 (en) * 2020-02-10 2021-08-12 Hong Kong Applied Science and Technology Research Institute Company Limited Method for image segmentation using cnn
CN112001391A (en) * 2020-05-11 2020-11-27 江苏鲲博智行科技有限公司 Image feature fusion image semantic segmentation method
CN111783782A (en) * 2020-05-29 2020-10-16 河海大学 Remote sensing image semantic segmentation method fusing and improving UNet and SegNet
CN111898617A (en) * 2020-06-29 2020-11-06 南京邮电大学 Target detection method and system based on attention mechanism and parallel void convolution network
CN112102283A (en) * 2020-09-14 2020-12-18 北京航空航天大学 Retina fundus blood vessel segmentation method based on depth multi-scale attention convolution neural network
WO2022105125A1 (en) * 2020-11-17 2022-05-27 平安科技(深圳)有限公司 Image segmentation method and apparatus, computer device, and storage medium
CN112508960A (en) * 2020-12-21 2021-03-16 华南理工大学 Low-precision image semantic segmentation method based on improved attention mechanism
US20220208355A1 (en) * 2020-12-30 2022-06-30 London Health Sciences Centre Research Inc. Contrast-agent-free medical diagnostic imaging
CN113160414A (en) * 2021-01-25 2021-07-23 北京豆牛网络科技有限公司 Automatic identification method and device for remaining amount of goods, electronic equipment and computer readable medium
CN112966691A (en) * 2021-04-14 2021-06-15 重庆邮电大学 Multi-scale text detection method and device based on semantic segmentation and electronic equipment
CN114187450A (en) * 2021-12-15 2022-03-15 山东大学 Remote sensing image semantic segmentation method based on deep learning
CN114639020A (en) * 2022-03-24 2022-06-17 南京信息工程大学 Segmentation network, segmentation system and segmentation device for target object of image
CN114972748A (en) * 2022-04-28 2022-08-30 北京航空航天大学 Infrared semantic segmentation method capable of explaining edge attention and gray level quantization network
CN114881968A (en) * 2022-05-07 2022-08-09 中南大学 OCTA image vessel segmentation method, device and medium based on deep convolutional neural network
CN114999637A (en) * 2022-07-18 2022-09-02 华东交通大学 Pathological image diagnosis method and system based on multi-angle coding and embedded mutual learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LEI XU et al.: "Welding Defect Recognition Technology and Application Based on Convolutional Neural Network", 2021 International Wireless Communications and Mobile Computing (IWCMC) *
ZHIJIE WEN et al.: "GCSBA-Net: Gabor-Based and Cascade Squeeze Bi-Attention Network for Gland Segmentation", IEEE Journal of Biomedical and Health Informatics *
LI Daxiang et al.: "Retinal Vessel Image Segmentation Algorithm Based on Improved U-Net" (in Chinese), Acta Optica Sinica *
LI Xuan et al.: "Image Segmentation Algorithm Based on Convolutional Neural Network" (in Chinese), Journal of Shenyang Aerospace University *
CHEN Hongyun et al.: "Research on Semantic Image Segmentation Fusing Deep Neural Networks and Dilated Convolution" (in Chinese), Journal of Chinese Computer Systems *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953420B (en) * 2023-03-15 2023-08-22 深圳市联影高端医疗装备创新研究院 Deep learning network model and medical image segmentation method, device and system
CN115953420A (en) * 2023-03-15 2023-04-11 深圳市联影高端医疗装备创新研究院 Deep learning network model and medical image segmentation method, device and system
CN116630334A (en) * 2023-04-23 2023-08-22 中国科学院自动化研究所 Method, device, equipment and medium for real-time automatic segmentation of multi-segment blood vessel
CN116630334B (en) * 2023-04-23 2023-12-08 中国科学院自动化研究所 Method, device, equipment and medium for real-time automatic segmentation of multi-segment blood vessel
CN117078697B (en) * 2023-08-21 2024-04-09 南京航空航天大学 Fundus disease seed detection method based on cascade model fusion
CN117078697A (en) * 2023-08-21 2023-11-17 南京航空航天大学 Fundus disease seed detection method based on cascade model fusion
CN116934747A (en) * 2023-09-15 2023-10-24 江西师范大学 Fundus image segmentation model training method, fundus image segmentation model training equipment and glaucoma auxiliary diagnosis system
CN116934747B (en) * 2023-09-15 2023-11-28 江西师范大学 Fundus image segmentation model training method, fundus image segmentation model training equipment and glaucoma auxiliary diagnosis system
CN117274256A (en) * 2023-11-21 2023-12-22 首都医科大学附属北京安定医院 Pain assessment method, system and equipment based on pupil change
CN117274256B (en) * 2023-11-21 2024-02-06 首都医科大学附属北京安定医院 Pain assessment method, system and equipment based on pupil change
CN117495876A (en) * 2023-12-29 2024-02-02 山东大学齐鲁医院 Coronary artery image segmentation method and system based on deep learning
CN117495876B (en) * 2023-12-29 2024-03-26 山东大学齐鲁医院 Coronary artery image segmentation method and system based on deep learning
CN117671395A (en) * 2024-02-02 2024-03-08 南昌康德莱医疗科技有限公司 Cancer cell type recognition device
CN117671395B (en) * 2024-02-02 2024-04-26 南昌康德莱医疗科技有限公司 Cancer cell type recognition device
CN117788473A (en) * 2024-02-27 2024-03-29 北京大学第一医院(北京大学第一临床医学院) Method, system and equipment for predicting blood pressure based on binocular fusion network
CN117788473B (en) * 2024-02-27 2024-05-14 北京大学第一医院(北京大学第一临床医学院) Method, system and equipment for predicting blood pressure based on binocular fusion network

Also Published As

Publication number Publication date
CN115205300B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN115205300B (en) Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion
CN109886273B (en) CMR image segmentation and classification system
CN109345538A (en) A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
CN109886986A (en) A kind of skin lens image dividing method based on multiple-limb convolutional neural networks
CN106682435A (en) System and method for automatically detecting lesions in medical image through multi-model fusion
Chen et al. 3D intracranial artery segmentation using a convolutional autoencoder
CN107506797A (en) One kind is based on deep neural network and multi-modal image alzheimer disease sorting technique
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
Rajput et al. An accurate and noninvasive skin cancer screening based on imaging technique
Rajee et al. Gender classification on digital dental x-ray images using deep convolutional neural network
CN113205524B (en) Blood vessel image segmentation method, device and equipment based on U-Net
CN111598894B (en) Retina blood vessel image segmentation system based on global information convolution neural network
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
CN115294075A (en) OCTA image retinal vessel segmentation method based on attention mechanism
CN114119637A (en) Brain white matter high signal segmentation method based on multi-scale fusion and split attention
Yang et al. RADCU-Net: Residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation
Zhao et al. Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN113344933B (en) Glandular cell segmentation method based on multi-level feature fusion network
Khattar et al. Computer assisted diagnosis of skin cancer: a survey and future recommendations
Tan et al. A lightweight network guided with differential matched filtering for retinal vessel segmentation
CN113421250A (en) Intelligent fundus disease diagnosis method based on lesion-free image training
Upadhyay et al. Characteristic patch-based deep and handcrafted feature learning for red lesion segmentation in fundus images
Salehi et al. Deep convolutional neural networks for automated diagnosis of disc herniation on axial MRI
Pallawi et al. Study of Alzheimer’s disease brain impairment and methods for its early diagnosis: a comprehensive survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant