CN112001928B - Retina blood vessel segmentation method and system - Google Patents


Info

Publication number
CN112001928B
Authority
CN
China
Prior art keywords
image
loss function
blood vessel
unet network
retinal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010688015.7A
Other languages
Chinese (zh)
Other versions
CN112001928A (en)
Inventor
李瑞瑞
李明鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date
Application filed by Beijing University of Chemical Technology filed Critical Beijing University of Chemical Technology
Priority to CN202010688015.7A
Publication of CN112001928A
Application granted
Publication of CN112001928B
Legal status: Active


Classifications

    • G06T 7/11 Region-based segmentation (G Physics > G06 Computing; calculating or counting > G06T Image data processing or generation, in general > G06T 7/00 Image analysis > G06T 7/10 Segmentation; edge detection)
    • G06N 3/045 Combinations of networks (G Physics > G06 Computing; calculating or counting > G06N Computing arrangements based on specific computational models > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology)
    • G06T 2207/30041 Eye; Retina; Ophthalmic (G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; context of image processing > G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular (G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; context of image processing > G06T 2207/30004 Biomedical image processing)
    • Y02T 10/40 Engine management systems (Y General tagging of new technological developments > Y02T Climate change mitigation technologies related to transportation > Y02T 10/10 Internal combustion engine [ICE] based vehicles)


Abstract

The invention discloses a retina blood vessel segmentation method, which comprises the following steps: preprocessing a retina fundus image to be processed to obtain a first image; performing skeleton extraction on the first image through a first UNet network to obtain a second image; and merging the first image and the second image to obtain a third image, and performing blood vessel segmentation on the third image through a second UNet network to obtain a retina blood vessel segmentation result. The invention also discloses a retinal vessel segmentation system. The beneficial effects of the invention are as follows: by using skeleton information to assist blood vessel segmentation, a complete blood vessel topology can be extracted.

Description

Retina blood vessel segmentation method and system
Technical Field
The invention relates to the technical field of medical image processing, in particular to a retina blood vessel segmentation method and a retina blood vessel segmentation system.
Background
In the related art, fully convolutional networks are mainly used for end-to-end segmentation of retinal blood vessels, but the segmented vessels exhibit many breaks and omissions and cannot maintain a complete topological structure, and the training process is unstable because the various vessel features differ in learning difficulty.
Disclosure of Invention
In order to solve the above problems, the present invention aims to provide a retinal vessel segmentation method and system that use skeleton information to assist vessel segmentation and can thereby extract a complete vessel topology.
The invention provides a retinal vessel segmentation method, which comprises the following steps:
preprocessing a retina fundus image to be processed to obtain a first image;
performing skeleton extraction on the first image through a first UNet network to obtain a second image;
and merging the first image and the second image to obtain a third image, and performing blood vessel segmentation on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
As a further improvement of the present invention, the method further comprises: training the first UNet network and the second UNet network by a training set;
wherein the training set comprises: a plurality of retinal fundus images, a plurality of retinal vascular annotation images, and a plurality of retinal vascular skeleton annotation images.
As a further improvement of the present invention, the original data set includes each retinal fundus image and each retinal blood vessel labeling image;
extracting retinal vascular skeleton images from each retinal vascular labeling image in the original data set respectively;
respectively carrying out data augmentation on each extracted retinal vascular skeleton labeling image and each retinal fundus image and each retinal vascular labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina images, a plurality of retina blood vessel labeling images and a plurality of retina blood vessel skeleton labeling images in the training set.
As a further improvement of the present invention, extracting retinal vascular skeleton images from each retinal vascular labeling image in the original dataset, respectively, includes:
respectively carrying out binarization processing on each retinal blood vessel labeling image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
As a further improvement of the present invention, training the first UNet network and the second UNet network by a training set includes:
taking the retinal fundus image in the training set as an input image of the first UNet network;
and the output image of the last layer of the first UNet network is merged with the input image of the first UNet network and used as the input image of the second UNet network.
As a further improvement of the present invention, training the first UNet network and the second UNet network by a training set includes:
upsampling the output image of the penultimate layer of the first UNet network, and applying a first loss function to the upsampled image and the retinal vascular skeleton labeling image corresponding to the retinal fundus image;
applying a second loss function to the output image of the last layer of the first UNet network and the retinal vascular skeleton labeling image corresponding to the retinal fundus image;
applying a third loss function to the output image of the last layer of the second UNet network and the retinal vascular annotation image corresponding to the retinal fundus image;
wherein the first and second loss functions are different.
As a further refinement of the invention, the first loss function is a weighted cross entropy loss function, the second loss function is a standard cross entropy loss function, and the third loss function is a standard cross entropy loss function.
As a further improvement of the present invention, training the first UNet network and the second UNet network by a training set includes:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
and optimizing the parameters of the first UNet network and the second UNet network based on the minimum value of the target loss function.
As a further refinement of the invention, determining the minimum of the target loss function comprises:
computing the gradient of the target loss function by back propagation;
and employing a stochastic gradient descent algorithm to determine the minimum value of the target loss function.
As a further improvement of the invention, the last layer of the first UNet network and the last layer of the second UNet network adopt a sigmoid activation function;
the first UNet network and the second UNet network employ normal distribution initialization parameters.
The present invention also provides a retinal vascular segmentation system, the system comprising:
the preprocessing module is used for preprocessing the retina fundus image to be processed to obtain a first image;
the skeleton extraction module is used for performing skeleton extraction on the first image through a first UNet network to obtain a second image;
and the blood vessel segmentation module is used for merging the first image and the second image to obtain a third image, and carrying out blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
As a further improvement of the invention, the system further comprises:
the training module is used for training the first UNet network and the second UNet network through a training set;
wherein the training set comprises: a plurality of retinal fundus images, a plurality of retinal vascular annotation images, and a plurality of retinal vascular skeleton annotation images.
As a further improvement of the present invention, the original data set includes each retinal fundus image and each retinal blood vessel labeling image;
extracting retinal vascular skeleton images from each retinal vascular labeling image in the original data set respectively;
respectively carrying out data augmentation on each extracted retina vascular skeleton labeling image and each retina fundus image and each retina vascular labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina images, a plurality of retina blood vessel labeling images and a plurality of retina blood vessel skeleton labeling images in the training set.
As a further improvement of the present invention, extracting retinal vascular skeleton images from each retinal vascular labeling image in the original dataset, respectively, includes:
respectively carrying out binarization processing on each retinal blood vessel labeling image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
As a further improvement of the invention, the training module is configured to:
taking the retinal fundus image in the training set as an input image of the first UNet network;
and the output image of the last layer of the first UNet network is merged with the input image of the first UNet network and used as the input image of the second UNet network.
As a further improvement of the invention, the training module is configured to:
upsampling the output image of the penultimate layer of the first UNet network, and applying a first loss function to the upsampled image and the retinal vascular skeleton labeling image corresponding to the retinal fundus image;
applying a second loss function to the output image of the last layer of the first UNet network and the retinal vascular skeleton labeling image corresponding to the retinal fundus image;
applying a third loss function to the output image of the last layer of the second UNet network and the retinal vascular annotation image corresponding to the retinal fundus image;
wherein the first and second loss functions are different.
As a further refinement of the invention, the first loss function is a weighted cross entropy loss function, the second loss function is a standard cross entropy loss function, and the third loss function is a standard cross entropy loss function.
As a further improvement of the invention, the training module is configured to:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
and optimizing the parameters of the first UNet network and the second UNet network based on the minimum value of the target loss function.
As a further refinement of the invention, determining the minimum of the target loss function comprises:
computing the gradient of the target loss function by back propagation;
and employing a stochastic gradient descent algorithm to determine the minimum value of the target loss function.
As a further improvement of the invention, the last layer of the first UNet network and the last layer of the second UNet network adopt a sigmoid activation function;
the first UNet network and the second UNet network employ normal distribution initialization parameters.
The beneficial effects of the invention are as follows:
Retinal vessel segmentation is decomposed into two parts, skeleton extraction and vessel segmentation, each realized by an encoder-decoder network. The segmentation is performed in two stages in which difficult-to-learn and easy-to-learn features are learned separately, so the learning of each stage can be observed in real time, and the fit on easy-to-learn samples does not degrade while difficult-to-learn samples are being learned.
The skeleton extraction network and the vessel segmentation network adopt a deep convolutional network (the UNet network) with strong feature extraction capability, which can fit a large amount of data and extract the relevant information from it.
The skeleton extraction network uses a deep supervision method: during skeleton extraction, the outputs of different decoding layers use different loss functions, so that vascular skeletons at different scales can be learned with different weights, making network training more stable and benefiting skeleton extraction. On the basis of the extracted skeleton, the skeleton information is further used to assist vessel segmentation, so that vessel segmentation is more efficient and the segmented vessel structure is more complete.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is evident that the figures in the following description are only some embodiments of the invention, from which other figures can be obtained without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a retinal vascular segmentation method according to an exemplary embodiment of the present invention;
FIG. 2 is a diagram of a network architecture for a retinal vascular segmentation method according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic view of an extracted retinal vascular skeleton according to an exemplary embodiment of the present invention;
FIG. 4 is a graph showing the results of retinal vascular segmentation in accordance with an exemplary embodiment of the present invention;
fig. 5 is a schematic diagram of a network training process of a retinal vessel segmentation method according to an exemplary embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear) are included in the embodiments of the present invention, the directional indications are merely used to explain the relative positional relationship, movement conditions, etc. between the components in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indications are correspondingly changed.
In addition, in the description of the present invention, the terminology used is for the purpose of illustration only and is not intended to limit the scope of the present invention. The terms "comprises" and/or "comprising" are used to specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may be used for describing various elements, do not represent a sequence, and are not intended to limit the elements. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more. These terms are only used to distinguish one element from another element. These and/or other aspects will become apparent to those skilled in the art from the following description, when taken in conjunction with the accompanying drawings, wherein the present invention is described in connection with embodiments thereof. The drawings are intended to depict embodiments of the invention for purposes of illustration only. Those skilled in the art will readily recognize from the following description that alternative embodiments of the illustrated structures and methods of the present invention may be employed without departing from the principles of the present invention.
The embodiment of the invention discloses a retinal vessel segmentation method, as shown in fig. 1, which comprises the following steps:
s1, preprocessing a retina fundus image to be processed to obtain a first image;
s2, performing skeleton extraction on the first image through a first UNet network to obtain a second image;
and S3, merging the first image and the second image to obtain a third image, and performing blood vessel segmentation on the third image through a second UNet network to obtain a retinal blood vessel segmentation result.
The network framework of the invention is shown in fig. 2, where the first UNet network serves as the skeleton extraction network and the second UNet network serves as the vessel segmentation network. Preprocessing (e.g., resolution unification and graying) converts the input RGB retinal image to be processed into a form convenient as input to the first UNet network, improving segmentation efficiency. Fig. 3 and fig. 4 show, respectively, a schematic view of an extracted retinal vascular skeleton (the second image) and the result of retinal vessel segmentation.
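As an illustrative sketch only (the patent names "resolution unification and graying" but specifies no formulas), the preprocessing step might look like the following in NumPy; the ITU-R BT.601 luminance weights and the zero-mean/unit-variance standardization are assumptions:

```python
import numpy as np

# Hypothetical RGB fundus image (DRIVE-sized, 584 x 565); a real pipeline
# would first resize every image to a common resolution.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(584, 565, 3)).astype(np.float64)

# Graying via ITU-R BT.601 luminance weights (one common convention;
# the patent does not specify the exact conversion).
gray = rgb @ np.array([0.299, 0.587, 0.114])

# Pixel-value standardization to zero mean and unit variance, which the
# description says makes network training more stable.
first_image = (gray - gray.mean()) / gray.std()
```

The standardized array is what would be fed to the first UNet network as the "first image".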
In the prior art, a single segmentation network is generally used to learn various features, such as vessel edges, vessel topology, and vessel width, all at once. However, features such as edges may be obscured by factors such as location and brightness and are not easily distinguished from non-vascular regions; these are difficult-to-learn features. Mixing difficult-to-learn features with easy-to-learn features hinders network learning. The method of the invention decomposes retinal vessel segmentation into two parts, skeleton extraction and vessel segmentation, each realized by an encoder-decoder network (a UNet network). The first UNet network learns easy-to-learn features such as the vessel topology, and its result serves as a structural prior and as input to the second UNet network to assist vessel segmentation. This staged retinal vessel segmentation decomposes the vessel features, and skeleton extraction is less affected by labeling errors than direct vessel segmentation, which benefits extraction of the vessel topology.
The UNet network consists of a contracting path and an expanding path. The contracting path follows a typical convolutional network structure: repeated blocks of two 3×3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) activation, and a 2×2 max pooling operation with stride 2 for downsampling; the number of feature channels is doubled at each downsampling step. In the expanding path, each step upsamples the feature map, then applies a 2×2 up-convolution that halves the number of feature channels, concatenates the correspondingly cropped feature map from the contracting path, and then applies two 3×3 convolutions, each followed by a ReLU activation. At the last layer, a 1×1 convolution maps each 64-component feature vector to the output layer of the network. In total, the network has 23 convolutional layers.
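The contracting/expanding structure just described can be sketched in PyTorch. The two-level `MiniUNet` below is a shape illustration under reduced depth and channel counts (the patent's networks have 23 convolutional layers), not the actual architecture; it shows the unpadded 3×3 convolutions, channel doubling, 2×2 up-convolution, skip cropping, and the sigmoid last layer mentioned in the description:

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # two unpadded 3x3 convolutions, each followed by a ReLU activation
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self, c_in=3, c_out=1, base=16):
        super().__init__()
        self.down1 = double_conv(c_in, base)
        self.pool = nn.MaxPool2d(2)                  # 2x2 max pooling, stride 2
        self.bottom = double_conv(base, base * 2)    # channels double per step
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)  # 2x2 up-conv
        self.up1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, c_out, 1)        # 1x1 conv to the output map

    def forward(self, x):
        d1 = self.down1(x)
        b = self.up(self.bottom(self.pool(d1)))
        # crop the skip feature map: unpadded convolutions shrink the deeper path
        dh = (d1.shape[2] - b.shape[2]) // 2
        dw = (d1.shape[3] - b.shape[3]) // 2
        skip = d1[:, :, dh:dh + b.shape[2], dw:dw + b.shape[3]]
        # sigmoid activation at the last layer, as the description specifies
        return torch.sigmoid(self.head(self.up1(torch.cat([skip, b], dim=1))))

out = MiniUNet()(torch.zeros(1, 3, 64, 64))   # unpadded convs shrink 64x64 to 48x48
```

Because the convolutions are unpadded, the output is spatially smaller than the input, which is why the skip connection must be cropped.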
In an alternative embodiment, the method further comprises: training the first UNet network and the second UNet network by a training set;
Wherein the training set comprises: a plurality of retinal fundus images, a plurality of retinal vascular annotation images, and a plurality of retinal vascular skeleton annotation images.
Wherein, the retina fundus image, the retina blood vessel labeling image and the retina blood vessel skeleton labeling image have a one-to-one correspondence.
In an alternative embodiment, the raw dataset comprises respective retinal fundus images and respective retinal blood vessel labeling images;
extracting retinal vascular skeleton images from each retinal vascular labeling image in the original data set respectively;
respectively carrying out data augmentation on each extracted retina vascular skeleton labeling image and each retina fundus image and each retina vascular labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina training images, a plurality of retina blood vessel labeling training images and a plurality of retina blood vessel skeleton labeling training images in the training set.
According to the method disclosed by the invention, each network is trained separately to obtain optimal network parameters, so that skeleton extraction and vessel segmentation can each be performed better on the retinal fundus image to be processed. The raw dataset includes the retinal fundus images and the corresponding retinal blood vessel labeling images, drawn for example from the DRIVE and STARE datasets. The DRIVE dataset contains 40 images with vessel labeling, 7 showing early diabetic retinopathy and 33 without diabetic retinopathy; each image has a resolution of 565 × 584 and corresponds to manual segmentations by 2 experts. The STARE dataset includes 20 images with vessel labeling, 10 with lesions and 10 without; each image has a resolution of 605 × 700 and corresponds to manual segmentations by 2 experts.
Each image is augmented, for example by rotation, flipping, and brightness changes, and the pixel values of the augmented images are standardized so that network training is more stable, yielding an extended dataset. The extended dataset is divided into a training set, a test set, and a validation set for training, testing, and validating the networks.
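A minimal NumPy sketch of such augmentation (rotations and flips only; the brightness changes and the exact augmentation set are not specified by the patent), applied identically to an image and its annotation so the pairs stay aligned:

```python
import numpy as np

def augment(image, label):
    """Yield (image, label) pairs under simple geometric augmentations,
    applied identically to the image and its annotation."""
    for k in range(4):                         # rotations by 0/90/180/270 degrees
        yield np.rot90(image, k), np.rot90(label, k)
    yield np.fliplr(image), np.fliplr(label)   # horizontal flip
    yield np.flipud(image), np.flipud(label)   # vertical flip

img = np.arange(16.0).reshape(4, 4)
lab = (img > 7).astype(np.uint8)
pairs = list(augment(img, lab))                # 6 augmented pairs per original
```

Geometric augmentations like these preserve the pixel statistics, so the subsequent pixel-value standardization can be applied uniformly to the extended set.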
An alternative embodiment, extracting retinal vascular skeleton images for each retinal vascular labeling image in the original dataset, respectively, includes:
respectively carrying out binarization processing on each retinal blood vessel labeling image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
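The centerline extraction above can be done with a classical thinning algorithm. The patent does not name one, so the Zhang-Suen thinning below is one plausible choice, implemented self-contained in NumPy:

```python
import numpy as np

def zhang_suen_skeletonize(img):
    """Thin a binary image (0/1) to a one-pixel-wide centerline using the
    Zhang-Suen algorithm; deletions within each sub-iteration are applied
    in parallel, as the algorithm requires."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in range(2):
            p = np.pad(img, 1)
            # 8-neighbourhood P2..P9, clockwise from the north neighbour
            P2, P3, P4 = p[:-2, 1:-1], p[:-2, 2:], p[1:-1, 2:]
            P5, P6, P7 = p[2:, 2:], p[2:, 1:-1], p[2:, :-2]
            P8, P9 = p[1:-1, :-2], p[:-2, :-2]
            n = [P2, P3, P4, P5, P6, P7, P8, P9]
            B = sum(x.astype(int) for x in n)              # non-zero neighbours
            A = sum(((n[i] == 0) & (n[(i + 1) % 8] == 1)).astype(int)
                    for i in range(8))                     # 0 -> 1 transitions
            if step == 0:
                cond = (P2 * P4 * P6 == 0) & (P4 * P6 * P8 == 0)
            else:
                cond = (P2 * P4 * P8 == 0) & (P2 * P6 * P8 == 0)
            delete = (img == 1) & (B >= 2) & (B <= 6) & (A == 1) & cond
            if delete.any():
                img[delete] = 0
                changed = True
    return img

# A 3-pixel-thick bar thins to (roughly) its centerline.
bar = np.zeros((7, 9), dtype=np.uint8)
bar[2:5, 1:8] = 1
skeleton = zhang_suen_skeletonize(bar)
```

The resulting binary centerline image plays the role of the retinal vascular skeleton labeling image.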
In the method, in the network training process, the skeleton information extracted from the first UNet network is used for assisting in segmenting the blood vessels of the second UNet network, so that a more complete blood vessel segmentation result is obtained. Because of the large amount of data in the extended data set, it is necessary to train both networks sufficiently to enable the two networks to work cooperatively.
An alternative embodiment, the last layer of the first UNet network and the last layer of the second UNet network use a sigmoid activation function; the first UNet network and the second UNet network employ normal distribution initialization parameters.
An alternative embodiment, training the first UNet network and the second UNet network by a training set, comprises:
taking the retinal fundus image in the training set as an input image of the first UNet network;
and the output image of the last layer of the first UNet network is combined with the input image of the first UNet network to be used as the input image of the second UNet network.
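Merging the first network's single-channel output with its three-channel input amounts to a channel-wise concatenation; a minimal sketch (CHW layout assumed):

```python
import numpy as np

rgb = np.zeros((3, 64, 64))        # preprocessed fundus image, 3 channels (CHW)
skeleton = np.zeros((1, 64, 64))   # last-layer output of the first UNet network

# Four-channel input image for the second UNet network.
combined = np.concatenate([rgb, skeleton], axis=0)
```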
An alternative embodiment, training the first UNet network and the second UNet network by a training set, comprises:
upsampling an output image of the first UNet network penultimate layer, and using a first loss function for the upsampled image and a retinal vascular skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for an output image of the last layer of the first UNet network and a retinal vascular skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for an output image of the last layer of the second UNet network and a retinal vascular annotation image corresponding to the retinal fundus image;
wherein the first and second loss functions are different.
In an alternative embodiment, a weighted cross entropy loss function may be used as the first loss function for the upsampled image and the retinal vascular skeleton labeling image corresponding to the retinal fundus image; a standard cross entropy loss function may be used as the second loss function for the output image of the last layer of the first UNet network and the corresponding retinal vascular skeleton labeling image; and a standard cross entropy loss function may be used as the third loss function for the output image of the last layer of the second UNet network and the corresponding retinal vessel labeling image.
By analyzing the features at different scales in the hidden layers of the first UNet network, the method of the invention computes the loss with different loss functions at different layers, which improves the interpretability and transparency of the hidden-layer learning process. The extracted binary vessel centerline image serves as the retinal vascular skeleton labeling image and represents the vessel structure; the skeleton of a thin vessel is usually the vessel itself, owing to its small width. For a coarse vessel, a small offset of the centerline does not affect the characterization of the vessel structure; these offset portions are called false positive samples. To reduce the penalty on such false positive samples, the invention applies a weighted cross entropy loss function to the output of the penultimate layer of the first UNet network. Because information is filtered layer by layer in the first UNet network, the penultimate layer often retains only the information of coarse vessels and filters out that of thin vessels. Thus, using a weighted cross entropy loss function at the penultimate layer of the first UNet network reduces the loss only for coarse vessel skeleton offsets.
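A NumPy sketch of such a weighted cross entropy. The exact placement of the weight α in the patent's formula (2) is not legible in this text, so putting it on the background (negative) term, which is what lowers the penalty on false positives, is an assumption; α = 1 recovers the standard cross entropy, as the description states:

```python
import numpy as np

def weighted_cross_entropy(y_true, y_pred, alpha=1.0, eps=1e-7):
    """Pixel-wise binary cross entropy with weight alpha on background
    pixels; alpha < 1 lowers the penalty for predicting skeleton where
    the label is background (false positives), alpha = 1 is standard."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    loss = -(y_true * np.log(y_pred)
             + alpha * (1.0 - y_true) * np.log(1.0 - y_pred))
    return float(loss.mean())

y_true = np.array([1.0, 0.0])
y_pred = np.array([0.8, 0.9])          # second pixel is a false positive
standard = weighted_cross_entropy(y_true, y_pred, alpha=1.0)
weighted = weighted_cross_entropy(y_true, y_pred, alpha=0.5)
```

With α = 0.5, the loss contributed by the false-positive pixel is halved, matching the stated goal of tolerating small centerline offsets on coarse vessels.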
An alternative embodiment, training the first UNet network and the second UNet network by a training set, comprises:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
and carrying out parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
In an alternative embodiment, determining the minimum of the target loss function comprises:
computing the gradient of the target loss function by back propagation;
employing a stochastic gradient descent algorithm to determine the minimum value of the target loss function.
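The optimization step above is ordinary stochastic gradient descent. As a self-contained illustration on a toy quadratic (not the patent's network loss, where the gradients would come from back propagation over mini-batches), the update rule looks like:

```python
import numpy as np

def sgd_minimize(grad, x0, lr=0.1, steps=200, rng=None):
    """Generic stochastic gradient descent: repeatedly step against a
    noisy gradient. Here the noise is simulated; in network training it
    arises from mini-batch sampling."""
    rng = rng or np.random.default_rng(0)
    x = float(x0)
    for _ in range(steps):
        x -= lr * (grad(x) + rng.normal(scale=0.01))
    return x

# Toy objective f(x) = (x - 3)^2 with gradient 2(x - 3); minimum at x = 3.
x_min = sgd_minimize(lambda x: 2 * (x - 3), x0=0.0)
```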
The invention takes a retinal fundus image (an RGB three-channel image) in the training set as the input of the first UNet network and performs forward propagation. Using a deep supervision method, the output of the penultimate layer of the first UNet network is upsampled and its loss against the corresponding skeleton label is computed with the weighted cross entropy, while the loss of the output of the last layer of the first UNet network against the corresponding skeleton label is computed with the standard cross entropy. The output of the last layer of the first UNet network and the input of the first UNet network are combined into four channels and used as the input of the second UNet network; after forward propagation, the segmentation result is obtained and its loss against the corresponding segmentation label is computed with the standard cross entropy.
Wherein the forward propagation formula is shown as (1), and the weighted cross entropy formula is shown as (2) (where α=1 yields the standard cross entropy).

Seg = UNet2(combinate(RGB, UNet1(RGB)))  (1)

L = -Σ_i [α·y_i·log(y_i′) + (1−y_i)·log(1−y_i′)]  (2)

In formula (1), UNet2 denotes the second UNet network, RGB denotes the retinal fundus image, and UNet1 denotes the first UNet network. In formula (2), y_i denotes a pixel of the labeled image and y_i′ denotes the corresponding pixel of the output image.
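The weighted cross entropy of formula (2) can be sketched in plain Python (a minimal illustration over flat pixel lists; a real implementation would operate on tensors, and the function name is hypothetical):

```python
import math

def weighted_cross_entropy(y_true, y_pred, alpha=1.0, eps=1e-7):
    """Weighted binary cross entropy over flat pixel lists.

    alpha weights the positive (vessel/skeleton) term; alpha = 1
    reduces this to the standard cross entropy, as noted for formula (2).
    """
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp predictions for numerical safety
        total += -(alpha * y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(y_true)

# alpha < 1 down-weights the positive term, reducing the penalty on
# false-positive skeleton pixels near coarse vessels.
loss_std = weighted_cross_entropy([1, 0], [0.9, 0.1])             # standard (alpha = 1)
loss_soft = weighted_cross_entropy([1, 0], [0.9, 0.1], alpha=0.5)  # down-weighted positives
```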
In the method, as shown in fig. 5, during network training the skeleton extraction network extracts a vessel skeleton from the input retinal fundus image; the extracted skeleton and the retinal image are combined and fed into the vessel segmentation network for segmentation. One pass over all data in the training set constitutes one epoch, and training iterates over multiple epochs. After each epoch, the network can be validated with the data in the validation set, computing the loss and checking for over-fitting. After training, the network can be tested with the data in the test set, and the test results evaluated with indexes such as recall, precision and F1 value, so that the learning of the network is quantitatively analyzed.
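The evaluation indexes mentioned above follow directly from pixel-wise counts; a minimal sketch over binary masks flattened to 0/1 lists (function name hypothetical):

```python
def evaluate_segmentation(pred, label):
    """Pixel-wise recall, precision and F1 for binary vessel masks."""
    tp = sum(1 for p, y in zip(pred, label) if p == 1 and y == 1)  # true positives
    fp = sum(1 for p, y in zip(pred, label) if p == 1 and y == 0)  # false positives
    fn = sum(1 for p, y in zip(pred, label) if p == 0 and y == 1)  # false negatives
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1

# Example: 3 labeled vessel pixels, 2 found, 1 false alarm
r, p, f = evaluate_segmentation([1, 1, 1, 0], [1, 1, 0, 1])
```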
The method of the invention divides retinal vessel segmentation into two stages, skeleton extraction and vessel segmentation, each realized by an encoder-decoder network. Performing the segmentation in two stages lets the difficult-to-learn features and the easy-to-learn features be learned separately, so that the learning of each stage can be observed in real time and the fit on easy-to-learn samples is not degraded while the difficult-to-learn samples are being learned. Both the skeleton extraction network and the vessel segmentation network adopt a deep convolutional network (UNet) with strong feature extraction capability, which can fit a large amount of data and extract the relevant information from it. The skeleton extraction network adopts a deep supervision method: during skeleton extraction, the outputs of different decoding layers use different loss functions, so vessel skeletons of different scales are learned with different weights, making network training more stable and benefiting skeleton extraction. On the basis of the extracted skeleton, the skeleton information further assists vessel segmentation, so that the segmentation is more efficient and the segmented vessel structure is more complete.
The embodiment of the invention discloses a retinal vascular segmentation system, which comprises:
the preprocessing module is used for preprocessing the retina fundus image to be processed to obtain a first image;
the framework extraction module is used for carrying out framework extraction on the first image through a first UNet network to obtain a second image;
and the blood vessel segmentation module is used for merging the first image and the second image to obtain a third image, and carrying out blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
The network framework diagram adopted by the system is shown in fig. 2, where the first UNet network serves as the skeleton extraction network and the second UNet network serves as the vessel segmentation network. The preprocessing module preprocesses the input retinal image to be processed (an RGB image) by resolution unification, graying and the like, and the result then serves as the input of the first UNet network, which improves segmentation efficiency. Fig. 3 and fig. 4 are, respectively, a schematic view of an extracted retinal vessel skeleton (the second image) and a schematic view of a retinal vessel segmentation result.
In the prior art, a single segmentation network is generally adopted, mixing features such as the vessel edges, the vessel topology and the vessel width together for the network to learn. However, features such as edges may be blurred by factors such as location and brightness and are not easily distinguished from non-vessel regions; these are difficult-to-learn features. Mixing such difficult-to-learn features with easy-to-learn features hinders the learning of the network. The system of the invention divides retinal vessel segmentation into two stages, skeleton extraction and vessel segmentation, each realized by an encoder-decoder (UNet) network. The first UNet network learns easy-to-learn features such as the vessel topology, and its result serves as a structural prior and as input to the second UNet network to assist vessel segmentation. This staged retinal vessel segmentation decomposes the vessel features, and skeleton extraction is less affected by labeling errors than direct vessel segmentation, which benefits extraction of the vessel topology.
Wherein the UNet network consists of a contracting path and an expanding path. The contracting path follows a typical convolutional network structure: two repeated 3×3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) activation function, and a 2×2 max pooling operation with stride 2 for downsampling; the number of feature channels is doubled at each downsampling step. In the expanding path, each step upsamples the feature map, then applies a 2×2 convolution (up-convolution) that halves the number of feature channels, then concatenates the correspondingly cropped feature map from the contracting path, and then applies two 3×3 convolutions, each followed by a ReLU activation function. At the last layer, a 1×1 convolution maps each 64-dimensional feature vector to the output layer of the network. In total, the network has 23 convolutional layers.
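The effect of unpadded 3×3 convolutions and 2×2 poolings on the spatial size can be traced with simple arithmetic. The sketch below assumes the classic four-level UNet with a 572×572 input, figures taken from the original UNet paper rather than from this patent:

```python
def contract(size, levels=4):
    """Trace the spatial size through the UNet contracting path.

    Per level: two unpadded 3x3 convolutions (each removes 2 pixels,
    i.e. size - 2), then 2x2 max pooling with stride 2 (size // 2).
    The bottom of the "U" applies two more convolutions without pooling.
    """
    sizes = []
    for _ in range(levels):
        size = size - 2 - 2      # two unpadded 3x3 convolutions
        sizes.append(size)
        size //= 2               # 2x2 max pooling, stride 2
    size = size - 2 - 2          # bottom level: two more convolutions
    sizes.append(size)
    return sizes

print(contract(572))  # → [568, 280, 136, 64, 28]
```

This shrinkage is why the expanding path must crop the contracting-path feature maps before concatenating them.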
In an alternative embodiment, the system further comprises:
the training module is used for training the first UNet network and the second UNet network through a training set;
Wherein the training set comprises: a plurality of retinal fundus images, a plurality of retinal vascular annotation images, and a plurality of retinal vascular skeleton annotation images.
Wherein, the retina fundus image, the retina blood vessel labeling image and the retina blood vessel skeleton labeling image have a one-to-one correspondence.
In an alternative embodiment, the raw dataset comprises respective retinal fundus images and respective retinal blood vessel labeling images;
extracting retinal vascular skeleton images from each retinal vascular labeling image in the original data set respectively;
respectively carrying out data augmentation on each extracted retina vascular skeleton labeling image and each retina fundus image and each retina vascular labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina training images, a plurality of retina blood vessel labeling training images and a plurality of retina blood vessel skeleton labeling training images in the training set.
The system provided by the invention trains each network separately to obtain optimal network parameters, so that skeleton extraction and vessel segmentation can each be performed well on the retinal fundus image to be processed. The raw dataset includes the retinal fundus images and the retinal vessel labeling images. The retinal vessel labeling images are taken, for example, from the DRIVE dataset and the STARE dataset. The DRIVE dataset contains 40 images with vessel labels, 7 showing early diabetic retinopathy and 33 being fundus images without diabetic retinopathy; each image has a resolution of 565×584 and corresponds to the manual segmentations of 2 experts. The STARE dataset contains 20 images with vessel labels, 10 with lesions and 10 without; each image has a resolution of 605×700 and corresponds to the manual segmentations of 2 experts.
The data of each image are augmented, for example, by rotation, flipping and brightness changes, and the pixel values of the augmented images are standardized to make network training more stable, yielding an expanded dataset. The expanded dataset is divided into a training set, a test set and a validation set for training, testing and validation of the network.
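The augmentation and standardization steps might be sketched as follows, with nested lists standing in for image arrays (function names are hypothetical; real pipelines would use an image library):

```python
def flip_horizontal(img):
    """Horizontal flip of a 2D image given as a list of rows."""
    return [list(reversed(row)) for row in img]

def rotate_180(img):
    """180-degree rotation: reverse row order and each row's pixels."""
    return [list(reversed(row)) for row in reversed(img)]

def normalize(img):
    """Min-max standardization of pixel values into [0, 1]."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1                      # guard against constant images
    return [[(v - lo) / scale for v in row] for row in img]

img = [[0, 128], [255, 64]]
augmented = [img, flip_horizontal(img), rotate_180(img)]  # expanded dataset
normalized = [normalize(a) for a in augmented]            # standardized pixels
```

The same geometric transform must be applied jointly to a fundus image, its vessel label and its skeleton label so that the one-to-one correspondence is preserved.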
In an alternative embodiment, extracting retinal vascular skeleton images from each retinal vascular labeling image in the original dataset respectively comprises:
respectively carrying out binarization processing on each retinal blood vessel labeling image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
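The binarization step, plus a deliberately naive per-row centerline, can be sketched as below. This is a simplified illustration only: the centerline here is valid only for a single roughly vertical vessel, whereas real skeleton extraction would use a proper morphological thinning algorithm (e.g. skimage.morphology.skeletonize):

```python
def binarize(img, threshold=128):
    """Threshold a grayscale vessel-label image into a 0/1 binary mask."""
    return [[1 if v >= threshold else 0 for v in row] for row in img]

def naive_centerline(mask):
    """Keep only the middle pixel of each row's vessel run.

    Toy stand-in for skeletonization: valid only for one roughly
    vertical vessel; real pipelines use morphological thinning.
    """
    out = [[0] * len(row) for row in mask]
    for r, row in enumerate(mask):
        cols = [c for c, v in enumerate(row) if v == 1]
        if cols:
            out[r][cols[len(cols) // 2]] = 1   # midpoint of the vessel run
    return out

mask = binarize([[0, 200, 210, 0],
                 [0, 190, 220, 0]])
skeleton = naive_centerline(mask)  # centerline binary image (skeleton label)
```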
In the system, during network training the skeleton information extracted by the first UNet network assists the second UNet network in segmenting the blood vessels, so that a more complete vessel segmentation result is obtained. Because of the large amount of data in the expanded dataset, both networks must be trained sufficiently so that the two networks can work cooperatively.
In an alternative embodiment, the last layer of the first UNet network and the last layer of the second UNet network use a sigmoid activation function; the first UNet network and the second UNet network use normally distributed initialization parameters.
In an alternative embodiment, the training module is further configured to:
taking the retinal fundus image in the training set as an input image of the first UNet network;
and the output image of the last layer of the first UNet network is combined with the input image of the first UNet network to be used as the input image of the second UNet network.
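Combining the skeleton output with the RGB input is plain channel concatenation; a minimal channels-first sketch (names hypothetical, lists standing in for tensors):

```python
def combine_channels(rgb, skeleton):
    """Concatenate a 3-channel RGB image with a 1-channel skeleton map.

    rgb: list of 3 channel planes; skeleton: one channel plane of the
    same spatial size. Returns the 4-channel input of the second UNet.
    """
    assert len(rgb) == 3, "expected an RGB (3-channel) image"
    return rgb + [skeleton]

rgb = [[[0.1]], [[0.2]], [[0.3]]]   # three 1x1 channel planes (toy size)
skel = [[0.9]]                      # skeleton probability map from UNet1
four_channel = combine_channels(rgb, skel)  # len(four_channel) == 4
```

In a tensor framework the same operation is a concatenation along the channel axis, giving the first convolution of the second UNet network four input channels instead of three.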
In an alternative embodiment, the training module is further configured to:
upsampling an output image of the first UNet network penultimate layer, and using a first loss function for the upsampled image and a retinal vascular skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for an output image of the last layer of the first UNet network and a retinal vascular skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for an output image of the last layer of the second UNet network and a retinal vascular annotation image corresponding to the retinal fundus image;
wherein the first and second loss functions are different.
In an alternative embodiment, the upsampled image and the retinal vascular skeleton labeling image corresponding to the retinal fundus image may use, for example, a weighted cross entropy loss function as the first loss function; the output image of the last layer of the first UNet network and the retinal vascular skeleton labeling image corresponding to the retinal fundus image can be used as a second loss function by using a standard cross entropy loss function; the output image of the last layer of the second UNet network and the retinal vessel labeling image corresponding to the retinal fundus image may use, for example, a standard cross entropy loss function as the third loss function.
After analyzing the features of different scales in the hidden layers of the first UNet network, the system provided by the invention calculates the loss with different loss functions at different layers, which improves the interpretability and transparency of the hidden-layer learning process. The extracted binary image of the blood vessel center line is used as the retinal vessel skeleton labeling image to represent the vessel structure; because a thin vessel is narrow, its skeleton usually coincides with the vessel itself. For a coarse vessel, a small shift of the center line does not affect the characterization of the vessel structure, and these shifted portions are called false positive samples. To reduce the penalty on such false positive samples, the invention applies a weighted cross entropy loss function to the output of the penultimate layer of the first UNet network. Because information is progressively filtered through the layers of the first UNet network, the penultimate layer often retains only the information of coarse vessels and filters out that of thin vessels. Thus, applying a weighted cross entropy loss function at the penultimate layer of the first UNet network reduces the loss only for coarse-vessel skeleton offsets.
In an alternative embodiment, the training module is further configured to:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
and carrying out parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
In an alternative embodiment, determining the minimum of the target loss function comprises:
calculating a gradient by adopting back propagation for the target loss function;
a stochastic gradient descent algorithm is employed to determine the minimum value of the target loss function.
The invention takes a retinal fundus image (an RGB three-channel image) from the training set as the input of the first UNet network and performs forward propagation. Adopting a deep supervision method, the output of the penultimate layer of the first UNet network is upsampled and its loss against the corresponding skeleton label is calculated with weighted cross entropy, while the loss between the output of the last layer of the first UNet network and the corresponding skeleton label is calculated with standard cross entropy. The output of the last layer of the first UNet network and the input of the first UNet network are combined into four channels and taken as the input of the second UNet network; after forward propagation a segmentation result is obtained, and its loss against the corresponding segmentation label is calculated with standard cross entropy.
Wherein the forward propagation formula is shown as (1), and the weighted cross entropy formula is shown as (2) (where α=1 yields the standard cross entropy).

Seg = UNet2(combinate(RGB, UNet1(RGB)))  (1)

L = -Σ_i [α·y_i·log(y_i′) + (1−y_i)·log(1−y_i′)]  (2)

In formula (1), UNet2 denotes the second UNet network, RGB denotes the retinal fundus image, and UNet1 denotes the first UNet network. In formula (2), y_i denotes a pixel of the labeled image and y_i′ denotes the corresponding pixel of the output image.
In the system of the invention, as shown in fig. 5, during network training the training module extracts a vessel skeleton from the input retinal fundus image with the skeleton extraction network, then combines the extracted skeleton with the retinal image and feeds them into the vessel segmentation network for segmentation. One pass over all data in the training set constitutes one epoch, and training iterates over multiple epochs. After each epoch, the network can be validated with the data in the validation set, computing the loss and checking for over-fitting. After training, the network can be tested with the data in the test set, and the test results evaluated with indexes such as recall, precision and F1 value, so that the learning of the network is quantitatively analyzed.
The system of the invention divides retinal vessel segmentation into two stages, skeleton extraction and vessel segmentation, each realized by an encoder-decoder network. Performing the segmentation in two stages lets the difficult-to-learn features and the easy-to-learn features be learned separately, so that the learning of each stage can be observed in real time and the fit on easy-to-learn samples is not degraded while the difficult-to-learn samples are being learned. Both the skeleton extraction network and the vessel segmentation network adopt a deep convolutional network (UNet) with strong feature extraction capability, which can fit a large amount of data and extract the relevant information from it. The skeleton extraction network adopts a deep supervision method: during skeleton extraction, the outputs of different decoding layers use different loss functions, so vessel skeletons of different scales are learned with different weights, making network training more stable and benefiting skeleton extraction. On the basis of the extracted skeleton, the skeleton information further assists vessel segmentation, so that the segmentation is more efficient and the segmented vessel structure is more complete.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, one of ordinary skill in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the invention has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (12)

1. A method of retinal vascular segmentation, the method comprising:
preprocessing a retina fundus image to be processed to obtain a first image;
performing skeleton extraction on the first image through a first UNet network to obtain a second image;
combining the first image and the second image to obtain a third image, and performing blood vessel segmentation on the third image through a second UNet network to obtain a retina blood vessel segmentation result;
training the first UNet network and the second UNet network through a training set, the training set including a plurality of retinal fundus images, a plurality of retinal vascular annotation images, and a plurality of retinal vascular skeleton annotation images, comprising: upsampling an output image of the first UNet network penultimate layer, and using a first loss function for the upsampled image and a retinal vascular skeleton labeling image corresponding to the retinal fundus image; using a second loss function for an output image of the last layer of the first UNet network and a retinal vascular skeleton labeling image corresponding to the retinal fundus image; using a third loss function for an output image of the last layer of the second UNet network and a retinal vascular annotation image corresponding to the retinal fundus image; wherein the first and second loss functions are different;
The original data set comprises each retinal fundus image and each retinal blood vessel labeling image, and binarization processing is carried out on each retinal blood vessel labeling image in the original data set to obtain each binary image; and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
2. The method of claim 1, wherein retinal vascular skeleton images are extracted separately for each retinal vascular annotation image in the raw dataset;
respectively carrying out data augmentation on each extracted retina vascular skeleton labeling image and each retina fundus image and each retina vascular labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina images, a plurality of retina blood vessel labeling images and a plurality of retina blood vessel skeleton labeling images in the training set.
3. The method of claim 1, wherein training the first UNet network and the second UNet network through a training set comprises:
Taking the retinal fundus image in the training set as an input image of the first UNet network;
and the output image of the last layer of the first UNet network is combined with the input image of the first UNet network to be used as the input image of the second UNet network.
4. The method of claim 1, wherein the first loss function is a weighted cross entropy loss function, the second loss function is a standard cross entropy loss function, and the third loss function is a standard cross entropy loss function.
5. The method of claim 4, wherein training the first UNet network and the second UNet network with a training set comprises:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
and carrying out parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
6. The method as recited in claim 5, wherein determining a minimum of the objective loss function comprises:
calculating a gradient by adopting back propagation for the target loss function;
A stochastic gradient descent algorithm is employed to determine the minimum value of the target loss function.
7. A retinal vascular segmentation system, the system comprising:
the preprocessing module is used for preprocessing the retina fundus image to be processed to obtain a first image;
the framework extraction module is used for carrying out framework extraction on the first image through a first UNet network to obtain a second image;
the blood vessel segmentation module is used for merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result;
a training module for training the first UNet network and the second UNet network through a training set comprising a plurality of retinal fundus images, a plurality of retinal vascular annotation images, and a plurality of retinal vascular skeleton annotation images, the training module configured to: upsampling an output image of the first UNet network penultimate layer, and using a first loss function for the upsampled image and a retinal vascular skeleton labeling image corresponding to the retinal fundus image; using a second loss function for an output image of the last layer of the first UNet network and a retinal vascular skeleton labeling image corresponding to the retinal fundus image; using a third loss function for an output image of the last layer of the second UNet network and a retinal vascular annotation image corresponding to the retinal fundus image; wherein the first and second loss functions are different;
The original data set comprises each retinal fundus image and each retinal blood vessel labeling image, and binarization processing is carried out on each retinal blood vessel labeling image in the original data set to obtain each binary image; and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
8. The system of claim 7, wherein retinal vascular skeleton images are extracted separately for each retinal vascular annotation image in the raw dataset;
respectively carrying out data augmentation on each extracted retina vascular skeleton labeling image and each retina fundus image and each retina vascular labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina images, a plurality of retina blood vessel labeling images and a plurality of retina blood vessel skeleton labeling images in the training set.
9. The system of claim 7, wherein the training module is configured to:
Taking the retinal fundus image in the training set as an input image of the first UNet network;
and the output image of the last layer of the first UNet network is combined with the input image of the first UNet network to be used as the input image of the second UNet network.
10. The system of claim 7, wherein the first loss function is a weighted cross entropy loss function, the second loss function is a standard cross entropy loss function, and the third loss function is a standard cross entropy loss function.
11. The system of claim 10, wherein the training module is configured to:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
and carrying out parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
12. The system of claim 11, wherein determining the minimum of the objective loss function comprises:
calculating a gradient by adopting back propagation for the target loss function;
determining the minimum value of the target loss function by adopting a stochastic gradient descent algorithm;
As a further improvement of the invention, the last layer of the first UNet network and the last layer of the second UNet network adopt a sigmoid activation function;
the first UNet network and the second UNet network employ normal distribution initialization parameters.
CN202010688015.7A 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system Active CN112001928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010688015.7A CN112001928B (en) 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010688015.7A CN112001928B (en) 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system

Publications (2)

Publication Number Publication Date
CN112001928A CN112001928A (en) 2020-11-27
CN112001928B true CN112001928B (en) 2023-12-15

Family

ID=73468037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010688015.7A Active CN112001928B (en) 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system

Country Status (1)

Country Link
CN (1) CN112001928B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643354B (en) * 2020-09-04 2023-10-13 深圳硅基智能科技有限公司 Measuring device of vascular caliber based on fundus image with enhanced resolution
CN113344842A (en) * 2021-03-24 2021-09-03 同济大学 Blood vessel labeling method of ultrasonic image
CN113658104A (en) * 2021-07-21 2021-11-16 南方科技大学 Blood vessel image processing method, electronic device and computer-readable storage medium
CN114565620B (en) * 2022-03-01 2023-04-18 电子科技大学 Fundus image blood vessel segmentation method based on skeleton prior and contrast loss
CN114897832A (en) * 2022-05-13 2022-08-12 三峡大学 Blood vessel segmentation method for ultra-wide-angle fundus image under assistance of blood vessel type information
WO2023240319A1 (en) * 2022-06-16 2023-12-21 Eyetelligence Limited Fundus image analysis system
CN116797794B (en) * 2023-07-10 2024-06-18 北京透彻未来科技有限公司 Intestinal cancer pathology parting system based on deep learning

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087302A (en) * 2018-08-06 2018-12-25 北京大恒普信医疗技术有限公司 A kind of eye fundus image blood vessel segmentation method and apparatus
CN109118495A (en) * 2018-08-01 2019-01-01 Shenyang Neusoft Medical Systems Co., Ltd. A kind of Segmentation Method of Retinal Blood Vessels and device
CN109658422A (en) * 2018-12-04 2019-04-19 大连理工大学 A kind of retinal images blood vessel segmentation method based on multiple dimensioned deep supervision network
CN109949302A (en) * 2019-03-27 2019-06-28 天津工业大学 Retinal feature Structural Techniques based on pixel
CN110197493A (en) * 2019-05-24 2019-09-03 清华大学深圳研究生院 Eye fundus image blood vessel segmentation method
CN110443815A (en) * 2019-08-07 2019-11-12 中山大学 In conjunction with the semi-supervised retina OCT image layer dividing method for generating confrontation network
CN110652312A (en) * 2019-07-19 2020-01-07 慧影医疗科技(北京)有限公司 Blood vessel CTA intelligent analysis system and application
CN110689526A (en) * 2019-09-09 2020-01-14 北京航空航天大学 Retinal blood vessel segmentation method and system based on retinal fundus image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118495A (en) * 2018-08-01 2019-01-01 Shenyang Neusoft Medical Systems Co., Ltd. A kind of Segmentation Method of Retinal Blood Vessels and device
CN109087302A (en) * 2018-08-06 2018-12-25 北京大恒普信医疗技术有限公司 A kind of eye fundus image blood vessel segmentation method and apparatus
CN109658422A (en) * 2018-12-04 2019-04-19 大连理工大学 A kind of retinal images blood vessel segmentation method based on multiple dimensioned deep supervision network
CN109949302A (en) * 2019-03-27 2019-06-28 天津工业大学 Retinal feature Structural Techniques based on pixel
CN110197493A (en) * 2019-05-24 2019-09-03 清华大学深圳研究生院 Eye fundus image blood vessel segmentation method
CN110652312A (en) * 2019-07-19 2020-01-07 慧影医疗科技(北京)有限公司 Blood vessel CTA intelligent analysis system and application
CN110443815A (en) * 2019-08-07 2019-11-12 中山大学 In conjunction with the semi-supervised retina OCT image layer dividing method for generating confrontation network
CN110689526A (en) * 2019-09-09 2020-01-14 北京航空航天大学 Retinal blood vessel segmentation method and system based on retinal fundus image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Connection Sensitive Attention U-NET for Accurate Retinal Vessel Segmentation; Ruirui Li et al.; arXiv; full text *

Also Published As

Publication number Publication date
CN112001928A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN112001928B (en) Retina blood vessel segmentation method and system
CN109886273B (en) CMR image segmentation and classification system
CN112132817B (en) Retinal blood vessel segmentation method for fundus images based on a hybrid attention mechanism
CN106056595B (en) Computer-aided diagnosis system for automatic identification of benign and malignant thyroid nodules based on deep convolutional neural networks
CN109671094B (en) Fundus image blood vessel segmentation method based on frequency domain classification
CN101667289B (en) Retinal image segmentation method based on NSCT feature extraction and supervised classification
CN113393446B (en) Convolutional neural network medical image key point detection method based on attention mechanism
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
CN111161287A (en) Retinal vessel segmentation method based on symmetric bidirectional cascade network deep learning
CN114266794B (en) Pathological section image cancer region segmentation system based on full convolution neural network
CN115205300A (en) Fundus blood vessel image segmentation method and system based on dilated convolution and semantic fusion
CN106845551A (en) Histopathology image recognition method
CN109919915A (en) Retina fundus image abnormal region detection method and device based on deep learning
CN112712526B (en) Retinal blood vessel segmentation method based on a dual-channel asymmetric convolutional neural network
CN114565620B (en) Fundus image blood vessel segmentation method based on skeleton prior and contrast loss
CN112750132A (en) White blood cell image segmentation method based on dual-path network and channel attention
CN112598031A (en) Vegetable disease detection method and system
CN112102259A (en) Image segmentation algorithm based on boundary-guided deep learning
CN114742799A (en) Segmentation method for unknown-type defects in industrial scenes based on a self-supervised heterogeneous network
CN112884788A (en) Optic cup and optic disc segmentation method and imaging method based on a rich-context network
CN115526829A (en) Honeycomb lung lesion segmentation method and network based on ViT and context feature fusion
Zhao et al. Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
CN117351487A (en) Medical image segmentation method and system for fusing adjacent area and edge information
CN115359046B (en) Organ blood vessel segmentation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant