CN109191476A - Automatic segmentation of biomedical images based on U-net network structure - Google Patents

Automatic segmentation of biomedical images based on U-net network structure

Info

Publication number
CN109191476A
CN109191476A
Authority
CN
China
Prior art keywords
image
layer
convolutional layer
deformable
net network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811048857.5A
Other languages
Chinese (zh)
Other versions
CN109191476B (en)
Inventor
胡学刚
杨洪光
郑攀
王良晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201811048857.5A priority Critical patent/CN109191476B/en
Publication of CN109191476A publication Critical patent/CN109191476A/en
Application granted granted Critical
Publication of CN109191476B publication Critical patent/CN109191476B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Abstract

The invention belongs to the field of image processing and computer vision, and relates to automatic segmentation of biomedical images based on the U-net network structure. The method comprises: dividing a biomedical data set into a training set and a test set, and normalizing the test set and the amplified training set as preprocessing; inputting the training images into an improved U-net network model, whose output passes through a softmax layer to generate a class probability map; calculating the error between the class probability map and the gold standard with a focal loss function and obtaining the weight parameters of the network model by gradient back-propagation; inputting the test images into the trained improved U-net network model, whose output again passes through the softmax layer to generate a class probability map; and obtaining the segmentation result map of each image from the class probabilities in the class probability map. The invention addresses the problems that, during image segmentation, simple samples contribute too much to the loss function while difficult samples are learned poorly.

Description

Automatic segmentation of biomedical images based on U-net network structure
Technical field
The invention belongs to the field of image processing and computer vision, and in particular to an automatic segmentation method for biomedical images based on the U-net network structure.
Background art
Medical image segmentation is of great significance for three-dimensional localization, three-dimensional visualization, surgical planning, computer-aided diagnosis, and so on, and is one of the active research fields of image processing and analysis. Segmentation methods fall into three categories: manual, semi-automatic, and automatic. Manual segmentation is quite time-consuming, depends on subjective factors such as the clinical expert's knowledge and experience, has poor reproducibility, and cannot fully meet clinical real-time requirements. Semi-automatic segmentation adopts human-computer interaction and improves segmentation speed to a certain extent, but it still depends on the observer, which limits its application in clinical practice. Automatic segmentation extracts the edges of the region of interest entirely by computer; this kind of method completely avoids the influence of observer subjectivity, speeds up data processing, and has good reproducibility. However, in biomedicine, the complex structural variation of targets across individuals, together with the low contrast and noise introduced by various medical imaging modalities and techniques, makes medical images highly variable. Automatic segmentation of biomedical images has therefore become one of the research hotspots of current image processing.
In recent years, pixel-based and structure-based methods have made substantial progress in biomedical image segmentation. Using handcrafted features and prior knowledge, these methods achieve the desired results in some simple segmentation tasks, but they are often ineffective when applied to objects with complex variation. Recently, deep neural networks (DNNs), and especially fully convolutional networks (FCN), have proved highly effective for medical image segmentation and are regarded as the basic structure for image segmentation with deep learning. Such a network achieves segmentation through pixel-wise classification and consists of a down-sampling part and an up-sampling part: the down-sampling part is composed of convolutional layers and max-pooling layers, and the up-sampling part is composed of convolutional layers and deconvolution (transposed convolution) layers. U-net is an FCN-based medical image segmentation method comprising an encoder and a decoder, which correspond to the down-sampling and up-sampling parts of an FCN, respectively; the decoder is connected to the encoder through skip connections to fuse detailed features and thereby improve segmentation, and U-net won the ISBI 2015 cell segmentation challenge. Since then, a series of medical image segmentation methods based on the U-net structure have been proposed and successfully applied to clinical diagnosis.
In segmentation networks based on the U-net structure, the resolution of the input image becomes smaller after the encoder, and the decoder usually restores the resolution gradually, either by deconvolution or by bilinear interpolation followed by a 2 × 2 convolution, before outputting the final segmentation map. However, deconvolution requires zero padding before convolution, and bilinear interpolation cannot learn features; both degrade the performance of the decoder. The complex variation in the shape and size of targets is one of the main difficulties of biomedical image segmentation. There are usually two ways to address it. The first is to use handcrafted, spatially invariant feature transforms, such as the scale-invariant feature transform (SIFT), but this often fails when the object variation is too complex. The second relies on data augmentation together with neural networks capable of learning geometric transformations. Data augmentation increases the number of images in the data set through geometric transformations such as rotation, flipping, and scaling, but it is very time-consuming and unsuitable for targets with complex geometric variation. Spatial Transformer Networks (STNs), a convolutional neural network architecture proposed by Jaderberg et al., are robust and invariant to translation, scaling, rotation, perturbation, and bending, and have achieved good results on some small image classification tasks. STNs warp feature maps by learning global transformation parameters (such as an affine transformation), but learning global transformation parameters is difficult and very time-consuming. Deformable convolution also has the ability to learn geometric transformations; it generates new feature maps by sampling the feature map in a local and dense manner, adapting to the geometric variation of the image. Compared with STNs, deformable convolution requires less computation and is easier to train.
In addition, biomedical images often suffer from an imbalanced distribution of positive and negative samples, and samples of the same class also differ in difficulty; for example, samples near the object boundary are harder to segment than those in the central region. Both problems can cause the loss function to converge to a poor position and reduce the generalization ability of the model. The focal loss function (Focal loss) was originally applied to dense object detection to solve the problems that the samples generated by the anchor mechanism are severely imbalanced and that simple samples contribute too much to the loss function while difficult samples cannot be learned well.
Summary of the invention
In view of this, the purpose of the present invention is to provide an automatic segmentation method for biomedical images based on the U-net network structure, which uses deformable convolution to improve the feature-extraction ability of the encoder, proposes a new up-sampling method to strengthen the decoder's ability to restore resolution and fuse features, and uses a focal loss function to improve the model's ability to learn difficult samples, thereby improving the final segmentation result.
To achieve the above objectives, the present invention provides a new automatic segmentation method for biomedical images based on the U-net network structure, comprising the following steps:
S1: dividing the biomedical data set into a training set and a test set, performing data amplification on the training set, and normalizing the test set and the amplified training set as preprocessing;
S2: inputting the images of the training set into an improved U-net network model; the output passes through a softmax layer to generate a class probability map with 2 channels and the same resolution as the input image;
S3: calculating the error between the class probability map and the gold standard with the focal loss function, and obtaining the weight parameters of the improved U-net network model by gradient back-propagation;
S4: inputting the images of the test set into the improved U-net network model trained in S3; the output passes through the softmax layer to generate a class probability map;
S5: according to the class probabilities in the class probability map, taking the class with the highest probability as the class of each pixel position to obtain the segmentation result map of the image.
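For illustration only, step S5 can be realized with a per-pixel arg-max over the two-channel class probability map; the following minimal Python sketch is not part of the claimed method, only an example of the operation:

```python
import numpy as np

def probability_map_to_segmentation(prob_map):
    # prob_map: (h, w, 2) softmax output; channels correspond to the two classes.
    # The class with the highest probability at each pixel becomes its label.
    return np.argmax(prob_map, axis=-1).astype(np.uint8)
```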
Further, step S1 specifically includes:
S11: rotating the image data in the training set by an angle in (-20°, 20°) and cropping the largest rectangle from the rotated image data;
S12: flipping the rotated image data vertically and horizontally, each with a probability of 80%, and then proceeding to step S13;
S13: applying elastic distortion to the image data with a probability of 80%, and then proceeding to step S14;
S14: scaling the image data within the range (50%, 80%), completing the data amplification;
S15: calculating the mean and standard deviation of the image data in the test set and the amplified training set, and adjusting the image contrast with the contrast normalization formula, where the contrast normalization formula is expressed as:
I = (I - Mean) / Std;
where I denotes the contrast of the image, Mean denotes the mean of the image data, and Std denotes the standard deviation of the image data.
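A minimal sketch of the contrast normalization of S15 is given below. Note that the method computes Mean and Std over the whole test set and amplified training set; in this sketch, per-image statistics are used as a stand-in when data-set statistics are not supplied:

```python
import numpy as np

def contrast_normalize(image, mean=None, std=None):
    # mean/std: data-set statistics (step S15); if omitted, per-image
    # statistics are used here purely for illustration.
    mean = image.mean() if mean is None else mean
    std = image.std() if std is None else std
    return (image - mean) / (std + 1e-8)   # I = (I - Mean) / Std
```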
Preferably, the improved U-net network model consists of a deformable encoder and a decoder network with a reconstruct up-sampling structure. The deformable encoder is composed, in order, of an input layer, a first deformable convolutional layer, a second deformable convolutional layer, a first max-pooling layer, a third deformable convolutional layer, a fourth deformable convolutional layer, a second max-pooling layer, a fifth deformable convolutional layer, a sixth deformable convolutional layer, a third max-pooling layer, a seventh deformable convolutional layer, an eighth deformable convolutional layer, a fourth max-pooling layer, and a ninth deformable convolutional layer. The decoder network with the reconstruct up-sampling structure includes a first conventional convolutional layer, a first reconstruct up-sampling layer, a second conventional convolutional layer, a third conventional convolutional layer, a second reconstruct up-sampling layer, a fourth conventional convolutional layer, a fifth conventional convolutional layer, a third reconstruct up-sampling layer, a sixth conventional convolutional layer, a seventh conventional convolutional layer, a fourth reconstruct up-sampling layer, an eighth conventional convolutional layer, a ninth conventional convolutional layer, and a tenth conventional convolutional layer which is the output layer. The first conventional convolutional layer is connected to the ninth deformable convolutional layer; the first reconstruct up-sampling layer is concatenated with the eighth deformable convolutional layer, the second reconstruct up-sampling layer with the sixth deformable convolutional layer, the third reconstruct up-sampling layer with the fourth deformable convolutional layer, and the fourth reconstruct up-sampling layer with the second deformable convolutional layer. Group normalization is added before the activation function of each deformable convolutional layer and each conventional convolutional layer.
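The wiring of this network can be sketched as follows in TensorFlow/Keras. This is only an illustrative assumption of how the structure of Fig. 2 could be assembled: a plain 3 × 3 convolution stands in for the deformable convolution and a 1 × 1 convolution followed by nearest-neighbour up-sampling stands in for the reconstruct up-sampling layer (both are specified later in this description), group normalization is omitted, and an input size of 512 × 512 × 1 is assumed:

```python
import tensorflow as tf

def improved_unet(input_shape=(512, 512, 1), n_classes=2):
    # Placeholder layers; they stand in for the deformable convolution and
    # reconstruct up-sampling layers described later in this specification.
    def deformable_conv(x, filters):
        return tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    def reconstruct_upsample(x):
        c = x.shape[-1]
        x = tf.keras.layers.Conv2D(c // 2, 1)(x)          # halve the channels
        return tf.keras.layers.UpSampling2D(2)(x)         # double the resolution

    inp = tf.keras.Input(shape=input_shape)
    x, skips, f = inp, [], 16
    for _ in range(4):                                    # C1..C8 with pools P1..P4
        x = deformable_conv(x, f)
        x = deformable_conv(x, f)
        skips.append(x)
        x = tf.keras.layers.MaxPool2D(2)(x)
        f *= 2
    x = deformable_conv(x, f)                             # C9 (256 kernels)
    x = tf.keras.layers.Conv2D(f, 3, padding="same", activation="relu")(x)   # C10
    for skip in reversed(skips):                          # R1..R4 with skip concatenations
        x = reconstruct_upsample(x)
        x = tf.keras.layers.Concatenate()([x, skip])
        f //= 2
        x = tf.keras.layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2D(n_classes, 3, padding="same", activation="softmax")(x)  # C19 + softmax
    return tf.keras.Model(inp, out)
```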
Preferably, the operation of a deformable convolutional layer includes:
S21: inputting a feature map of size h × w × c into the deformable convolutional layer and convolving it with a convolutional layer whose activation function is ELU;
S22: inputting the convolution result of S21 into a convolutional layer with a tanh activation function for a further convolution operation;
S23: reshaping the convolution result of S22 to generate an offset field of size 3h × 3w × 2;
S24: performing bilinear interpolation on the feature map using the offset field to generate a 3h × 3w × c feature map;
S25: inputting the 3h × 3w × c feature map into a 3 × 3 convolutional layer with d convolution kernels and a stride of 3 to obtain an h × w × d feature map, which is the output of the deformable convolution.
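The sampling at the heart of S23-S25 can be sketched in NumPy as follows. This single-channel, single-image sketch is an illustration under stated assumptions: the offset-predicting convolutions of S21-S22 are taken as given (their 3h × 3w × 2 output is passed in), and the final d-kernel, stride-3 convolution of S25 is not shown:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    # Bilinearly interpolate img (h, w) at fractional coordinates (ys, xs).
    h, w = img.shape
    ys = np.clip(ys, 0, h - 1)
    xs = np.clip(xs, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.clip(y0 + 1, 0, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.clip(x0 + 1, 0, w - 1)
    wy, wx = ys - y0, xs - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def deformable_sampling(feat, offsets):
    # feat: (h, w) feature map; offsets: (3h, 3w, 2) offset field, values in (-1, 1).
    # Returns the (3h, 3w) resampled map that the stride-3 convolution of S25
    # then reduces back to an h x w output.
    h, w = feat.shape
    ys, xs = np.meshgrid(np.arange(3 * h), np.arange(3 * w), indexing="ij")
    base_y = ys // 3 + (ys % 3 - 1)            # p0 + pn, pn in {-1, 0, 1}
    base_x = xs // 3 + (xs % 3 - 1)
    samp_y = base_y + offsets[..., 0]          # p0 + pn + delta_pn (y direction)
    samp_x = base_x + offsets[..., 1]          # p0 + pn + delta_pn (x direction)
    return bilinear_sample(feat, samp_y, samp_x)
```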
Preferably, the operation of a reconstruct up-sampling layer includes:
S31: for a feature map with resolution h × w and c channels, first doubling its number of channels with 2c convolutions of size 1 × 1;
S32: passing the output of S31 through group normalization and a ReLU activation function to obtain an h × w × 2c feature map;
S33: dividing the feature map obtained in S32 into c/2 parts of size h × w × 4 each, and performing reconstruct up-sampling on each part to finally generate a 2h × 2w × c/2 feature map, thereby completing an up-sampling process in which the resolution is doubled and the number of channels is halved.
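A minimal TensorFlow sketch of S31-S33 follows. The group count of the group normalization is an assumption (the method only specifies that group normalization precedes the activation), tf.keras.layers.GroupNormalization requires a recent Keras version, and tf.nn.depth_to_space realizes the per-group pixel reconstruction up to the ordering of the channel groups:

```python
import tensorflow as tf

def reconstruct_upsample(x, groups=8):
    # x: (batch, h, w, c) -> (batch, 2h, 2w, c/2)
    c = x.shape[-1]
    # S31: 1x1 convolution doubles the channel count (learnable, no zero padding).
    x = tf.keras.layers.Conv2D(2 * c, 1)(x)
    # S32: group normalization followed by ReLU.
    x = tf.keras.layers.GroupNormalization(groups=groups)(x)
    x = tf.nn.relu(x)
    # S33: every 4 channels are reconstructed into a 2x2 spatial block,
    # doubling the resolution and halving the number of channels.
    return tf.nn.depth_to_space(x, block_size=2)
```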
Preferably, the focal loss function Lfocal is expressed as:
Lfocal = -Σx∈Ω [ α·y(x)·(1 - p(x))^γ·log p(x) + (1 - α)·(1 - y(x))·p(x)^γ·log(1 - p(x)) ]
where α is a constant factor for dealing with class imbalance; γ is a parameter greater than 0 that controls the gap between the contributions of easy and hard samples to the loss function; y(x) denotes the input feature map (the gold-standard label at pixel x); and p(x) denotes the predicted value at pixel x. For a positive sample, a larger p(x) indicates a simpler sample, and the corresponding (1 - p(x))^γ is smaller, reducing its contribution to the loss function; a smaller p(x) indicates a difficult sample, and the corresponding (1 - p(x))^γ is larger, increasing its share of the loss function.
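For illustration, the focal loss for a binary probability map can be sketched as follows, assuming the standard form proposed by Lin et al.; the concrete values of α and γ are not fixed by the method and are shown here only as common defaults:

```python
import tensorflow as tf

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    # p: predicted foreground probability at each pixel (softmax output)
    # y: gold-standard label map (1 = foreground, 0 = background)
    y = tf.cast(y, p.dtype)
    p = tf.clip_by_value(p, eps, 1.0 - eps)
    pos = -alpha * y * (1.0 - p) ** gamma * tf.math.log(p)              # hard positives weighted up
    neg = -(1.0 - alpha) * (1.0 - y) * p ** gamma * tf.math.log(1.0 - p)
    return tf.reduce_sum(pos + neg)
```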
The beneficial effects of the present invention are:
1) Considering that biological targets often undergo non-rigid deformation, the present invention adopts elastic distortion for data amplification in addition to rotation and flipping, which greatly increases the number of training images;
2) The present invention uses the U-net fully convolutional network structure, which can learn from data and is an image-to-image segmentation network; at the same time, the network fuses detailed information with global information, thereby improving the segmentation result;
3) Because the conventional convolution used in the encoder has a fixed geometric structure that limits its ability to learn geometric deformation, the present invention uses deformable convolution, whose structure can change adaptively according to the data;
4) Against the shortcomings of the up-sampling methods used in the decoder, such as the zero padding required before convolution in deconvolution and the non-learnability of bilinear interpolation, the present invention proposes reconstruct up-sampling convolution, which needs no zero padding and is learnable;
5) Biomedical images often have an imbalanced distribution of positive and negative samples, and samples of the same class differ in difficulty (for example, samples at the object boundary are harder to segment than those in the central region); by using the focal loss function, the present invention alleviates the problems that simple samples contribute too much to the loss function and that difficult samples cannot be learned well.
Detailed description of the invention
To make the purpose, technical scheme, and beneficial effects of the present invention clearer, the following drawings are provided:
Fig. 1 is a flow diagram of the automatic biomedical image segmentation method based on the U-net network structure according to the present invention;
Fig. 2 is a schematic diagram of the improved U-net network structure of the present invention;
Fig. 3 is a schematic diagram of the deformable convolution of the present invention;
Fig. 4 is a schematic diagram of the reconstruct up-sampling structure of the present invention;
Fig. 5 is one of the 30 slices of the training-set cell images used by the present invention;
Fig. 6 is a schematic diagram of the gold standard corresponding to the cell image in Fig. 5;
Fig. 7 is a schematic diagram of the 18th slice of the test set of the present invention;
Fig. 8 is the cell segmentation result obtained with the method of the present invention.
Specific embodiment
The automatic segmentation method for biomedical images based on the U-net network structure according to the present invention is described in further detail below with reference to the accompanying drawings.
The present invention provides an automatic segmentation method for biomedical images based on the U-net network structure, as shown in Fig. 1, comprising the following steps:
S1: dividing the biomedical data set into a training set and a test set, performing data amplification on the training set, and normalizing the test set and the amplified training set as preprocessing;
S2: inputting the images of the training set into the improved U-net network model; the output passes through a softmax layer to generate a class probability map with 2 channels and the same resolution as the input image;
S3: calculating the error between the class probability map and the gold standard with the focal loss function, and obtaining the weight parameters of the improved U-net network model by gradient back-propagation;
S4: inputting the images of the test set into the improved U-net network model trained in S3; the output passes through the softmax layer to generate a class probability map;
S5: according to the class probabilities in the class probability map, taking the class with the highest probability as the class of each pixel position to obtain the segmentation result map of the image.
Since a deep neural network needs a large amount of data for training and the morphological variation of biological targets is complex, data amplification of the training set is required. It comprises the following steps (a brief illustrative sketch follows the list):
S11: rotating the image data in the training set by an angle in (-20°, 20°) and cropping the largest rectangle from the rotated image data;
S12: flipping the rotated image data vertically and horizontally, each with a probability of 80%, and then proceeding to step S13;
S13: applying elastic distortion to the image data with a probability of 80%, and then proceeding to step S14;
S14: scaling the image data within the range (50%, 80%), completing the data amplification.
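A minimal NumPy/SciPy sketch of S11-S14 for a 2-D grayscale image and its label map is given below; the elastic-distortion parameters (sigma, alpha) are assumptions not fixed by the method, and the crop of the largest inscribed rectangle after rotation is omitted:

```python
import numpy as np
from scipy.ndimage import rotate, zoom, gaussian_filter, map_coordinates

def augment(image, label, rng=np.random):
    # S11: rotation by an angle in (-20, 20) degrees.
    angle = rng.uniform(-20, 20)
    image = rotate(image, angle, reshape=False, order=1)
    label = rotate(label, angle, reshape=False, order=0)

    # S12: vertical and horizontal flips, each with probability 0.8.
    if rng.rand() < 0.8:
        image, label = np.flipud(image), np.flipud(label)
    if rng.rand() < 0.8:
        image, label = np.fliplr(image), np.fliplr(label)

    # S13: elastic distortion with probability 0.8 (sigma/alpha are assumed values).
    if rng.rand() < 0.8:
        sigma, alpha = 10.0, 100.0
        dy = gaussian_filter(rng.rand(*image.shape) * 2 - 1, sigma) * alpha
        dx = gaussian_filter(rng.rand(*image.shape) * 2 - 1, sigma) * alpha
        ys, xs = np.meshgrid(np.arange(image.shape[0]),
                             np.arange(image.shape[1]), indexing="ij")
        coords = (ys + dy, xs + dx)
        image = map_coordinates(image, coords, order=1)
        label = map_coordinates(label, coords, order=0)

    # S14: random scaling between 50% and 80% of the original size.
    scale = rng.uniform(0.5, 0.8)
    image = zoom(image, scale, order=1)
    label = zoom(label, scale, order=0)
    return image, label
```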
Due to factors such as the equipment and imaging conditions, images in the data set may have uneven brightness, or the biological targets may be too bright or too dark, so that the contrast is low; therefore, the images are normalized. While improving the contrast between target and background, normalization largely preserves the characteristics of the biological targets themselves, so that they can be segmented by the subsequent deep neural network. The normalization is as follows:
S15: calculating the mean and standard deviation of the image data in the test set and the amplified training set, and adjusting the image contrast with the contrast normalization formula, where the contrast normalization formula is expressed as:
I = (I - Mean) / Std;
where I denotes the contrast of the image, Mean denotes the mean of the image data, and Std denotes the standard deviation of the image data; in this embodiment, the image data refers to the contrast of the images in the data set.
The focal loss function Lfocal is expressed as:
Lfocal = -Σx∈Ω [ α·y(x)·(1 - p(x))^γ·log p(x) + (1 - α)·(1 - y(x))·p(x)^γ·log(1 - p(x)) ]
where α is a constant factor for dealing with class imbalance; γ is a parameter greater than 0 that controls the gap between the contributions of easy and hard samples to the loss function; y(x) denotes the input feature map (the gold-standard label at pixel x); and p(x) denotes the predicted value at pixel x. For a positive sample, a larger p(x) indicates a simpler sample, and the corresponding (1 - p(x))^γ is smaller, reducing its contribution to the loss function; a smaller p(x) indicates a difficult sample, and the corresponding (1 - p(x))^γ is larger, increasing its share of the loss function.
The improved U-net network structure is shown in Fig. 2, where C1-C9 correspond to the first to ninth deformable convolutional layers, P1-P4 to the first to fourth max-pooling layers, R1-R4 to the first to fourth reconstruct up-sampling layers, and C10-C19 to the first to tenth conventional convolutional layers; note that the number of convolution kernels of the tenth conventional convolutional layer equals the number of classes. The left part of Fig. 2 (from the input layer to C9) is the encoder, and the white boxes represent the feature maps of the encoder. The normalized image is taken as input; the first two deformable convolutional layers C1 and C2 have 16 convolution kernels each; every max-pooling layer halves the resolution of the feature map, and the two deformable convolutional layers following a pooling layer have twice as many convolution kernels as the two deformable convolutional layers preceding it. Deformable convolution can be expressed by the following formula:
y(p0) = Σ(pn∈R) w(pn)·x(p0 + pn + Δpn)
where y(p0) denotes the value at pixel p0 of the output feature map y, x(p0 + pn + Δpn) denotes the value at position p0 + pn + Δpn of the input feature map x, w(pn) denotes the convolution kernel weight at position pn, pn denotes the convolution displacement parameter whose domain R is expressed as R = {(-1,-1), (-1,0), (-1,1), (0,-1), (0,0), (0,1), (1,-1), (1,0), (1,1)}, and Δpn is the offset, usually a fractional number, so x(p0 + pn + Δpn) is obtained by bilinear interpolation.
The structure of the deformable convolution is shown in Fig. 3. The deformable convolution operates on the input feature map through additional convolutions. An input feature map of size h × w × c first passes through two convolutional layers, each with 18 convolution kernels. The activation function of the first convolutional layer is the exponential linear unit (Exponential Linear Unit, ELU); unlike the rectified linear unit (ReLU), it still produces an output when the input is negative, so more information is retained and passed to the second convolutional layer. The activation function of the second 3 × 3 convolutional layer is replaced by the hyperbolic tangent function (tanh), so that its output is mapped to (-1, 1), corresponding to offsets in (-1, 1).
The new feature map obtained by the second convolutional layer is then reshaped into an offset field of size 3h × 3w × 2, whose two channels correspond to the x-coordinate and y-coordinate offsets respectively (each 3 × 3 region represents the offsets Δpn of one deformable convolution). The offset field is added to the regular grid to obtain p0 + pn + Δpn, and bilinear interpolation is performed on the input feature map, so that its resolution is enlarged three times; a convolution with d kernels and a stride of 3 then generates the output feature map, where d is the number of deformable convolution kernels in Fig. 2. Deformable convolution can adapt to the geometric variation of the image by changing its receptive field adaptively according to the data, and the added computation is very small. During training, the additional convolution kernels that produce the offset field and the convolution kernels that produce the output feature map are learned simultaneously; to learn the offsets, gradients are back-propagated through the bilinear interpolation formula.
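For illustration, the reshaping of the h × w × 18 output of the second convolution into the 3h × 3w × 2 offset field can be sketched as follows; the exact memory layout of the 18 channels (which channel holds which of the nine offset pairs) is an assumption:

```python
import numpy as np

def offsets_to_field(off):
    # off: (h, w, 18) output of the offset-predicting convolutions, values in (-1, 1).
    h, w, _ = off.shape
    off = off.reshape(h, w, 3, 3, 2)       # (h, w, ky, kx, (dy, dx))
    off = off.transpose(0, 2, 1, 3, 4)     # interleave the 3x3 window rows with the map rows
    return off.reshape(3 * h, 3 * w, 2)    # offset field aligned with the 3h x 3w sampling grid
```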
The right part of Fig. 2 (from C10 to C19) is the decoder, and the grey boxes represent the feature maps generated by the decoder. The output of C9 is taken as input; the number of convolution kernels of C10 is 256, the same as C9, and each reconstruct up-sampling layer doubles the resolution of the feature map. The structure of the reconstruct up-sampling method is shown in Fig. 4: for a feature map with resolution h × w and c channels, its number of channels is first doubled by 2c convolutions of size 1 × 1, and group normalization followed by the ReLU activation function then yields a new h × w × 2c feature map. This feature map is divided into c/2 parts (indicated with different grey levels in Fig. 4), and a reconstruction operation is applied to each part, so that each h × w × 4 part becomes 2h × 2w × 1, finally generating a 2h × 2w × c/2 feature map, i.e., the up-sampled feature map; this completes an up-sampling process in which the resolution is doubled and the number of channels is halved. The reconstruct up-sampling method predicts the value of every pixel of the up-sampled feature map by convolving directly on the input feature map; it is learnable and needs no zero padding, so it is more effective than up-sampling by deconvolution or bilinear interpolation. In addition, the 1 × 1 convolution is lightweight, so both the number of parameters and the amount of computation of this method are smaller than those of the other two methods.
The output of C19 is converted into the class probability map by the softmax function.
Unless otherwise specified, all convolutions used in the present invention are 3 × 3 with a stride of 1 and a ReLU activation function; all convolutional layers use zero padding so that the output feature map keeps the same resolution.
During training, the error between the class probability map and the gold standard is calculated by the focal loss function; the weight parameters of the model are updated by gradient back-propagation with the Adam optimization algorithm, finally yielding the converged model of the neural network.
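For illustration, such a training step could be sketched as follows with TensorFlow; improved_unet() and focal_loss() refer to the sketches given earlier in this description and are assumptions, not the only possible implementation:

```python
import tensorflow as tf

model = improved_unet()                                     # network of Fig. 2 (sketched above)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)    # Adam optimizer

@tf.function
def train_step(images, gold_standard):
    with tf.GradientTape() as tape:
        prob_map = model(images, training=True)             # softmax class-probability map
        loss = focal_loss(prob_map[..., 1], gold_standard)  # error w.r.t. the gold standard
    grads = tape.gradient(loss, model.trainable_variables)  # gradient back-propagation
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```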
During testing, the trained network model is loaded and a normalized test image of arbitrary size is input; the output of the improved U-net network structure passes through the softmax layer to generate the class probability map.
Embodiment 1
In this embodiment, the TensorFlow open-source deep learning library is used, with an NVIDIA Tesla M40 GPU for acceleration. The model is trained with the Adam optimization algorithm, an initial learning rate of 0.001, a "poly" learning-rate decay strategy, and L2 regularization (decay factor 0.0005) to reduce over-fitting. The Drosophila EM data set provided by the ISBI 2012 electron microscopy cell segmentation challenge is used for the experiments.
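For illustration, the "poly" learning-rate decay and the L2 regularization of this embodiment could be configured as in the sketch below; the decay power of 0.9 and the number of steps are assumptions, since the embodiment only names the policy and the initial rate of 0.001:

```python
import tensorflow as tf

class PolyDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    # lr = base_lr * (1 - step / max_steps) ** power
    def __init__(self, base_lr=0.001, max_steps=20000, power=0.9):
        self.base_lr, self.max_steps, self.power = base_lr, max_steps, power
    def __call__(self, step):
        frac = tf.cast(step, tf.float32) / float(self.max_steps)
        return self.base_lr * (1.0 - frac) ** self.power

optimizer = tf.keras.optimizers.Adam(learning_rate=PolyDecay())
regularizer = tf.keras.regularizers.l2(0.0005)   # L2 regularization, decay factor 0.0005
```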
The training data set of this embodiment consists of 30 serial electron-microscope sections of the central nervous system of a first-instar Drosophila larva; each section contains 512 × 512 pixels and has a corresponding segmentation gold standard. As shown in Figs. 5-6, in the gold-standard image white represents cells and black represents cell membranes; the test set consists of another 30 images. Since deep learning requires a large amount of training data, data enhancement methods such as random flipping, rotation, and elastic distortion are used here to increase the number of training images.
In this embodiment, the two evaluation indexes proposed by the ISBI challenge organizers, VRand and VInfo, are used; the closer these indexes are to 1, the more accurate the segmentation. As can be seen from Table 1, compared with U-net the method of the present invention reduces the number of model parameters by 30M while both VRand and VInfo are improved; the VRand of this method is 0.57% higher than that of U-net, reaching 97.84%. Moreover, the focal loss function, unlike the weighted loss function, does not require computing a complicated weight map, so the training time is greatly reduced.
Further, as indicated by the arrow in Fig. 7, the boundary of the nucleus is very close to the cell boundary, making the cell boundary here difficult to detect. As shown in Fig. 8, with the focal loss function the method of the present invention can classify such pixels accurately and better preserve the continuity of cell boundaries, which is very important in cell segmentation.
Table 1 Experimental comparison between the method herein and the U-net method on the EM data set
Table 2 compares the method herein with some of the best results on the ISBI cell segmentation challenge leaderboard. Among them, the M2FCN method uses a multi-stage network structure and its training process is extremely complex; the other methods all use post-processing or averaging over multiple trained models to improve the segmentation result. In terms of the index VRand, the method of the present invention performs best; in terms of VInfo, the FusionNet method is best. In fact, the post-processing methods used in IALIC and CUMedVision could also be applied to the method of the present invention to enhance segmentation performance. Meanwhile, the present invention could, like FusionNet, adopt a ResNet structure and thereby continue to increase the depth of the model and improve its robustness.
Table 2 Experimental comparison between the method of the present invention and other methods on the EM data set
Embodiment 2
In this embodiment, unlike Embodiment 1, the Warwick-QU data set provided by the GLand Segmentation (GLaS) challenge is used for the experiments. The data set contains 165 original (stained) images, each with a corresponding gold-standard image annotated by an expert; here the 85 images of the training set are used for training, and test set A and test set B are used for validation.
During training, patches of size 512 × 512 are randomly cropped from the original images (the image patch is padded with 0 when its length or width is less than 512). This not only keeps the images in each batch the same size but also serves as a form of data enhancement and reduces over-fitting. In addition, the same data enhancement methods as in the Drosophila EM experiments are used.
Here the F1 score and the object-level Hausdorff distance are used as evaluation indexes. The F1 score evaluates gland detection: a segmented individual whose overlap with its gold-standard individual is at least 50% is counted as a true positive, otherwise it is counted as a false positive; if a gold-standard individual has no overlapping segmented individual, or the overlapping area is less than 50%, it is counted as a false negative.
The Hausdorff distance measures the shape similarity between a segmented individual and the corresponding gold-standard individual; the smaller this index is, the greater the shape similarity between the segmentation result and the gold standard, and the better the segmentation.
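For reference, the symmetric Hausdorff distance between two sets of boundary points can be computed as in the following sketch; how the boundary points are extracted from a segmented individual is left out here:

```python
import numpy as np

def hausdorff_distance(a, b):
    # a, b: (n, 2) and (m, 2) arrays of boundary point coordinates.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())         # symmetric Hausdorff distance
```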
As shown in Table 3, on the two test sets the segmentation results of the present invention improve the F1 score by 0.023 and 0.027, respectively, over the results of the Freiburg method, showing that the method of the present invention is better at gland detection. Note that all methods perform much worse on test set B than on test set A, mainly because 80% of test set B consists of malignant cases, whose random and complex structures make gland detection more difficult. From Table 3, the object-level Hausdorff distance of the method of the present invention improves the most on test set A, indicating that our segmentation results have higher shape similarity with the gold standard and confirming that deformable convolution has a stronger ability to learn target deformation.
Table 3 Experimental comparison between the method of the present invention and other methods on the Warwick-QU data set
In summary, the experiments show that the present invention is not only effective but also has obvious advantages over existing methods of the same kind.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by instructing the relevant hardware through a program, and that the program can be stored in a computer-readable storage medium, which may include ROM, RAM, a magnetic disk, an optical disk, and the like.
The embodiments provided above describe the objects, technical solutions, and advantages of the present invention in further detail. It should be understood that the embodiments provided above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (6)

1. An automatic segmentation method for biomedical images based on the U-net network structure, characterized by comprising the following steps:
S1: dividing the biomedical data set into a training set and a test set, performing data amplification on the training set, and normalizing the test set and the amplified training set as preprocessing;
S2: inputting the images of the training set into an improved U-net network model, whose output passes through a softmax layer to generate a class probability map with 2 channels, the class probability map having the same resolution as the input image;
S3: calculating the error between the class probability map and the gold standard with a focal loss function, and obtaining the weight parameters of the improved U-net network model by gradient back-propagation;
S4: inputting the images of the test set into the improved U-net network model trained in S3, whose output passes through the softmax layer to generate a class probability map;
S5: according to the class probabilities in the class probability map, taking the class with the highest probability as the class of each pixel position to obtain the segmentation result map of the image.
2. The automatic segmentation method for biomedical images based on the U-net network structure according to claim 1, characterized in that step S1 specifically comprises:
S11: rotating the image data in the training set by an angle in (-20°, 20°) and cropping the largest rectangle from the rotated image data;
S12: flipping the rotated image data vertically and horizontally, each with a probability of 80%, and then proceeding to step S13;
S13: applying elastic distortion to the image data with a probability of 80%, and then proceeding to step S14;
S14: scaling the image data within the range (50%, 80%), completing the data amplification;
S15: calculating the mean and standard deviation of the image data in the test set and the amplified training set, and adjusting the image contrast with the contrast normalization formula, the contrast normalization formula being expressed as:
I = (I - Mean) / Std;
where I denotes the contrast of the image, Mean denotes the mean of the image data, and Std denotes the standard deviation of the image data.
3. The automatic segmentation method for biomedical images based on the U-net network structure according to claim 1, characterized in that the improved U-net network model consists of a deformable encoder and a decoder network with a reconstruct up-sampling structure; the deformable encoder is composed, in order, of an input layer, a first deformable convolutional layer, a second deformable convolutional layer, a first max-pooling layer, a third deformable convolutional layer, a fourth deformable convolutional layer, a second max-pooling layer, a fifth deformable convolutional layer, a sixth deformable convolutional layer, a third max-pooling layer, a seventh deformable convolutional layer, an eighth deformable convolutional layer, a fourth max-pooling layer, and a ninth deformable convolutional layer; the decoder network with the reconstruct up-sampling structure comprises a first conventional convolutional layer, a first reconstruct up-sampling layer, a second conventional convolutional layer, a third conventional convolutional layer, a second reconstruct up-sampling layer, a fourth conventional convolutional layer, a fifth conventional convolutional layer, a third reconstruct up-sampling layer, a sixth conventional convolutional layer, a seventh conventional convolutional layer, a fourth reconstruct up-sampling layer, an eighth conventional convolutional layer, a ninth conventional convolutional layer, and a tenth conventional convolutional layer which is the output layer; the first conventional convolutional layer is connected to the ninth deformable convolutional layer; the first reconstruct up-sampling layer is concatenated with the eighth deformable convolutional layer, the second reconstruct up-sampling layer is concatenated with the sixth deformable convolutional layer, the third reconstruct up-sampling layer is concatenated with the fourth deformable convolutional layer, and the fourth reconstruct up-sampling layer is concatenated with the second deformable convolutional layer; group normalization is added before the activation function of each deformable convolutional layer and each conventional convolutional layer.
4. The automatic segmentation method for biomedical images based on the U-net network structure according to claim 3, characterized in that the operation of a deformable convolutional layer comprises:
S21: inputting a feature map of size h × w × c into the deformable convolutional layer and convolving it with a convolutional layer whose activation function is ELU;
S22: inputting the convolution result of S21 into a convolutional layer with a tanh activation function for a further convolution operation;
S23: reshaping the convolution result of S22 to generate an offset field of size 3h × 3w × 2;
S24: performing bilinear interpolation on the feature map using the offset field to generate a 3h × 3w × c feature map;
S25: inputting the 3h × 3w × c feature map into a 3 × 3 convolutional layer with d convolution kernels and a stride of 3 to obtain an h × w × d feature map, which is the output of the deformable convolution.
5. The automatic segmentation method for biomedical images based on the U-net network structure according to claim 3, characterized in that the operation of a reconstruct up-sampling layer comprises:
S31: for a feature map with resolution h × w and c channels, first doubling its number of channels with 2c convolutions of size 1 × 1;
S32: passing the output of S31 through group normalization and a ReLU activation function to obtain an h × w × 2c feature map;
S33: dividing the feature map obtained in S32 into c/2 parts of size h × w × 4 each, and performing reconstruct up-sampling on each part to finally generate a 2h × 2w × c/2 feature map, thereby completing an up-sampling process in which the resolution is doubled and the number of channels is halved.
6. The automatic segmentation method for biomedical images based on the U-net network structure according to claim 1, characterized in that the focal loss function Lfocal is expressed as:
Lfocal = -Σx∈Ω [ α·y(x)·(1 - p(x))^γ·log p(x) + (1 - α)·(1 - y(x))·p(x)^γ·log(1 - p(x)) ]
where α is a constant factor for dealing with class imbalance; γ is a parameter, with γ > 0, that controls the gap between the contributions of easy and hard samples to the loss function; y(x) denotes the input feature map x; p(x) denotes the value at pixel x of the input feature map x; and Ω is the domain of the input feature map x.
CN201811048857.5A 2018-09-10 2018-09-10 Novel biomedical image automatic segmentation method based on U-net network structure Active CN109191476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811048857.5A CN109191476B (en) 2018-09-10 2018-09-10 Novel biomedical image automatic segmentation method based on U-net network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811048857.5A CN109191476B (en) 2018-09-10 2018-09-10 Novel biomedical image automatic segmentation method based on U-net network structure

Publications (2)

Publication Number Publication Date
CN109191476A true CN109191476A (en) 2019-01-11
CN109191476B CN109191476B (en) 2022-03-11

Family

ID=64915609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811048857.5A Active CN109191476B (en) 2018-09-10 2018-09-10 Novel biomedical image automatic segmentation method based on U-net network structure

Country Status (1)

Country Link
CN (1) CN109191476B (en)

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858612A (en) * 2019-01-18 2019-06-07 清华大学 A kind of adaptive deformation cavity convolution method
CN109871798A (en) * 2019-02-01 2019-06-11 浙江大学 A kind of remote sensing image building extracting method based on convolutional neural networks
CN109886967A (en) * 2019-01-16 2019-06-14 成都蓝景信息技术有限公司 Lung anatomy position location algorithms based on depth learning technology
CN109949209A (en) * 2019-03-06 2019-06-28 武汉工程大学 A kind of rope detection and minimizing technology based on deep learning
CN109949299A (en) * 2019-03-25 2019-06-28 东南大学 A kind of cardiologic medical image automatic segmentation method
CN109978037A (en) * 2019-03-18 2019-07-05 腾讯科技(深圳)有限公司 Image processing method, model training method, device and storage medium
CN110047073A (en) * 2019-05-05 2019-07-23 北京大学 A kind of X-ray weld image fault grading method and system
CN110120051A (en) * 2019-05-10 2019-08-13 上海理工大学 A kind of right ventricle automatic division method based on deep learning
CN110147794A (en) * 2019-05-21 2019-08-20 东北大学 A kind of unmanned vehicle outdoor scene real time method for segmenting based on deep learning
CN110211140A (en) * 2019-06-14 2019-09-06 重庆大学 Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function
CN110287930A (en) * 2019-07-01 2019-09-27 厦门美图之家科技有限公司 Wrinkle disaggregated model training method and device
CN110298844A (en) * 2019-06-17 2019-10-01 艾瑞迈迪科技石家庄有限公司 X-ray contrastographic picture blood vessel segmentation and recognition methods and device
CN110310280A (en) * 2019-07-10 2019-10-08 广东工业大学 Hepatic duct and the image-recognizing method of calculus, system, equipment and storage medium
CN110322435A (en) * 2019-01-20 2019-10-11 北京工业大学 A kind of gastric cancer pathological image cancerous region dividing method based on deep learning
CN110378913A (en) * 2019-07-18 2019-10-25 深圳先进技术研究院 Image partition method, device, equipment and storage medium
CN110490840A (en) * 2019-07-11 2019-11-22 平安科技(深圳)有限公司 A kind of cell detection method, device and the equipment of glomerulus pathology sectioning image
CN110517267A (en) * 2019-08-02 2019-11-29 Oppo广东移动通信有限公司 A kind of image partition method and device, storage medium
CN110598711A (en) * 2019-08-31 2019-12-20 华南理工大学 Target segmentation method combined with classification task
CN110675408A (en) * 2019-09-19 2020-01-10 成都数之联科技有限公司 High-resolution image building extraction method and system based on deep learning
CN110751175A (en) * 2019-09-12 2020-02-04 上海联影智能医疗科技有限公司 Method and device for optimizing loss function, computer equipment and storage medium
CN110852316A (en) * 2019-11-07 2020-02-28 中山大学 Image tampering detection and positioning method adopting convolution network with dense structure
CN110889859A (en) * 2019-11-11 2020-03-17 珠海上工医信科技有限公司 U-shaped network for fundus image blood vessel segmentation
CN110969630A (en) * 2019-11-11 2020-04-07 东北大学 Ore bulk rate detection method based on RDU-net network model
CN110992373A (en) * 2019-11-25 2020-04-10 杭州电子科技大学 Deep learning-based thoracic organ segmentation method
CN111047589A (en) * 2019-12-30 2020-04-21 北京航空航天大学 Attention-enhanced brain tumor auxiliary intelligent detection and identification method
CN111062880A (en) * 2019-11-15 2020-04-24 南京工程学院 Underwater image real-time enhancement method based on condition generation countermeasure network
CN111080603A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for detecting breakage fault of shaft end bolt of railway wagon
CN111145170A (en) * 2019-12-31 2020-05-12 电子科技大学 Medical image segmentation method based on deep learning
CN111161273A (en) * 2019-12-31 2020-05-15 电子科技大学 Medical ultrasonic image segmentation method based on deep learning
CN111179275A (en) * 2019-12-31 2020-05-19 电子科技大学 Medical ultrasonic image segmentation method
CN111260619A (en) * 2020-01-14 2020-06-09 浙江中医药大学 Tongue body automatic segmentation method based on U-net model
CN111275712A (en) * 2020-01-15 2020-06-12 浙江工业大学 Residual semantic network training method oriented to large-scale image data
CN111340816A (en) * 2020-03-23 2020-06-26 沈阳航空航天大学 Image segmentation method based on double-U-shaped network framework
CN111414788A (en) * 2019-09-23 2020-07-14 中国矿业大学 Overlapped chromosome segmentation method based on deformable U-shaped network
CN111461165A (en) * 2020-02-26 2020-07-28 上海商汤智能科技有限公司 Image recognition method, recognition model training method, related device and equipment
CN111563439A (en) * 2020-04-28 2020-08-21 北京海益同展信息科技有限公司 Aquatic organism disease detection method, device and equipment
CN111583291A (en) * 2020-04-20 2020-08-25 中山大学 Layer segmentation method and system for retina layer and effusion region based on deep learning
CN111583285A (en) * 2020-05-12 2020-08-25 武汉科技大学 Liver image semantic segmentation method based on edge attention strategy
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN111709293A (en) * 2020-05-18 2020-09-25 杭州电子科技大学 Chemical structural formula segmentation method based on Resunet neural network
CN111724399A (en) * 2020-06-24 2020-09-29 北京邮电大学 Image segmentation method and terminal
CN111724371A (en) * 2020-06-19 2020-09-29 联想(北京)有限公司 Data processing method and device and electronic equipment
WO2020199528A1 (en) * 2019-04-01 2020-10-08 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN111862136A (en) * 2020-06-22 2020-10-30 南开大学 Multi-modal nuclear magnetic image ischemic stroke lesion segmentation method based on convolutional neural network
CN111931805A (en) * 2020-06-23 2020-11-13 西安交通大学 Knowledge-guided CNN-based small sample similar abrasive particle identification method
CN112102323A (en) * 2020-09-17 2020-12-18 陕西师范大学 Adherent nucleus segmentation method based on generation of countermeasure network and Caps-Unet network
CN112101362A (en) * 2020-08-25 2020-12-18 中国科学院空间应用工程与技术中心 Semantic segmentation method and system for space science experimental data
CN112102229A (en) * 2020-07-23 2020-12-18 西安交通大学 Intelligent industrial CT detection defect identification method based on deep learning
CN112132843A (en) * 2020-09-30 2020-12-25 福建师范大学 Hematoxylin-eosin staining pathological image segmentation method based on unsupervised deep learning
CN112164035A (en) * 2020-09-15 2021-01-01 郑州金惠计算机***工程有限公司 Image-based defect detection method and device, electronic equipment and storage medium
CN112464579A (en) * 2021-02-02 2021-03-09 四川大学 Identification modeling method for searching esophageal cancer lesion area based on evolutionary neural network structure
CN112489062A (en) * 2020-12-10 2021-03-12 中国科学院苏州生物医学工程技术研究所 Medical image segmentation method and system based on boundary and neighborhood guidance
CN112750137A (en) * 2021-01-14 2021-05-04 江南大学 Liver tumor segmentation method and system based on deep learning
CN112767259A (en) * 2020-12-29 2021-05-07 上海联影智能医疗科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112801996A (en) * 2021-02-05 2021-05-14 强联智创(北京)科技有限公司 Grading method, grading device and grading equipment
CN112861868A (en) * 2021-02-10 2021-05-28 广东众聚人工智能科技有限公司 Image segmentation method and system
CN112907708A (en) * 2021-02-05 2021-06-04 深圳瀚维智能医疗科技有限公司 Human face cartoon method, equipment and computer storage medium
CN113129321A (en) * 2021-04-20 2021-07-16 重庆邮电大学 Turbine blade CT image segmentation method based on full convolution neural network
CN113116305A (en) * 2021-04-20 2021-07-16 深圳大学 Nasopharyngeal endoscope image processing method and device, electronic equipment and storage medium
CN113327258A (en) * 2021-07-15 2021-08-31 重庆邮电大学 Lung CT image identification method based on deep learning
CN113706451A (en) * 2021-07-07 2021-11-26 杭州脉流科技有限公司 Method, device, system and computer-readable storage medium for intracranial aneurysm identification detection
CN113706464A (en) * 2021-07-22 2021-11-26 西安交通大学 Printed matter appearance quality detection method and system
WO2022001372A1 (en) * 2020-06-30 2022-01-06 华为技术有限公司 Neural network training method and apparatus, and image processing method and apparatus
CN113949867A (en) * 2020-07-16 2022-01-18 武汉Tcl集团工业研究院有限公司 Image processing method and device
CN114155372A (en) * 2021-12-03 2022-03-08 长春工业大学 Deep learning-based structured light weld curve identification and fitting method
CN114596319A (en) * 2022-05-10 2022-06-07 华南师范大学 Medical image segmentation method based on Boosting-Unet segmentation network
CN114708255A (en) * 2022-04-29 2022-07-05 浙江大学 Multi-center children X-ray chest image lung segmentation method based on TransUNet model
CN115131364A (en) * 2022-08-26 2022-09-30 中加健康工程研究院(合肥)有限公司 Method for segmenting medical image based on Transformer
CN116012388A (en) * 2023-03-28 2023-04-25 中南大学 Three-dimensional medical image segmentation method and imaging method for acute ischemic cerebral apoplexy
CN116152211A (en) * 2023-02-28 2023-05-23 哈尔滨市科佳通用机电股份有限公司 Identification method for brake shoe abrasion overrun fault

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920227A (en) * 2016-12-27 2017-07-04 北京工业大学 Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
CN107016665A (en) * 2017-02-16 2017-08-04 浙江大学 A kind of CT pulmonary nodule detection methods based on depth convolutional neural networks
CN108154196A (en) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 For exporting the method and apparatus of image
CN108346145A (en) * 2018-01-31 2018-07-31 浙江大学 The recognition methods of unconventional cell in a kind of pathological section

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIFENG DAI et al.: "Deformable Convolutional Networks", 2017 IEEE International Conference on Computer Vision (ICCV) *
MO ZHANG et al.: "Image Segmentation and Classification for Sickle Cell Disease Using Deformable U-Net", arXiv *
TSUNG-YI LIN et al.: "Focal Loss for Dense Object Detection", 2017 IEEE International Conference on Computer Vision (ICCV) *
周鲁科 et al.: "Research on lung tumor image segmentation algorithm based on the U-net network" (in Chinese), 信息与电脑 (Information & Computer) *

Cited By (96)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886967A (en) * 2019-01-16 2019-06-14 成都蓝景信息技术有限公司 Lung anatomy position location algorithms based on depth learning technology
CN109858612A (en) * 2019-01-18 2019-06-07 清华大学 A kind of adaptive deformation cavity convolution method
CN110322435A (en) * 2019-01-20 2019-10-11 北京工业大学 A kind of gastric cancer pathological image cancerous region dividing method based on deep learning
CN109871798A (en) * 2019-02-01 2019-06-11 浙江大学 A kind of remote sensing image building extracting method based on convolutional neural networks
CN109949209A (en) * 2019-03-06 2019-06-28 武汉工程大学 A kind of rope detection and minimizing technology based on deep learning
CN109949209B (en) * 2019-03-06 2022-07-19 武汉工程大学 Rope detection and removal method based on deep learning
CN109978037A (en) * 2019-03-18 2019-07-05 腾讯科技(深圳)有限公司 Image processing method, model training method, device and storage medium
CN109978037B (en) * 2019-03-18 2021-08-06 腾讯科技(深圳)有限公司 Image processing method, model training method, device and storage medium
CN109949299A (en) * 2019-03-25 2019-06-28 东南大学 A kind of cardiologic medical image automatic segmentation method
WO2020199528A1 (en) * 2019-04-01 2020-10-08 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110047073A (en) * 2019-05-05 2019-07-23 北京大学 X-ray weld image defect grading method and system
CN110120051A (en) * 2019-05-10 2019-08-13 上海理工大学 Automatic right ventricle segmentation method based on deep learning
CN110147794A (en) * 2019-05-21 2019-08-20 东北大学 Real-time outdoor scene segmentation method for unmanned vehicles based on deep learning
CN110211140A (en) * 2019-06-14 2019-09-06 重庆大学 Abdominal blood vessel segmentation method based on 3D residual U-Net and weighted loss function
CN110298844B (en) * 2019-06-17 2021-06-29 艾瑞迈迪科技石家庄有限公司 X-ray radiography image blood vessel segmentation and identification method and device
CN110298844A (en) * 2019-06-17 2019-10-01 艾瑞迈迪科技石家庄有限公司 X-ray radiography image blood vessel segmentation and identification method and device
CN110287930A (en) * 2019-07-01 2019-09-27 厦门美图之家科技有限公司 Wrinkle classification model training method and device
CN110287930B (en) * 2019-07-01 2021-08-20 厦门美图之家科技有限公司 Wrinkle classification model training method and device
CN110310280A (en) * 2019-07-10 2019-10-08 广东工业大学 Image recognition method, system, equipment and storage medium for hepatobiliary duct and calculus
CN110310280B (en) * 2019-07-10 2021-05-11 广东工业大学 Image recognition method, system, equipment and storage medium for hepatobiliary duct and calculus
CN110490840A (en) * 2019-07-11 2019-11-22 平安科技(深圳)有限公司 Cell detection method, device and equipment for glomerular pathological section images
WO2021003821A1 (en) * 2019-07-11 2021-01-14 平安科技(深圳)有限公司 Cell detection method and apparatus for a glomerular pathological section image, and device
CN110378913A (en) * 2019-07-18 2019-10-25 深圳先进技术研究院 Image segmentation method, device, equipment and storage medium
CN110517267A (en) * 2019-08-02 2019-11-29 Oppo广东移动通信有限公司 Image segmentation method and device, and storage medium
CN110598711A (en) * 2019-08-31 2019-12-20 华南理工大学 Target segmentation method combined with classification task
CN110751175A (en) * 2019-09-12 2020-02-04 上海联影智能医疗科技有限公司 Method and device for optimizing loss function, computer equipment and storage medium
CN110675408A (en) * 2019-09-19 2020-01-10 成都数之联科技有限公司 High-resolution image building extraction method and system based on deep learning
CN111414788B (en) * 2019-09-23 2023-08-11 中国矿业大学 Overlapped chromosome image segmentation method based on deformable U-shaped network
CN111414788A (en) * 2019-09-23 2020-07-14 中国矿业大学 Overlapped chromosome segmentation method based on deformable U-shaped network
CN110852316B (en) * 2019-11-07 2023-04-18 中山大学 Image tampering detection and positioning method adopting convolution network with dense structure
CN110852316A (en) * 2019-11-07 2020-02-28 中山大学 Image tampering detection and positioning method adopting convolution network with dense structure
CN110969630A (en) * 2019-11-11 2020-04-07 东北大学 Ore bulk rate detection method based on RDU-net network model
CN110889859A (en) * 2019-11-11 2020-03-17 珠海上工医信科技有限公司 U-shaped network for fundus image blood vessel segmentation
CN111062880A (en) * 2019-11-15 2020-04-24 南京工程学院 Real-time underwater image enhancement method based on conditional generative adversarial network
CN110992373B (en) * 2019-11-25 2022-04-01 杭州电子科技大学 Deep learning-based thoracic organ segmentation method
CN110992373A (en) * 2019-11-25 2020-04-10 杭州电子科技大学 Deep learning-based thoracic organ segmentation method
CN111080603A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for detecting breakage fault of shaft end bolt of railway wagon
CN111047589B (en) * 2019-12-30 2022-07-26 北京航空航天大学 Attention-enhanced brain tumor auxiliary intelligent detection and identification method
CN111047589A (en) * 2019-12-30 2020-04-21 北京航空航天大学 Attention-enhanced brain tumor auxiliary intelligent detection and identification method
CN111145170B (en) * 2019-12-31 2022-04-22 电子科技大学 Medical image segmentation method based on deep learning
CN111161273A (en) * 2019-12-31 2020-05-15 电子科技大学 Medical ultrasonic image segmentation method based on deep learning
CN111179275A (en) * 2019-12-31 2020-05-19 电子科技大学 Medical ultrasonic image segmentation method
CN111179275B (en) * 2019-12-31 2023-04-25 电子科技大学 Medical ultrasonic image segmentation method
CN111145170A (en) * 2019-12-31 2020-05-12 电子科技大学 Medical image segmentation method based on deep learning
CN111260619A (en) * 2020-01-14 2020-06-09 浙江中医药大学 Tongue body automatic segmentation method based on U-net model
CN111275712A (en) * 2020-01-15 2020-06-12 浙江工业大学 Residual semantic network training method oriented to large-scale image data
CN111461165A (en) * 2020-02-26 2020-07-28 上海商汤智能科技有限公司 Image recognition method, recognition model training method, related device and equipment
CN111340816A (en) * 2020-03-23 2020-06-26 沈阳航空航天大学 Image segmentation method based on double-U-shaped network framework
CN111583291A (en) * 2020-04-20 2020-08-25 中山大学 Layer segmentation method and system for retina layer and effusion region based on deep learning
CN111583291B (en) * 2020-04-20 2023-04-18 中山大学 Layer segmentation method and system for retina layer and effusion region based on deep learning
CN111563439A (en) * 2020-04-28 2020-08-21 北京海益同展信息科技有限公司 Aquatic organism disease detection method, device and equipment
CN111563439B (en) * 2020-04-28 2023-08-08 京东科技信息技术有限公司 Aquatic organism disease detection method, device and equipment
CN111583285A (en) * 2020-05-12 2020-08-25 武汉科技大学 Liver image semantic segmentation method based on edge attention strategy
CN111709293A (en) * 2020-05-18 2020-09-25 杭州电子科技大学 Chemical structural formula segmentation method based on Resunet neural network
CN111709293B (en) * 2020-05-18 2023-10-03 杭州电子科技大学 Chemical structural formula segmentation method based on Resunet neural network
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN111681252B (en) * 2020-05-30 2022-05-03 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN111724371A (en) * 2020-06-19 2020-09-29 联想(北京)有限公司 Data processing method and device and electronic equipment
CN111724371B (en) * 2020-06-19 2023-05-23 联想(北京)有限公司 Data processing method and device and electronic equipment
CN111862136A (en) * 2020-06-22 2020-10-30 南开大学 Multi-modal MRI ischemic stroke lesion segmentation method based on convolutional neural network
CN111931805B (en) * 2020-06-23 2022-10-28 西安交通大学 Knowledge-guided CNN-based small sample similar abrasive particle identification method
CN111931805A (en) * 2020-06-23 2020-11-13 西安交通大学 Knowledge-guided CNN-based small sample similar abrasive particle identification method
CN111724399A (en) * 2020-06-24 2020-09-29 北京邮电大学 Image segmentation method and terminal
WO2022001372A1 (en) * 2020-06-30 2022-01-06 华为技术有限公司 Neural network training method and apparatus, and image processing method and apparatus
CN113949867A (en) * 2020-07-16 2022-01-18 武汉Tcl集团工业研究院有限公司 Image processing method and device
CN113949867B (en) * 2020-07-16 2023-06-20 武汉Tcl集团工业研究院有限公司 Image processing method and device
CN112102229A (en) * 2020-07-23 2020-12-18 西安交通大学 Intelligent industrial CT detection defect identification method based on deep learning
CN112101362A (en) * 2020-08-25 2020-12-18 中国科学院空间应用工程与技术中心 Semantic segmentation method and system for space science experimental data
CN112164035A (en) * 2020-09-15 2021-01-01 郑州金惠计算机***工程有限公司 Image-based defect detection method and device, electronic equipment and storage medium
CN112102323A (en) * 2020-09-17 2020-12-18 陕西师范大学 Adherent cell nucleus segmentation method based on generative adversarial network and Caps-Unet network
CN112102323B (en) * 2020-09-17 2023-07-07 陕西师范大学 Adherent cell nucleus segmentation method based on generative adversarial network and Caps-Unet network
CN112132843A (en) * 2020-09-30 2020-12-25 福建师范大学 Hematoxylin-eosin staining pathological image segmentation method based on unsupervised deep learning
CN112132843B (en) * 2020-09-30 2023-05-19 福建师范大学 Hematoxylin-eosin staining pathological image segmentation method based on unsupervised deep learning
CN112489062B (en) * 2020-12-10 2024-01-30 中国科学院苏州生物医学工程技术研究所 Medical image segmentation method and system based on boundary and neighborhood guidance
CN112489062A (en) * 2020-12-10 2021-03-12 中国科学院苏州生物医学工程技术研究所 Medical image segmentation method and system based on boundary and neighborhood guidance
CN112767259A (en) * 2020-12-29 2021-05-07 上海联影智能医疗科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112750137A (en) * 2021-01-14 2021-05-04 江南大学 Liver tumor segmentation method and system based on deep learning
CN112464579B (en) * 2021-02-02 2021-06-01 四川大学 Identification modeling method for searching esophageal cancer lesion area based on evolutionary neural network structure
CN112464579A (en) * 2021-02-02 2021-03-09 四川大学 Identification modeling method for searching esophageal cancer lesion area based on evolutionary neural network structure
CN112907708B (en) * 2021-02-05 2023-09-19 深圳瀚维智能医疗科技有限公司 Face cartoonization method, equipment and computer storage medium
CN112801996A (en) * 2021-02-05 2021-05-14 强联智创(北京)科技有限公司 Grading method, grading device and grading equipment
CN112907708A (en) * 2021-02-05 2021-06-04 深圳瀚维智能医疗科技有限公司 Face cartoonization method, equipment and computer storage medium
CN112861868A (en) * 2021-02-10 2021-05-28 广东众聚人工智能科技有限公司 Image segmentation method and system
CN113129321A (en) * 2021-04-20 2021-07-16 重庆邮电大学 Turbine blade CT image segmentation method based on fully convolutional neural network
CN113116305A (en) * 2021-04-20 2021-07-16 深圳大学 Nasopharyngeal endoscope image processing method and device, electronic equipment and storage medium
CN113706451A (en) * 2021-07-07 2021-11-26 杭州脉流科技有限公司 Method, device, system and computer-readable storage medium for intracranial aneurysm identification and detection
CN113327258A (en) * 2021-07-15 2021-08-31 重庆邮电大学 Lung CT image identification method based on deep learning
CN113706464B (en) * 2021-07-22 2023-09-12 西安交通大学 Printed matter appearance quality detection method and system
CN113706464A (en) * 2021-07-22 2021-11-26 西安交通大学 Printed matter appearance quality detection method and system
CN114155372A (en) * 2021-12-03 2022-03-08 长春工业大学 Deep learning-based structured light weld curve identification and fitting method
CN114708255A (en) * 2022-04-29 2022-07-05 浙江大学 Multi-center children X-ray chest image lung segmentation method based on TransUNet model
CN114596319A (en) * 2022-05-10 2022-06-07 华南师范大学 Medical image segmentation method based on Boosting-Unet segmentation network
CN114596319B (en) * 2022-05-10 2022-07-26 华南师范大学 Medical image segmentation method based on Boosting-Unet segmentation network
CN115131364A (en) * 2022-08-26 2022-09-30 中加健康工程研究院(合肥)有限公司 Method for segmenting medical image based on Transformer
CN116152211A (en) * 2023-02-28 2023-05-23 哈尔滨市科佳通用机电股份有限公司 Identification method for brake shoe abrasion overrun fault
CN116012388A (en) * 2023-03-28 2023-04-25 中南大学 Three-dimensional medical image segmentation method and imaging method for acute ischemic cerebral apoplexy

Also Published As

Publication number Publication date
CN109191476B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN109191476A (en) The automatic segmentation of Biomedical Image based on U-net network structure
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN106920227B (en) Retinal blood vessel segmentation method combining deep learning with conventional methods
CN110211140B (en) Abdominal blood vessel segmentation method based on 3D residual U-Net and weighted loss function
CN106056595B (en) Auxiliary diagnosis system for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks
CN108257135A (en) Auxiliary diagnosis system based on deep learning methods for understanding medical image features
CN107437092A (en) Classification algorithm for retinal OCT images based on three-dimensional convolutional neural networks
CN109993735A (en) Image segmentation method based on concatenated convolutions
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
Zhao et al. Data-driven enhancement of blurry retinal images via generative adversarial networks
CN110647802A (en) Remote sensing image ship target detection method based on deep learning
CN115018824B (en) Colonoscope polyp image segmentation method based on CNN and Transformer fusion
CN111524144A (en) Intelligent pulmonary nodule diagnosis method based on GAN and Unet network
Wang et al. A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images
CN111899259A (en) Prostate cancer tissue microarray classification method based on convolutional neural network
Guo et al. CAFR-CNN: coarse-to-fine adaptive faster R-CNN for cross-domain joint optic disc and cup segmentation
CN109242879A (en) Brain glioma MRI image segmentation method based on deep convolutional neural networks
CN114913433A (en) Multi-scale target detection method combining equalization feature and deformable convolution
CN112734769B (en) Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
CN108447066A (en) Biliary tract image segmentation method, terminal and storage medium
CN110570417B (en) Pulmonary nodule classification device and image processing equipment
CN114372985A (en) Diabetic retinopathy lesion segmentation method and system adapted to multi-center images
CN113362360A (en) Ultrasonic carotid plaque segmentation method based on fluid velocity field
CN117392468B (en) Cancer pathology image classification system, medium and equipment based on multi-example learning
Wu et al. Mscan: Multi-scale channel attention for fundus retinal vessel segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant