CN110309855B - Training method for image segmentation, computer device and storage medium - Google Patents

Training method for image segmentation, computer device and storage medium

Info

Publication number
CN110309855B
Authority
CN
China
Prior art keywords
segmentation
sub
network
image
training sample
Prior art date
Legal status
Active
Application number
CN201910461791.0A
Other languages
Chinese (zh)
Other versions
CN110309855A (en)
Inventor
肖彬
石峰
周翔
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN201910461791.0A priority Critical patent/CN110309855B/en
Publication of CN110309855A publication Critical patent/CN110309855A/en
Application granted granted Critical
Publication of CN110309855B publication Critical patent/CN110309855B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation

Abstract

The application relates to a training method for image segmentation, an image segmentation method, a computer device and a storage medium. The training method comprises the following steps: acquiring a training sample image with a plurality of structures to be segmented; inputting the training sample image into a first sub-network of an initial segmentation network for feature extraction processing to obtain a feature map with multi-channel features; intercepting a plurality of sub-blocks from the feature map and inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing to obtain a segmentation result for each sub-block; and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain a trained segmentation network. Because the method feeds small sub-blocks of the feature map into the second sub-network one at a time rather than segmenting the whole feature map at once, the GPU video memory occupied during training is greatly reduced and the training efficiency of the neural network model is improved.

Description

Training method for image segmentation, computer device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a training method for image segmentation, an image segmentation method, a computer device, and a storage medium.
Background
In the current medical field, for patients suffering from neurodegenerative diseases (such as Alzheimer's disease or Parkinson's disease), it is usually necessary to acquire brain images and automatically segment and analyze their brain regions, for example by measuring and analyzing the gray-level and shape attributes of each region. The quality of the automatic brain-region segmentation directly affects the accuracy and stability of the subsequent analysis. Therefore, deep learning methods, which can automatically extract target-object features, have been applied to automatic brain-region segmentation.
When brain regions are automatically segmented with a deep learning method, the neural network model must first be trained. Because a brain image contains a large number of brain regions, segmenting all of them in a single pass occupies a large amount of video memory on the Graphics Processing Unit (GPU) during training, which greatly degrades GPU performance and makes the training of the neural network model inefficient.
Disclosure of Invention
Therefore, it is necessary to provide a training method for image segmentation, an image segmentation method, a computer device, and a storage medium to solve the problem in the conventional technology that training a neural network model is inefficient because the training process occupies a large amount of GPU video memory.
In a first aspect, an embodiment of the present application provides a training method for image segmentation, including:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting a training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
In the training method described above, a training sample image with a plurality of structures to be segmented is first acquired and input into a first sub-network of an initial segmentation network for feature extraction, yielding a feature map with multi-channel features. A plurality of sub-blocks are then intercepted from the feature map and input into a second sub-network of the initial segmentation network for segmentation, yielding a segmentation result for each sub-block. Finally, the network parameters of the initial segmentation network are adjusted according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image, yielding the trained segmentation network. Because the method feeds small sub-blocks of the feature map into the second sub-network one at a time, rather than segmenting the whole feature map at once, the GPU video memory occupied during training is greatly reduced and the training efficiency of the neural network model is improved.
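The four steps above can be sketched as a minimal training skeleton. This is an illustrative toy in plain Python, not the patent's implementation: `extract_features` and `crop_sub_block` merely stand in for the first sub-network and the interception rule, the example uses a small 2-D image rather than a 3-D volume, and real code would use a deep learning framework.

```python
import random

# Hypothetical sketch of the four training steps. All names, shapes
# and values here are illustrative assumptions, not from the patent.

def extract_features(image):
    # Stand-in for the first sub-network: map an H x W image to a
    # multi-channel feature map of the SAME spatial resolution
    # (here each pixel becomes a 2-channel feature vector).
    return [[[v, v * 0.5] for v in row] for row in image]

def crop_sub_block(feature_map, center, size):
    # Intercept a size x size sub-block around `center`, clipped to
    # the feature-map boundary (a toy "interception rule").
    cy, cx = center
    h, w = len(feature_map), len(feature_map[0])
    y0, x0 = max(0, cy - size // 2), max(0, cx - size // 2)
    y1, x1 = min(h, y0 + size), min(w, x0 + size)
    return [row[x0:x1] for row in feature_map[y0:y1]]

def train_step(image, n_blocks=4, size=2):
    fmap = extract_features(image)          # step 2: feature map
    h, w = len(fmap), len(fmap[0])
    blocks = []
    for _ in range(n_blocks):               # step 3: crop sub-blocks
        center = (random.randrange(h), random.randrange(w))
        blocks.append(crop_sub_block(fmap, center, size))
    # Step 4 would segment each block and back-propagate the loss
    # against the per-block gold standard; here we return the blocks.
    return blocks

image = [[float(x + y) for x in range(4)] for y in range(4)]  # step 1
blocks = train_step(image)
```

Only one small block needs to be held by the second sub-network at a time, which is the source of the memory saving described above.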
In one embodiment, the intercepting of the plurality of sub-blocks on the feature map includes:
randomly selecting a plurality of pixel points on the training sample image;
determining mapping points of the pixel points on the feature map based on the positions of the pixel points; and
taking the mapping points as centers, intercepting a plurality of sub-blocks of a preset size from the feature map according to a preset interception rule; or,
randomly selecting a plurality of pixel points on the feature map; and
taking the pixel points as centers, intercepting a plurality of sub-blocks of a preset size from the feature map according to a preset interception rule.
In one embodiment, the adjusting of the network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network includes:
determining the segmentation gold standard of each sub-block according to the segmentation gold standard of the training sample image and the interception rule; and
training the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard corresponding to each sub-block to obtain the trained segmentation network.
In one embodiment, the feature map has the same resolution as the training sample image.
In one embodiment, the features of the plurality of sub-blocks collectively comprise all of the features of the feature map.
In one embodiment, the second sub-network comprises a convolution layer for performing feature mapping on the plurality of sub-blocks to obtain the segmentation result of each sub-block; the convolution layer has a kernel size of 1 × 1 × 1.
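Because the kernel is 1 × 1 × 1, this convolution is simply a per-voxel linear map across channels: spatial neighbors never interact, which is why each sub-block can be classified independently of the rest of the feature map. A plain-Python sketch (shapes, weights and biases below are illustrative assumptions, not values from the patent):

```python
def conv1x1x1(feature_block, weights, bias):
    """Apply a 1x1x1 convolution: a per-voxel linear map from
    C_in channels to C_out channels; no spatial mixing occurs.

    feature_block: nested list of shape [D][H][W][C_in]
    weights:       [C_out][C_in], bias: [C_out]
    """
    out = []
    for plane in feature_block:
        out_plane = []
        for row in plane:
            out_row = []
            for voxel in row:  # voxel is a C_in-vector
                out_row.append([
                    sum(w * v for w, v in zip(w_row, voxel)) + b
                    for w_row, b in zip(weights, bias)
                ])
            out_plane.append(out_row)
        out.append(out_plane)
    return out

# A 2x1x1 block with 3 input channels mapped to 2 output channels.
block = [[[[1.0, 2.0, 3.0]]], [[[0.0, 1.0, 0.0]]]]
w = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
b = [0.5, -0.5]
result = conv1x1x1(block, w, b)  # each voxel mapped independently
```

In a real network the output channel count would equal the number of structures to be segmented, with Softmax applied afterwards as the embodiment describes.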
In one embodiment, the method further includes:
acquiring a test image with a plurality of structures to be segmented, wherein the test image is a complete image;
inputting a test image into a first sub-network of a segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
inputting the feature map into a second sub-network of the segmentation network for segmentation processing to obtain a segmentation result of the test image;
and obtaining a test result according to the segmentation result of the test image and the segmentation gold standard of the test image.
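The patent does not specify how the test result is computed from the segmentation result and the segmentation gold standard; a common choice for such comparisons is the Dice overlap score, sketched here on flat label lists purely as an illustration:

```python
def dice_score(pred, gold):
    # Dice overlap between two label maps for one structure label.
    # (The patent names no metric; Dice is a common assumption for
    # comparing a segmentation result with its gold standard.)
    inter = sum(p == g == 1 for p, g in zip(pred, gold))
    total = sum(p == 1 for p in pred) + sum(g == 1 for g in gold)
    return 2.0 * inter / total if total else 1.0

pred = [1, 1, 0, 0, 1]   # predicted membership of one structure
gold = [1, 0, 0, 1, 1]   # gold-standard membership
score = dice_score(pred, gold)  # 2 * 2 / (3 + 3)
```

For a multi-structure image, one such score per structure (e.g. per brain region) would summarize the test result.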
In a second aspect, an embodiment of the present application provides an image segmentation method, including:
acquiring an image to be segmented with a plurality of structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting a training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
In a third aspect, an embodiment of the present application provides an image segmentation training device, including:
the acquisition module is used for acquiring training sample images with a plurality of structures to be segmented, wherein the training sample images are complete images;
the feature extraction module is used for inputting the training sample image into a first sub-network of the initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
the segmentation module is used for intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and the updating module is used for adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
In a fourth aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring an image to be segmented with a plurality of structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting a training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring an image to be segmented with a plurality of structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting a training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
During neural network training, the computer device first acquires a training sample image with a plurality of structures to be segmented and inputs it into a first sub-network of an initial segmentation network for feature extraction, yielding a feature map with multi-channel features. It then intercepts a plurality of sub-blocks from the feature map and inputs them into a second sub-network of the initial segmentation network for segmentation, yielding a segmentation result for each sub-block. Finally, it adjusts the network parameters of the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image, yielding the trained segmentation network. Because this training method feeds small sub-blocks of the feature map into the second sub-network one at a time, rather than segmenting the whole feature map at once, the GPU video memory occupied during training is greatly reduced and the training efficiency of the neural network model is improved.
Drawings
FIG. 1 is a schematic flow chart of a training method for image segmentation according to an embodiment;
FIG. 1a is a flowchart illustrating a method for intercepting a plurality of sub-blocks on a feature map according to an embodiment;
FIG. 1b is a schematic flowchart of a method for intercepting a plurality of sub-blocks on a feature map according to another embodiment;
FIG. 2 is a schematic flowchart of a training method for image segmentation according to another embodiment;
FIG. 3 is a schematic flowchart of a training method for image segmentation according to another embodiment;
FIG. 3a is a network flow diagram of a training method for image segmentation according to an embodiment;
FIG. 4 is a flowchart illustrating an image segmentation method according to an embodiment;
FIG. 5 is a schematic diagram of an embodiment of an image segmentation training device;
FIG. 6 is a schematic structural diagram of an image segmentation training apparatus according to another embodiment;
FIG. 7 is a schematic diagram of the internal structure of a computer device according to an embodiment.
Description of reference numerals:
21: a first sub-network; 22: a second sub-network.
Detailed Description
The training method for image segmentation provided by the embodiments of the present application is applicable to the training of a neural network model for image segmentation; the image to be segmented may be a three-dimensional brain image or another medical image containing a plurality of structures. For a brain image, a neural network model is generally required to segment and label a large number of brain regions (e.g., more than 100). Segmenting this large number of brain regions at once during iterative training occupies a large amount of GPU video memory, making the training of the neural network model inefficient. The present application provides a training method for image segmentation, an image segmentation method, a computer device, and a storage medium that aim to solve this technical problem.
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that the execution subject of the method embodiments described below may be a training apparatus for image segmentation, which may be implemented as part or all of a computer device by software, hardware, or a combination of the two. The following method embodiments take a computer device as the execution subject; the computer device may be a terminal or a server, and may be a stand-alone computing device or integrated into a medical detection device, as long as it can complete the training of the neural network model. This embodiment is not limited in this respect.
Fig. 1 is a flowchart illustrating a training method for image segmentation according to an embodiment. The embodiment relates to a specific process that a computer device trains an initial segmentation network by using an acquired training sample image and obtains a trained segmentation network. As shown in fig. 1, the method includes:
s101, obtaining training sample images with a plurality of structures to be segmented, wherein the training sample images are complete images.
Specifically, the computer device first acquires a training sample image having a plurality of structures to be segmented, where the training sample image is a complete image. For example, if the training sample image is a three-dimensional brain image, the image includes the complete structure of the brain, and the plurality of structures to be segmented may include 112 brain structures such as the pituitary, the hypothalamus, and the precordial region. Alternatively, the training sample image may be another medical image having a plurality of structures. Optionally, the computer device may obtain the training sample image directly from its memory. In addition, since training a neural network model requires iterating over a large amount of training data, the computer device may acquire a plurality of training sample images in this embodiment.
And S102, inputting the training sample image into a first sub-network of the initial segmentation network to perform feature extraction processing, so as to obtain a feature map with multi-channel features.
Specifically, the computer device inputs the training sample image into a first sub-network of an initial segmentation network, which may be a V-Net or another segmentation network, as long as the network can be divided into different sub-networks. In the training sample image, determining which structure a given region belongs to requires several features jointly, so the first sub-network extracts features from the training sample image with a multi-channel convolution kernel, each channel extracting one or more of these features. In the first sub-network, the training sample image undergoes convolution, normalization, residual connection, feature mapping, and similar operations, yielding a feature map with multi-channel features.
Optionally, the feature map obtained in this step has the same resolution as the training sample image, so that the pixel points in the feature map correspond one-to-one to the pixel points in the training sample image, ensuring that no feature information in the training sample image is lost. For example, if the training sample image is a three-dimensional image of size 1 × H × W × D, where H, W, and D are its extents along the three dimensions, the resulting feature map has size C × H × W × D, where C is the number of channels of the feature map. Optionally, if the training sample image is a three-dimensional image, its pixel points are voxel points.
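A back-of-envelope calculation shows why a full-resolution, multi-channel feature map is expensive to process at once and why sub-blocks help. All shapes below are illustrative assumptions, not figures from the patent:

```python
# Memory comparison for a float32 feature map of C x H x W x D
# voxels versus one cropped sub-block. Shapes are assumed for
# illustration only (e.g. C = 64 channels over a 256^3 volume).
BYTES_PER_FLOAT32 = 4

def feature_map_bytes(c, h, w, d):
    return c * h * w * d * BYTES_PER_FLOAT32

full_mb = feature_map_bytes(64, 256, 256, 256) / 2**20   # whole map
block_mb = feature_map_bytes(64, 32, 32, 32) / 2**20     # one sub-block
ratio = full_mb / block_mb   # how much smaller one block is
```

Under these assumed shapes, the second sub-network sees one 8 MB block at a time instead of a 4 GB map, which is the memory saving the method relies on.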
Optionally, after obtaining the feature map of the training sample image, the computer device may delete some intermediate data to reduce memory usage.
And S103, intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block.
Specifically, after the computer device obtains the feature map of the training sample image, a plurality of sub-blocks may be cut from the feature map, and the sub-blocks may be input into the second sub-network of the initial segmentation network, and subjected to convolution operation, feature mapping, and the like, so as to obtain the segmentation result of each sub-block. Optionally, the segmentation result may include an annotation result of each sub-block, and may further include an annotation result of each region included in each sub-block. Optionally, in this embodiment, the second sub-network of the initial segmentation network may include two layers of network structures, where the first layer is a convolution + normalization + activation function cascade structure, the second layer is a convolution + Softmax normalization exponential function cascade structure, and the segmentation result of the plurality of sub-blocks may be obtained after the plurality of sub-blocks are processed by the two layers of network structures.
Optionally, the features of the plurality of intercepted sub-blocks collectively include all of the features of the feature map; that is, the sub-blocks together may cover the entire feature map, which further ensures that the feature information in the sub-blocks is complete and makes the obtained segmentation result more accurate.
And S104, adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
Specifically, during training of the neural network model, the segmentation gold standard corresponding to a training sample image is the labeled segmentation result of that image. Optionally, the computer device may merge the segmentation results of the sub-blocks to determine the segmentation result of the training sample image, compare it with the segmentation gold standard, compute the loss between the two, and adjust the network parameters in the initial segmentation network by gradient back-propagation according to this loss, repeating the cycle until the segmentation network converges. For example, assuming the training sample image is to be segmented into 112 regions in this embodiment, the segmentation result is, for each pixel in each sub-block, the probability that the pixel belongs to each region, and the segmentation gold standard assigns each pixel a probability of 1 for the region to which it actually belongs and 0 for all other regions. The computer device may calculate the loss between the segmentation result and the segmentation gold standard and then adjust the network parameters in the initial segmentation network based on that loss.
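With a gold standard that puts probability 1 on each voxel's true region and 0 elsewhere, the per-voxel loss reduces to the negative log of the probability assigned to the true region. The patent does not name a specific loss; cross-entropy is the usual assumption for Softmax outputs, sketched here in plain Python:

```python
import math

def softmax(logits):
    # Numerically stable Softmax over one voxel's region scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def voxel_cross_entropy(logits, gold_index):
    # Cross-entropy against a one-hot gold standard: because the
    # gold probability is 1 for the true region and 0 elsewhere,
    # only the -log of the true region's probability survives.
    probs = softmax(logits)
    return -math.log(probs[gold_index])

# One voxel scored over 3 regions (112 in the patent's brain
# example); region 0 is assumed to be the true region.
loss = voxel_cross_entropy([2.0, 0.5, 0.1], gold_index=0)
```

The total loss for a sub-block would be the sum or mean of this quantity over its voxels, and back-propagating it drives the parameter adjustment described above.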
In the training method for image segmentation provided by this embodiment, the computer device first acquires a training sample image with a plurality of structures to be segmented and inputs it into a first sub-network of an initial segmentation network for feature extraction, yielding a feature map with multi-channel features. It then intercepts a plurality of sub-blocks from the feature map and inputs them into a second sub-network of the initial segmentation network for segmentation, yielding a segmentation result for each sub-block. Finally, it adjusts the network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image, yielding the trained segmentation network. Because the method feeds small sub-blocks of the feature map into the second sub-network one at a time, rather than segmenting the whole feature map at once, the computation of each processing pass and the GPU video memory occupied are greatly reduced, and the training efficiency of the neural network model is improved.
Optionally, for methods of intercepting a plurality of sub-blocks from the feature map, reference may be made to the embodiments shown in fig. 1a and 1b. For the embodiment shown in fig. 1a, the interception method includes:
S103a, randomly selecting a plurality of pixel points on the training sample image.
S103b, determining the mapping points of the pixel points on the feature map based on the positions of the pixel points.
Specifically, the computer device first randomly selects a large number of pixel points on the training sample image, each with coordinates (x, y, z). Since the feature map and the training sample image can have the same resolution, the pixel points of the training sample image correspond one-to-one to mapping points on the feature map. Based on the pixel points selected in S103a, the computer device may then determine the mapping point corresponding to each pixel point on the feature map; that is, the coordinates of the mapping point on the feature map are also (x, y, z).
S103c, taking the mapping points as centers, intercepting a plurality of sub-blocks of a preset size from the feature map according to a preset interception rule.
Specifically, on the feature map, the computer device takes each determined mapping point as the center of a sub-block and intercepts a sub-block of a preset size according to a preset interception rule. Optionally, the interception rule may be preset; for example, based on the position of the mapping point and the size of the sub-block to be intercepted, it may be determined whether the sub-block lies entirely within the feature map: if so, the whole sub-block is intercepted; if not, only the part contained in the feature map is intercepted.
On the other hand, for the embodiment shown in fig. 1b, the interception method includes:
S103d, randomly selecting a plurality of pixel points on the feature map;
S103e, taking the pixel points as centers, intercepting a plurality of sub-blocks of a preset size from the feature map according to a preset interception rule.
Specifically, the computer device directly selects a large number of pixel points on the obtained feature map, takes each pixel point as the center of a sub-block, and intercepts sub-blocks of a preset size from the feature map according to a preset interception rule. For the interception rule, reference may be made to the above embodiment; details are not repeated here.
In the method for intercepting a plurality of sub-blocks from the feature map provided by this embodiment, a large number of pixel points are randomly selected on the training sample image or the feature map, which further avoids omitting feature information of the training sample image.
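The mapping-point and interception logic of steps S103a to S103e can be sketched as follows. The identity mapping holds because the feature map and the training sample image have the same resolution; the volume shape, window size, and clipping behaviour below are illustrative assumptions:

```python
import random

def mapping_point(pixel):
    # The feature map has the same resolution as the training sample
    # image, so an image pixel at (x, y, z) maps to the point with
    # the same coordinates (x, y, z) on the feature map.
    return pixel

def crop_window(center, size, volume_shape):
    # Sketch of the interception rule: a size^3 window centred on
    # the mapping point; any part falling outside the feature map is
    # discarded, so windows near the border may be smaller.
    window = []
    for c, dim in zip(center, volume_shape):
        start = max(0, c - size // 2)
        stop = min(dim, c - size // 2 + size)
        window.append((start, stop))
    return window

# Randomly select mapping-point centers on an assumed 8x8x8 feature
# map and intercept 4x4x4 windows around them.
random.seed(0)
shape = (8, 8, 8)
centers = [tuple(random.randrange(d) for d in shape) for _ in range(5)]
windows = [crop_window(mapping_point(c), 4, shape) for c in centers]
```

Each window gives the slice bounds of one sub-block; in training, many such windows are drawn so that the sub-blocks collectively cover the feature map.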
Fig. 2 is a schematic flowchart of a training method for image segmentation according to another embodiment. The embodiment relates to a specific process that the computer device trains the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image. Optionally, on the basis of the foregoing embodiment, as shown in fig. 2, S104 may include:
S201, determining the segmentation gold standard of each sub-block according to the segmentation gold standard of the training sample image and the interception rule.
Specifically, in the above embodiment, the plurality of sub-blocks are intercepted from the feature map according to a preset interception rule. Then, according to the segmentation gold standard of the whole training sample image and the interception rule of the plurality of sub-blocks, the computer device may derive the segmentation gold standard corresponding to each sub-block, that is, obtain the labeled segmentation result corresponding to each sub-block.
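Because the feature map and the training sample image may share the same resolution, the derivation in S201 amounts to applying the same interception window to the full-image label volume. The sketch below assumes a NumPy environment and an integer label volume; the function name `subblock_gold_standard` is illustrative.

```python
import numpy as np

def subblock_gold_standard(label_volume, center, size):
    """Derive a sub-block's segmentation gold standard by applying the
    same interception rule used on the feature map to the full-image
    label volume of shape (X, Y, Z)."""
    X, Y, Z = label_volume.shape
    half = size // 2
    # Identical clamping to the feature-map interception rule.
    x0, x1 = max(center[0] - half, 0), min(center[0] + half, X)
    y0, y1 = max(center[1] - half, 0), min(center[1] + half, Y)
    z0, z1 = max(center[2] - half, 0), min(center[2] + half, Z)
    return label_volume[x0:x1, y0:y1, z0:z1]
```

Each intercepted feature sub-block and its gold-standard sub-block are then aligned voxel for voxel.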
S202, training an initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard corresponding to each sub-block to obtain the trained segmentation network.
Specifically, the computer device may calculate a loss between the segmentation result of each sub-block output by the initial segmentation network and the segmentation gold standard corresponding to each sub-block, and then adjust network parameters in the initial segmentation network by using a back propagation gradient method according to the loss, so as to perform cyclic training until the segmentation network reaches a convergence state.
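The per-sub-block loss in S202 can be sketched as a voxelwise cross-entropy between predicted logits and the gold-standard labels; the gradient of this loss is what back propagation would use to adjust the network parameters. This is a minimal NumPy sketch under assumed shapes (logits of (C, X, Y, Z), integer labels of (X, Y, Z)), not the patented training procedure.

```python
import numpy as np

def subblock_cross_entropy(logits, gold):
    """Mean voxelwise cross-entropy between a sub-block's predicted
    logits (C, X, Y, Z) and its segmentation gold standard (X, Y, Z)."""
    # Numerically stabilized log-softmax over the channel (class) axis.
    z = logits - logits.max(axis=0, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    # Pick the log-probability of the gold class at each voxel.
    picked = np.take_along_axis(log_probs, gold[None], axis=0)
    return float(-picked.mean())
```

With uniform (all-zero) logits over C classes the loss is ln C, as expected for an uninformed prediction.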
In the training method for image segmentation provided by this embodiment, the computer device determines the segmentation gold standard of each sub-block according to the segmentation gold standard of the training sample image and the interception rule used for intercepting the plurality of sub-blocks, and then adjusts the network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard corresponding to each sub-block, so as to obtain the trained segmentation network. In this method, the process of adjusting the network parameters in the initial segmentation network uses the segmentation result and segmentation gold standard of each sub-block rather than those of the whole training sample image, which further reduces the calculation amount of each processing pass and the occupation of GPU video memory.
Optionally, in an embodiment, the second sub-network includes a convolutional layer, and the convolutional layer is configured to perform feature mapping on the plurality of sub-blocks to obtain the segmentation result of each sub-block. The size of the convolution kernel corresponding to the convolutional layer in this embodiment is 1 × 1 × 1. A convolution kernel of this size does not change the receptive field of the initial segmentation network: the image obtained by the convolution has the same size as the input image, that is, a 1 × 1 × 1 convolution kernel is insensitive to the size of its input. Under this parameter configuration, the trained segmentation network can be tested in the following way.
Fig. 3 is a flowchart illustrating a training method for image segmentation according to yet another embodiment. The embodiment relates to a specific process of testing a trained segmented network by computer equipment. Optionally, on the basis of the foregoing embodiment, as shown in fig. 3, the method further includes:
s301, obtaining a test image with a plurality of structures to be segmented, wherein the test image is a complete image.
S302, inputting the test image into a first sub-network of the segmentation network for feature extraction processing to obtain a feature map with multi-channel features.
And S303, inputting the feature map into a second sub-network of the segmentation network to perform segmentation processing, so as to obtain a segmentation result of the test image.
S304, obtaining a test result according to the segmentation result of the test image and the segmentation gold standard of the test image.
In order to verify that the trained segmentation network has good segmentation performance, it may be tested. Specifically, when the training sample image data is prepared, a certain proportion of the data set may be reserved as test images. The computer device may input a test image into the first sub-network of the trained segmentation network for feature extraction processing to obtain a feature map with multi-channel features; for the processing manner in this step, reference may be made to the description of the above embodiments. The computer device then inputs the feature map directly into the second sub-network of the segmentation network for segmentation processing to obtain a segmentation result of the test image, and compares the segmentation result with the corresponding segmentation gold standard to obtain a test result. The test result may be used to verify whether the segmentation network reaches a preset standard: if it does, the network may be taken as a segmentation network that passes the test; if it does not, the training process may continue. Since the convolution kernel size used by the second sub-network in this embodiment is 1 × 1 × 1, directly inputting the whole feature map into the second sub-network for segmentation processing does not affect the final segmentation result.
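The size-invariance that justifies testing on the whole feature map can be illustrated as follows: a 1 × 1 × 1 convolution is simply a per-voxel linear map over the channel axis, so the same layer accepts both a cropped sub-block and the full feature map. The NumPy function below (`conv1x1x1`, with shapes scaled down from those in the embodiment) is an illustrative sketch, not the network's implementation.

```python
import numpy as np

def conv1x1x1(feature_map, weight, bias):
    """A 1x1x1 convolution: a per-voxel linear map over channels.
    (C_in, X, Y, Z) with weight (C_out, C_in) -> (C_out, X, Y, Z);
    the spatial size X, Y, Z is left untouched."""
    out = np.einsum('oc,cxyz->oxyz', weight, feature_map)
    return out + bias[:, None, None, None]
```

The same weights apply unchanged to inputs of any spatial size, so a network trained on sub-blocks can segment a full feature map at test time.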
With regard to the training and testing processes in the training method for image segmentation provided in the embodiment of the present application, refer to the network flow diagram shown in fig. 3a. For example, assume that the resolution of the training sample image input into the first sub-network 21 of the initial segmentation network is 1 × 208 × 208 × 160, and that the first sub-network 21 adopts 32-channel convolution kernels, so that the resolution of the feature map output by the first sub-network 21 is 32 × 208 × 208 × 160. In the network training process, a plurality of 32 × 96 × 96 × 96 sub-blocks are intercepted from the feature map and input into the second sub-network 22 for convolution, yielding a segmentation result of 113 × 96 × 96 × 96, where 113 is the sum of the 112 brain structures to be labeled and 1 background label. In the network testing process, the feature map with the resolution of 32 × 208 × 208 × 160 is directly input into the second sub-network 22 for the segmentation process.
According to the training method for image segmentation provided by the embodiment, after the trained segmentation network is obtained, the network is tested, so that the trained segmentation network has better segmentation performance.
Generally, in practical applications of training a segmentation network, there may be a case that the data size of a training sample image is not enough to meet requirements of a training process, so the embodiment of the present application provides a method for expanding the data size of the training sample image through multi-atlas registration fusion, which includes the following specific processes:
Let M = {(m_1, l_1), (m_2, l_2), …, (m_k, l_k)} be the finely labeled data set and F = {f_1, f_2, …, f_t} the unlabeled data set, where m_k is a labeled sample image, l_k is the labeling result of that sample image, and f_t is an unlabeled sample image.
First, f_1 is registered with each sample image in the finely labeled data set M to obtain a transformed data set M' = {(m_1', l_1'), (m_2', l_2'), …, (m_k', l_k')}, where m_k' is a registered sample image and l_k' is the labeling result of the registered sample image.
Then, the transformed data set M' is used as the input of the complete label fusion algorithm to perform label fusion, so that the weakly labeled result s_1 of f_1 can be obtained; the complete label fusion algorithm was proposed in 2016 at the IEEE international conference on computer vision and pattern recognition. For any f_t in the unlabeled data set F, the corresponding weakly labeled result s_t can be obtained by the above method, that is, a weakly labeled data set F' = {(f_1, s_1), (f_2, s_2), …, (f_t, s_t)} can be obtained.
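The cited complete label fusion algorithm is not reproduced here; as a deliberately simplified stand-in, a voxelwise majority vote over the registered label maps l_1', …, l_k' illustrates how a weak label s is fused. The NumPy sketch below (function name `majority_vote_fusion`) is an assumption for illustration only and is not the fusion method of the cited work.

```python
import numpy as np

def majority_vote_fusion(warped_labels):
    """Fuse k registered label maps (each (X, Y, Z), integer labels)
    into one weak label map by voxelwise majority vote."""
    stacked = np.stack(warped_labels)             # (k, X, Y, Z)
    n_labels = int(stacked.max()) + 1
    # Count, per voxel, how many of the k maps vote for each class.
    votes = np.stack([(stacked == c).sum(axis=0)
                      for c in range(n_labels)])  # (n_labels, X, Y, Z)
    return votes.argmax(axis=0)
```

Real label fusion additionally weights each atlas by local registration quality, which is the part the simplified vote omits.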
After the weakly labeled data set F' is obtained, the initial segmentation network can be cyclically trained in a fully supervised manner with both the finely labeled data set M and the weakly labeled data set F' until the network reaches a convergence state. In addition, this method can produce a large amount of weakly labeled data, which greatly increases the training data volume.
After the training of the segmentation network is completed, we can use the segmentation network to perform image segmentation, and fig. 4 is a schematic flow chart of an image segmentation method provided in an embodiment of the present application, where the method includes:
s401, acquiring an image to be segmented with a plurality of structures to be segmented;
S402, inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps: acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image; inputting the training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features; intercepting a plurality of sub-blocks on the feature map, and inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing to obtain the segmentation result of each sub-block; and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
Specifically, after the computer device obtains the image to be segmented, the image to be segmented may be input to a segmentation network for segmentation processing, so as to obtain a segmentation result of the image to be segmented. Optionally, in the process of segmenting the image to be segmented by using the segmentation network, the feature map of the image to be segmented may be input to the second sub-network of the segmentation network, or may be a plurality of sub-blocks in the feature map of the image to be segmented. For the training process of the segmented network, reference may be made to the method shown in the above embodiment, which is not described herein again.
It should be understood that although the various steps in the flowcharts of fig. 1-4 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1-4 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of their execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 5 is a schematic structural diagram of an image segmentation training apparatus according to an embodiment. As shown in fig. 5, the apparatus includes: an acquisition module 11, a feature extraction module 12, a segmentation module 13 and an update module 14.
Specifically, the obtaining module 11 is configured to obtain a training sample image with a plurality of structures to be segmented, where the training sample image is a complete image.
And the feature extraction module 12 is configured to input the training sample image into a first sub-network of the initial segmentation network to perform feature extraction processing, so as to obtain a feature map with multi-channel features.
And the segmentation module 13 is configured to intercept a plurality of sub-blocks from the feature map, and input the plurality of sub-blocks into a second sub-network of the initial segmentation network to perform segmentation processing, so as to obtain a segmentation result of each sub-block.
And the updating module 14 is configured to adjust network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image, so as to obtain a trained segmentation network.
The training device for image segmentation provided by this embodiment may implement the above method embodiments, and its implementation principle and technical effect are similar, and are not described herein again.
In one embodiment, the segmentation module 13 is specifically configured to randomly select a plurality of pixel points on the training sample image; determine the mapping point of each pixel point on the feature map based on the position of the pixel point; and, taking the mapping point as a center, intercept a plurality of sub-blocks with preset sizes from the feature map by adopting a preset interception rule. Alternatively, the segmentation module 13 is specifically configured to randomly select a plurality of pixel points on the feature map, and, taking each pixel point as a center, intercept a plurality of sub-blocks with preset sizes from the feature map by adopting a preset interception rule.
In one embodiment, the updating module 14 is specifically configured to determine the segmentation gold standard of each sub-block according to the segmentation gold standard of the training sample image and the interception rule; and train the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard corresponding to each sub-block to obtain the trained segmentation network.
In one embodiment, the feature map has the same resolution as the training sample image.
In one embodiment, all of the features of the sub-blocks include all of the features of the feature map.
In one embodiment, the second sub-network includes a convolution layer, and the convolution layer is configured to perform feature mapping on a plurality of sub-blocks to obtain a segmentation result of each sub-block; the convolution layer corresponds to a convolution kernel size of 1 × 1 × 1.
Fig. 6 is a schematic structural diagram of an image segmentation training device according to another embodiment. On the basis of the embodiment shown in fig. 5, as shown in fig. 6, the apparatus further includes: a test module 15.
Specifically, the test module 15 is configured to obtain a test image with a plurality of structures to be segmented, where the test image is a complete image; inputting a test image into a first sub-network of a segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features; inputting the feature map into a second sub-network of the segmentation network for segmentation processing to obtain a segmentation result of the test image; and determining whether the trained segmentation network is used as a segmentation network passing the test or not according to the segmentation result of the test image and the segmentation gold standard of the test image.
The training device for image segmentation provided by this embodiment may implement the above method embodiments, and its implementation principle and technical effect are similar, and are not described herein again.
For specific limitations of the training apparatus for image segmentation, reference may be made to the above limitations of the training method for image segmentation, which are not described herein again. The modules in the training device for image segmentation can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image segmentation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of part of the structure related to the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image to be segmented with a plurality of structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting a training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be segmented with a plurality of structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting a training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method for training image segmentation, comprising:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting the training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block; wherein intercepting a plurality of sub-blocks on the feature map comprises: randomly selecting a plurality of pixel points on the training sample image; determining a mapping point of the pixel point on the feature map based on the position of the pixel point; taking the mapping point as a center, and intercepting a plurality of sub-blocks with preset sizes from the characteristic diagram by adopting a preset interception rule;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
2. The method of claim 1, wherein the intercepting a plurality of sub-blocks on the feature map further comprises:
randomly selecting a plurality of pixel points on the feature map;
and intercepting a plurality of sub-blocks with preset sizes from the feature map by adopting a preset interception rule by taking the pixel point as a center.
3. The method according to claim 2, wherein the adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network comprises:
determining the segmentation gold standard of each sub-block according to the segmentation gold standard of the training sample image and the interception rule;
and training the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard corresponding to each sub-block to obtain the trained segmentation network.
4. The method of claim 1, wherein the feature map has the same resolution as the training sample image.
5. The method of claim 1, wherein all of the features of the plurality of sub-blocks comprise all of the features of the feature map.
6. The method of any of claims 1-5, wherein the second sub-network comprises a convolutional layer for performing feature mapping on the plurality of sub-blocks to obtain a segmentation result for each sub-block; the convolution kernel size corresponding to the convolution layer is 1 × 1 × 1.
7. The method of claim 6, further comprising:
acquiring a test image with a plurality of structures to be segmented, wherein the test image is a complete image;
inputting the test image into a first sub-network of the segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
inputting the feature map into a second sub-network of the segmentation network to perform segmentation processing to obtain a segmentation result of the test image;
and obtaining a test result according to the segmentation result of the test image and the segmentation gold standard of the test image.
8. An image segmentation method, comprising:
acquiring an image to be segmented with a plurality of structures to be segmented;
inputting the image to be segmented into a segmentation network for segmentation processing to obtain a segmentation result of the image to be segmented; the training method of the segmentation network comprises the following steps:
acquiring a training sample image with a plurality of structures to be segmented, wherein the training sample image is a complete image;
inputting the training sample image into a first sub-network of an initial segmentation network to perform feature extraction processing to obtain a feature map with multi-channel features;
intercepting a plurality of sub-blocks on the feature map, inputting the sub-blocks into a second sub-network of the initial segmentation network for segmentation processing, and obtaining the segmentation result of each sub-block; wherein intercepting a plurality of sub-blocks on the feature map comprises: randomly selecting a plurality of pixel points on the training sample image; determining a mapping point of the pixel point on the feature map based on the position of the pixel point; taking the mapping point as a center, and intercepting a plurality of sub-blocks with preset sizes from the characteristic diagram by adopting a preset interception rule;
and adjusting network parameters in the initial segmentation network according to the segmentation result of each sub-block and the segmentation gold standard of the training sample image to obtain the trained segmentation network.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of claim 8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 8.
CN201910461791.0A 2019-05-30 2019-05-30 Training method for image segmentation, computer device and storage medium Active CN110309855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910461791.0A CN110309855B (en) 2019-05-30 2019-05-30 Training method for image segmentation, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910461791.0A CN110309855B (en) 2019-05-30 2019-05-30 Training method for image segmentation, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110309855A CN110309855A (en) 2019-10-08
CN110309855B true CN110309855B (en) 2021-11-23

Family

ID=68075651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910461791.0A Active CN110309855B (en) 2019-05-30 2019-05-30 Training method for image segmentation, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN110309855B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161270B (en) * 2019-12-24 2023-10-27 上海联影智能医疗科技有限公司 Vascular segmentation method for medical image, computer device and readable storage medium
CN111325281B (en) * 2020-03-05 2023-10-27 新希望六和股份有限公司 Training method and device for deep learning network, computer equipment and storage medium
CN111402164B (en) * 2020-03-18 2023-10-24 北京市商汤科技开发有限公司 Training method and device for correction network model, text recognition method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529468A (en) * 2016-11-07 2017-03-22 重庆工商大学 Finger vein identification method and system based on convolutional neural network
CN107146246A (en) * 2017-05-08 2017-09-08 湘潭大学 One kind is used for workpiece machining surface background texture suppressing method
CN107945098A (en) * 2017-11-24 2018-04-20 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN109102469A (en) * 2018-07-04 2018-12-28 华南理工大学 A kind of panchromatic sharpening method of remote sensing images based on convolutional neural networks
CN109117831A (en) * 2018-09-30 2019-01-01 北京字节跳动网络技术有限公司 The training method and device of object detection network
CN109635636A (en) * 2018-10-30 2019-04-16 国家新闻出版广电总局广播科学研究院 The pedestrian that blocking characteristic based on attributive character and weighting blends recognition methods again
CN109658481A (en) * 2018-12-24 2019-04-19 北京旷视科技有限公司 Image labeling method and device, feature drawing generating method and device
CN109784258A (en) * 2019-01-08 2019-05-21 华南理工大学 A kind of pedestrian's recognition methods again cut and merged based on Analysis On Multi-scale Features

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844753A (en) * 2017-10-20 2018-03-27 珠海习悦信息技术有限公司 Pedestrian re-identification method, apparatus, storage medium and processor for video images
CN107909537B (en) * 2017-11-16 2020-11-06 厦门美图之家科技有限公司 Image processing method based on convolutional neural network and mobile terminal
CN108376257B (en) * 2018-02-10 2021-10-29 西北大学 Incomplete code word identification method for gas meter
CN108830855B (en) * 2018-04-02 2022-03-25 华南理工大学 Full convolution network semantic segmentation method based on multi-scale low-level feature fusion
CN109118490B (en) * 2018-06-28 2021-02-26 厦门美图之家科技有限公司 Image segmentation network generation method and image segmentation method
CN109242849A (en) * 2018-09-26 2019-01-18 上海联影智能医疗科技有限公司 Medical image processing method, device, system and storage medium
CN109409503B (en) * 2018-09-27 2020-07-24 深圳市铱硙医疗科技有限公司 Neural network training method, image conversion method, device, equipment and medium
CN109671076A (en) * 2018-12-20 2019-04-23 上海联影智能医疗科技有限公司 Blood vessel segmentation method, apparatus, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110309855A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
CN110309855B (en) Training method for image segmentation, computer device and storage medium
CN110473172B (en) Medical image anatomical centerline determination method, computer device and storage medium
CN110717905B (en) Brain image detection method, computer device, and storage medium
CN109584327B (en) Face aging simulation method, device and equipment
CN110363774B (en) Image segmentation method and device, computer equipment and storage medium
CN111461170A (en) Vehicle image detection method and device, computer equipment and storage medium
CN110210519B (en) Classification method, computer device, and storage medium
CN110751149B (en) Target object labeling method, device, computer equipment and storage medium
CN110866909B (en) Training method of image generation network, image prediction method and computer equipment
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
CN111832561B (en) Character sequence recognition method, device, equipment and medium based on computer vision
CN112115860A (en) Face key point positioning method and device, computer equipment and storage medium
CN115239705A (en) Method, device, equipment and storage medium for counting the number of endometrial plasma cells
CN113160199B (en) Image recognition method and device, computer equipment and storage medium
CN111160441B (en) Classification method, computer device, and storage medium
CN112418196A (en) Crowd quantity prediction method and device, computer equipment and storage medium
CN111583264A (en) Training method for image segmentation network, image segmentation method, and storage medium
CN111951316A (en) Image quantization method and storage medium
CN111210414B (en) Medical image analysis method, computer device, and readable storage medium
CN115797547A (en) Image modeling method, computer device, and storage medium
CN115908363A (en) Tumor cell counting method, device, equipment and storage medium
CN115345928A (en) Key point acquisition method, computer equipment and storage medium
CN112330652A (en) Chromosome recognition method and device based on deep learning and computer equipment
CN113077440A (en) Pathological image processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant