CN117911432A - Image segmentation method, device and storage medium

Image segmentation method, device and storage medium

Info

Publication number
CN117911432A
CN117911432A
Authority
CN
China
Prior art keywords
image data
network branch
deformation field
image
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311745371.8A
Other languages
Chinese (zh)
Inventor
涂丽云
周峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202311745371.8A priority Critical patent/CN117911432A/en
Publication of CN117911432A publication Critical patent/CN117911432A/en
Pending legal-status Critical Current


Classifications

    • G06T7/11 Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks (G06N3/04 Architecture, e.g. interconnection topology; G06N3/045 Combinations of networks)
    • G06N3/0475 Generative networks
    • G06N3/084 Backpropagation, e.g. using gradient descent (G06N3/08 Learning methods)
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06T2207/10088 Magnetic resonance imaging [MRI] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20104 Interactive definition of region of interest [ROI] (G06T2207/20092 Interactive image processing based on input by user)
    • Y02A90/30 Assessment of water resources (Y02A Technologies for adaptation to climate change)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides an image segmentation method, an image segmentation device and a storage medium. The method comprises the following steps: acquiring a pre-trained image segmentation model, wherein the image segmentation model comprises a first network branch, a second network branch and a spatial transformation network layer; the outputs of the first network branch and the second network branch are input into the spatial transformation network layer; the first network branch generates a first deformation field corresponding to first image data; the second network branch generates a second deformation field corresponding to second image data; the deformation direction of the first deformation field is opposite to that of the second deformation field; the spatial transformation network layer generates a segmentation mask corresponding to the first image data based on the first image data, the second image data, the first deformation field and the second deformation field; and inputting the first image data and the second image data into the image segmentation model to obtain a segmented image. The invention addresses the poor multi-target segmentation performance of current mainstream deep learning methods on multi-target segmentation tasks for complex images.

Description

Image segmentation method, device and storage medium
Technical Field
The present invention relates to the field of digital image processing technologies, and in particular, to an image segmentation method, an image segmentation device, and a storage medium.
Background
The principle of multi-objective automatic image segmentation is to classify multiple objects with semantic tags voxel by voxel (pixel by pixel for two-dimensional images); that is, all voxels of an image are classified into a set of object classes in order to segment and describe multiple objects of interest in the image. In the medical field, image segmentation refers to extracting the voxels of an organ or lesion from medical images such as CT or MRI. The technique can help doctors diagnose diseases more accurately and develop more effective treatment plans, and is one of the most challenging tasks in medical image analysis. In medical image analysis, the accuracy and reliability of image segmentation are of paramount importance: they directly affect subsequent clinical decisions and therapeutic outcomes, and even higher-level tasks such as object classification. Thus, multi-objective automatic image segmentation is the most basic and critical process for describing, characterizing and visualizing regions of interest in medical images.
Currently, the mainstream multi-target segmentation approach for medical images is to train a segmentation model by deep learning and then to segment the medical image using that model.
However, while current mainstream deep learning methods perform well in single-object segmentation, they suffer from poor segmentation quality in multi-object segmentation tasks on complex images.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention provide an image segmentation method, apparatus, and storage medium to obviate or mitigate one or more disadvantages in the prior art. The method addresses the problem that current mainstream deep learning methods, while performing well in single-target segmentation, yield poor multi-target segmentation results on complex images.
One aspect of the present invention provides an image segmentation method including the steps of:
Acquiring first image data and second image data, wherein the first image data refers to image data obtained through magnetic resonance imaging, and the second image data refers to reference image data corresponding to the first image data; the second image data comprises a segmentation mask of at least one target region in the second image data;
Acquiring a pre-trained image segmentation model, wherein the image segmentation model comprises a first network branch, a second network branch and a spatial transformation network layer; the outputs of the first network branch and the second network branch are input into the spatial transformation network layer; the first network branch is used for generating a first deformation field corresponding to the first image data; the second network branch is used for generating a second deformation field corresponding to the second image data; the deformation direction of the first deformation field is opposite to that of the second deformation field; the spatial transformation network layer is used for generating a segmentation mask corresponding to the first image data based on the first image data, the second image data, the first deformation field and the second deformation field;
and inputting the first image data and the second image data into an image segmentation model to obtain a segmented image.
Optionally, the first network branch comprises a first encoder, a decoder and a first deformation field module; the second network branch includes a second encoder, the decoder and a second deformation field module;
the first encoder is used for generating a first feature map corresponding to the first image data; the second encoder is used for generating a second feature map corresponding to the second image data;
The decoder is used for generating a third feature map based on the first feature map and the second feature map;
The first deformation field module is used for generating the first deformation field based on the third feature map, the first image data and the second image data;
The second deformation field module is used for generating the second deformation field based on the third feature map, the first image data and the second image data.
Optionally, the weight of the first encoder is the same as the weight of the second encoder.
Optionally, the decoder is configured to recursively use feature maps carrying high-level semantic information to recover feature maps carrying low-level detail information via skip connections, until a third feature map with the same resolution as the first image data and the second image data is obtained.
Optionally, acquiring the pre-trained image segmentation model includes:
Acquiring training data, the training data comprising: sample magnetic resonance image data and sample reference image data corresponding to the sample magnetic resonance image data;
Acquiring a pre-established initial segmentation model; the initial segmentation model has the same model structure as the image segmentation model; the initial segmentation model comprises a first initial network branch and a second initial network branch;
inputting sample magnetic resonance image data into a first initial network branch, inputting sample reference image data into a second initial network branch, and obtaining first training results output by a first encoder in the first initial network branch and a second encoder in the second initial network branch, respectively;
inputting a first training result into a preset loss function to obtain a first loss function value;
Performing iterative training on a first encoder in a first initial network branch and a second encoder in a second initial network branch by using a first loss function value to obtain a first intermediate network branch and a second intermediate network branch;
inputting sample magnetic resonance image data into a first intermediate network branch, inputting sample reference image data into a second intermediate network branch, and obtaining second training results output by a first deformation field module in the first intermediate network branch, a second deformation field module in the second intermediate network branch and the spatial transformation network layer, respectively;
Inputting a second training result, sample reference image data and sample magnetic resonance image data into a preset loss function to obtain a second loss function value;
Performing iterative training on the first deformation field module in the first intermediate network branch and the second deformation field module in the second intermediate network branch by using the second loss function value to obtain a first iterative network branch and a second iterative network branch;
Inputting sample magnetic resonance image data into a first iterative network branch, inputting sample reference image data into a second iterative network branch, and obtaining third training results output by a first deformation field module in the first iterative network branch and a second deformation field module in the second iterative network branch, respectively;
Inputting the third training result and the gradient corresponding to the third training result into a preset loss function to obtain a third loss function value;
and performing iterative training on the first deformation field module in the first iterative network branch and the second deformation field module in the second iterative network branch by using the third loss function value to obtain the first network branch and the second network branch.
Optionally, the loss function comprises a first loss function, a second loss function, and a third loss function;
The first loss function is used for amplifying the similarity between a first training sub-result and a second training sub-result in the first training result and reducing the difference between the first training sub-result and the second training sub-result; the first training sub-result is a feature sequence output by the first encoder in the first initial network branch; the second training sub-result is a feature sequence output by the second encoder in the second initial network branch;
The second loss function is used for reducing appearance differences among the sample reference image data, the sample magnetic resonance image data and the second training result; the second training result comprises a registration image corresponding to the sample magnetic resonance image data and a registration image corresponding to the sample reference image data;
The third loss function is used for reducing the difference of adjacent positions in the third training result; the third training result includes a deformation field output by the first deformation field module in the first iterative network branch and a deformation field output by the second deformation field module in the second iterative network branch.
Optionally, the spatial transformation network layer includes a grid generator and a sampler;
The grid generator is used for dividing the first image data into a plurality of grid cells, and determining the position of each grid cell in the second image data according to the first deformation field; dividing the second image data into a plurality of grid cells, and determining the position of each grid cell in the first image data according to the second deformation field;
The sampler is used for carrying out linear interpolation on a plurality of grid cells corresponding to the first image data to generate a first registration image corresponding to the first image data; and linearly interpolating the grid cells corresponding to the segmentation masks in the second image data, generating the segmentation masks corresponding to the first image data, and applying the segmentation masks to the first image data to obtain segmented images.
Optionally, performing linear interpolation on the plurality of grid cells corresponding to the first image data includes: within the plurality of grid cells, linearly interpolating the values of the eight voxels adjacent to any voxel; each grid cell includes at least one voxel.
Another aspect of the present invention provides an image segmentation apparatus, comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the apparatus implements the steps of the image segmentation method described above.
Another aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements the steps of the above-described image segmentation method.
According to the image segmentation method, device and storage medium, the first image data and the second image data are input into the image segmentation model to obtain segmented images, which addresses the poor multi-target segmentation performance of current mainstream deep learning methods on complex images. First, the first image data to be registered and the second image data serving as a reference image are input as a registration pair into the first network branch and the second network branch, the feature maps of the two image data are extracted and integrated, and the output deformation fields are then computed. Finally, the segmentation mask corresponding to at least one target region in the second image data is aligned to the spatial coordinate system of the first image data through the spatial transformation network, the segmentation mask corresponding to at least one target region in the first image data is obtained in an unsupervised manner, and the segmentation mask is applied to the first image data. In this way, a plurality of target regions of interest can be segmented from the first image data, improving the effect of multi-target segmentation.
Further, by combining image-level registration with feature-level contrastive learning, embedding the contrastive learning into an unsupervised registration-segmentation architecture, and using a labeled reference data set and visual results to evaluate the segmentation accuracy and generalization capability of the model, introducing contrastive learning can greatly improve the performance of the unsupervised registration-segmentation system while narrowing the performance gap between supervised and unsupervised segmentation.
Further, a cross-correlation loss replaces the mean square error loss commonly used in existing medical image segmentation methods; serving as the unsupervised reconstruction loss function, it improves unsupervised segmentation performance.
Furthermore, the model is trained with a reference image, so training of the image segmentation model can be completed without a labeled data set. This saves a great amount of time and cost, avoids the need to acquire high-quality annotated data, and reduces the difficulty of training the image segmentation model.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate and together with the description serve to explain the application. In the drawings:
FIG. 1 is a flowchart of an image segmentation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image segmentation model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a spatial transformation network layer according to an embodiment of the present invention;
Fig. 4 is a flowchart of an image segmentation model training method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
The image segmentation method provided by the application is described below.
Optionally, the execution subject of the image segmentation method provided by the application is an electronic device; the electronic device may be a terminal such as a computer, a mobile phone, a tablet computer or a camera, or may be a server, and the implementation of the electronic device is not limited in this embodiment.
The present embodiment provides an image segmentation method which, as shown in fig. 1, includes at least steps S101 to S103:
Step S101, acquiring first image data and second image data, wherein the first image data refers to image data obtained through magnetic resonance imaging, and the second image data refers to reference image data corresponding to the first image data; the second image data includes a segmentation mask for at least one target region in the second image data. Wherein the target region includes, but is not limited to, the hippocampal region, lateral ventricle region, caudate nucleus region, or the like.
In this embodiment, the first image data is an image file acquired by a magnetic resonance imaging (MRI) technique, including, but not limited to, an image file in NIfTI format or an image file in MINC format.
Magnetic resonance imaging is non-invasive and provides relatively high-resolution images, which has made it a first-choice method for exploring changes in the internal structure of the brain, with extremely important clinical value for early identification and prognosis evaluation of neurological and brain-related diseases. Common brain diseases are often accompanied by morphological changes in local or multiple brain structures as they progress; for example, the hippocampus, temporal lobes and frontal lobes of Alzheimer's disease (AD) patients atrophy to a greater extent than in normally aging elderly people. Structural magnetic resonance imaging is commonly used clinically to observe tissue structures within the brain. Thus, segmenting these regions of interest (e.g., the hippocampus) from magnetic resonance images and observing them quantitatively helps physicians with clinical diagnosis.
The second image data is image data of the same region magnetic resonance imaging as the first image data. The second image data includes a segmentation mask therein.
The second image data and the first image data may be image data of the same region magnetic resonance imaging of the same target, or may be image data of the same region magnetic resonance imaging of different targets.
For example: in the case that the target includes a target a and a target B, the first image data may be image data obtained by brain magnetic resonance imaging of the target a, and the second image data may be image data obtained by brain magnetic resonance imaging of the target a; or the first image data may be image data obtained by brain magnetic resonance imaging of the target a, and the second image data may be image data obtained by brain magnetic resonance imaging of the target B.
In this embodiment, the number of second image data items corresponding to different portions is one; in actual implementations, this number may be adjusted to at least two according to the actual situation.
Step S102, obtaining a pre-trained image segmentation model, wherein the image segmentation model comprises a first network branch, a second network branch and a space transformation network layer; the output of the first network branch and the output of the second network branch are input into a spatial transformation network layer; the first network branch is used for generating a first deformation field corresponding to the first image data; the second network branch is used for generating a second deformation field corresponding to the second image data; the deformation direction of the first deformation field is opposite to that of the second deformation field; the spatial transformation network layer is used for generating a segmentation mask corresponding to the first image data based on the first image data, the second image data, the first deformation field and the second deformation field.
Referring to the image segmentation model shown in fig. 2, the input of the first network branch 21 is the first image data, and the first network branch 21 generates the first deformation field corresponding to the first image data based on it; the input of the second network branch 22 is the second image data, and the second network branch 22 generates the second deformation field corresponding to the second image data based on it. The spatial transformation network layer 23 generates the segmentation mask corresponding to the first image data using the first image data, the second image data, the first deformation field and the second deformation field.
As shown in fig. 2, the first network branch 21 comprises a first encoder, a decoder and a first deformation field module; the second network branch 22 includes a second encoder, the decoder and a second deformation field module. The first encoder is used for generating a first feature map corresponding to the first image data; the second encoder is used for generating a second feature map corresponding to the second image data; the decoder is used for generating a third feature map based on the first feature map and the second feature map; the first deformation field module is used for generating the first deformation field based on the third feature map, the first image data and the second image data; and the second deformation field module is used for generating the second deformation field based on the third feature map, the first image data and the second image data.
In this embodiment, the weight of the first encoder is the same as the weight of the second encoder. The first encoder and the second encoder each comprise a three-dimensional convolutional neural network (3D CNN).
The first encoder and the second encoder are autoencoders (AE), which are one type of generative model; in actual implementations they may be replaced by other generative models, such as a variational autoencoder (VAE) or a generative adversarial network (GAN).
The first image data and the second image data are input as a registration pair into two three-dimensional convolutional neural networks sharing the same weights, generating a first feature map corresponding to the first image data and a second feature map corresponding to the second image data. The resolution of the first feature map and the second feature map is the same as the resolution of the input images. The first feature map and the second feature map are integrated by the decoder.
Specifically, the first feature map and the second feature map are concatenated and then input into the decoder, which recursively uses feature maps of high-level semantic information (low resolution) to recover feature maps carrying low-level detail information (high resolution) via skip connections, until a third feature map with the same resolution as the first image data and the second image data is obtained.
The decoder uses the high-level semantic information stored in the encoder stage and combines it with other lower-level feature maps to reconstruct the detail information of the input data. This structure helps the three-dimensional convolutional neural network learn and retain the detailed characteristics of the input image data, thereby improving the performance of the image segmentation model on the segmentation task.
In this embodiment, the first feature map corresponding to the first image data and the second feature map corresponding to the second image data yield the third feature map after passing through the decoder. The third feature map is input into a deformation field calculation module, which calculates target transformation parameters. Applying the target transformation parameters to the first image data yields the first deformation field; applying the target transformation parameters to the second image data yields the second deformation field. The deformation field calculation module is a localization network (Localization net) that receives the input feature map and regresses the target transformation parameters φ.
For example: taking the first image data as image x and the second image data as image y, image x is input into the first encoder, which outputs feature map A; image y is input into the second encoder, which outputs feature map B; feature map A and feature map B are concatenated to obtain feature map AB; feature map AB is input into the decoder, which outputs feature map C; feature map C is input into the deformation field calculation module to obtain the target transformation parameters φ; and the target transformation parameters φ are applied to image x and image y respectively to obtain the deformation field φ_xy and the deformation field φ_yx.
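To make this data flow concrete, the following is a minimal PyTorch sketch of the two-branch structure. The layer sizes, the two-level downsampling depth, the omission of skip connections, and the use of two separate displacement heads (rather than one set of parameters applied to both images) are illustrative assumptions of this sketch, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class RegistrationBranches(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # Shared-weight 3D CNN encoder: applying the same module to both
        # inputs makes the first and second encoders identical by construction.
        self.enc = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Decoder restoring the input resolution (skip connections omitted here).
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=4, mode='trilinear', align_corners=False),
            nn.Conv3d(2 * ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # Deformation field modules (localization nets) regressing a per-voxel
        # 3D displacement for each direction.
        self.loc_xy = nn.Conv3d(ch, 3, 3, padding=1)
        self.loc_yx = nn.Conv3d(ch, 3, 3, padding=1)

    def forward(self, x, y):
        fa = self.enc(x)                            # feature map A
        fb = self.enc(y)                            # feature map B
        fc = self.dec(torch.cat([fa, fb], dim=1))   # integrated feature map C
        phi_xy = self.loc_xy(fc)                    # first deformation field
        phi_yx = self.loc_yx(fc)                    # second deformation field
        return phi_xy, phi_yx
```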
The deformation field can be seen as a non-rigid deformation of the image, wherein each voxel corresponds to a new location. Through the application of deformation fields, voxel points can be mapped from one image to the corresponding position of the other image, so that the registration or deformation of the images is realized.
The first deformation field is used for deforming the first image data, warping it toward the second image data, thereby registering the first image data and obtaining a registered image corresponding to the first image data. The first image data is aligned with the second image data so that they match optimally in space. Correspondingly, the second deformation field is used for deforming the second image data to obtain a registered image corresponding to the second image data.
In this embodiment, the registered image corresponding to the second image data is used during training to measure the difference from the registered image corresponding to the first image data, and the deformation field calculation module is optimized by minimizing the difference between the two registered images so as to learn the optimal parameter values.
In this embodiment, as shown in fig. 2, the deformation field calculation module includes a first deformation field calculation module and a second deformation field calculation module, which implement the same function. Through the deformation field calculation module, deformation fields in two directions, including the first deformation field and the second deformation field, can be obtained; these play an important role in tasks such as image registration and image reconstruction, enabling accurate matching and deformation of images.
After the deformation field calculation module outputs the first deformation field and the second deformation field, they are input to the spatial transformation network layer 23. As shown in fig. 3, the spatial transformation network layer 23 includes a grid generator (Grid generator) and a sampler (Sampler). The grid generator is used to apply the deformation fields to the input image. The sampler serves as an interpolator to construct the final output image.
The grid generator is used for dividing the first image data into a plurality of grid cells, and determining the position of each grid cell in the second image data according to the first deformation field; the second image data is segmented into a number of grid cells, and the position of each grid cell in the first image data is determined from the second deformation field.
The sampler is used for carrying out linear interpolation on a plurality of grid cells corresponding to the first image data to generate a first registration image corresponding to the first image data; and linearly interpolating the grid cells corresponding to the segmentation masks in the second image data, generating the segmentation masks corresponding to the first image data, and applying the segmentation masks to the first image data to obtain segmented images.
Performing linear interpolation on the plurality of grid cells corresponding to the first image data includes: within the plurality of grid cells, linearly interpolating the values of the eight voxels adjacent to any voxel, where each grid cell includes at least one voxel.
Accordingly, linearly interpolating the grid cells corresponding to the segmentation mask in the second image data means linearly interpolating the values of the eight voxels adjacent to any voxel within the plurality of grid cells corresponding to the segmentation mask.
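As a sketch, the grid generator and sampler can be realized with PyTorch's F.grid_sample, which performs exactly this kind of neighbor interpolation; the voxel-displacement convention, the channel ordering, and the normalization below are assumptions of this example, not details taken from the patent text.

```python
import torch
import torch.nn.functional as F

def warp(volume, flow):
    """volume: (N, C, D, H, W); flow: per-voxel displacement (N, 3, D, H, W),
    channels ordered (dz, dy, dx) in voxel units (an assumed convention)."""
    n, _, d, h, w = flow.shape
    # Grid generator: identity grid plus the deformation field, i.e. p' = p + u(p).
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack([xx, yy, zz]).float().to(flow.device)  # (3, D, H, W), (x, y, z)
    new_locs = grid.unsqueeze(0) + flow[:, [2, 1, 0]]         # reorder to (dx, dy, dz)
    # Sampler: normalize to [-1, 1] as grid_sample expects, then interpolate
    # linearly over the neighboring voxels.
    for i, size in enumerate((w, h, d)):
        new_locs[:, i] = 2.0 * new_locs[:, i] / (size - 1) - 1.0
    new_locs = new_locs.permute(0, 2, 3, 4, 1)                # (N, D, H, W, 3)
    return F.grid_sample(volume, new_locs, mode='bilinear', align_corners=True)
```

The same sampler is applied to the segmentation mask of the second image data; for one-hot or binary masks, the linearly interpolated values can be re-binarized afterwards (an implementation choice not specified in the text).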
In this embodiment, in order to support standard gradient-based optimization, the registered images of the first image data and the second image data are computed by differentiable operations based on the spatial transformation network layer.
For each voxel p, the present embodiment calculates a (sub-voxel) position in the first image data, represented by:

$$ p' = p + u(p) $$

where p is a voxel in the first image data; u(p) represents the first deformation field; and p' represents the position of voxel p after the first deformation field is applied to the first image data.
Since image values are defined only at integer locations, the image segmentation model linearly interpolates the values of the eight neighboring voxels, represented by:

$$ (m \circ \phi)(p) = \sum_{q \in Z(p')} m(q) \prod_{d \in \{x, y, z\}} \left( 1 - \left| p'_d - q_d \right| \right) $$

where m represents the first image data; p' represents the position of voxel p after the first deformation field is applied to the first image data; Z(p') is the set of voxels neighboring p'; d iterates over the dimensions of the three-dimensional Euclidean space R³; p'_d represents the (x, y, z) coordinate values of voxel p'; q represents any one of the neighboring voxels in Z(p'); and q_d represents the (x, y, z) coordinate values of voxel q.
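A direct, unoptimized rendering of this interpolation for a single query point is sketched below; the coordinate convention (array index order) and the absence of boundary handling are simplifications assumed in this example.

```python
import itertools
import numpy as np

def trilinear_sample(m, p_prime):
    """m: 3-D numpy array (the first image data); p_prime: fractional
    coordinate p' = p + u(p), given in array index order."""
    p_prime = np.asarray(p_prime, dtype=float)
    base = np.floor(p_prime).astype(int)
    value = 0.0
    # Z(p'): the eight integer voxels surrounding p'.
    for offset in itertools.product((0, 1), repeat=3):
        q = base + np.array(offset)
        # Product over d in {x, y, z} of (1 - |p'_d - q_d|): the trilinear weight.
        weight = np.prod(1.0 - np.abs(p_prime - q))
        value += weight * m[tuple(q)]
    return value
```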
The introduction of the spatial transformation network layer 23 makes the gradient computable and thus allows for back-propagation of errors during the optimization process.
Step S103, inputting the first image data and the second image data into an image segmentation model to obtain a segmented image.
Specifically, the first image data is input into the first network branch 21 and the second image data is input into the second network branch 22; after computation by the image segmentation model, a segmented image corresponding to the first image data is obtained.
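Putting the pieces together, a hedged end-to-end usage sketch follows, building on the RegistrationBranches and warp helpers assumed in the earlier sketches; the tensor shapes and the final re-binarization are illustrative choices, not values from the patent.

```python
import torch

moving = torch.randn(1, 1, 64, 64, 64)      # first image data (MRI volume)
reference = torch.randn(1, 1, 64, 64, 64)   # second image data (reference)
ref_mask = (torch.rand(1, 1, 64, 64, 64) > 0.5).float()  # its segmentation mask

model = RegistrationBranches()
phi_xy, phi_yx = model(moving, reference)   # the two opposite deformation fields
# Align the reference mask into the first image's coordinate frame, then apply it.
mask = warp(ref_mask, phi_yx)
segmented = moving * (mask > 0.5)
```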
Referring to fig. 4, the present embodiment provides a training method for the image segmentation model, which includes at least steps S401 to S411:
Step S401, acquiring training data, where the training data includes: sample magnetic resonance image data, and sample reference image data corresponding to the sample magnetic resonance image data.
Optionally, acquiring the training data includes: acquiring magnetic resonance image data of the same body part from different target objects as the sample magnetic resonance image data; and determining one piece of magnetic resonance image data from the sample magnetic resonance image data as the sample reference image data.
The sample reference image data may be extracted at random from the sample magnetic resonance image data, or the image data with the highest image quality may be screened out from the sample magnetic resonance image data.
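For illustration, this selection step could look like the following sketch; the function name pick_reference and the optional quality_score callback are hypothetical placeholders of this example, not components named in the text.

```python
import random

def pick_reference(sample_volumes, quality_score=None):
    # Either extract one piece of image data at random, or screen out the one
    # with the highest (hypothetical) image-quality score.
    if quality_score is None:
        return random.choice(sample_volumes)
    return max(sample_volumes, key=quality_score)
```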
Step S402, an initial segmentation model created in advance is acquired.
The initial segmentation model and the image segmentation model have the same model structure; the initial segmentation model includes a first initial network branch and a second initial network branch.
Step S403, inputting the sample magnetic resonance image data into the first initial network branch, inputting the sample reference image data into the second initial network branch, and obtaining the first training results output by the first encoder in the first initial network branch and the second encoder in the second initial network branch, respectively.
Step S404, inputting the first training result into a preset loss function to obtain a first loss function value.
Optionally, the loss function includes a first loss function, a second loss function and a third loss function, wherein the first loss function is a contrastive loss (Contrastive loss), the second loss function is a reconstruction loss (Reconstruction loss), and the third loss function is a smoothing loss (Smooth loss). Specifically, the total loss function can be expressed by the following formula:
$$ L_{total} = L_{recon} + \alpha L_{smooth} + \beta L_{contrast} $$

where $L_{total}$ represents the total loss function; $L_{recon}$ denotes the reconstruction loss function; $L_{smooth}$ represents the smoothing loss function; $L_{contrast}$ represents the contrast loss function; and α and β are hyperparameters that balance $L_{recon}$, $L_{smooth}$ and $L_{contrast}$, with α set to 1 and β set to 0.01.
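In code, this combination is a one-liner; here recon, smooth and contrast stand for the three loss values computed by the functions sketched later in this section.

```python
def total_loss(recon, smooth, contrast, alpha=1.0, beta=0.01):
    # L_total = L_recon + alpha * L_smooth + beta * L_contrast
    return recon + alpha * smooth + beta * contrast
```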
In this embodiment, the first loss function is used to amplify the similarity between the first training sub-result and the second training sub-result in the first training result, and to reduce the difference between the first training sub-result and the second training sub-result.
The first training sub-result is a feature sequence output by the first encoder in the first initial network branch; the second training sub-result is a feature sequence output by the second encoder in the second initial network branch.
In this embodiment, the first loss function formally takes a given set of images, treats the first image data and the second image data as an enhanced image pair, and treats the other images in the set as negative samples. Specifically, the first loss function may be represented by the following formula:

$$ L_{contrast} = -\log \frac{\exp\left( \mathrm{sim}(f(x), f(y)) / \tau \right)}{\sum_{i} \mathbb{1}_{i \neq x} \exp\left( \mathrm{sim}(f(x), f(i)) / \tau \right)} $$

where f(x) denotes the first feature sequence corresponding to the first image data; f(y) denotes the second feature sequence corresponding to the second image data; f(i) denotes the feature sequence of another image in the given set of images; $\mathbb{1}_{i \neq x} \in \{0, 1\}$ is an indicator that takes the value 1 when i ≠ x; τ is a hyperparameter; and sim(f(x), f(y)) denotes the cosine similarity between f(x) and f(y), which can be expressed by the following formula:

$$ \mathrm{sim}(f(x), f(y)) = \frac{f(x)^{\top} f(y)}{\lVert f(x) \rVert \, \lVert f(y) \rVert} $$

Accordingly, sim(f(x), f(i)) denotes the cosine similarity between f(x) and f(i) and is computed in the same way.
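A sketch of this contrastive term under the definitions above; the batch layout (row 0 holds f(x), row 1 holds f(y), remaining rows are negatives) and the value of τ are assumptions of this example, not values from the patent.

```python
import torch
import torch.nn.functional as F

def contrast_loss(features, tau=0.1):
    """features: (K, dim) feature sequences, f(x) at row 0, f(y) at row 1."""
    f = F.normalize(features, dim=1)    # unit norm, so dot product = cosine sim
    sims = (f[0] @ f.t()) / tau         # sim(f(x), f(i)) / tau for every i
    pos = sims[1]                       # the positive pair (x, y)
    # Denominator: sum over all i != x, which includes the positive itself.
    return -(pos - torch.logsumexp(sims[1:], dim=0))
```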
In this embodiment, the first encoder is connected to the decoder through a projection layer, and the second encoder is connected to the decoder through another projection layer. The two projection layers are used to compute the contrast loss so as to maximize consistency between the image and its enhanced view, thereby improving unsupervised performance.
The second loss function is used for reducing appearance differences among the sample reference image data, the sample magnetic resonance image data and the second training result; accordingly, the second training result comprises a registration image corresponding to the sample magnetic resonance image data and a registration image corresponding to the sample reference image data.
For the second loss function, in this embodiment, in order to enhance the unsupervised performance, a cross-correlation function (cross-correlation loss) is adopted; this function is better suited to registration problems across different scanners and different data sets, and is more robust. Let $\bar{f}(p)$ and $\overline{(m \circ \phi)}(p)$ denote images obtained by averaging the gray values of the voxels in a local region around each voxel. The local cross-correlation of f and $m \circ \phi$ is written as:

$$ CC(f, m \circ \phi) = \sum_{p \in \Omega} \frac{\left( \sum_{p_i} \left( f(p_i) - \bar{f}(p) \right) \left( (m \circ \phi)(p_i) - \overline{(m \circ \phi)}(p) \right) \right)^2}{\left( \sum_{p_i} \left( f(p_i) - \bar{f}(p) \right)^2 \right) \left( \sum_{p_i} \left( (m \circ \phi)(p_i) - \overline{(m \circ \phi)}(p) \right)^2 \right)} $$

where m represents the first image data; f represents the second image data (i.e., the reference image); $m \circ \phi$ represents the result of applying a deformation field φ to the first image data; $\bar{f}(p)$ and $\overline{(m \circ \phi)}(p)$ represent images obtained by averaging the gray values of the voxels in a local region around each voxel; p represents a voxel; $p_i$ denotes the i-th voxel in the local region around p (where i = 1, 2, … n), n being the number of voxels in that region; and Ω ∈ R³ represents the three-dimensional Euclidean space in which the voxels are located.

The higher the value of the local cross-correlation (CC), the better the alignment. Based on the calculation formula of CC, the second loss function is expressed as:

$$ L_{recon} = -CC(f, m \circ \phi) $$

where $CC(f, m \circ \phi)$ represents the local cross-correlation of f and $m \circ \phi$; f represents the second image data (i.e., the reference image); and $m \circ \phi$ indicates that a deformation field φ is applied to the first image data.
In addition, in this embodiment the cross-correlation loss is used as the reconstruction loss of the image segmentation model to achieve the best effect; in actual implementations, other commonly used reconstruction loss functions may be chosen instead, such as the mean square error loss or the cross-entropy loss.
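A sketch of the local cross-correlation reconstruction loss follows; the 9×9×9 window and the stabilizing epsilon are conventional choices assumed here, not values stated in the text.

```python
import torch
import torch.nn.functional as F

def ncc_loss(f, m_warped, win=9, eps=1e-5):
    """f: reference volume; m_warped: m composed with phi; both (N, 1, D, H, W)."""
    ones = torch.ones(1, 1, win, win, win, device=f.device)
    pad = win // 2
    conv = lambda x: F.conv3d(x, ones, padding=pad)   # windowed sums
    n = win ** 3
    f_sum, m_sum = conv(f), conv(m_warped)
    f2_sum, m2_sum = conv(f * f), conv(m_warped * m_warped)
    fm_sum = conv(f * m_warped)
    cross = fm_sum - f_sum * m_sum / n                # windowed covariance
    f_var = f2_sum - f_sum * f_sum / n
    m_var = m2_sum - m_sum * m_sum / n
    cc = cross * cross / (f_var * m_var + eps)        # local CC per voxel
    return -cc.mean()                                 # higher CC => lower loss
```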
Step S405, performing iterative training on the first encoder in the first initial network branch and the second encoder in the second initial network branch by using the first loss function value, to obtain a first intermediate network branch and a second intermediate network branch.
Step S406, inputting the sample magnetic resonance image data into the first intermediate network branch, inputting the sample reference image data into the second intermediate network branch, and obtaining the second training results output by the first deformation field module in the first intermediate network branch, the second deformation field module in the second intermediate network branch and the spatial transformation network layer, respectively.
Step S407, inputting the second training result, the sample reference image data and the sample magnetic resonance image data into a preset loss function to obtain a second loss function value.
And step S408, performing iterative training on the first deformation field module in the first intermediate network branch and the second deformation field module in the second intermediate network branch by using the second loss function value to obtain a first iterative network branch and a second iterative network branch.
Step S409, inputting the sample magnetic resonance image data into the first iterative network branch, inputting the sample reference image data into the second iterative network branch, and obtaining the third training results output by the first deformation field module in the first iterative network branch and the second deformation field module in the second iterative network branch, respectively.
In this embodiment, the third loss function is used to reduce the difference between adjacent positions in the third training result; the third training result includes a deformation field output by the first deformation field module in the first iterative network branch and a deformation field output by the second deformation field module in the second iterative network branch.
Minimizing the reconstruction loss encourages $m \circ \phi$ to align with f, but may produce a physically unrealistic, non-smooth deformation field φ. Therefore, a smoothing loss function is introduced to act on the deformation field, ensuring good registration. The third loss function is represented by:

$$ L_{smooth} = \sum_{p \in \Omega} \lVert \nabla \phi(p) \rVert^{2} $$

where p represents a voxel; φ represents a deformation field; $\nabla \phi(p)$ represents the gradient of the deformation field at voxel p; and Ω ∈ R³ represents the three-dimensional Euclidean space in which the voxels are located.
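A finite-difference sketch of this smoothness term (forward differences along each axis, averaged rather than summed — a common normalization assumed here):

```python
import torch

def smooth_loss(phi):
    """phi: deformation field (N, 3, D, H, W)."""
    dz = phi[:, :, 1:, :, :] - phi[:, :, :-1, :, :]   # gradient along depth
    dy = phi[:, :, :, 1:, :] - phi[:, :, :, :-1, :]   # gradient along height
    dx = phi[:, :, :, :, 1:] - phi[:, :, :, :, :-1]   # gradient along width
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()
```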
Step S410, inputting the third training result and the gradient corresponding to the third training result into a preset loss function to obtain a third loss function value.
In step S411, the first deformation field module in the first iterative network branch and the second deformation field module in the second iterative network branch are iteratively trained by using the third loss function value, so as to obtain the first network branch and the second network branch.
In this embodiment, the image segmentation model embeds a contrastive learning mechanism into the registration system to extract feature maps with richer information, thereby improving unsupervised segmentation performance. Specifically, this embodiment follows the four standard components of a contrastive learning process. The first component is the input registration pair, in which the unregistered first image data and the reference second image data are treated as images sampled from different enhanced views. The second component is the pair of CNN encoders with shared weights, which ensures that the CNN-based encoders extract consistent CNN features from the unregistered first image data and the reference second image data. The third component adopts a fully connected layer as a projection layer that maps the CNN features into a latent space. Finally, the fourth component is the contrast loss, defined according to the standard contrastive learning formulation.
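The projection layer mentioned above could be sketched as follows; the global pooling, the 16-channel input (matching the earlier architecture sketch) and the 128-dimensional latent space are assumptions of this example.

```python
import torch.nn as nn

# Maps CNN features to the latent space in which the contrast loss is computed.
projection = nn.Sequential(
    nn.AdaptiveAvgPool3d(1),   # global average pooling over D, H, W
    nn.Flatten(),              # (N, ch)
    nn.Linear(16, 128),        # fully connected projection layer
)
```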
In summary, in the image segmentation method provided by this embodiment, the first image data and the second image data are input into the image segmentation model to obtain a segmented image, which addresses the poor multi-target segmentation performance of current mainstream deep learning methods on complex images. First, the first image data to be registered and the second image data serving as a reference image are input as a registration pair into the first network branch and the second network branch, the feature maps of the two image data are extracted and integrated, and the output deformation fields are then computed. Finally, the segmentation mask corresponding to at least one target region in the second image data is aligned to the spatial coordinate system of the first image data through the spatial transformation network, the segmentation mask corresponding to at least one target region in the first image data is obtained in an unsupervised manner, and the segmentation mask is applied to the first image data. In this way, a plurality of target regions of interest can be segmented from the first image data, improving the effect of multi-target segmentation.
Further, by combining image-level registration with feature-level contrastive learning, embedding the contrastive learning into an unsupervised registration-segmentation architecture, and using a labeled reference data set and visual results to evaluate the segmentation accuracy and generalization capability of the model, introducing contrastive learning can greatly improve the performance of the unsupervised registration-segmentation system while narrowing the performance gap between supervised and unsupervised segmentation.
Further, a cross-correlation loss replaces the mean square error loss commonly used in existing medical image segmentation methods; serving as the unsupervised reconstruction loss function, it improves unsupervised segmentation performance.
Furthermore, the model is trained with a reference image, so training of the image segmentation model can be completed without a labeled data set. This saves a great amount of time and cost, avoids the need to acquire high-quality annotated data, and reduces the difficulty of training the image segmentation model.
Correspondingly, the invention also provides an image segmentation apparatus comprising a computer device with a processor and a memory, the memory storing computer instructions and the processor being configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the apparatus implements the steps of the image segmentation method described above.
The embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the steps of the aforementioned image segmentation method. The computer-readable storage medium may be a tangible storage medium such as a random access memory (RAM), a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software or a combination of both. Whether a particular implementation is hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in or a function card. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples; however, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art may make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image segmentation method, characterized in that the method comprises the steps of:
Acquiring first image data and second image data, wherein the first image data refers to image data obtained through magnetic resonance imaging, and the second image data refers to reference image data corresponding to the first image data; the second image data comprises a segmentation mask of at least one target region in the second image data;
Acquiring a pre-trained image segmentation model, wherein the image segmentation model comprises a first network branch, a second network branch and a spatial transformation network layer; the output of the first network branch and the output of the second network branch are input to the spatial transformation network layer; the first network branch is used for generating a first deformation field corresponding to the first image data; the second network branch is used for generating a second deformation field corresponding to the second image data; the deformation direction of the first deformation field is opposite to that of the second deformation field; the spatial transformation network layer is used for generating a segmentation mask corresponding to the first image data based on the first image data, the second image data, the first deformation field and the second deformation field;
and inputting the first image data and the second image data into the image segmentation model to obtain a segmented image.
2. The image segmentation method as set forth in claim 1, wherein the first network branch comprises a first encoder, a decoder, and a first deformation field module; the second network branch includes a second encoder, the decoder, and a second deformation field module;
The first encoder is used for generating a first feature map corresponding to the first image data; the second encoder is used for generating a second feature map corresponding to the second image data;
the decoder layer is configured to generate a third feature map based on the first feature map and the second feature map;
the first deformation field module is configured to generate the first deformation field based on the third feature map, the first image data, and the second image data;
the second deformation field module is configured to generate the second deformation field based on the third feature map, the first image data, and the second image data.
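
The wiring of claim 2 could look as follows; again a hedged sketch, with TwoBranchBackbone, the channel widths, and the one-layer encoder/decoder stand-ins chosen purely for illustration.

```python
import torch
import torch.nn as nn

class TwoBranchBackbone(nn.Module):
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        def enc():
            # One-layer stand-in for a multi-scale encoder.
            return nn.Sequential(nn.Conv3d(in_ch, feat, 3, padding=1),
                                 nn.ReLU(inplace=True))
        self.encoder1 = enc()   # first branch: MR image
        self.encoder2 = enc()   # second branch: reference image
        # A single decoder shared by both branches ("the decoder" in claim 2).
        self.decoder = nn.Sequential(nn.Conv3d(2 * feat, feat, 3, padding=1),
                                     nn.ReLU(inplace=True))
        # Each deformation-field module sees the decoded features plus both
        # raw images, per the claim.
        self.flow_head1 = nn.Conv3d(feat + 2 * in_ch, 3, 3, padding=1)
        self.flow_head2 = nn.Conv3d(feat + 2 * in_ch, 3, 3, padding=1)

    def forward(self, mr, ref):
        f1 = self.encoder1(mr)                          # first feature map
        f2 = self.encoder2(ref)                         # second feature map
        f3 = self.decoder(torch.cat([f1, f2], dim=1))   # third feature map
        ctx = torch.cat([f3, mr, ref], dim=1)
        return self.flow_head1(ctx), self.flow_head2(ctx)
```
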
3. The image segmentation method as set forth in claim 2, wherein the weight of the first encoder is the same as the weight of the second encoder.
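
For example (an illustrative snippet with assumed layer choices, not the claimed method), weight identity between the two encoders can be obtained either by copying parameters once or by sharing a single module:

```python
import torch.nn as nn

enc1 = nn.Conv3d(1, 16, 3, padding=1)    # first-branch encoder (stand-in)
enc2 = nn.Conv3d(1, 16, 3, padding=1)    # second-branch encoder (stand-in)
enc2.load_state_dict(enc1.state_dict())  # copy the weights once, or...
enc2 = enc1                              # ...share one module so they stay tied
```
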
4. The image segmentation method according to claim 2, characterized in that the decoder is configured to recursively recover feature maps carrying low-level detail information from feature maps carrying high-level semantic information through skip (layer-jump) connections, until the third feature map, of the same resolution as the first image data and the second image data, is obtained.
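
One plausible reading of this skip-connection decoding, sketched with assumed names and channel sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipDecoderStage(nn.Module):
    # One decoder stage: upsample the coarse (high-level semantic) features,
    # concatenate the encoder's same-resolution (low-level detail) features,
    # and fuse them with a convolution.
    def __init__(self, coarse_ch, skip_ch, out_ch):
        super().__init__()
        self.fuse = nn.Conv3d(coarse_ch + skip_ch, out_ch, 3, padding=1)

    def forward(self, coarse, skip):
        up = F.interpolate(coarse, size=skip.shape[2:], mode="trilinear",
                           align_corners=False)
        return F.relu(self.fuse(torch.cat([up, skip], dim=1)))

# Applying such stages recursively over the encoder pyramid, deepest level
# first, yields the third feature map at full input resolution.
```
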
5. The image segmentation method as set forth in claim 2, wherein the acquiring the pre-trained image segmentation model comprises:
Acquiring training data, the training data comprising: sample magnetic resonance image data and sample reference image data corresponding to the sample magnetic resonance image data;
Acquiring a pre-established initial segmentation model; the initial segmentation model has the same model structure as the image segmentation model; the initial segmentation model comprises a first initial network branch and a second initial network branch;
Inputting the sample magnetic resonance image data into the first initial network branch, inputting the sample reference image data into the second initial network branch, and obtaining a first training result output by the first encoder in the first initial network branch and the second encoder in the second initial network branch respectively;
inputting the first training result into a preset loss function to obtain a first loss function value;
performing iterative training on the first encoder in the first initial network branch and the second encoder in the second initial network branch by using the first loss function value, to obtain a first intermediate network branch and a second intermediate network branch;
Inputting the sample magnetic resonance image data into the first intermediate network branch, inputting the sample reference image data into the second intermediate network branch, and obtaining a second training result output by the first deformation field module in the first intermediate network branch, the second deformation field module in the second intermediate network branch, and the spatial transformation network layer respectively;
Inputting the second training result, the sample reference image data and the sample magnetic resonance image data into the preset loss function to obtain a second loss function value;
Performing iterative training on the first deformation field module in the first intermediate network branch and the second deformation field module in the second intermediate network branch by using the second loss function value, to obtain a first iteration network branch and a second iteration network branch;
Inputting the sample magnetic resonance image data into the first iteration network branch, inputting the sample reference image data into the second iteration network branch, and obtaining a third training result output by the first deformation field module in the first iteration network branch and the second deformation field module in the second iteration network branch respectively;
Inputting the third training result and the gradient corresponding to the third training result into the preset loss function to obtain a third loss function value;
And performing iterative training on the first deformation field module in the first iteration network branch and the second deformation field module in the second iteration network branch by using the third loss function value, to obtain the first network branch and the second network branch.
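
The staged schedule of claim 5 might be organized as below. This is a schematic sketch only: model.encoder1/encoder2, model.flow_head1/flow_head2, and the helpers model.register and model.flows are assumed accessors, not the patent's API, and the three loss callables are the ones discussed in claim 6.

```python
import itertools
import torch

def train_staged(model, loader, loss1, loss2, loss3, epochs=(10, 10, 10)):
    # Stage 1: train the two encoders with the feature-similarity loss.
    opt = torch.optim.Adam(itertools.chain(model.encoder1.parameters(),
                                           model.encoder2.parameters()), 1e-4)
    for _ in range(epochs[0]):
        for mr, ref in loader:
            loss = loss1(model.encoder1(mr), model.encoder2(ref))
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: train the deformation-field modules (through the spatial
    # transformation layer) with the appearance loss on the registered images.
    opt = torch.optim.Adam(itertools.chain(model.flow_head1.parameters(),
                                           model.flow_head2.parameters()), 1e-4)
    for _ in range(epochs[1]):
        for mr, ref in loader:
            warped_mr, warped_ref = model.register(mr, ref)
            loss = loss2(warped_mr, ref) + loss2(warped_ref, mr)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 3: refine the same modules with the smoothness loss on the
    # spatial gradients of both deformation fields.
    for _ in range(epochs[2]):
        for mr, ref in loader:
            flow1, flow2 = model.flows(mr, ref)
            loss = loss3(flow1) + loss3(flow2)
            opt.zero_grad(); loss.backward(); opt.step()
```
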
6. The image segmentation method as set forth in claim 5, wherein the preset loss function comprises a first loss function, a second loss function, and a third loss function;
The first loss function is used for increasing the similarity between a first training sub-result and a second training sub-result in the first training result and reducing the difference between them; the first training sub-result is a feature sequence output by the first encoder in the first initial network branch; the second training sub-result is a feature sequence output by the second encoder in the second initial network branch;
The second loss function is used for reducing appearance differences among the sample reference image data, the sample magnetic resonance image data, and the second training result; the second training result comprises a registration image corresponding to the sample magnetic resonance image data and a registration image corresponding to the sample reference image data;
the third loss function is used for reducing the difference between adjacent positions in the third training result; the third training result comprises the deformation field output by the first deformation field module in the first iteration network branch and the deformation field output by the second deformation field module in the second iteration network branch.
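
Claim 6 admits, for example, the following simple instantiations. These are hedged stand-ins: cosine similarity, mean-squared error, and first-order finite differences are common choices, but the patent does not fix these exact forms.

```python
import torch.nn.functional as F

def feature_similarity_loss(feat1, feat2):
    # First loss: pull the two encoders' feature sequences together
    # (1 - cosine similarity, averaged over all spatial positions).
    f1, f2 = feat1.flatten(2), feat2.flatten(2)
    return 1.0 - F.cosine_similarity(f1, f2, dim=1).mean()

def appearance_loss(warped, target):
    # Second loss: appearance difference between a registered image and the
    # image it was registered to (MSE here; NCC is a common alternative).
    return F.mse_loss(warped, target)

def smoothness_loss(flow):
    # Third loss: penalize differences between adjacent positions of a
    # deformation field via first-order finite differences on each axis.
    dz = (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean()
    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx
```
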
7. The image segmentation method as set forth in claim 1, wherein the spatial transformation network layer includes a grid generator and a sampler;
The grid generator is used for dividing the first image data into a plurality of grid cells, and determining the position of each grid cell in the second image data according to the first deformation field; dividing the second image data into a plurality of grid cells, and determining the position of each grid cell in the first image data according to the second deformation field;
The sampler is used for carrying out linear interpolation on the plurality of grid cells corresponding to the first image data to generate a first registration image corresponding to the first image data; and for linearly interpolating the grid cells corresponding to the segmentation mask in the second image data, generating the segmentation mask corresponding to the first image data, and applying that segmentation mask to the first image data to obtain the segmented image.
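
A sketch of the grid generator and sampler using torch.nn.functional.grid_sample, whose "bilinear" mode is trilinear for volumes and thus matches the linear interpolation of claim 8; the voxel-displacement convention and the function names are assumptions.

```python
import torch
import torch.nn.functional as F

def grid_generator(flow):
    # Displace an identity voxel grid by `flow` (B, 3, D, H, W; x/y/z offsets
    # in voxels) and normalize to the [-1, 1] range grid_sample expects.
    B, _, D, H, W = flow.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H),
                                torch.arange(W), indexing="ij")
    base = torch.stack([xx, yy, zz]).float().to(flow.device).unsqueeze(0)
    coords = base + flow
    norm = [2 * coords[:, i] / (s - 1) - 1 for i, s in enumerate([W, H, D])]
    return torch.stack(norm, dim=-1)                 # (B, D, H, W, 3)

def sampler(vol, grid):
    # Trilinear resampling of `vol` at the generated grid positions.
    return F.grid_sample(vol, grid, mode="bilinear", align_corners=True)

# reg_mr  = sampler(mr, grid_generator(flow_fwd))        # first registration image
# mr_mask = sampler(ref_mask, grid_generator(flow_bwd))  # mask for the MR image
```
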
8. The image segmentation method as set forth in claim 7, wherein the linearly interpolating the plurality of grid cells corresponding to the first image data comprises: within the grid cells, linearly interpolating the values of the eight voxels adjacent to any voxel; each grid cell includes at least one voxel.
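
Spelled out for a single voxel, the eight-neighbour linear (trilinear) interpolation of claim 8 reduces to the following illustrative helper; interior positions are assumed, with no boundary handling.

```python
import torch

def trilinear_at(vol, x, y, z):
    # Value of vol[D, H, W] at fractional (x, y, z): a weighted sum of the
    # eight adjacent voxels, each weighted by its trilinear overlap.
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    val = 0.0
    for xi, wx in ((x0, 1 - fx), (x0 + 1, fx)):
        for yi, wy in ((y0, 1 - fy), (y0 + 1, fy)):
            for zi, wz in ((z0, 1 - fz), (z0 + 1, fz)):
                val += wx * wy * wz * float(vol[zi, yi, xi])
    return val
```
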
9. An image segmentation apparatus comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory, the apparatus implementing the steps of the image segmentation method according to any one of claims 1 to 8 when the computer instructions are executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the image segmentation method as claimed in any one of claims 1 to 8.
CN202311745371.8A 2023-12-18 2023-12-18 Image segmentation method, device and storage medium Pending CN117911432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311745371.8A CN117911432A (en) 2023-12-18 2023-12-18 Image segmentation method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311745371.8A CN117911432A (en) 2023-12-18 2023-12-18 Image segmentation method, device and storage medium

Publications (1)

Publication Number Publication Date
CN117911432A true CN117911432A (en) 2024-04-19

Family

ID=90695819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311745371.8A Pending CN117911432A (en) 2023-12-18 2023-12-18 Image segmentation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117911432A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118115838A * 2024-04-29 2024-05-31 Beijing University of Posts and Telecommunications Medical image segmentation model training method, segmentation method, device and program product
CN118115838B (en) * 2024-04-29 2024-07-30 Beijing University of Posts and Telecommunications Medical image segmentation model training method, segmentation method, device and program product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283162A * 2021-12-27 2022-04-05 Hebei University of Technology Real scene image segmentation method based on contrastive self-supervised learning
WO2023207266A1 * 2022-04-29 2023-11-02 Tencent Technology (Shenzhen) Co., Ltd. Image registration method, apparatus and device, and storage medium
CN115082293A * 2022-06-10 2022-09-20 Nanjing University of Science and Technology Image registration method based on Swin Transformer and CNN dual-branch coupling
CN115239637A * 2022-06-28 2022-10-25 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Automatic segmentation method, system, terminal and storage medium for CT pancreatic tumors

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIYUN TU et al.: "Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility", IEEE TRANSACTIONS ON SIGNAL PROCESSING, vol. 70, 19 December 2022 (2022-12-19), pages 5954, XP011931614, DOI: 10.1109/TSP.2022.3230329 *
TIAN Juanxiu; LIU Guocai; GU Shanshan; JU Zhongjian; LIU Jinguang; GU Dongdong: "Research and Challenges of Deep Learning Methods in Medical Image Analysis", Acta Automatica Sinica, no. 03, 15 March 2018 (2018-03-15) *

Similar Documents

Publication Publication Date Title
Mahapatra et al. Joint registration and segmentation of xray images using generative adversarial networks
US20200167930A1 (en) A System and Computer-Implemented Method for Segmenting an Image
CN107886508B (en) Differential subtraction method and medical image processing method and system
CN109272510B (en) Method for segmenting tubular structure in three-dimensional medical image
CN111429421B (en) Model generation method, medical image segmentation method, device, equipment and medium
CN109978037A (en) Image processing method, model training method, device and storage medium
Hashimoto et al. Automated segmentation of 2D low-dose CT images of the psoas-major muscle using deep convolutional neural networks
WO2021102644A1 (en) Image enhancement method and apparatus, and terminal device
CN112785632A (en) Cross-modal automatic registration method for DR (digital radiography) and DRR (digital radiography) images in image-guided radiotherapy based on EPID (extended medical imaging)
Iglesias A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI
Cai et al. Accurate weakly supervised deep lesion segmentation on CT scans: Self-paced 3D mask generation from RECIST
CN113610752A (en) Mammary gland image registration method, computer device and storage medium
CN116091466A (en) Image analysis method, computer device, and storage medium
Tong et al. Registration of histopathology images using self supervised fine grained feature maps
CN112950684B (en) Target feature extraction method, device, equipment and medium based on surface registration
CN112686932B (en) Image registration method for medical image, image processing method and medium
Chatterjee et al. A survey on techniques used in medical imaging processing
CN112365512B (en) Method for training image segmentation model, method for image segmentation and device thereof
Dong et al. Hole-filling based on content loss indexed 3D partial convolution network for freehand ultrasound reconstruction
CN108596900B (en) Thyroid-associated ophthalmopathy medical image data processing device and method, computer-readable storage medium and terminal equipment
Gao et al. Consistency based co-segmentation for multi-view cardiac MRI using vision transformer
Lafitte et al. Accelerating multi-modal image registration using a supervoxel-based variational framework
CN117911432A (en) Image segmentation method, device and storage medium
CN114596286A (en) Image segmentation method, system, device and storage medium
CN113963037A (en) Image registration method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination