CN112241955A - Method and device for segmenting broken bones of three-dimensional image, computer equipment and storage medium


Info

Publication number
CN112241955A
CN112241955A
Authority
CN
China
Prior art keywords
segmentation
result
dimensional
dimensional image
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011161212.XA
Other languages
Chinese (zh)
Other versions
CN112241955B (en)
Inventor
洪振厚 (Hong Zhenhou)
王健宗 (Wang Jianzong)
瞿晓阳 (Qu Xiaoyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011161212.XA priority Critical patent/CN112241955B/en
Priority to PCT/CN2020/134546 priority patent/WO2021179702A1/en
Publication of CN112241955A publication Critical patent/CN112241955A/en
Application granted granted Critical
Publication of CN112241955B publication Critical patent/CN112241955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a bone fragment segmentation method and device for three-dimensional images, a computer device, and a storage medium, belonging to the field of intelligent medical treatment. The method takes a three-dimensional image as input; a feature extraction module in a segmentation model extracts features from the three-dimensional image to be segmented to obtain a basic feature map, so that the spatial correlation of tissues within the volume is taken into account. An intermediate module then extracts features from the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and a segmentation module fuses the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, saving segmentation time and improving segmentation precision and efficiency.

Description

Method and device for segmenting broken bones of three-dimensional image, computer equipment and storage medium
Technical Field
The invention relates to the field of intelligent medical treatment, and in particular to a bone fragment segmentation method and device for three-dimensional images, a computer device, and a storage medium.
Background
Fractures are a common injury, and severe fractures often produce bone fragments; if a fragment is overlooked during surgery, the consequences can be serious. Traditional segmentation methods rely mainly on subjective human interpretation of the image to extract specific feature information, such as gray-scale, texture, or symmetry information, and use it to segment the fragments. Such methods achieve acceptable results only for particular images, and even then the segmentation is too coarse and the efficiency is low. The rise of artificial intelligence, and of deep learning in particular, has opened new directions for automatic bone fragment segmentation, and traditional methods are gradually being replaced by machine-learning approaches. The convolutional neural network, a representative of supervised learning, can learn feature representations directly from data: through layer-by-layer feature extraction, it combines simple low-level features such as edges and corners into progressively more abstract high-level features. It has achieved remarkable results in image recognition and is widely applied to medical image processing. However, the current mainstream machine segmentation methods operate on MRI images slice by slice, so they cannot reflect the spatial correlation of tissues and their segmentation efficiency is low.
Disclosure of Invention
To address the inability of existing bone fragment segmentation methods to take the spatial correlation of tissues into account, a bone fragment segmentation method and device for three-dimensional images, a computer device, and a storage medium that do consider this spatial correlation are provided.
In order to achieve the above object, the present invention provides a method for segmenting a bone fragment of a three-dimensional image, comprising:
acquiring a three-dimensional image to be segmented;
identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result;
the step of adopting a segmentation model to identify the three-dimensional image to be segmented to obtain a three-dimensional broken bone segmentation result comprises the following steps:
extracting the features of the three-dimensional image to be segmented to obtain a basic feature map;
carrying out feature extraction on the basic feature map to obtain a segmentation map;
and fusing the segmentation graph and the basic feature graph to generate the three-dimensional broken bone segmentation result.
Optionally, the performing feature extraction on the three-dimensional image to be segmented to obtain a basic feature map includes:
obtaining a first characteristic result by performing convolution on the three-dimensional image to be segmented;
down-sampling the first characteristic result to obtain a first sampling result;
adding the first sampling result and the first characteristic result element by element and then performing convolution to obtain a second characteristic result;
down-sampling the second feature result to obtain a second sampling result;
adding the second sampling result and the second characteristic result element by element and then performing convolution to obtain a third characteristic result;
down-sampling the third feature result to obtain a third sampling result;
adding the third sampling result and the third characteristic result element by element and then performing convolution to obtain a fourth characteristic result;
down-sampling the fourth feature result to obtain a fourth sampling result;
and adding the fourth sampling result and the fourth feature result element by element and then performing convolution to obtain the basic feature map.
Optionally, performing feature extraction on the basic feature map to obtain a segmentation map, including:
up-sampling the basic feature map to obtain a first segmentation result;
fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result;
fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and upsampling the first output result to obtain a third segmentation result;
fusing, decoding and upsampling the third segmentation result and the second feature result to obtain a fourth segmentation result;
and fusing the fourth segmentation result and the first characteristic result, convolving and segmenting to obtain a fifth segmentation result, and taking the fifth segmentation result as the segmentation graph.
Optionally, fusing the segmentation map and the basic feature map to generate the three-dimensional bone fragment segmentation result, including:
segmenting the first output result to obtain a second output result, and performing up-sampling to obtain a first additional segmentation result;
adding the second output result and the first additional segmentation result element by element and upsampling to obtain a second additional result;
and adding the segmentation graph and the second additional result element by element and then classifying to obtain the three-dimensional broken bone segmentation result.
Optionally, a segmentation model is used to identify the three-dimensional image to be segmented to obtain a three-dimensional broken bone segmentation result, and the method further includes:
an initial classification model is trained to obtain the segmentation model.
Optionally, training the initial classification model to obtain the segmentation model includes:
performing three-dimensional reconstruction on a two-dimensional sequence of images in a sample to obtain a three-dimensional training image;
normalizing the three-dimensional training image to obtain a three-dimensional sample image;
extracting the features of the three-dimensional sample image to obtain a basic training feature map;
carrying out feature extraction on the basic training feature map to obtain a segmentation training map;
fusing the segmentation training diagram with the basic training characteristic diagram to generate the three-dimensional bone fragment training segmentation result;
and adjusting parameters in the feature extraction module, the intermediate module and the segmentation module according to the training segmentation result to obtain the segmentation model.
Optionally, adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model, including:
and adjusting parameters in the initial classification model according to the training segmentation result by adopting an Adam optimizer to obtain the segmentation model.
In order to achieve the above object, the present invention further provides a bone fragment segmentation apparatus for a three-dimensional image, comprising:
the receiving unit is used for acquiring a three-dimensional image to be segmented;
the segmentation unit is used for identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result;
the segmentation model comprises a feature extraction module, an intermediate module and a segmentation module;
the segmentation unit performs feature extraction on the three-dimensional image to be segmented through the feature extraction module to obtain a basic feature map, performs feature extraction on the basic feature map through the intermediate module to obtain a segmentation map, and fuses the segmentation map and the basic feature map through the segmentation module to generate the three-dimensional broken bone segmentation result.
To achieve the above object, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above method.
According to the bone fragment segmentation method and device for three-dimensional images, the computer device, and the storage medium, the three-dimensional image is taken as input, and the feature extraction module in the segmentation model extracts features from the three-dimensional image to be segmented to obtain a basic feature map, so that the correlation of tissues within the volume is taken into account. The intermediate module then extracts features from the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module fuses the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for bone segmentation of a three-dimensional image according to the present invention;
FIG. 2 is a flowchart of an embodiment of training an initial classification model to obtain a segmentation model according to the present invention;
FIG. 3 is a block diagram of an embodiment of a three-dimensional image bone fragment segmentation apparatus according to the present invention;
FIG. 4 is a diagram of the internal components of the segmentation model according to the present invention;
fig. 5 is a hardware architecture diagram of one embodiment of the computer apparatus of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention provides a bone fragment segmentation method and device for three-dimensional images, a computer device, and a storage medium, suitable for the field of intelligent medical treatment. The method takes a three-dimensional image as input; a feature extraction module in the segmentation model extracts features from the three-dimensional image to be segmented to obtain a basic feature map, taking into account the correlation of tissues within the volume. An intermediate module then extracts features from the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and a segmentation module fuses the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Example one
Referring to fig. 1, a method for segmenting a bone fragment of a three-dimensional image according to the present embodiment includes:
s1, obtaining a three-dimensional image to be segmented.
It should be noted that the three-dimensional image to be segmented is a three-dimensional MRI (Magnetic Resonance Imaging) image. MRI presents internal information of the body as images and has the advantages of being non-invasive, multi-modal, and accurately localized, among others.
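The patent does not name a file format or reader; as a hedged illustration only, a 3D MRI volume might be loaded as follows (the nibabel library and the file name are assumptions, not details from the patent):

```python
import numpy as np
import nibabel as nib  # common reader for MRI volumes; an assumed choice

img = nib.load("subject01_t1.nii.gz")               # hypothetical file name
volume = np.asarray(img.dataobj, dtype=np.float32)  # 3D voxel array, e.g. (D, H, W)
print(volume.shape)
```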
And S2, identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result.
The segmentation model comprises a feature extraction module, an intermediate module and a segmentation module.
Specifically, the step S2 of recognizing the three-dimensional image to be segmented by using the segmentation model to obtain the three-dimensional bone fragment segmentation result includes:
and S21, extracting the features of the three-dimensional image to be segmented to obtain a basic feature map.
Further, the step S21 of extracting features of the three-dimensional image to be segmented to obtain a basic feature map may include the following steps:
s211, performing convolution on the three-dimensional image to be segmented to obtain a first characteristic result;
s212, down-sampling the first characteristic result to obtain a first sampling result;
s213, adding the first sampling result and the first characteristic result element by element, and then performing convolution to obtain a second characteristic result;
s214, down-sampling the second characteristic result to obtain a second sampling result;
s215, adding the second sampling result and the second characteristic result element by element, and then performing convolution to obtain a third characteristic result;
s216, down-sampling the third characteristic result to obtain a third sampling result;
s217, adding the third sampling result and the third characteristic result element by element, and then performing convolution to obtain a fourth characteristic result;
s218, down-sampling the fourth characteristic result to obtain a fourth sampling result;
s219, adding the fourth sampling result and the fourth characteristic result element by element, and then performing convolution to obtain the basic characteristic diagram.
In this embodiment, a feature extraction module is adopted to perform feature extraction on the three-dimensional image to be segmented to obtain a basic feature map. The feature extraction module comprises, in order: a first context layer, a first downsampling layer, a second context layer, a second downsampling layer, a third context layer, a third downsampling layer, a fourth context layer, a fourth downsampling layer, and a fifth context layer. Feature extraction is performed on the three-dimensional image to be segmented through the first context layer; the first feature result is input into the first downsampling layer to obtain a first sampling result, and the first sampling result is added element by element with the first feature result as input to the second context layer, yielding a second feature result. The second feature result is input into the second downsampling layer to obtain a second sampling result, which is added element by element with the second feature result as input to the third context layer, yielding a third feature result. The third feature result is input into the third downsampling layer to obtain a third sampling result, which is added element by element with the third feature result as input to the fourth context layer, yielding a fourth feature result. Finally, the fourth feature result is input into the fourth downsampling layer to obtain a fourth sampling result, which is added element by element with the fourth feature result as input to the fifth context layer, yielding the basic feature map.
By way of example and not limitation, a three-dimensional image to be segmented of 128 × 128 × 128 voxels may be input to the input layer of the segmentation model and passed through the five context layers, each connected to the next through a downsampling layer; the results of each downsampling layer and the corresponding context layer are added element by element as input to the next stage, yielding the basic feature map, i.e., a coarse segmentation map.
A voxel, also called a volume element, is the minimum unit of digital data in the partitioning of three-dimensional space; voxels are used mainly in three-dimensional imaging, scientific data, medical imaging, and similar fields.
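The patent describes this module in prose only. The following is a minimal PyTorch sketch of one plausible reading, in which each downsampling layer is a stride-2 convolution and the element-wise addition acts as a residual connection at each scale; the channel counts, instance normalization, and LeakyReLU activations are assumptions rather than details taken from the patent:

```python
import torch
import torch.nn as nn

def context_block(ch):
    # Two 3x3x3 convolutions; the internals of a "context layer" are not
    # specified in the patent, so this design is an assumption.
    return nn.Sequential(
        nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.LeakyReLU(inplace=True),
        nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.LeakyReLU(inplace=True),
    )

class FeatureExtractor(nn.Module):
    # Five context layers joined by four stride-2 downsampling convolutions;
    # at each scale the downsampled tensor and the context-block output are
    # added element by element (read here as a residual connection).
    def __init__(self, in_ch=1, base_ch=16):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(5)]  # 16, 32, 64, 128, 256
        self.stem = nn.Conv3d(in_ch, chs[0], 3, padding=1)
        self.contexts = nn.ModuleList(context_block(c) for c in chs)
        self.downs = nn.ModuleList(
            nn.Conv3d(chs[i], chs[i + 1], 3, stride=2, padding=1) for i in range(4)
        )

    def forward(self, x):                    # x: (B, 1, D, H, W)
        feats = []
        x = self.stem(x)
        x = x + self.contexts[0](x)          # first feature result
        feats.append(x)
        for down, ctx in zip(self.downs, self.contexts[1:]):
            x = down(x)                      # sampling result
            x = x + ctx(x)                   # element-wise add -> next feature result
            feats.append(x)
        return feats                         # feats[-1] is the basic feature map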
And S22, extracting the features of the basic feature map to obtain a segmentation map.
Further, step S22 may include the steps of:
s221, performing up-sampling on the basic feature map to obtain a first segmentation result;
s222, fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result;
s223, fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and performing upsampling on the first output result to obtain a third segmentation result;
s224, fusing, decoding and upsampling the third segmentation result and the second characteristic result to obtain a fourth segmentation result;
and S225, fusing the fourth segmentation result with the first characteristic result, convolving and segmenting to obtain a fifth segmentation result, and taking the fifth segmentation result as the segmentation graph.
In this embodiment, an intermediate module is used to perform feature extraction on the basic feature map to obtain a segmentation map. Wherein, the middle module includes in order: the device comprises a first up-sampling layer, a first decoding layer, a second up-sampling layer, a second decoding layer, a third up-sampling layer, a third decoding layer, a fourth up-sampling layer, a three-dimensional convolution layer and a first segmentation layer. Inputting the basic feature map into the first up-sampling layer to obtain a first segmentation result, fusing the first segmentation result and the fourth feature result, and obtaining a second segmentation result through the first decoding layer and the second up-sampling layer; fusing the second segmentation result and the third feature result, and obtaining a third segmentation result through the second decoding layer and the third up-sampling layer; fusing the third segmentation result with the second feature result, and obtaining a fourth segmentation result through the third decoding layer and the fourth upsampling layer; and fusing the fourth segmentation result with the first characteristic result, obtaining a fifth segmentation result through the three-dimensional convolution layer and the first segmentation layer, and taking the fifth segmentation result as the segmentation graph.
In this embodiment, a first segmentation result is obtained by the first upsampling layer behind the fifth context layer. The first segmentation result is fused with the fourth feature result, and after fusion a second segmentation result is obtained through the first decoding layer and the second upsampling layer. The second segmentation result is fused with the third feature result, and after fusion a third segmentation result is obtained through the second decoding layer and the third upsampling layer. The third segmentation result is fused with the second feature result and passed through the third decoding layer and the fourth upsampling layer to obtain a fourth segmentation result; the fourth segmentation result is fused with the first feature result, and a fifth segmentation result is obtained through the three-dimensional convolution layer and the first segmentation layer. In this embodiment, upsampling of the segmentation results is realized through the upsampling layers, so that a segmentation map of the same size as the original three-dimensional image to be segmented is obtained.
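Continuing the hedged sketch above, the intermediate module can be read as a decoder in which "fusing" is realized as channel concatenation; that choice, like the block designs, is an assumption, since the patent does not define the fusion operation:

```python
def up_block(in_ch, out_ch):
    # "Upsampling layer": trilinear 2x upsampling plus a channel-halving conv.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(inplace=True),
    )

def decode_block(ch):
    # "Decoding layer": fuses the concatenated features back to ch channels.
    return nn.Sequential(
        nn.Conv3d(2 * ch, ch, 3, padding=1), nn.LeakyReLU(inplace=True),
        nn.Conv3d(ch, ch, 1), nn.LeakyReLU(inplace=True),
    )

class IntermediateModule(nn.Module):
    def __init__(self, base_ch=16, n_classes=2):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(5)]
        self.up1, self.dec1 = up_block(chs[4], chs[3]), decode_block(chs[3])
        self.up2, self.dec2 = up_block(chs[3], chs[2]), decode_block(chs[2])
        self.up3, self.dec3 = up_block(chs[2], chs[1]), decode_block(chs[1])
        self.up4 = up_block(chs[1], chs[0])
        self.conv = nn.Conv3d(2 * chs[0], chs[0], 3, padding=1)  # 3D conv layer
        self.seg1 = nn.Conv3d(chs[0], n_classes, 1)              # first segmentation layer

    def forward(self, feats):
        f1, f2, f3, f4, base = feats
        s1 = self.up1(base)                                    # first segmentation result
        s2 = self.up2(self.dec1(torch.cat([s1, f4], 1)))       # second segmentation result
        out1 = self.dec2(torch.cat([s2, f3], 1))               # first output result
        s3 = self.up3(out1)                                    # third segmentation result
        s4 = self.up4(self.dec3(torch.cat([s3, f2], 1)))       # fourth segmentation result
        seg_map = self.seg1(self.conv(torch.cat([s4, f1], 1))) # fifth result = segmentation map
        return seg_map, out1
```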
And S23, fusing the segmentation graph with the basic characteristic graph to generate the three-dimensional broken bone segmentation result.
Further, step S23 may include the steps of:
s231, segmenting the first output result to obtain a second output result, and performing upsampling to obtain a first additional segmentation result;
s232, adding the second output result and the first additional segmentation result element by element and up-sampling to obtain a second additional result;
and S233, adding the segmentation graph and the second additional result element by element, and then classifying to obtain the three-dimensional broken bone segmentation result.
In this embodiment, a segmentation module is adopted to fuse the segmentation map and the basic feature map so as to generate the three-dimensional bone fragment segmentation result. The segmentation module comprises, in order: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer, and a classification layer. The first output result of the second decoding layer passes through the second segmentation layer to give a second output result, which passes through the fifth upsampling layer to obtain a first additional segmentation result; the second output result is added element by element with the first additional segmentation result and passed through the sixth upsampling layer to obtain a second additional result. The segmentation map and the second additional result are added element by element and input to the classification layer to obtain the three-dimensional bone fragment segmentation result.
In this embodiment, the second segmentation layer and the fifth upsampling layer are sequentially adopted to process the first output result of the second decoding layer to obtain the first additional segmentation result. The final segmentation result is a matrix of probability scores with one channel per semantic segmentation class and the same size as the original image; the final class of each voxel is determined by retrieving the probability that it belongs to each class, forming the final three-dimensional bone fragment segmentation result.
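The patent's wording for this head is ambiguous; the sketch below implements one standard deep-supervision reading, in which the auxiliary class-score map is upsampled twice (the fifth and sixth upsampling layers) and added element by element to the full-resolution segmentation map before soft-max classification. A small smoke test follows; the patent's own example uses 128 × 128 × 128 volumes with batch size 2, while a 64³ volume is used here purely to keep the test light:

```python
class SegmentationModule(nn.Module):
    # Deep-supervision head: the "first output result" (at 1/4 resolution in
    # this sketch) is mapped to class scores, upsampled twice, and added
    # element by element to the full-resolution segmentation map.
    def __init__(self, mid_ch=64, n_classes=2):
        super().__init__()
        self.seg2 = nn.Conv3d(mid_ch, n_classes, 1)  # second segmentation layer
        self.up5 = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.up6 = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)

    def forward(self, seg_map, out1):
        aux = self.seg2(out1)                        # second output result (class scores)
        aux = self.up6(self.up5(aux))                # fifth + sixth upsampling layers
        return torch.softmax(seg_map + aux, dim=1)   # classification layer

# End-to-end smoke test on a single-channel volume:
encoder, middle, head = FeatureExtractor(), IntermediateModule(), SegmentationModule()
x = torch.randn(1, 1, 64, 64, 64)
seg_map, out1 = middle(encoder(x))
probs = head(seg_map, out1)   # (1, 2, 64, 64, 64) per-voxel class probabilities
```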
In an embodiment, before performing step S2, identifying the three-dimensional image to be segmented with a segmentation model to obtain a three-dimensional bone fragment segmentation result, the method further includes:
A. an initial classification model is trained to obtain the segmentation model.
The initial classification model comprises a feature extraction module, an intermediate module and a segmentation module.
Specifically, step a may comprise:
A1. and reconstructing the two-dimensional sequence diagram in the sample through a three-dimensional diagram to obtain a three-dimensional training image.
In this embodiment, the two-dimensional sequence diagram is a sparse sequence of two-dimensional slices, and it is reconstructed by trilinear interpolation (also called three-dimensional linear interpolation) or by super-resolution reconstruction to obtain an isotropic three-dimensional training image. Trilinear interpolation is a linear interpolation method that computes the value at any point inside a 3D cube from the values given at its eight vertices.
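As a hedged sketch of this reconstruction step (the slice count and sizes below are illustrative assumptions), PyTorch's built-in trilinear resampling can interpolate a sparse slice stack into an isotropic volume:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: 20 sparsely spaced 256x256 slices -> 128^3 isotropic volume.
slices = torch.randn(20, 256, 256)     # stand-in for the 2D sequence of slices
stack = slices[None, None]             # add batch and channel dims: (1, 1, 20, 256, 256)
volume = F.interpolate(stack, size=(128, 128, 128),
                       mode="trilinear", align_corners=False)
print(volume.shape)                    # torch.Size([1, 1, 128, 128, 128])
```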
In practical applications, training may use random mini-batches, for example a batch size of 2 with volumes of 128 × 128 × 128 voxels, more than 100 images per epoch, for a total of 300 training epochs.
A2. And carrying out normalization processing on the three-dimensional training image to obtain a three-dimensional sample image.
In this embodiment, normalizing the three-dimensional training image brings its voxel intensities to a common scale, so that the pixels of the three-dimensional sample images are unified and subsequent model training is facilitated.
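The patent does not specify which normalization is used; the following is a minimal sketch assuming zero-mean, unit-variance intensity normalization:

```python
import torch

def normalize_volume(vol: torch.Tensor) -> torch.Tensor:
    # Zero-mean, unit-variance intensity normalization; min-max scaling to
    # [0, 1] would be an equally common choice (the patent does not say which).
    return (vol - vol.mean()) / (vol.std() + 1e-8)

sample = normalize_volume(torch.randn(1, 1, 128, 128, 128))
```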
Before step A2 is performed, field-bias correction may also be applied to the two modalities of the three-dimensional training image, the spin-lattice relaxation time (T1) and spin-spin relaxation time (T2) images.
A3. And performing feature extraction on the three-dimensional sample image to obtain a basic training feature map. Wherein, the feature extraction module includes in order: a first context layer, a first downsampling layer, a second context layer, a second downsampling layer, a third context layer, a third downsampling layer, a fourth context layer, a fourth downsampling layer, and a fifth context layer.
In this embodiment, feature extraction is performed on the three-dimensional sample image by the feature extraction module in the initial classification model to obtain a basic training feature map. The three-dimensional sample image can be input into the input layer of the initial classification model and passed through the five context layers, each connected to the next through a downsampling layer; the results of each downsampling layer and the corresponding context layer are added element by element as input to the next stage, yielding a basic feature map, i.e., a coarse segmentation map.
A4. And performing feature extraction on the basic training feature map to obtain a segmentation training map.
Wherein, the middle module includes in order: the device comprises a first up-sampling layer, a first decoding layer, a second up-sampling layer, a second decoding layer, a third up-sampling layer, a third decoding layer, a fourth up-sampling layer, a three-dimensional convolution layer and a first segmentation layer.
In this embodiment, the intermediate module performs feature extraction on the basic training feature map to obtain a segmentation training map. A first segmentation result is obtained by the first upsampling layer behind the fifth context layer; the first segmentation result is fused with the fourth feature result, and a second segmentation result is obtained through the first decoding layer and the second upsampling layer after fusion; the second segmentation result is fused with the third feature result, and a third segmentation result is obtained through the second decoding layer and the third upsampling layer after fusion; a fourth segmentation result is obtained by analogy. The fourth segmentation result is fused with the first feature result, and a fifth segmentation result is obtained through the three-dimensional convolution layer and the first segmentation layer. In this embodiment, upsampling of the segmentation results is realized through the upsampling layers, so that a segmentation map of the same size as the original three-dimensional image to be segmented is obtained.
A5. And fusing the segmentation training graph and the basic training characteristic graph to generate the three-dimensional broken bone training segmentation result.
Wherein, the segmentation module includes in order: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer, and a classification layer.
In this embodiment, the segmentation training map and the basic training feature map are fused by the segmentation module to generate the three-dimensional bone fragment training segmentation result. The second segmentation layer and the fifth upsampling layer are sequentially adopted to process the first output result of the second decoding layer to obtain a first additional segmentation result. The training segmentation result is a matrix of probability scores with one channel per semantic segmentation class and the same size as the original image; the final class of each voxel is determined by retrieving the probability that it belongs to each class, forming the final three-dimensional bone fragment training segmentation result.
A6. And adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model.
Specifically, the step a6, adjusting parameters in the feature extraction module, the intermediate module and the segmentation module according to the training segmentation result to obtain the segmentation model, may include:
and adjusting parameters in the feature extraction module, the intermediate module and the segmentation module by adopting an Adam optimizer according to the training segmentation result to obtain the segmentation model.
In this embodiment, the Adam optimizer adjusts a separate learning rate for each parameter, updating frequently changing parameters with smaller steps and sparse parameters with larger steps. To cope with the class imbalance in the training data, the traditional cross-entropy loss function is abandoned, and a multi-class Dice loss function can be used for bone fragment segmentation. The Dice loss function is a set-similarity metric, typically used to measure the similarity of two samples (similarity range [0, 1]). Predictions made with low confidence are penalized by the Dice loss, which improves the prediction quality.
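A hedged sketch of the multi-class Dice loss and a single Adam training step, reusing the module sketches above; the learning rate, label shapes, and dummy data are assumptions for illustration only:

```python
def multiclass_dice_loss(probs, target_onehot, eps=1e-5):
    # probs, target_onehot: (B, C, D, H, W); soft Dice per class, averaged.
    # Low-confidence predictions shrink the overlap term and are thus penalized.
    dims = (0, 2, 3, 4)
    inter = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

# One training step on the smoke-test tensors above; lr is an assumed value.
params = list(encoder.parameters()) + list(middle.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
target = torch.randint(0, 2, (1, 64, 64, 64))                 # dummy voxel labels
onehot = F.one_hot(target, 2).permute(0, 4, 1, 2, 3).float()  # (1, 2, 64, 64, 64)
loss = multiclass_dice_loss(probs, onehot)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```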
Compared with traditional segmentation methods, the bone fragment segmentation method for three-dimensional images converts bone fragment segmentation of MRI images into a voxel-level 3D semantic labeling problem. The context and downsampling modules of the trained segmentation model serve as the feature extraction module of a 3D convolutional network, so that the intermediate module outputs coarse segmentation maps corresponding to the number of semantic classes, and an upsampling module added behind the intermediate module upsamples the coarse maps to segmentation maps of the same size as the original image. The method offers the following advantages: compared with existing single-slice segmentation methods, it requires no manual intervention and no fragment-by-fragment processing, making it an automatic and simple bone fragment segmentation method that improves segmentation precision and greatly improves segmentation efficiency; and by taking the entire three-dimensional image as input, it saves segmentation time, better accounts for spatial correlation, and achieves higher segmentation accuracy.
In this embodiment, the bone fragment segmentation method for three-dimensional images takes a three-dimensional image as input; the feature extraction module in the segmentation model extracts features from the three-dimensional image to be segmented to obtain a basic feature map, taking into account the correlation of tissues within the volume. The intermediate module then extracts features from the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module fuses the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Example two
Referring to fig. 3, the bone fragment segmentation apparatus 1 for three-dimensional images of this embodiment includes: a receiving unit 11 and a segmentation unit 12.
The receiving unit 11 is configured to acquire a three-dimensional image to be segmented.
It should be noted that the three-dimensional image to be segmented is a three-dimensional MRI image. MRI presents internal information of the body as images and has the advantages of being non-invasive, multi-modal, and accurately localized, among others.
And the segmentation unit 12 is configured to identify the three-dimensional image to be segmented by using a segmentation model to obtain a three-dimensional broken bone segmentation result.
The segmentation model shown in fig. 4 includes a feature extraction module 121, an intermediate module 122, and a segmentation module 123.
The segmentation unit 12 performs feature extraction on the three-dimensional image to be segmented through the feature extraction module 121 to obtain a basic feature map, performs feature extraction on the basic feature map through the intermediate module 122 to obtain a segmentation map, and fuses the segmentation map and the basic feature map through the segmentation module 123 to generate the three-dimensional broken bone segmentation result.
In this embodiment, the feature extraction module 121 sequentially includes: a first context layer, a first downsampling layer, a second context layer, a second downsampling layer, a third context layer, a third downsampling layer, a fourth context layer, a fourth downsampling layer, and a fifth context layer.
The specific process of extracting the features of the three-dimensional image to be segmented by the feature extraction module 121 to obtain the basic feature map includes:
Feature extraction is performed on the three-dimensional image to be segmented through the first context layer; the first feature result is input into the first downsampling layer to obtain a first sampling result, and the first sampling result is added element by element with the first feature result as input to the second context layer, yielding a second feature result. The second feature result is input into the second downsampling layer to obtain a second sampling result, which is added element by element with the second feature result as input to the third context layer, yielding a third feature result. The third feature result is input into the third downsampling layer to obtain a third sampling result, which is added element by element with the third feature result as input to the fourth context layer, yielding a fourth feature result. Finally, the fourth feature result is input into the fourth downsampling layer to obtain a fourth sampling result, which is added element by element with the fourth feature result as input to the fifth context layer, yielding the basic feature map.
By way of example and not limitation, a three-dimensional image to be segmented of 128 × 128 × 128 voxels may be input to the input layer of the segmentation model and passed through the five context layers, each connected to the next through a downsampling layer; the results of each downsampling layer and the corresponding context layer are added element by element as input to the next stage, yielding the basic feature map, i.e., a coarse segmentation map.
A voxel, also called a volume element, is the minimum unit of digital data in the partitioning of three-dimensional space; voxels are used mainly in three-dimensional imaging, scientific data, medical imaging, and similar fields.
In this embodiment, the intermediate module 122 sequentially includes: the device comprises a first up-sampling layer, a first decoding layer, a second up-sampling layer, a second decoding layer, a third up-sampling layer, a third decoding layer, a fourth up-sampling layer, a three-dimensional convolution layer and a first segmentation layer.
The specific process of extracting features of the basic feature map through the intermediate module 122 to obtain a segmentation map includes:
inputting the basic feature map into the first up-sampling layer to obtain a first segmentation result, fusing the first segmentation result and the fourth feature result, and obtaining a second segmentation result through the first decoding layer and the second up-sampling layer; fusing the second segmentation result and the third feature result, and obtaining a third segmentation result through the second decoding layer and the third up-sampling layer; fusing the third segmentation result with the second feature result, and obtaining a fourth segmentation result through the third decoding layer and the fourth upsampling layer; and fusing the fourth segmentation result with the first characteristic result, obtaining a fifth segmentation result through the three-dimensional convolution layer and the first segmentation layer, and taking the fifth segmentation result as the segmentation graph.
In this embodiment, a first segmentation result is obtained by the first upsampling layer behind the fifth context layer. The first segmentation result is fused with the fourth feature result, and after fusion a second segmentation result is obtained through the first decoding layer and the second upsampling layer. The second segmentation result is fused with the third feature result, and after fusion a third segmentation result is obtained through the second decoding layer and the third upsampling layer. The third segmentation result is fused with the second feature result and passed through the third decoding layer and the fourth upsampling layer to obtain a fourth segmentation result; the fourth segmentation result is fused with the first feature result, and a fifth segmentation result is obtained through the three-dimensional convolution layer and the first segmentation layer. In this embodiment, upsampling of the segmentation results is realized through the upsampling layers, so that a segmentation map of the same size as the original three-dimensional image to be segmented is obtained.
In this embodiment, the segmentation module 123 sequentially comprises: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer, and a classification layer.
The specific process of fusing the segmentation map and the basic feature map through the segmentation module 123 to generate the three-dimensional broken bone segmentation result includes:
A first additional segmentation result is obtained by passing the first output result of the second decoding layer through the second segmentation layer and the fifth upsampling layer; the second output result produced by the second segmentation layer is added element by element with the first additional segmentation result and passed through the sixth upsampling layer to obtain a second additional result. The segmentation map and the second additional result are added element by element and input to the classification layer to obtain the three-dimensional bone fragment segmentation result.
In this embodiment, the second segmentation layer and the fifth upsampling layer are sequentially adopted to process the first output result of the second decoding layer to obtain the first additional segmentation result. The final segmentation result is a matrix of probability scores with one channel per semantic segmentation class and the same size as the original image; the final class of each voxel is determined by retrieving the probability that it belongs to each class, forming the final three-dimensional bone fragment segmentation result.
In this embodiment, the bone fragment segmentation apparatus 1 for three-dimensional images receives a three-dimensional stereo image through the receiving unit 11, and the feature extraction module 121 of the segmentation model in the segmentation unit 12 extracts features from the three-dimensional image to be segmented to obtain a basic feature map, taking into account the correlation of tissues within the volume. The intermediate module 122 extracts features from the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module 123 fuses the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Example three
To achieve the above object, the present invention further provides a computer device 2. There may be a plurality of computer devices 2, among which the components of the bone fragment segmentation apparatus 1 of the second embodiment may be distributed. The computer device 2 may be a smartphone, a tablet computer, a laptop computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a cluster formed by a plurality of servers) that executes programs, and the like. The computer device 2 of this embodiment includes at least, but is not limited to: a memory 21, a processor 23, a network interface 22, and the bone fragment segmentation apparatus 1 for three-dimensional images, which can be communicatively connected to one another through a system bus (see fig. 5). It is noted that fig. 5 only shows the computer device 2 with certain components, but it should be understood that not all of the shown components are required, and more or fewer components may be implemented instead.
In this embodiment, the memory 21 includes at least one type of computer-readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the computer device 2. Of course, the memory 21 may also comprise both an internal storage unit of the computer device 2 and an external storage device thereof. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 2 and various types of application software, such as a program code of the bone segmentation method of the three-dimensional image according to the first embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 23 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 23 is typically used for controlling the overall operation of the computer device 2, such as performing control and processing related to data interaction or communication with the computer device 2. In this embodiment, the processor 23 is configured to execute the program code stored in the memory 21 or process data, such as the bone fragment segmentation apparatus 1 for executing the three-dimensional image.
The network interface 22 may comprise a wireless network interface or a wired network interface, and is typically used to establish communication connections between the computer device 2 and other computer devices 2. For example, the network interface 22 is used to connect the computer device 2 to an external terminal through a network and to establish a data transmission channel and a communication connection between them. The network may be a wireless or wired network such as an Intranet, the Internet, the Global System for Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, Wi-Fi, and the like.
It is noted that fig. 5 only shows the computer device 2 with components 21-23, but it is to be understood that not all shown components are required to be implemented, and that more or less components may be implemented instead.
In this embodiment, the bone fragment segmentation apparatus 1 for three-dimensional images stored in the memory 21 may be further divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (the processor 23 in this embodiment) to carry out the present invention.
Example four
To achieve the above objects, the present invention also provides a computer-readable storage medium including a plurality of storage media such as a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by the processor 23, implements corresponding functions. The computer-readable storage medium of the present embodiment is used for the bone fragment segmentation apparatus 1 storing a three-dimensional image, and when being executed by the processor 23, the computer-readable storage medium implements the bone fragment segmentation method of the three-dimensional image according to the first embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for fragmenting bone in a three-dimensional image, comprising:
acquiring a three-dimensional image to be segmented;
identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result;
the step of adopting a segmentation model to identify the three-dimensional image to be segmented to obtain a three-dimensional broken bone segmentation result comprises the following steps:
extracting the features of the three-dimensional image to be segmented to obtain a basic feature map;
carrying out feature extraction on the basic feature map to obtain a segmentation map;
and fusing the segmentation graph and the basic feature graph to generate the three-dimensional broken bone segmentation result.
2. The method for fragmenting bone of three-dimensional image according to claim 1, wherein the extracting the feature of the three-dimensional image to be fragmented to obtain the basic feature map comprises:
obtaining a first characteristic result by performing convolution on the three-dimensional image to be segmented;
down-sampling the first characteristic result to obtain a first sampling result;
adding the first sampling result and the first characteristic result element by element and then performing convolution to obtain a second characteristic result;
down-sampling the second feature result to obtain a second sampling result;
adding the second sampling result and the second characteristic result element by element and then performing convolution to obtain a third characteristic result;
down-sampling the third feature result to obtain a third sampling result;
adding the third sampling result and the third characteristic result element by element and then performing convolution to obtain a fourth characteristic result;
down-sampling the fourth feature result to obtain a fourth sampling result;
and adding the fourth sampling result and the fourth feature result element by element and then performing convolution to obtain the basic feature map.
3. The method for fragmenting bone according to claim 2, wherein the extracting features of the basic feature map to obtain a fragmentation map comprises:
up-sampling the basic feature map to obtain a first segmentation result;
fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result;
fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and upsampling the first output result to obtain a third segmentation result;
fusing, decoding and upsampling the third segmentation result and the second feature result to obtain a fourth segmentation result;
and fusing the fourth segmentation result and the first characteristic result, convolving and segmenting to obtain a fifth segmentation result, and taking the fifth segmentation result as the segmentation graph.
4. The method of bone fragment segmentation of a three-dimensional image according to claim 3, wherein the step of fusing the segmentation map and the basic feature map to generate the three-dimensional bone fragment segmentation result comprises:
segmenting the first output result to obtain a second output result, and performing up-sampling to obtain a first additional segmentation result;
adding the second output result and the first additional segmentation result element by element and upsampling to obtain a second additional result;
and adding the segmentation graph and the second additional result element by element and then classifying to obtain the three-dimensional broken bone segmentation result.
5. The method for segmenting bone fragments of a three-dimensional image according to claim 1, wherein the three-dimensional image to be segmented is identified by a segmentation model to obtain a three-dimensional bone fragment segmentation result, and the method further comprises:
an initial classification model is trained to obtain the segmentation model.
6. The method of bone fragment segmentation of three-dimensional images according to claim 5, wherein training an initial classification model to obtain the segmentation model comprises:
performing three-dimensional reconstruction on a two-dimensional sequence of images in a sample to obtain a three-dimensional training image;
normalizing the three-dimensional training image to obtain a three-dimensional sample image;
extracting the features of the three-dimensional sample image to obtain a basic training feature map;
carrying out feature extraction on the basic training feature map to obtain a segmentation training map;
fusing the segmentation training diagram with the basic training characteristic diagram to generate the three-dimensional bone fragment training segmentation result;
and adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model.
7. The method for bone fragment segmentation of three-dimensional images according to claim 5, wherein adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model comprises:
and adjusting parameters in the initial classification model according to the training segmentation result by adopting an Adam optimizer to obtain the segmentation model.
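A sketch of the Adam-based parameter adjustment of claim 7, assuming PyTorch; `SegmentationModel`, `loader`, the learning rate, and the cross-entropy loss are illustrative placeholders, not details taken from the patent:

```python
import torch
import torch.nn.functional as F

model = SegmentationModel()   # hypothetical class standing in for the
                              # initial classification model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for volume, target in loader:  # 'loader' assumed to yield (input, label) pairs
    optimizer.zero_grad()
    pred = model(volume)                   # training segmentation result
    loss = F.cross_entropy(pred, target)   # assumed loss; the claim names none
    loss.backward()
    optimizer.step()          # Adam adjusts the model parameters
```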
8. A bone fragment segmentation apparatus for a three-dimensional image, comprising:
a receiving unit, used for acquiring a three-dimensional image to be segmented;
a segmentation unit, used for identifying the three-dimensional image to be segmented by using a segmentation model to obtain a three-dimensional bone fragment segmentation result;
wherein the segmentation model comprises a feature extraction module, an intermediate module and a segmentation module;
and the segmentation unit performs feature extraction on the three-dimensional image to be segmented through the feature extraction module to obtain a basic feature map, performs feature extraction on the basic feature map through the intermediate module to obtain a segmentation map, and fuses the segmentation map with the basic feature map through the segmentation module to generate the three-dimensional bone fragment segmentation result.
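The apparatus of claim 8 composes three modules in sequence. A sketch with illustrative names; only the three-module split comes from the claim, and the module internals could be stages like the sketches above:

```python
import torch.nn as nn

class BoneFragmentSegmenter(nn.Module):
    """Claim 8 sketch: feature-extraction, intermediate and segmentation
    modules composed as the segmentation unit describes."""

    def __init__(self, extractor: nn.Module, intermediate: nn.Module,
                 segmenter: nn.Module):
        super().__init__()
        self.extractor = extractor        # feature extraction module
        self.intermediate = intermediate  # intermediate module
        self.segmenter = segmenter        # segmentation module

    def forward(self, volume):
        base = self.extractor(volume)     # basic feature map
        seg = self.intermediate(base)     # segmentation map
        return self.segmenter(seg, base)  # fused 3-D bone fragment result
```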
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202011161212.XA 2020-10-27 2020-10-27 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium Active CN112241955B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011161212.XA CN112241955B (en) 2020-10-27 2020-10-27 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium
PCT/CN2020/134546 WO2021179702A1 (en) 2020-10-27 2020-12-08 Method and apparatus for segmenting bone fragments from three-dimensional image, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN112241955A (en) 2021-01-19
CN112241955B CN112241955B (en) 2023-08-25

Family ID: 74169897


Country Status (2)

Country Link
CN (1) CN112241955B (en)
WO (1) WO2021179702A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581396A (en) * 2022-02-28 2022-06-03 腾讯科技(深圳)有限公司 Method, device, equipment, storage medium and product for identifying three-dimensional medical image
CN116187476B (en) * 2023-05-04 2023-07-21 珠海横琴圣澳云智科技有限公司 Lung lobe segmentation model training and lung lobe segmentation method and device based on mixed supervision


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257118B (en) * 2018-01-08 2020-07-24 浙江大学 Fracture adhesion segmentation method based on normal erosion and random walk
CN111192277A (en) * 2019-12-31 2020-05-22 华为技术有限公司 Instance segmentation method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190311223A1 (en) * 2017-03-13 2019-10-10 Beijing Sensetime Technology Development Co., Ltd. Image processing methods and apparatus, and electronic devices
CN109872328A (en) * 2019-01-25 2019-06-11 腾讯科技(深圳)有限公司 Brain image segmentation method, device and storage medium
CN109903298A (en) * 2019-03-12 2019-06-18 数坤(北京)网络科技有限公司 Method, system and computer storage medium for repairing breaks in blood vessel segmentation images
CN111127636A (en) * 2019-12-24 2020-05-08 诸暨市人民医院 Intelligent desktop-level three-dimensional diagnosis system for complex intra-articular fracture
CN111402216A (en) * 2020-03-10 2020-07-10 河海大学常州校区 Three-dimensional broken bone segmentation method and device based on deep learning
CN111598893A (en) * 2020-04-17 2020-08-28 哈尔滨工业大学 Regional skeletal fluorosis grading diagnosis system based on multi-type image fusion neural network
CN111429460A (en) * 2020-06-12 2020-07-17 腾讯科技(深圳)有限公司 Image segmentation method, image segmentation model training method, device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240681A (en) * 2021-05-20 2021-08-10 推想医疗科技股份有限公司 Image processing method and device
CN113240681B (en) * 2021-05-20 2022-07-08 推想医疗科技股份有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN112241955B (en) 2023-08-25
WO2021179702A1 (en) 2021-09-16


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40043401)
SE01 Entry into force of request for substantive examination
GR01 Patent grant