CN118298178A - Medical image segmentation method, device, equipment and storage medium

Medical image segmentation method, device, equipment and storage medium

Publication number: CN118298178A
Application number: CN202410496846.2A
Authority: CN (China)
Legal status: Pending
Inventors: 熊思; 程斌; 闵祥德; 梁靖雯; 陈诗如; 彭旺
Applicant and current assignee: Tongji Hospital Affiliated To Tongji Medical College Of Huazhong University Of Science & Technology
Original language: Chinese (zh)
Classification: Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a medical image segmentation method, device, equipment and storage medium. The method comprises the following steps: performing feature extraction on the input medical image to be segmented through a feature extraction network in a target image segmentation model, and outputting at least one low-order image feature, at least one high-order image feature and a global feature image; and outputting a target segmentation result of the medical image to be segmented according to the output result of the feature extraction network through an image segmentation network in the target image segmentation model. The feature extraction network is provided with a low-order feature extraction module, a high-order feature extraction module and a global decoding module, and the global decoding module is used for fusing the at least one low-order image feature output by the low-order feature extraction module and the at least one high-order image feature output by the high-order feature extraction module to obtain the global feature image, so that the segmentation effect of medical images is improved.

Description

Medical image segmentation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for segmenting a medical image.
Background
In recent years, many image segmentation methods based on deep learning have been developed, but most of these methods focus on segmenting the entire region of the target object and label the segmentation result with a bounding box, ignoring the boundary constraints of the region.
In the field of medical image processing, a clear segmentation boundary has important guiding significance for assisting subsequent medical diagnosis and treatment.
Disclosure of Invention
The embodiment of the invention provides a medical image segmentation method, device, equipment and storage medium, which are used for solving the problem that a traditional image segmentation method ignores segmentation boundaries and improving the boundary segmentation effect of a medical image.
According to an embodiment of the present invention, there is provided a segmentation method of a medical image, the method including:
inputting the medical image to be segmented into a target image segmentation model which is trained in advance; the target image segmentation model comprises a feature extraction network and an image segmentation network;
performing feature extraction on the input medical image to be segmented through the feature extraction network, and outputting at least one low-order image feature, at least one high-order image feature and a global feature image;
Outputting a target segmentation result of the medical image to be segmented according to the at least one low-order image feature, the at least one high-order image feature and the global feature image through the image segmentation network;
the feature extraction network comprises a low-order feature extraction module, a high-order feature extraction module and a global decoding module, wherein the low-order feature extraction module is used for extracting at least one low-order image feature of the medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, and the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
According to another embodiment of the present invention, there is provided a medical image segmentation apparatus including:
The medical image to be segmented input module is used for inputting the medical image to be segmented into a target image segmentation model which is trained in advance; the target image segmentation model comprises a feature extraction network and an image segmentation network;
the global feature image output module is used for carrying out feature extraction on the input medical image to be segmented through the feature extraction network and outputting at least one low-order image feature, at least one high-order image feature and a global feature image;
The target segmentation result output module is used for outputting a target segmentation result of the medical image to be segmented according to the at least one low-order image feature, the at least one high-order image feature and the global feature image through the image segmentation network;
the feature extraction network comprises a low-order feature extraction module, a high-order feature extraction module and a global decoding module, wherein the low-order feature extraction module is used for extracting at least one low-order image feature of the medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, and the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
According to another embodiment of the present invention, there is provided an electronic apparatus including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of segmentation of medical images according to any one of the embodiments of the present invention.
According to another embodiment of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute a method for segmenting a medical image according to any of the embodiments of the present invention.
According to the technical scheme, the low-order feature extraction module, the high-order feature extraction module and the global decoding module are arranged in the feature extraction network in the target image segmentation model, the low-order feature extraction module is used for extracting at least one low-order image feature of a medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image, and the global feature image in the embodiment fuses the low-order image feature and the high-order image feature of the medical image to be segmented, so that the target image segmentation model has stronger feature extraction capability and is combined with more complete image features, the problem that a segmentation boundary is ignored in a traditional image segmentation method is solved, and the segmentation effect of the medical image is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for segmenting medical images according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target image segmentation model according to an embodiment of the present invention;
FIG. 3 is a block diagram of a reverse-attention block according to one embodiment of the present invention;
FIG. 4 is a flow chart of another method for segmenting medical images according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a medical image segmentation apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," "initial," "target," "reference," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of a method for segmenting a medical image according to an embodiment of the present invention, where the method is applicable to a case of segmenting a medical image, and particularly to a case of boundary segmentation of a medical image, and the method may be performed by a device for segmenting a medical image, where the device for segmenting a medical image may be implemented in a form of hardware and/or software, and the device for segmenting a medical image may be configured in a terminal device. As shown in fig. 1, the method includes:
S110, inputting the medical image to be segmented into a target image segmentation model which is trained in advance.
By way of example, the image types of the medical image to be segmented include, but are not limited to, direct digital radiography (Direct Digital Radiography, DR), computed tomography (Computed Tomography, CT), magnetic resonance imaging (Magnetic Resonance Imaging, MRI), positron emission computed tomography (Positron Emission Computed Tomography, PET), ultrasound, etc. The image type of the medical image to be segmented is not limited here and can be customized according to actual requirements.
For example, the segmented object of the medical image to be segmented includes, but is not limited to, a blood vessel, a cell, a bone or a tissue region of interest, for example, the tissue region of interest may be polyp tissue or tumor tissue, etc., where the segmented object of the medical image to be segmented is not limited, and may be specifically set in a customized manner according to actual requirements.
In the present embodiment, the target image segmentation model includes a feature extraction network and an image segmentation network. Specifically, the input data of the feature extraction network is a medical image to be segmented, the output data is at least one low-order image feature, at least one high-order image feature and a global feature image, the input data of the image segmentation network is the output data of the feature extraction network, and the output data of the image segmentation network is a target segmentation result of the medical image to be segmented.
S120, performing feature extraction on the input medical image to be segmented through a feature extraction network, and outputting at least one low-order image feature, at least one high-order image feature and a global feature image.
In this embodiment, the feature extraction network includes a low-order feature extraction module, a high-order feature extraction module, and a global decoding module, where the low-order feature extraction module is used to extract at least one low-order image feature of the medical image to be segmented, the high-order feature extraction module is used to extract at least one high-order image feature of the medical image to be segmented, and the global decoding module is used to fuse the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
In an alternative embodiment, the low-order feature extraction module includes at least one low-order feature extraction layer for performing feature extraction on the medical image to be segmented layer by layer, outputting at least one low-order image feature, the high-order feature extraction module includes at least one high-order feature extraction layer for performing feature extraction on the low-order image feature output by the last low-order feature extraction layer by layer, outputting at least one high-order image feature, and the global decoding module is used for performing feature fusion on the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
Specifically, at least one low-order feature extraction layer and at least one high-order feature extraction layer are sequentially connected in series, wherein input data of a first low-order feature extraction layer is a medical image to be segmented, input data of a first high-order feature extraction layer is a low-order image feature output by a last low-order feature extraction layer, input data of a global decoding module comprises at least one low-order image feature and at least one high-order image feature, and output data is a global feature image.
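As a minimal sketch (not the patent's actual implementation), the serial layer-by-layer extraction described above amounts to threading one input through a list of callables and recording each intermediate output; the toy layers below are hypothetical stand-ins for convolution or residual blocks:

```python
def extract_features(image, low_layers, high_layers):
    """Run the serially connected low-order then high-order layers,
    collecting every intermediate output f_i along the way."""
    features, x = [], image
    for layer in low_layers + high_layers:
        x = layer(x)               # each layer consumes the previous output
        features.append(x)
    # The first len(low_layers) outputs are the low-order image features.
    return features[:len(low_layers)], features[len(low_layers):]

# Toy stand-ins for real feature extraction layers:
low_layers = [lambda v: v + 1, lambda v: v + 1]
high_layers = [lambda v: v * 2, lambda v: v * 2, lambda v: v * 2]
low_feats, high_feats = extract_features(0, low_layers, high_layers)
# low_feats == [1, 2]; high_feats == [4, 8, 16]
```

The global decoding module then receives both lists, mirroring how its input data comprises every low-order and high-order image feature.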
For example, the global decoding module may be a fully connected module, a scale feature aggregation module, or a parallel partial decoder (Parallel Partial Decoder, PD). The fusion algorithm adopted by the global decoding module is not limited here and can be customized according to actual requirements.
In another optional embodiment, the global decoding module includes a low-order decoding unit and a high-order decoding unit, the low-order feature extraction module includes at least two low-order feature extraction layers sequentially connected in series, the at least two low-order feature extraction layers are used for performing feature extraction on the medical image to be segmented layer by layer to output at least two low-order image features, the low-order decoding unit is used for performing parallel aggregation and splicing on the at least two low-order image features to output low-order feature images, the at least one high-order feature extraction layer is used for performing feature extraction on the low-order feature images layer by layer to output at least one high-order image feature, and the high-order decoding unit is used for performing parallel aggregation and splicing on the low-order feature images and the at least one high-order image feature to obtain the global feature image.
Specifically, at least two low-order feature extraction layers, a low-order decoding unit, at least one high-order feature extraction layer and a high-order decoding unit are sequentially connected in series, input data of a first low-order feature extraction layer is a medical image to be segmented, input data of the low-order decoding unit comprises at least two low-order image features, output data is a low-order feature image, input data of the first high-order feature extraction layer is a low-order feature image, input data of the high-order decoding unit comprises a low-order feature image and at least one high-order image feature, and output data is a global feature image.
In an alternative embodiment, the unit architecture of the low order decoding unit and the high order decoding unit is a parallel partial decoder.
Based on the above embodiments, the network architecture of the low-order feature extraction layer or the high-order feature extraction layer may be a convolution structure or a residual structure, and specifically, the network architectures corresponding to the different low-order feature extraction layers and the different high-order feature extraction layers may be the same or different.
In an alternative embodiment, the low-order feature extraction module and the high-order feature extraction module are determined according to the total number of serially connected feature extraction layers in the feature extraction network and a preset division ratio. For example, assume that the feature extraction network includes x feature extraction layers connected in series. Starting from the first of the x layers, at least one layer is divided off according to the preset division ratio to form the low-order feature extraction module, and the remaining layers serve as high-order feature extraction layers forming the high-order feature extraction module. For example, assuming x = 5: if the preset division ratio is 2/5, the 1st and 2nd feature extraction layers serve as low-order feature extraction layers, and the 3rd, 4th and 5th layers serve as high-order feature extraction layers; if the preset division ratio is 3/5, the 1st, 2nd and 3rd layers serve as low-order feature extraction layers, and the 4th and 5th layers serve as high-order feature extraction layers.
The number of feature extraction layers connected in series in the feature extraction network and the preset dividing ratio are not limited, and can be specifically set in a self-defined manner according to actual requirements.
In another alternative embodiment, the low-order feature extraction module and the high-order feature extraction module are determined according to a preset low-order feature number.
For example, assume that the feature extraction network includes x feature extraction layers connected in series. Starting from the first of the x layers, a number of layers equal to the preset low-order feature number are divided off to form the low-order feature extraction module, and the remaining layers serve as high-order feature extraction layers forming the high-order feature extraction module. For example, assuming the low-order feature number is 3: if x = 5, the 1st, 2nd and 3rd feature extraction layers serve as low-order feature extraction layers, and the 4th and 5th layers serve as high-order feature extraction layers; if x = 7, the 1st, 2nd and 3rd layers serve as low-order feature extraction layers, and the 4th, 5th, 6th and 7th layers serve as high-order feature extraction layers.
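Both division strategies reduce to slicing a serial stack of layers at a single index; the following sketch (with hypothetical function names, not from the patent) illustrates the ratio-based and count-based variants:

```python
def split_by_ratio(layers, ratio):
    """Divide a serial stack of feature extraction layers by a preset
    division ratio: the first round(x * ratio) layers become the
    low-order module, the remainder the high-order module."""
    k = round(len(layers) * ratio)
    return layers[:k], layers[k:]

def split_by_low_count(layers, n_low):
    """Divide by a preset number of low-order feature extraction layers."""
    return layers[:n_low], layers[n_low:]

layers = [f"layer{i}" for i in range(1, 6)]    # x = 5
low, high = split_by_ratio(layers, 2 / 5)      # layers 1-2 vs layers 3-5
low2, high2 = split_by_low_count(layers, 3)    # layers 1-3 vs layers 4-5
```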
The advantage of this arrangement is that, because the resolution of the low-order image features is larger, more low-order image features may cause decoding errors of the low-order decoding unit, and stability of the target image segmentation model can be effectively ensured by controlling the feature quantity of the low-order image features.
Fig. 2 is a model architecture diagram of a target image segmentation model according to an embodiment of the present invention. As shown in fig. 2, a medical image I to be segmented is input into the target image segmentation model, where the dashed box represents the feature extraction network, in which 5 levels of feature extraction layers are arranged; the output results of the feature extraction layers form an image feature set {f_i, i = 1, 2, 3, 4, 5}. In this example, image features f_1 and f_2 are taken as low-order image features, and image features f_3, f_4 and f_5 are taken as high-order image features.
As shown in fig. 2, "LPD" represents the low-order decoding unit and "HPD" represents the high-order decoding unit, where the low-order decoding unit is configured to perform parallel aggregation and stitching on image features f_1 and f_2 to output the low-order feature image, and the high-order decoding unit is configured to perform parallel aggregation and stitching on the low-order feature image and image features f_3, f_4 and f_5 to obtain the global feature image S_6. Illustratively, the decoding can be denoted as S_6 = pd(pd(f_1, f_2), f_3, f_4, f_5), where pd denotes parallel aggregation and stitching.
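Numerically, the nested aggregation of fig. 2 can be sketched as follows; this toy version substitutes nearest-neighbour upsampling plus channel concatenation for the convolutional parallel partial decoder actually used, so the shapes are illustrative only:

```python
import numpy as np

def upsample_to(x, size):
    """Nearest-neighbour upsampling of a (C, H, W) map to (C, size, size)."""
    factor = size // x.shape[-1]
    return x.repeat(factor, axis=-2).repeat(factor, axis=-1)

def pd(*features):
    """Toy parallel aggregation: upsample every map to the largest
    resolution present, then concatenate along the channel axis.
    (The real decoder also applies convolutions, omitted here.)"""
    size = max(f.shape[-1] for f in features)
    return np.concatenate([upsample_to(f, size) for f in features], axis=0)

# Five single-channel feature maps at successively halved resolutions:
f1, f2 = np.ones((1, 32, 32)), np.ones((1, 16, 16))
f3, f4, f5 = np.ones((1, 8, 8)), np.ones((1, 4, 4)), np.ones((1, 2, 2))

low_feature_image = pd(f1, f2)            # LPD output, shape (2, 32, 32)
s6 = pd(low_feature_image, f3, f4, f5)    # HPD output, shape (5, 32, 32)
```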
S130, outputting a target segmentation result of the medical image to be segmented according to at least one low-order image feature, at least one high-order image feature and a global feature image through an image segmentation network.
In an alternative embodiment, when the sum of the feature numbers of the low-order image feature and the high-order image feature is two, the image segmentation network comprises a first feature fusion module and a third feature fusion module which are sequentially connected in series, wherein the first feature fusion module is used for outputting a first fusion image according to the last high-order image feature and the global feature image; and the third feature fusion module is used for outputting a target segmentation result of the medical image to be segmented according to the first low-order image feature and the first fusion image.
In another optional embodiment, when the sum of the feature numbers of the low-order image feature and the high-order image feature is at least three, the image segmentation network includes a first feature fusion module, at least one second feature fusion module and a third feature fusion module sequentially connected in series, where the first feature fusion module is configured to output a first fused image according to the last high-order image feature and the global feature image; the second feature fusion module is used for outputting a second fusion image according to the input target image features and the target fusion image; and the third feature fusion module is used for outputting a target segmentation result of the medical image to be segmented according to the first low-order image feature and the second fusion image output by the last second feature fusion module connected in series.
In this embodiment, the target image features are high-order image features or low-order image features except for the last high-order image feature and the first low-order image feature, and the target fusion image is a first fusion image output by a first feature fusion module connected in series or a second fusion image output by a last second feature fusion module connected in series.
As shown in fig. 2, the first feature fusion module is a feature fusion module connected to the high-order decoding unit and the last high-order feature extraction layer, the third feature fusion module is a feature fusion module connected to the first low-order feature extraction layer, and the second feature fusion module is a feature fusion module other than the first feature fusion module and the third feature fusion module.
On the basis of the above embodiment, optionally, the first feature fusion module includes a downsampling unit, a reverse attention unit connected downstream of the downsampling unit, and a fusion unit connected downstream of the downsampling unit and the reverse attention unit, where the input data of the downsampling unit is the global feature image and the input data of the reverse attention unit includes the last high-order image feature. As shown in fig. 2, the first feature fusion module corresponds to the rightmost connected group in the image segmentation network, consisting of a downsampling unit (do), a reverse attention unit (RA) and a fusion unit.
On the basis of the above embodiment, optionally, the second feature fusion module includes an upsampling unit, a reverse attention unit connected downstream of the upsampling unit, and a fusion unit connected downstream of the upsampling unit and the reverse attention unit, where the input data of the upsampling unit is the target fusion image and the input data of the reverse attention unit includes the target image feature. As shown in fig. 2, the second feature fusion module corresponds to a connected group in the middle of the image segmentation network, consisting of an upsampling unit, a reverse attention unit (RA) and a fusion unit.
On the basis of the above embodiment, optionally, the third feature fusion module includes an upsampling unit, a reverse attention unit connected downstream of the upsampling unit, a fusion unit connected downstream of the upsampling unit and the reverse attention unit, and an activation unit connected downstream of the fusion unit, where the input data of the upsampling unit is the second fusion image output by the last second feature fusion module in the series and the input data of the reverse attention unit includes the first low-order image feature. As shown in fig. 2, the third feature fusion module corresponds to the leftmost connected group in the image segmentation network, consisting of an upsampling unit, a reverse attention unit (RA), a fusion unit and an activation unit.
As shown in fig. 2, the reverse attention unit obtains the output reverse attention image R_i by multiplying the image feature f_i by the reverse attention weight A_i, which can be expressed as R_i = f_i ⊙ A_i. The reverse attention weight A_i can be described as A_i = ⊖(σ(P(S_{i+1}))), where ⊖ denotes the reverse operator that subtracts its input from the all-ones matrix E, σ denotes the Sigmoid function, P denotes the upsampling operation, and S_{i+1} is the global feature image or the second fused image output by the previous feature fusion module in the series.
Fig. 3 is a block diagram of a reverse attention unit according to an embodiment of the present invention. Specifically, the reverse attention unit sequentially performs upsampling, activation and inversion on the input feature image S_{i+1}, multiplies the processing result by the input image feature f_i, and convolves the product to output the reverse attention image R_i.
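Under this formulation, the erase step is simply "one minus the sigmoid of the upsampled coarse prediction", multiplied into the feature map. A minimal sketch follows; nearest-neighbour upsampling stands in for P, and the final convolution of fig. 3 is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(x):
    """Nearest-neighbour stand-in for the upsampling operation P."""
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def reverse_attention(f_i, s_next):
    """R_i = f_i * A_i, with A_i = 1 - sigmoid(P(S_{i+1})): confidently
    predicted foreground is erased, steering attention to what is missing."""
    a_i = 1.0 - sigmoid(upsample2x(s_next))
    return f_i * a_i

f_i = np.ones((1, 4, 4))                 # image feature at full resolution
s_next = np.zeros((1, 2, 2))             # rough prediction at half resolution
r_i = reverse_attention(f_i, s_next)     # logits of 0 give a weight of 0.5
```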
In this embodiment, the target segmentation result includes a region segmentation result and/or a boundary segmentation result corresponding to the segmentation object.
The erasing mechanism driven by the reverse attention unit can establish a complementary relation between the region and the boundary of the segmented object, and effectively captures holes in the image and/or missing parts of the boundary during segmentation, so that the inaccurate, roughly estimated global feature image is refined into an accurate and complete reverse attention image with a more complete region and/or an accurate, smooth boundary. Meanwhile, the iterative interaction mechanism among the feature fusion modules can further correct contradictory regions and/or boundaries in the fused image, further ensuring the segmentation effect of image segmentation.
Because of the smaller resolution of the higher-order image features, if only the higher-order image features are fused or only the higher-order image features are subjected to the inverse attention computation, the target segmentation result is smaller than the image size of the medical image to be segmented, and there may be a case where boundary information of the segmented object is filtered out.
The low-order image features have a larger resolution and contain more boundary information. According to the technical scheme of this embodiment, a low-order feature extraction module, a high-order feature extraction module and a global decoding module are arranged in the feature extraction network of the target image segmentation model: the low-order feature extraction module extracts at least one low-order image feature of the medical image to be segmented, the high-order feature extraction module extracts at least one high-order image feature of the medical image to be segmented, and the global decoding module fuses the at least one low-order image feature and the at least one high-order image feature to obtain the global feature image. Because the global feature image fuses both the low-order and high-order image features of the medical image to be segmented, the target image segmentation model has stronger feature extraction capability and combines more complete image features, which solves the problem that traditional image segmentation methods ignore the segmentation boundary, ensures the accuracy and integrity of the boundary information of the segmented object, and improves the segmentation effect of the medical image.
Fig. 4 is a flowchart of another method for segmenting medical images according to an embodiment of the present invention, which further refines the method for segmenting medical images according to the above embodiment. As shown in fig. 4, the method includes:
s210, inputting the medical image to be segmented into a target image segmentation model which is trained in advance.
S210 in this embodiment is the same as or similar to S110 shown in fig. 1 in the above embodiment, and is not described again here.
On the basis of the above embodiment, optionally, the method further includes: inputting the training medical image into an initial image segmentation model which is not trained, and obtaining an output prediction segmentation result; determining a target loss function according to the prediction segmentation result and a standard segmentation result corresponding to the training medical image; according to the target loss function, adjusting model parameters of the initial image segmentation model; and taking the initial image segmentation model in the current iteration process as a target image segmentation model after training is completed until the target loss function is converged.
Exemplary types of the target loss function include, but are not limited to, square loss functions, logarithmic loss functions, exponential loss functions, mean square error loss functions, logistic regression loss functions, Huber loss functions, cross entropy loss functions, Kullback-Leibler divergence loss functions, and the like.
In an alternative embodiment, determining the target loss function based on the prediction segmentation result and the standard segmentation result corresponding to the training medical image comprises: respectively determining a weighted intersection-over-union (IoU) loss function and a binary cross entropy loss function according to the prediction segmentation result and the standard segmentation result corresponding to the training medical image, and taking the sum of the weighted IoU loss function and the binary cross entropy loss function as the target loss function.
Illustratively, the weighted IoU loss function L_IoU satisfies the formula:
the binary cross entropy loss function L_BCE satisfies the formula:
L_BCE = -(1/total) · Σ_{i=1}^{total} [ T_i·log(P_i) + (1 − T_i)·log(1 − P_i) ]
where T_i represents the standard label value of the i-th pixel in the training medical image, P_i represents the predicted label value of the i-th pixel in the training medical image, and total represents the number of pixels in the training medical image.
Accordingly, the target loss function L may be expressed as L = L_IoU + L_BCE. In contrast to the standard IoU loss function, the weighted IoU loss function highlights the importance of difficult sample pixels by increasing their weights, rather than weighting all pixels equally.
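The combined loss can be sketched as follows. The per-pixel weights w for the IoU term are left uniform here as a placeholder, because the patent states the intent (up-weighting difficult pixels) but not the exact weighting scheme:

```python
import numpy as np

def bce_loss(T, P, eps=1e-7):
    # binary cross entropy averaged over all pixels
    # T: standard (ground-truth) label values, P: predicted label values in [0, 1]
    P = np.clip(P, eps, 1.0 - eps)
    return -np.mean(T * np.log(P) + (1.0 - T) * np.log(1.0 - P))

def weighted_iou_loss(T, P, w=None):
    # weighted IoU loss: 1 - (weighted intersection) / (weighted union);
    # w would up-weight difficult pixels, uniform weights are used as a placeholder
    if w is None:
        w = np.ones_like(T)
    inter = np.sum(w * T * P)
    union = np.sum(w * (T + P - T * P))
    return 1.0 - inter / union

def target_loss(T, P):
    # L = L_IoU + L_BCE, the sum stated in the embodiment
    return weighted_iou_loss(T, P) + bce_loss(T, P)
```

A prediction matching the standard segmentation drives both terms toward zero, while a poor prediction increases both, which is what gradient descent on the model parameters exploits during training.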
On the basis of the above embodiment, optionally, the method further includes: and preprocessing the original medical image to obtain a training medical image. Exemplary preprocessing operations include, but are not limited to, cropping operations, normalization operations, image enhancement operations, and the like.
Specifically, the cropping operation may be to crop a medical image of a preset crop size from the original medical image. For example, the preset crop size may be 352×352, which is not limited here and may be set in a user-defined manner according to actual requirements. The advantage of this arrangement is that the computational load of the initial image segmentation model is reduced, thereby improving the training efficiency of the target image segmentation model.
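A cropping step of this kind might look as follows; the top-left crop position is an illustrative assumption, since the embodiment only fixes the 352×352 size, not where the crop is taken:

```python
import numpy as np

def crop_image(img, size=352):
    # crop a (H, W) or (H, W, C) image to a size x size window taken from the
    # top-left corner; the source image must be at least size pixels on each side
    assert img.shape[0] >= size and img.shape[1] >= size
    return img[:size, :size]
```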
Specifically, the normalization operation may be used to normalize the pixel values in the image from [0, 255] to [-1, 1], and the normalized medical image X_new satisfies the formula:
X_new = (X − 127.5) / 127.5
where X represents the original medical image or the cropped medical image. The advantage of this arrangement is that the target image segmentation model can adapt to medical images with different brightness and contrast, improving the generalization capability and training efficiency of the target image segmentation model.
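The normalization reduces to a single affine map; a sketch using the (X − 127.5)/127.5 form, one of the equivalent ways to map [0, 255] onto [−1, 1]:

```python
import numpy as np

def normalize(X):
    # map pixel values from [0, 255] to [-1, 1]
    return (X.astype(np.float32) - 127.5) / 127.5
```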
Specifically, image enhancement operations include, but are not limited to, random rotation by an arbitrary angle, random flipping, random translation by 10 pixels, random brightness and contrast adjustment, and the like. The advantage of this arrangement is that the diversity of training samples is increased, overfitting of the target image segmentation model is reduced, and the generalization capability of the target image segmentation model is improved.
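A lightweight subset of these enhancements (flips and 90-degree rotations only; arbitrary-angle rotation, translation and brightness/contrast jitter would typically come from an image-augmentation library) can be sketched as:

```python
import numpy as np

def augment(img, rng):
    # random horizontal/vertical flip and random 90-degree rotation of a
    # square (H, W) or (H, W, C) image; rng is a numpy Generator so the
    # augmentation stream is reproducible from a seed
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    return img
```

Because every operation is a pixel permutation, the augmented image contains exactly the original pixel values, only rearranged.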
In an alternative embodiment, the output of the target image segmentation model includes two channels, one being the output values of a background image channel and the other being the output values of a segmentation object channel, and the prediction segmentation result of the training medical image is output through an argmax operation over the two channels.
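The two-channel output collapses to a label map via argmax. A sketch, assuming a (2, H, W) layout with channel 0 as background and channel 1 as the segmentation object:

```python
import numpy as np

def prediction_from_channels(output):
    # output: (2, H, W) — channel 0: background, channel 1: segmentation object;
    # argmax over the channel axis yields 1 wherever the object channel dominates
    return np.argmax(output, axis=0)
```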
S220, performing feature extraction on the input medical image to be segmented through a feature extraction network, and outputting at least one low-order image feature, at least one high-order image feature and a global feature image.
S230, outputting a target segmentation result of the medical image to be segmented according to at least one low-order image feature, at least one high-order image feature and the global feature image through the image segmentation network.
S220-S230 in this embodiment are the same as or similar to S120-S130 shown in fig. 1 in the above embodiment, and are not described again here.
S240, representing the boundary segmentation result corresponding to the target segmentation result in the form of an implicit level set, and constructing a Lagrange equation.
In an alternative embodiment, the Lagrangian equation satisfies the following formula:
where the constraint term represents the constraints on the image segmentation result, J(x) represents the optimization objective function, x_t and x_{t+1} respectively represent the iterative segmentation boundaries corresponding to the t-th and (t+1)-th iterative solutions, α represents the learning rate, V(x) represents the step size of one iterative solution, V_max represents the maximum step size of one iterative solution, and μ and σ respectively represent the Lagrangian coefficients.
S250, determining a partial differential equation corresponding to the Lagrange equation by adopting a gradient descent algorithm.
In an alternative embodiment, the partial differential equation satisfies the following formula:
where V_1 represents a velocity function caused by contrast variation in the medical image to be segmented, I(x) represents the pixel value of the medical image to be segmented at the iterative segmentation boundary x, V_2 represents a velocity function caused by the boundary shape of the iterative segmentation boundary, k(x) represents the boundary curvature at the iterative segmentation boundary x, V_3 represents a velocity function caused by the boundary tension of the iterative segmentation boundary, d(x) represents the positional distance between the iterative segmentation boundary x obtained by the current iterative solution and the iterative segmentation boundary obtained by the previous iterative solution, and m and n respectively represent velocity coefficients.
And S260, carrying out iterative solution on the partial differential equation to obtain an image segmentation result of the medical image to be segmented.
Specifically, the termination condition of the iterative solution includes the number of iterations reaching an iteration number threshold and/or convergence of the partial differential equation. The iteration number threshold may be 1000, which is not limited here and may be set in a user-defined manner according to actual requirements.
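The stopping logic can be sketched generically. The update rule below is plain gradient descent on a caller-supplied velocity function, standing in for the actual PDE right-hand side, which the embodiment builds from the contrast, curvature and tension terms:

```python
import numpy as np

def iterate_until_converged(x0, velocity, alpha=0.1, max_iter=1000, tol=1e-6):
    # x_{t+1} = x_t - alpha * V(x_t); stop when the iteration count reaches the
    # threshold (1000 in the embodiment) and/or the update has converged
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = x - alpha * velocity(x)
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x
```

For instance, with velocity(x) = 2x (the gradient of x²) the iterate contracts geometrically toward 0 and the tolerance check fires long before the iteration-count threshold.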
In this embodiment, the image segmentation result includes an image segmentation region and/or an image segmentation boundary corresponding to the segmentation object.
According to the technical scheme of this embodiment, the boundary segmentation result corresponding to the target segmentation result is represented in the form of an implicit level set, a Lagrangian equation is constructed, a gradient descent algorithm is adopted to determine the partial differential equation corresponding to the Lagrangian equation, and the partial differential equation is solved iteratively to obtain the image segmentation result of the medical image to be segmented. This solves the problem of adhesion at the segmentation boundary of the segmented object, improves the ability to capture boundary details of the segmented object in the medical image to be segmented, and further improves the segmentation effect of the medical image.
The following is an embodiment of a medical image segmentation apparatus provided in the embodiments of the present invention, which belongs to the same inventive concept as the medical image segmentation method of the above embodiments; for details not described in this apparatus embodiment, reference may be made to the description of the medical image segmentation method in the above embodiments.
Fig. 5 is a schematic structural diagram of a medical image segmentation apparatus according to an embodiment of the present invention. As shown in fig. 5, the apparatus includes: a medical image input module 310 to be segmented, a global feature image output module 320 and a target segmentation result output module 330.
The medical image to be segmented input module 310 is configured to input a medical image to be segmented into a target image segmentation model that is trained in advance; the target image segmentation model comprises a feature extraction network and an image segmentation network;
The global feature image output module 320 is configured to perform feature extraction on the input medical image to be segmented through a feature extraction network, and output at least one low-order image feature, at least one high-order image feature and a global feature image;
the target segmentation result output module 330 is configured to output a target segmentation result of the medical image to be segmented according to at least one low-order image feature, at least one high-order image feature, and a global feature image through the image segmentation network;
The feature extraction network comprises a low-order feature extraction module, a high-order feature extraction module and a global decoding module, wherein the low-order feature extraction module is used for extracting at least one low-order image feature of a medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, and the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
According to the technical scheme, the low-order feature extraction module, the high-order feature extraction module and the global decoding module are arranged in the feature extraction network in the target image segmentation model, the low-order feature extraction module is used for extracting at least one low-order image feature of a medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image, and the global feature image in the embodiment fuses the low-order image feature and the high-order image feature of the medical image to be segmented, so that the target image segmentation model has stronger feature extraction capability and is combined with more complete image features, the problem that a segmentation boundary is ignored in a traditional image segmentation method is solved, and the segmentation effect of the medical image is improved.
In an alternative embodiment, the global decoding module includes a low-order decoding unit and a high-order decoding unit, and the low-order feature extraction module includes at least two low-order feature extraction layers sequentially connected in series, the at least two low-order feature extraction layers being used for performing feature extraction on the medical image to be segmented layer by layer to output at least two low-order image features; the low-order decoding unit is used for performing parallel aggregation and splicing on the at least two low-order image features to output a low-order feature image; the high-order feature extraction module includes at least one high-order feature extraction layer, the at least one high-order feature extraction layer being used for performing feature extraction on the low-order feature image layer by layer to output at least one high-order image feature; and the high-order decoding unit is used for performing parallel aggregation and splicing on the low-order feature image and the at least one high-order image feature to obtain the global feature image.
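The parallel aggregation and splicing can be illustrated with plain arrays; nearest-neighbour upsampling and channel concatenation stand in for the learned aggregation here, whereas the actual decoding units would use convolutions:

```python
import numpy as np

def upsample2x(f):
    # nearest-neighbour 2x spatial upsampling of a (C, H, W) feature map
    return f.repeat(2, axis=1).repeat(2, axis=2)

def aggregate_and_splice(low, high):
    # bring the smaller high-order map up to the low-order resolution, then
    # splice (concatenate) the two maps along the channel axis
    while high.shape[1] < low.shape[1]:
        high = upsample2x(high)
    return np.concatenate([low, high], axis=0)
```

This makes the resolution argument of the embodiment concrete: the high-order map must be upsampled to the low-order resolution before splicing, which is why fusing only high-order features would yield an undersized result.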
In an alternative embodiment, the image segmentation network comprises a first feature fusion module, at least one second feature fusion module and a third feature fusion module which are sequentially connected in series, wherein the first feature fusion module is used for outputting a first fusion image according to the last high-order image feature and the global feature image; the second feature fusion module is used for outputting a second fusion image according to the input target image features and the target fusion image; the third feature fusion module is used for outputting a target segmentation result of the medical image to be segmented according to the first low-order image feature and the second fusion image output by the last second feature fusion module connected in series;
The target image features are high-order image features or low-order image features except the last high-order image feature and the first low-order image feature, and the target fusion image is a first fusion image output by a first feature fusion module connected in series or a second fusion image output by a last second feature fusion module connected in series.
In an alternative embodiment, the first feature fusion module includes a downsampling unit, a reverse attention unit connected downstream of the downsampling unit, and a fusion unit connected downstream of the downsampling unit and the reverse attention unit, wherein input data of the downsampling unit is a global feature image, and input data of the reverse attention unit includes a last high-order image feature;
The second feature fusion module comprises an up-sampling unit, a reverse attention unit connected with the downstream of the up-sampling unit and a fusion unit connected with the downstream of the up-sampling unit and the reverse attention unit respectively, wherein the input data of the up-sampling unit is a target fusion image, and the input data of the reverse attention unit comprises target image features;
The third feature fusion module comprises an up-sampling unit, a reverse attention unit connected with the downstream of the up-sampling unit, a fusion unit connected with the downstream of the up-sampling unit and the reverse attention unit respectively, and an activation unit connected with the downstream of the fusion unit, wherein the input data of the up-sampling unit is a second fusion image output by the last second feature fusion module connected in series, and the input data of the reverse attention unit comprises first low-order image features.
In an alternative embodiment, the apparatus further comprises:
The image segmentation result determining module is used for expressing the boundary segmentation result corresponding to the target segmentation result in the form of an implicit level set and constructing a Lagrange equation;
determining a partial differential equation corresponding to the Lagrange equation by adopting a gradient descent algorithm;
and carrying out iterative solution on the partial differential equation to obtain an image segmentation result of the medical image to be segmented.
In an alternative embodiment, the Lagrangian equation satisfies the following formula:
where the constraint term represents the constraints on the image segmentation result, J(x) represents the optimization objective function, x_t and x_{t+1} respectively represent the iterative segmentation boundaries corresponding to the t-th and (t+1)-th iterative solutions, α represents the learning rate, V(x) represents the step size of one iterative solution, V_max represents the maximum step size of one iterative solution, and μ and σ respectively represent the Lagrangian coefficients.
In an alternative embodiment, the partial differential equation satisfies the following formula:
where V_1 represents a velocity function caused by contrast variation in the medical image to be segmented, I(x) represents the pixel value of the medical image to be segmented at the iterative segmentation boundary x, V_2 represents a velocity function caused by the boundary shape of the iterative segmentation boundary, k(x) represents the boundary curvature at the iterative segmentation boundary x, V_3 represents a velocity function caused by the boundary tension of the iterative segmentation boundary, d(x) represents the positional distance between the iterative segmentation boundary x obtained by the current iterative solution and the iterative segmentation boundary obtained by the previous iterative solution, and m and n respectively represent velocity coefficients.
The medical image segmentation apparatus provided by the embodiments of the present invention can execute the medical image segmentation method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing the method.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, eyeglasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13, communicatively connected to the at least one processor 11, wherein the memory stores a computer program executable by the at least one processor 11, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information or data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSP), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the medical image segmentation method provided by the above embodiments.
In some embodiments, the medical image segmentation method provided by the above embodiments may be implemented as a computer program, which is tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the above described method of segmentation of medical images may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the segmentation method of the medical image by any other suitable means (e.g. by means of firmware).
Various embodiments of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard parts (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
The computer program for implementing the medical image segmentation method of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present application, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable storage medium. Examples of machine-readable storage media may include an electrical connection based on at least one wire, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a terminal device having: a display device (e.g., a cathode-ray tube (CRT) or liquid crystal display (LCD) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the terminal device. Other kinds of devices may also provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LAN), wide area networks (WAN), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility in traditional physical hosts and virtual private server (VPS) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of segmenting a medical image, comprising:
inputting the medical image to be segmented into a target image segmentation model which is trained in advance; the target image segmentation model comprises a feature extraction network and an image segmentation network;
performing feature extraction on the input medical image to be segmented through the feature extraction network, and outputting at least one low-order image feature, at least one high-order image feature and a global feature image;
Outputting a target segmentation result of the medical image to be segmented according to the at least one low-order image feature, the at least one high-order image feature and the global feature image through the image segmentation network;
the feature extraction network comprises a low-order feature extraction module, a high-order feature extraction module and a global decoding module, wherein the low-order feature extraction module is used for extracting at least one low-order image feature of the medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, and the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
2. The method according to claim 1, wherein the global decoding module comprises a low-order decoding unit and a high-order decoding unit, the low-order feature extraction module comprises at least two low-order feature extraction layers sequentially connected in series, the at least two low-order feature extraction layers are used for carrying out feature extraction on the medical image to be segmented layer by layer to output at least two low-order image features, the low-order decoding unit is used for carrying out parallel aggregation and splicing on the at least two low-order image features to output a low-order feature image, the high-order feature extraction module comprises at least one high-order feature extraction layer, the at least one high-order feature extraction layer is used for carrying out feature extraction on the low-order feature image layer by layer to output at least one high-order image feature, and the high-order decoding unit is used for carrying out parallel aggregation and splicing on the low-order feature image and the at least one high-order image feature to obtain a global feature image.
3. The method of claim 1, wherein the image segmentation network comprises a first feature fusion module, at least one second feature fusion module, and a third feature fusion module in series, wherein the first feature fusion module is configured to output a first fused image according to a last higher-order image feature and the global feature image; the second feature fusion module is used for outputting a second fusion image according to the input target image features and the target fusion image; the third feature fusion module is used for outputting a target segmentation result of the medical image to be segmented according to the first low-order image feature and the second fusion image output by the last second feature fusion module connected in series;
The target image features are high-order image features or low-order image features except the last high-order image feature and the first low-order image feature, and the target fusion image is a first fusion image output by a first feature fusion module connected in series or a second fusion image output by a last second feature fusion module connected in series.
4. A method according to claim 3, wherein the first feature fusion module comprises a downsampling unit, a reverse attention unit connected downstream of the downsampling unit, and a fusion unit connected downstream of the downsampling unit and the reverse attention unit, respectively, the input data of the downsampling unit being the global feature image, the input data of the reverse attention unit comprising the last higher order image feature;
The second feature fusion module comprises an up-sampling unit, a reverse attention unit connected with the downstream of the up-sampling unit and a fusion unit connected with the downstream of the up-sampling unit and the reverse attention unit respectively, wherein the input data of the up-sampling unit is the target fusion image, and the input data of the reverse attention unit comprises the target image features;
The third feature fusion module comprises an up-sampling unit, a reverse attention unit connected with the downstream of the up-sampling unit, a fusion unit connected with the downstream of the up-sampling unit and the reverse attention unit respectively, and an activation unit connected with the downstream of the fusion unit, wherein the input data of the up-sampling unit is a second fusion image output by the last second feature fusion module connected in series, and the input data of the reverse attention unit comprises a first low-order image feature.
5. The method according to any one of claims 1-4, further comprising:
expressing the boundary segmentation result corresponding to the target segmentation result in the form of an implicit level set, and constructing a Lagrangian equation;
determining a partial differential equation corresponding to the Lagrangian equation by using a gradient descent algorithm; and
iteratively solving the partial differential equation to obtain an image segmentation result of the medical image to be segmented.
6. The method of claim 5, wherein the Lagrangian equation satisfies the following formula:
wherein the formula includes a constraint term on the image segmentation result, J(x) represents the optimization objective function, x_t and x_{t+1} respectively represent the iterative segmentation boundaries obtained by the t-th and (t+1)-th iterative solutions, α represents the learning rate, V(x) represents the step size of a single iterative solution, V_Max represents the maximum step size of a single iterative solution, and μ and σ respectively represent the Lagrangian coefficients.
7. The method of claim 5, wherein the partial differential equation satisfies the formula:
wherein V_1 represents the speed function induced by contrast variation in the medical image to be segmented, I(x) represents the pixel value of the medical image to be segmented at the iterative segmentation boundary x, V_2 represents the speed function induced by the shape of the iterative segmentation boundary, k(x) represents the boundary curvature at the iterative segmentation boundary x, V_3 represents the speed function induced by the tension of the iterative segmentation boundary, d(x) represents the distance between the iterative segmentation boundary x obtained by the current iterative solution and the iterative segmentation boundary obtained by the previous iterative solution, and m and n respectively represent speed coefficients.
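The iterative solution described in claims 5–7 can be illustrated with a minimal gradient-descent loop. The composite speed function and its coefficients below are placeholders, not the patent's actual V_1, V_2, and V_3; the sketch only shows the clipped update x_{t+1} = x_t + α·V(x_t) with the per-step speed bounded by V_Max:

```python
def evolve_boundary(x0, speed, alpha=0.1, v_max=1.0, steps=100, tol=1e-4):
    # Iterative solution of the evolution equation: at each step the boundary
    # moves by alpha * V(x), with the speed clipped to the maximum step V_Max.
    x = x0
    for _ in range(steps):
        v = max(-v_max, min(v_max, speed(x)))
        x_next = x + alpha * v
        if abs(x_next - x) < tol:  # boundary has effectively stopped moving
            return x_next
        x = x_next
    return x

def total_speed(x, edge=2.0, m=1.0, n=0.5):
    # Placeholder composite speed V = V1 + V2 + V3. In this flat 1-D toy case
    # the curvature term V2 and the tension term V3 contribute nothing.
    v1 = edge - x      # contrast-driven attraction toward the edge position
    v2 = -m * 0.0      # curvature term k(x), zero for a flat boundary
    v3 = -n * 0.0      # tension term d(x), zero when displacement is ignored
    return v1 + v2 + v3
```

With these placeholders the boundary converges monotonically toward the edge position, and the V_Max clip keeps early steps from overshooting when the contrast term is large.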
8. A medical image segmentation apparatus, comprising:
The medical image to be segmented input module is used for inputting the medical image to be segmented into a target image segmentation model which is trained in advance; the target image segmentation model comprises a feature extraction network and an image segmentation network;
The global feature image output module is used for performing feature extraction on the input medical image to be segmented through the feature extraction network and outputting at least one low-order image feature, at least one high-order image feature, and a global feature image;
The target segmentation result output module is used for outputting a target segmentation result of the medical image to be segmented according to the at least one low-order image feature, the at least one high-order image feature and the global feature image through the image segmentation network;
the feature extraction network comprises a low-order feature extraction module, a high-order feature extraction module and a global decoding module, wherein the low-order feature extraction module is used for extracting at least one low-order image feature of the medical image to be segmented, the high-order feature extraction module is used for extracting at least one high-order image feature of the medical image to be segmented, and the global decoding module is used for fusing the at least one low-order image feature and the at least one high-order image feature to obtain a global feature image.
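The three modules of the feature extraction network in claim 8 can be caricatured in a few lines: low-order features as local detail, high-order features as a global summary, and the global decoding module as an element-wise fusion. All names and operations below are hypothetical stand-ins for the real (unspecified) convolutional modules:

```python
def low_order_module(image):
    # Low-order image features: local detail, sketched as adjacent differences.
    return [image[i + 1] - image[i] for i in range(len(image) - 1)]

def high_order_module(image):
    # High-order image features: global semantics, sketched as the mean value
    # broadcast back over the whole image.
    mean = sum(image) / len(image)
    return [mean] * len(image)

def global_decoder(low, high):
    # Global decoding module: fuse low- and high-order features element-wise.
    n = min(len(low), len(high))
    return [low[i] + high[i] for i in range(n)]

def feature_extraction_network(image):
    # Claim 8's feature extraction network: the three modules together,
    # returning the low-order features, high-order features, and global map.
    low = low_order_module(image)
    high = high_order_module(image)
    return low, high, global_decoder(low, high)
```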
9. An electronic device, the electronic device comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the medical image segmentation method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions which, when executed, cause a processor to implement the medical image segmentation method according to any one of claims 1-7.
CN202410496846.2A 2024-04-24 2024-04-24 Medical image segmentation method, device, equipment and storage medium Pending CN118298178A (en)

Publications (1)

Publication Number Publication Date
CN118298178A true CN118298178A (en) 2024-07-05

Family

ID=91680006



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination