CN115375787A - Artifact correction method, computer device and readable storage medium

Info

Publication number: CN115375787A
Application number: CN202211030978.3A
Authority: CN (China)
Prior art keywords: artifact, model, region, target, correction
Legal status: Pending
Other languages: Chinese (zh)
Inventor: not disclosed (不公告发明人)
Current Assignee: Suzhou Xiaowei Changxing Robot Co ltd
Original Assignee: Suzhou Xiaowei Changxing Robot Co ltd
Application filed by Suzhou Xiaowei Changxing Robot Co ltd
Priority to CN202211030978.3A
Publication of CN115375787A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G06T11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/41 - Medical

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application relates to an artifact correction method, a computer device, and a computer-readable storage medium. The method comprises the following steps: acquiring preoperative image data; performing artifact identification on the preoperative image data through a pre-trained artifact identification model to obtain a target artifact region; acquiring attribute information of the target artifact region; and when it is determined from the attribute information that the target artifact region requires artifact correction, performing artifact correction on the target artifact region through a pre-trained artifact correction model. The method can improve processing efficiency.

Description

Artifact correction method, computer device and readable storage medium
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to an artifact correction method, a computer device, and a readable storage medium.
Background
Computed tomography (CT) is an advanced medical imaging technique that scans a specific region of the body with X-ray beams and reconstructs lesions, providing important information for diagnosis. In preoperative surgical planning, metal contained in the patient's body generates metal artifacts: because a metal body is a high-density substance, its presence strongly attenuates the X-rays during scanning and imaging, and the artifacts appear as bright and dark streaks or radial region artifacts in the CT image. These metal artifacts degrade the quality of preoperative surgical planning.
In the conventional technology, the influence of metal artifacts is eliminated by correcting them; however, directly correcting all artifacts entails a large amount of data processing and reduces efficiency.
Disclosure of Invention
In view of the above, it is desirable to provide an artifact correction method, a computer device, and a readable storage medium capable of improving processing efficiency.
In a first aspect, the present application provides an artifact correction method, the method comprising:
acquiring preoperative image data;
performing artifact identification on the preoperative image data through a pre-trained artifact identification model to obtain a target artifact region;
acquiring attribute information of the target artifact region;
and when it is determined from the attribute information that the target artifact region requires artifact correction, performing artifact correction on the target artifact region through a pre-trained artifact correction model.
In one embodiment, after performing artifact correction, through the pre-trained artifact correction model, on the target artifact region requiring artifact correction, the method comprises:
acquiring attribute information of the corrected artifact region, taking the corrected artifact region as the target artifact region, and continuing to perform the step of determining whether the target artifact region requires artifact correction according to the attribute information, until it is determined from the attribute information that the target artifact region no longer requires artifact correction.
In one embodiment, acquiring the attribute information of the target artifact region comprises:
acquiring at least one of the number of target artifact regions, the region position, the region volume ratio, and the region gray-value intensity distribution as the attribute information;
before performing artifact correction on the target artifact region through the pre-trained artifact correction model when it is determined from the attribute information that the target artifact region requires artifact correction, the method further comprises:
determining the weight corresponding to each item of attribute information and the reference index value corresponding to each item of attribute information;
calculating a correction index value of the target artifact region according to the weights and the reference index values;
and determining, according to the correction index value, whether the target artifact region requires artifact correction.
In one embodiment, performing artifact identification on the preoperative image data through the pre-trained artifact identification model to obtain the target artifact region comprises:
performing feature extraction on the preoperative image data through a feature extraction module of the pre-trained artifact identification model to obtain features to be processed;
inputting the features to be processed into a center-point prediction module, a center-point offset prediction module, and a target-object length-width-height prediction module of the artifact identification model respectively for processing, to obtain corresponding artifact region information;
and calculating the target artifact region according to the artifact region information.
In one embodiment, the preoperative imaging data is a CT image sequence; the artifact identification model is a three-dimensional convolutional neural network model, and/or the artifact correction model is a three-dimensional convolutional neural network model.
In one embodiment, the method for training the artifact identification model or the artifact correction model includes:
acquiring sample medical image data, wherein the sample medical image data carries a label;
processing the sample medical image data through an initial model to obtain a model processing result;
calculating to obtain model loss according to the label and the model processing result;
and when the model loss does not meet the requirement, optimizing the network parameters of the initial model, processing the sample medical image data through the optimized model to obtain a new model processing result, and continuing to perform the step of calculating the model loss according to the label and the model processing result, until the model loss meets the requirement, thereby obtaining the artifact identification model or the artifact correction model.
In one embodiment, the label is a pre-labeled artifact region; processing the sample medical image data through the initial model to obtain the model processing result comprises:
performing feature extraction on the sample medical image data through a feature extraction module of the initial model to obtain sample features; inputting the sample features into a center-point prediction module, a center-point offset prediction module, and a target-object length-width-height prediction module of the initial model respectively for processing, to obtain artifact region information of the corresponding sample;
calculating the model loss according to the label and the model processing result comprises:
obtaining a first center-point position and first length-width-height information from the pre-labeled artifact region;
computing a first loss value from the artifact region information obtained by the center-point prediction module and the first center-point position of the pre-labeled artifact region;
computing a second loss value from the artifact region information obtained by the center-point offset prediction module and the first center-point position of the pre-labeled artifact region;
computing a third loss value from the artifact region information obtained by the target-object length-width-height prediction module and the first length-width-height information of the pre-labeled artifact region;
and calculating the model loss from the first loss value, the second loss value, and the third loss value.
In one embodiment, computing the first loss value from the artifact region information obtained by the center-point prediction module and the first center-point position of the pre-labeled artifact region comprises:
determining sample positive voxels and sample negative voxels according to the pre-labeled artifact region; processing the artifact region information through the center-point prediction module to obtain predicted positive voxels, predicted negative voxels, and a second center-point position; calculating a first loss between the sample positive voxels and the predicted positive voxels; calculating a second loss between the sample negative voxels and the predicted negative voxels; calculating a third loss between the first center-point position and the second center-point position; and calculating the first loss value corresponding to the center-point prediction module according to the first loss, the second loss, and the third loss;
computing the second loss value from the artifact region information obtained by the center-point offset prediction module and the first center-point position of the pre-labeled artifact region comprises:
determining a predicted offset value through the center-point offset prediction module, determining a true offset value according to the pre-labeled artifact region, and calculating the second loss value corresponding to the center-point offset prediction module according to the predicted offset value, the true offset value, and the number of center points;
computing the third loss value from the artifact region information obtained by the target-object length-width-height prediction module and the first length-width-height information of the pre-labeled artifact region comprises:
determining second length-width-height information through the target-object length-width-height prediction module, and calculating the third loss value corresponding to the target-object length-width-height prediction module according to the first length-width-height information, the second length-width-height information, and the number of center points.
In one embodiment, the labels are matched artifact-free regions; calculating the model loss according to the label and the model processing result comprises:
determining corresponding pixel-point pairs between the model processing result and the matched artifact-free regions;
and calculating the model loss according to the pixel values of the pixel-point pairs.
In a second aspect, the present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method in any of the above embodiments when executing the computer program.
In a third aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method in any of the above-described embodiments.
According to the artifact correction method, computer device, and computer-readable storage medium above, after the preoperative image data is acquired, artifact identification is performed on it through a pre-trained artifact identification model to determine the target artifact region; whether artifact correction is needed is then determined from the attribute information of the target artifact region, and artifact correction is performed only on target artifact regions that need it. Since not all artifacts have to be corrected, the amount of data processing is reduced and processing efficiency is improved.
Drawings
FIG. 1 is a diagram of an application environment of an artifact correction method in one embodiment;
FIG. 2 is a flow diagram illustrating an exemplary method for artifact correction;
FIG. 3 is a flow chart of an artifact correction method in another embodiment;
FIG. 4 is a schematic representation of cross-sectional, coronal, and sagittal plane locations in one embodiment;
FIG. 5 is a diagram illustrating an example embodiment with an artifact level H;
FIG. 6 is a diagram of an embodiment with an artifact level M;
FIG. 7 is a diagram of an embodiment with an artifact level L;
FIG. 8 is a schematic illustration of a CT image sequence in one embodiment;
FIG. 9 is a schematic diagram illustrating a comparison of a three-dimensional convolution and a two-dimensional convolution, in one embodiment;
FIG. 10 is a schematic flow chart diagram of a method for training a target model in one embodiment;
FIG. 11 is a schematic diagram of the structure of a target model in one embodiment;
FIG. 12 is a schematic diagram of an encoder structure in one embodiment;
FIG. 13 is a schematic diagram of a decoder architecture in one embodiment;
FIG. 14 is a flow diagram for training an artifact identification model in one embodiment;
FIG. 15 is a structural diagram of an artifact identification model in one embodiment;
FIG. 16 is a flow diagram of training of an artifact correction model in one embodiment;
FIG. 17 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The artifact correction method provided by the embodiment of the application can be applied to an application environment as shown in fig. 1. Wherein the terminal 102 communicates with the medical imaging device 104 via a network. The data storage system may store data that the terminal 102 needs to process. The data storage system may be integrated on the terminal 102, or may be placed on the cloud or other network server.
The terminal 102 acquires preoperative image data from the medical imaging device 104, performs artifact identification on it through a pre-trained artifact identification model to obtain a target artifact region, and extracts attribute information of the target artifact region; when it is determined from the attribute information that the target artifact region requires artifact correction, artifact correction is performed on that region through a pre-trained artifact correction model. Artifact correction is thus performed only on target artifact regions that require it, so not all artifacts need to be corrected, which reduces the amount of data processing and improves processing efficiency.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The medical imaging device 104 includes, but is not limited to, various imaging devices, such as a CT imaging device (computed tomography, which uses a precisely collimated X-ray beam together with a highly sensitive detector to perform cross-sectional scans around a part of the human body, and can reconstruct a precise three-dimensional position image of a tumor or the like), a magnetic resonance device (a tomography device that uses the magnetic resonance phenomenon to obtain electromagnetic signals from the human body and reconstruct an image of human body information), a positron emission computed tomography (PET) device, a positron emission magnetic resonance imaging system (PET/MR), and the like.
In one embodiment, as shown in fig. 2, an artifact correction method is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
s202: and acquiring preoperative image data.
Specifically, the preoperative image data is image data of a target area acquired before an operation, such as an operation area, for example, preoperative medical image data acquired by a medical imaging device, and specifically, the preoperative image data may be joint image data, tooth image data, and the like, which is not limited herein. The preoperative imaging data may be selected as a sequence of images, such as a sequence of CT images, so that depth information may be extracted.
The preoperative image data is generated mainly for surgical planning based on the preoperative image data before surgery. However, in order to influence the artifact region, the artifact region is identified first to correct the artifact region, so that the situation that the artifact region causes inaccurate surgical planning is subsequently avoided.
S204: and carrying out artifact identification on preoperative image data through an artifact identification model obtained by pre-training to obtain a target artifact area.
Specifically, the pre-trained artifact identification model is used for performing artifact identification on the preoperative image data to determine the target artifact region. The artifact identification model may be a three-dimensional convolutional neural network model, so that the identified target artifact region is a three-dimensional region, for example, the target region of the identified target artifact is surrounded by a three-dimensional outer frame. The specific definition of the artifact identification model can be seen below, and is not described herein again.
The target artifact region refers to a region in which a metal artifact exists in the preoperative image data.
Optionally, the terminal performs preprocessing on the preoperative image data and then performs recognition through an artifact recognition model. The preprocessing includes, but is not limited to, adjusting window width and window level, so that the artifact model can process the preoperative image data in a targeted manner.
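By way of illustration only, window width/level preprocessing of this kind can be sketched as follows, assuming the CT volume is given in Hounsfield units as a NumPy array; the function name and the default window values are placeholders of this sketch, not settings taken from the patent:

```python
import numpy as np

def apply_window(volume_hu: np.ndarray, level: float = 400.0, width: float = 1800.0) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to a window and rescale to [0, 1].

    `level` is the window center and `width` the window width; both values
    here are illustrative placeholders.
    """
    low, high = level - width / 2.0, level + width / 2.0
    windowed = np.clip(volume_hu, low, high)
    return (windowed - low) / (high - low)
```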
S206: and acquiring attribute information of the target artifact area.
Specifically, the attribute information includes, but is not limited to, the number of target artifact regions, region location, region volume fraction, and region gray value intensity distribution. The region position can be represented by the position of the center point of the region and the length, width and height of the region, the region volume where the artifact exists accounts for the ratio = the intersection volume of the artifact region and the operation selection region/the volume of the operation selection region, and the region gray value intensity distribution is the gray value intensity distribution of the artifact region.
S208: and when the target artifact region is determined to need artifact correction according to the attribute information, carrying out artifact correction on the target artifact region needing artifact correction through an artifact correction model obtained by pre-training.
Specifically, the terminal judges from the attribute information of the artifact region whether it needs to be corrected; if so, the artifact region is corrected through the artifact correction model, and otherwise it is left unprocessed.
Whether the artifact region needs correction is judged from its attribute information; this can be measured by a weighted average of several items of attribute information or, in other embodiments, computed from a single item of attribute information, which is not specifically limited here. Optionally, the type of attribute information may be preset by a user or selected according to the computational load of the system, and is not specifically limited here.
The pre-trained artifact correction model is used for artifact correction of the artifact region. The artifact correction model may be a three-dimensional convolutional neural network model, so that the input target artifact region to be corrected is a three-dimensional region; for example, a CT image sequence is input, and the output is a corrected image sequence, which is also three-dimensional data. The specific definition of the artifact correction model is given below and is not repeated here.
According to this artifact correction method, after the preoperative image data is acquired, artifact identification is performed on it through a pre-trained artifact identification model to determine the target artifact region; whether artifact correction is needed is determined from the attribute information of the target artifact region, and artifact correction is performed only on target artifact regions that need it. Not all artifacts have to be corrected, so the amount of data processing is reduced and processing efficiency is improved.
In one embodiment, after performing artifact correction on the target artifact region through the pre-trained artifact correction model, the method comprises: acquiring attribute information of the corrected artifact region, taking the corrected artifact region as the target artifact region, and continuing to perform the step of determining whether the target artifact region requires artifact correction according to the attribute information, until it is determined from the attribute information that the target artifact region no longer requires correction.
Specifically, referring to fig. 3, fig. 3 is a flowchart of an artifact correction method in another embodiment. In this embodiment the processing is substantially the same as in the embodiment shown in fig. 2; after artifact correction is performed, the attribute information of the corrected artifact region is obtained again, and whether that region still requires correction is determined from this attribute information. If so, correction continues until the attribute information indicates that the target artifact region no longer requires correction, so that the accuracy of the final artifact correction is ensured through multiple correction passes. A minimal sketch of this iterate-until-clean control flow is given below.
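The following Python sketch illustrates the loop described above; `identify`, `score`, and `correct` are hypothetical wrappers around the pre-trained models and the scoring rule described later, and the 0.5 threshold and `max_passes` guard are assumptions of this sketch, not values fixed by the patent:

```python
def correct_until_clean(volume, identify, score, correct, max_passes: int = 5):
    """Repeatedly correct artifact regions until none of them needs correction.

    `identify(volume)` returns candidate artifact regions, `score(region)`
    returns a correction index value, and `correct(volume, region)` returns
    the volume with that region corrected; `max_passes` guards termination.
    """
    for _ in range(max_passes):
        regions = identify(volume)
        # Keep only regions whose correction index value says they need fixing
        # (e.g. the medium-risk band of the scoring rule described later;
        # the 0.5 cutoff is a placeholder).
        to_fix = [r for r in regions if score(r) >= 0.5]
        if not to_fix:
            break  # attribute information indicates no further correction is needed
        for region in to_fix:
            volume = correct(volume, region)
    return volume
```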
In this embodiment, multiple correction passes ensure the accuracy of the final artifact correction, improving the existing preoperative planning process and widening the applicability of surgical planning on an image-guided navigation cart. Artifact identification and scoring raise the degree of intelligence and automation of preoperative planning risk analysis, and correcting artifacts with low risk scores improves image quality and thereby the reliability and accuracy of preoperative planning.
In one embodiment, acquiring the attribute information of the target artifact region comprises: acquiring at least one of the number of target artifact regions, the region position, the region volume ratio, and the region gray-value intensity distribution as the attribute information. Determining whether the target artifact region requires artifact correction according to the attribute information comprises: determining the weight corresponding to each item of attribute information and the reference index value corresponding to each item of attribute information; calculating a correction index value of the target artifact region according to the weights and the reference index values; and determining, according to the correction index value, whether the target artifact region requires artifact correction.
Specifically, after the preoperative image data is identified and the target artifact region determined, the attribute information of the target artifact region is acquired. The attribute information includes, but is not limited to, the number of target artifact regions, the region position, the region volume ratio, and the region gray-value intensity distribution, where the number of artifact regions is the number of artifacts contained in the preoperative image data; the region position may refer to the rough location of the region, for example whether the left or right leg conflicts with the surgical region; the volume ratio of the region where the artifact is located = (intersection volume of the artifact region and the selected surgical region) / (volume of the selected surgical region); and the region gray-value intensity distribution is the gray-value intensity distribution of the artifact region.
When determining whether the target artifact region requires artifact correction, a weight-assignment method may be applied to each item of attribute information to finally obtain a grade, that is, a correction index value, from which it is determined whether the target artifact region requires artifact correction.
For ease of understanding, this embodiment takes two factors, the "position of the artifact" and the "volume proportion of the region where the artifact is located", as an example, and quantifies and unifies the scoring criteria.
Specifically, with reference to FIG. 4, a CT image of a patient is divided into three slice planes: the coronal plane, the sagittal plane, and the transverse plane. The figure shows the "target artifact region" identified by the artifact identification model and the "surgical region" of the patient selected by the physician.
Specifically, as shown in figs. 5 to 7, the reference index value analysis for the "position of the artifact" is as follows: as shown in fig. 5 or fig. 6, if the artifact region lies within the selected surgical region, or the artifact region and the selected surgical region intersect, the artifact region directly affects the subsequent planning of the surgical region, and the reference index value is set to "1"; as shown in fig. 7, if the identified artifact region is not in the selected surgical region, it does not directly affect the subsequent planning, and the reference index value is set to "0". Other values may also be chosen for the reference index values, which is not limited here.
Reference index value analysis for the "volume proportion of the region where the artifact is located": volume ratio of the region where the artifact is located = (intersection volume of the artifact region and the selected surgical region) / (volume of the selected surgical region).
The artifact region volume ratio is 0.6 in fig. 5, 0.2 in fig. 6, and 0 in fig. 7.
Each factor's weight represents the degree of influence of that item of attribute information on the correction index value; the two items of attribute information in this embodiment each account for 50%. The final correction index value for fig. 5 is therefore calculated as: 1 (position influence result) × 0.5 (factor weight) + 0.6 (region volume proportion) × 0.5 (factor weight) = 0.8. In other embodiments the weights may be set in other ways, without specific limitation.
The final correction index value is divided into three levels: H (a high risk rating, correction index value between 0.7 and 1); M (a medium risk rating, between 0.5 and 0.7); and L (a low risk rating, between 0 and 0.5). H means that the existing artifact severely affects the subsequent surgical planning, and the physician is advised not to continue with it. M means that the artifact moderately affects the surgical planning, and the physician is advised to continue only after an artifact correction operation is performed. L means that the artifact has little influence on the surgical planning, which may continue without artifact correction. These level settings are merely examples for this embodiment; other values may be adopted in other embodiments, without specific limitation. The artifact level in fig. 5 is H (high risk), in fig. 6 M (medium risk), and in fig. 7 L (low risk). The terminal automatically computes the level from the three-dimensional coordinate relationship between the artifact region and the surgical region, according to the established scoring principle.
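A sketch of this weighted scoring, following the two-factor example above (position influence and volume proportion, each weighted 0.5, operating on binary masks of the two regions); the function and helper names are hypothetical:

```python
import numpy as np

def correction_index(artifact_mask: np.ndarray, surgical_mask: np.ndarray,
                     w_position: float = 0.5, w_volume: float = 0.5) -> float:
    """Correction index value from two attribute factors, as in the example above."""
    inter = np.logical_and(artifact_mask, surgical_mask).sum()
    position_hit = 1.0 if inter > 0 else 0.0            # reference index: 1 if regions intersect
    volume_ratio = inter / max(surgical_mask.sum(), 1)  # intersection / surgical-region volume
    return w_position * position_hit + w_volume * volume_ratio

def risk_level(index: float) -> str:
    """Map the correction index value to the H/M/L levels of this embodiment."""
    if index >= 0.7:
        return "H"  # high risk: advise against continuing the planning
    if index >= 0.5:
        return "M"  # medium risk: correct the artifact, then continue
    return "L"      # low risk: no correction needed
```

With the fig. 5 values (position hit 1, volume ratio 0.6) this reproduces the 0.8 score and the H level from the worked example.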
In the above embodiment, the correction index value is determined from the attribute information of the target artifact region, and whether correction is required is decided from that value; judging by a quantified standard in this way is more accurate.
In one embodiment, performing artifact identification on the preoperative image data through the pre-trained artifact identification model to obtain the target artifact region comprises: performing feature extraction on the preoperative image data through a feature extraction module of the pre-trained artifact identification model to obtain features to be processed; inputting the features to be processed into a center-point prediction module, a center-point offset prediction module, and a target-object length-width-height prediction module of the artifact identification model respectively for processing, to obtain corresponding artifact region information; and calculating the target artifact region from the artifact region information.
Specifically, in this embodiment, the artifact identification model includes a feature extraction module, a center-point prediction module, a center-point offset prediction module, and a target-object length-width-height prediction module. The preoperative image data is first input into the feature extraction module to obtain the features to be processed; the features to be processed are then input into the three prediction modules respectively to obtain the artifact region information computed by each module; finally, the target artifact region is calculated from this artifact region information.
The artifact region information includes, but is not limited to, the center-point position and the length, width, and height of the artifact region, so the target artifact region is obtained by combining the artifact region information from the center-point prediction module, the center-point offset prediction module, and the target-object length-width-height prediction module, which makes the identified target artifact region more accurate.
In the above embodiment, a three-dimensional fully convolutional backbone network is constructed to extract features; a center-point prediction (HeatMap) branch, a center-point offset prediction branch (Offset), and a target-object length-width-depth prediction branch (Height & Width & Depth) are constructed to obtain the center point and the dimensions of the three-dimensional box of the artifact region; and the predictions of the three branches are combined to obtain the final position of the target in the original image.
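By way of illustration, combining the three branch outputs can be sketched in the spirit of a CenterNet-style decode extended to 3D; the tensor layouts, threshold-based peak picking, and stride value below are assumptions of this sketch, not the patent's exact procedure:

```python
import torch

def decode_boxes(heatmap, offset, size, stride: int = 4, thresh: float = 0.5):
    """Combine the three branch outputs into 3D boxes (center + length/width/height).

    heatmap: (D, H, W) center-point scores; offset: (3, D, H, W) sub-voxel
    center offsets; size: (3, D, H, W) predicted box dimensions.
    """
    peaks = (heatmap > thresh).nonzero(as_tuple=False)  # candidate center voxels
    boxes = []
    for z, y, x in peaks.tolist():
        dz, dy, dx = offset[:, z, y, x].tolist()        # refine the center by the predicted offset
        d, h, w = size[:, z, y, x].tolist()             # predicted length/width/height
        center = ((z + dz) * stride, (y + dy) * stride, (x + dx) * stride)
        boxes.append((center, (d, h, w)))
    return boxes
```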
In one embodiment, the preoperative imaging data is a CT image sequence; the artifact identification model is a three-dimensional convolutional neural network model, and/or the artifact correction model is a three-dimensional convolutional neural network model.
Specifically, referring to fig. 8, the artifact appears across multiple consecutive slices of the CT sequence, and the bone-joint morphology of each slice plane in the joint CT sequence differs considerably; fig. 8 shows the morphology of the bone joint and the artifact region information at different slices. A single slice plane reflects too little information: training the artifact identification network and the artifact correction model on single slices hinders model convergence and hurts the overall accuracy of the models. To fully account for the spatial information of the artifact region and the bone-joint morphology, the joint CT sequence is constructed as the three-dimensional volume shown in fig. 8, and network training operations such as three-dimensional convolution are then performed to obtain the final network model of this embodiment.
Specifically, in conjunction with fig. 9, this embodiment performs feature extraction using three-dimensional convolution. The CT image sequence is composed of a series of two-dimensional tomographic images and, viewed as a whole, is three-dimensional data with spatial information; the knee-joint CT image sequence adds another spatial dimension. Conventional two-dimensional convolution can only extract planar features of a single slice, so the spatial information of the image is lost. Unlike two-dimensional convolution, the input of a three-dimensional convolution has a depth dimension, embodied as multiple consecutive slices of the CT image, so its convolution kernel also gains a dimension; the differences between two-dimensional and three-dimensional convolution are shown in fig. 9.
In the above embodiment, in order to fully utilize the three-dimensional spatial information of the CT image data, three-dimensional convolutional neural network models are used for both the artifact identification model and the artifact correction model; the three-dimensional volume constructed from the joint CT sequence undergoes subsequent network training operations such as three-dimensional convolution to obtain the final network model, fully accounting for the spatial information of the artifact region and the morphology of the target object.
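The shape difference between the two kinds of convolution is easy to see in a short PyTorch snippet; the channel counts, slice count, and kernel sizes here are illustrative only:

```python
import torch
import torch.nn as nn

# A CT sequence as a 3D volume: (batch, channels, depth, height, width).
volume = torch.randn(1, 1, 32, 128, 128)   # 32 consecutive slices

conv3d = nn.Conv3d(1, 8, kernel_size=3, stride=1, padding=1)
print(conv3d(volume).shape)                # -> (1, 8, 32, 128, 128): depth axis preserved

# 2D convolution sees each slice independently and has no depth axis.
one_slice = volume[:, :, 0]                # (1, 1, 128, 128)
conv2d = nn.Conv2d(1, 8, kernel_size=3, stride=1, padding=1)
print(conv2d(one_slice).shape)             # -> (1, 8, 128, 128)
```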
In one embodiment, as shown in fig. 10, a method for training a target model is provided, which is described by taking its application to the terminal in fig. 1 as an example, and comprises the following steps:
S1002: acquiring sample medical image data, wherein the sample medical image data carries a label.
Specifically, the sample medical image data is collected sample data. When training the artifact identification network of the artifact identification model, a medical image data set must first be acquired and its labeling completed; for example, the three-dimensional target box in which an artifact lies is annotated with software such as 3D Slicer. The data set is then divided into a training set, a validation set, and a test set, and data enhancement is applied to the training samples. When training the artifact correction network of the artifact correction model, an original medical image data set and the corresponding artifact-bearing CT images must first be acquired, where the artifact-bearing CT images can be obtained by simulating artifact generation; the data set is then divided into training, validation, and test sets, and data enhancement is applied to the training samples.
S1004: processing the sample medical image data through an initial model to obtain a model processing result.
The backbone network of the artifact identification model or the artifact correction model may include an encoder structure, a decoder structure, and a transition part, as shown in fig. 11; the initial model likewise comprises these three parts. The encoder structure is used to extract the low-level features of the task and to reduce the feature dimensions. The number of encoders can be set differently for different projects; it is 4 in this embodiment, and other numbers may be set in other embodiments. The specific structure of the encoder is shown in fig. 12: the encoder comprises a three-dimensional pooling layer, three-dimensional convolution layers, and a residual module. The three-dimensional pooling layer reduces the feature dimensions; in this embodiment, dimension reduction is performed by a three-dimensional convolution with kernel size 3x3x3 and stride 2. Each three-dimensional convolution layer is used to extract low-level features and consists of a three-dimensional convolution operation with kernel size 3x3x3 and stride 1, a normalization operation, a ReLU activation operation, and a random-deactivation (dropout) operation. The number of three-dimensional convolution layers can be set differently for different projects; it is 3 in this embodiment.
The residual module adds the input of the encoder to its output, alleviating the vanishing-gradient problem of the network.
The decoder structure is used to extract the high-level features of the task and to restore the feature scale. The number of decoders can be set differently for different projects; it is 4 in this embodiment, and other numbers may be used in other embodiments. The specific structure of the decoder is shown in fig. 13: the decoder comprises a three-dimensional deconvolution layer, three-dimensional convolution layers, and a residual module. The three-dimensional deconvolution layer restores the feature scale; in this embodiment, scale restoration is performed by a three-dimensional deconvolution with kernel size 3x3x3 and stride 2. Each three-dimensional convolution layer is used to extract high-level features and consists of a three-dimensional convolution operation with kernel size 3x3x3 and stride 1, a normalization operation, a ReLU activation operation, and a random-deactivation (dropout) operation. The number of three-dimensional convolution layers is 3 in this embodiment. The residual module adds the input of the decoder to its output, alleviating the vanishing-gradient problem.
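One encoder block as described (a 3x3x3/stride-2 downsampling convolution, three 3x3x3/stride-1 convolution layers with normalization, ReLU and dropout, plus a residual connection) can be sketched as follows; the choice of BatchNorm3d as the normalization and the dropout rate are assumptions, since the patent does not name them:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One encoder stage: stride-2 3x3x3 conv for downsampling, then three
    conv-norm-ReLU-dropout layers, with a residual add over the conv stack."""

    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.1):
        super().__init__()
        self.down = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.convs = nn.Sequential(*[
            nn.Sequential(
                nn.Conv3d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm3d(out_ch),   # "normalization operation" (exact type assumed)
                nn.ReLU(inplace=True),
                nn.Dropout3d(p_drop),     # "random deactivation operation"
            ) for _ in range(3)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.down(x)
        return x + self.convs(x)          # residual module: add input and output
```

A decoder block would mirror this with `nn.ConvTranspose3d(..., stride=2)` in place of the downsampling convolution.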
The artifact identification model can compute its model processing result as follows: the forward computation of the multi-artifact identification network is executed, the computed feature maps are fed respectively into the center-point prediction (HeatMap) branch, the center-point offset prediction branch (Offset), and the target-object length-width-depth prediction branch (Height & Width & Depth), and the predictions of the three branches are combined to obtain the final position of the target in the original image.
S1006: calculating the model loss according to the label and the model processing result.
S1008: when the model loss does not meet the requirement, optimizing the network parameters of the initial model, processing the sample medical image data through the optimized model to obtain a new model processing result, and continuing to perform the step of calculating the model loss according to the label and the model processing result, until the model loss meets the requirement, thereby obtaining a target model, where the target model is the artifact identification model or the artifact correction model of any of the above embodiments.
The definition of the loss is given below and is not specifically limited here.
Specifically, in this embodiment, model training is performed through a loss function until the model loss meets the requirement, thereby obtaining the target model.
In the above embodiment, the artifact identification model or the artifact correction model is obtained through model training, facilitating subsequent artifact correction.
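The training procedure of steps S1002 to S1008 amounts to a standard optimization loop; in the sketch below, the optimizer choice, learning rate, stopping criterion, and data handling are assumptions of the illustration rather than values specified by the patent:

```python
import torch

def train(model, loader, loss_fn, epochs: int = 100, target_loss: float = 1e-3):
    """Optimize the initial model until the model loss meets the requirement."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(epochs):
        total = 0.0
        for sample, label in loader:                  # sample medical image data + label
            output = model(sample)                    # S1004: model processing result
            loss = loss_fn(output, label)             # S1006: loss from label and result
            optimizer.zero_grad()
            loss.backward()                           # S1008: optimize network parameters
            optimizer.step()
            total += loss.item()
        if total / max(len(loader), 1) < target_loss: # loss meets the requirement
            break
    return model
```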
In one embodiment, the target model is the artifact identification model, and the label is a pre-labeled artifact region. Processing the sample medical image data through the initial model to obtain the model processing result comprises: performing feature extraction on the sample medical image data through a feature extraction module of the initial model to obtain sample features; and inputting the sample features into a center-point prediction module, a center-point offset prediction module, and a target-object length-width-height prediction module of the initial model respectively for processing, to obtain artifact region information of the corresponding sample. Calculating the model loss according to the label and the model processing result comprises: obtaining a first center-point position and first length-width-height information from the pre-labeled artifact region; computing a first loss value from the artifact region information obtained by the center-point prediction module and the first center-point position of the pre-labeled artifact region; computing a second loss value from the artifact region information obtained by the center-point offset prediction module and the first center-point position of the pre-labeled artifact region; computing a third loss value from the artifact region information obtained by the target-object length-width-height prediction module and the first length-width-height information of the pre-labeled artifact region; and calculating the model loss from the first loss value, the second loss value, and the third loss value.
In one embodiment, computing the first loss value comprises: determining sample positive voxels and sample negative voxels according to the pre-labeled artifact region; processing the artifact region information through the center-point prediction module to obtain predicted positive voxels, predicted negative voxels, and a second center-point position; calculating a first loss between the sample positive voxels and the predicted positive voxels, a second loss between the sample negative voxels and the predicted negative voxels, and a third loss between the first center-point position and the second center-point position; and calculating the first loss value corresponding to the center-point prediction module from the first loss, the second loss, and the third loss. Computing the second loss value comprises: determining a predicted offset value through the center-point offset prediction module, determining a true offset value according to the pre-labeled artifact region, and calculating the second loss value corresponding to the center-point offset prediction module from the predicted offset value, the true offset value, and the number of center points. Computing the third loss value comprises: determining second length-width-height information through the target-object length-width-height prediction module, and calculating the third loss value corresponding to the target-object length-width-height prediction module from the first length-width-height information, the second length-width-height information, and the number of center points.
Specifically, referring to fig. 14, fig. 14 is a flowchart of training the artifact identification model in one embodiment. In this embodiment, data set collection is performed first: a medical image data set is acquired and labeled, for example by annotating with software such as 3D Slicer the three-dimensional target box in which an artifact lies, labeling its center-point coordinates and its length, width and height; the data set is then divided into a training set, a validation set, and a test set, and data enhancement is applied to the training samples. Second, preprocessing is performed, such as adjusting the window width and window level. Third, the network parameters of the artifact identification model are computed forward: the training computation of the multi-artifact identification network is executed, the computed feature maps are fed respectively into the center-point prediction (HeatMap) branch, the center-point offset prediction branch (Offset), and the target-object length-width-depth prediction branch (Height & Width & Depth), and the predictions of the three branches are combined to obtain the final position of the target in the original image. Fourth, the combined loss is calculated; for example, the loss function adopted is the combined loss function of the three task branches. Fifth, when the network test results on the validation and test sets meet the expected conditions, training is terminated and the final artifact identification model is saved and output.
Specifically, referring to fig. 15, fig. 15 is a structural diagram of the artifact identification model in one embodiment. To accurately identify the three-dimensional bounding box containing an artifact region, this embodiment constructs a three-dimensional fully convolutional backbone network, that is, the feature extraction module, to extract features; constructs the center-point prediction (HeatMap) branch, the center-point offset prediction branch (Offset), and the target-object length-width-depth prediction branch (Height & Width & Depth) to obtain the center point and the dimensions of the three-dimensional box of the artifact region; and combines the predictions of the three branches to obtain the final position of the target in the original image.
For ease of understanding, the overall loss function of the artifact identification model in this embodiment includes three components: the center-point loss (HeatMap) $l_k$, the center-point offset loss (Offset) $l_{offset}$, and the target-object length-width-depth loss (Height & Width & Depth) $l_{size}$.

The detection loss function of the artifact identification model is:

$$l_{det} = l_k + w_{s1} \times l_{size} + w_{s2} \times l_{offset}$$

where the weights $w_{si}$ are dynamically adjusted during training.
Specifically, as shown in fig. 14, the loss function of the center-point loss $l_k$ is:

$$l_k = w_{l1} \times l_{pos} + w_{l2} \times l_{neg} + w_{l3} \times l_{dis}$$

where the weights $w_{li}$ are dynamically adjusted during training. First, the set of voxels with value greater than 0 is extracted from the ground-truth heatmap generated from the key points as positive voxels, and voxels with value less than or equal to 0 are taken as negative voxels; then, predicted positive voxels and predicted negative voxels are extracted from the predicted heatmap using the positive/negative voxel ranges of the ground-truth heatmap. The loss computed between the true positive voxels and the predicted positive voxels is $l_{pos}$, the loss computed between the true negative voxels and the predicted negative voxels is $l_{neg}$, and $l_{dis}$ is the average Euclidean distance between the predicted landmark point positions and the true landmark point positions.
The center-point offset loss function $l_{offset}$ of the artifact identification model is:

$$l_{offset} = \frac{1}{N} \sum_{p} \left| \hat{o}_p - o_p \right|$$

where $\hat{o}_p$ is the predicted offset value output by the network, $p$ represents the center-point position of the target box at the current point, $R$ represents the down-sampling factor, $o_p = \frac{p}{R} - \left\lfloor \frac{p}{R} \right\rfloor$ represents the true offset value, and $N$ represents the number of center points.
The length-width-height loss function $l_{size}$ of the artifact identification model is:

$$l_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_{p_k} - S_{p_k} \right|$$

where $\hat{S}_{p_k}$ are the predicted length, width and height values output by the network, $S_{p_k}$ are the true length, width and height values, and $N$ represents the number of center points.
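For illustration, the combined detection loss $l_{det}$ can be sketched in PyTorch over CenterNet-style dense prediction maps; here the positive/negative heatmap terms are simplified to masked L1 penalties, the $l_{dis}$ distance term is omitted, and all names and weights are assumptions of the sketch rather than the patent's exact formulation:

```python
import torch
import torch.nn.functional as F

def detection_loss(pred_hm, true_hm, pred_off, true_off, pred_size, true_size,
                   w_size: float = 0.1, w_off: float = 1.0):
    """l_det = l_k + w_s1 * l_size + w_s2 * l_offset, with l_k simplified to
    separate penalties on positive and negative voxels of the heatmap."""
    pos = true_hm > 0                                  # positive voxels of the true heatmap
    l_pos = F.l1_loss(pred_hm[pos], true_hm[pos]) if pos.any() else pred_hm.sum() * 0
    l_neg = F.l1_loss(pred_hm[~pos], true_hm[~pos])
    l_k = l_pos + l_neg                                # (the l_dis distance term is omitted here)

    n = max(int(pos.sum()), 1)                         # rough stand-in for the number of centers
    # In practice the offset and size losses are evaluated only at center-point
    # locations; summing over the full maps keeps this sketch short.
    l_offset = (pred_off - true_off).abs().sum() / n   # L1 offset loss averaged over centers
    l_size = (pred_size - true_size).abs().sum() / n   # L1 length/width/height loss

    return l_k + w_size * l_size + w_off * l_offset
```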
In the above embodiment, a metal artifact identification model based on deep learning is provided. Its input is the original CT image sequence, and its output is the three-dimensional target region in which an artifact exists; the risk level of the artifact is then scored according to the established scoring principle, that is, by the attribute information of the target artifact region within the identified three-dimensional target region.
In one embodiment, the target model is the artifact correction model, and the labels are matched artifact-free regions. Calculating the model loss according to the label and the model processing result comprises: determining corresponding pixel-point pairs between the model processing result and the matched artifact-free regions; and calculating the model loss according to the pixel values of the pixel-point pairs.
In this embodiment, data set collection is performed first: an original medical image data set and the corresponding artifact-bearing CT images are acquired, where the artifact-bearing CT images can be obtained by simulating artifact generation; the data set is then divided into a training set, a validation set, and a test set, and data enhancement is applied to the training samples. Second, preprocessing is performed, such as adjusting the window width and window level. Third, the network parameters of the artifact correction model are computed forward and the correction result is output. Fourth, the loss is calculated, for example with a mean-square-error loss function. In this embodiment, a three-dimensional fully convolutional backbone network is constructed to extract features, and the correction result is output through an artifact-corrected image output module.
The artifact correction model defines the mean-square-error loss function as:

$$l_{mse} = \frac{1}{w \times h \times d} \sum_{i=1}^{w} \sum_{j=1}^{h} \sum_{k=1}^{d} \left( I_{(i,j,k)} - K_{(i,j,k)} \right)^2$$

where $w \times h \times d$ represents the total number of pixel points, $I_{(i,j,k)}$ is the pixel value of the label image at pixel point $(i,j,k)$, and $K_{(i,j,k)}$ is the pixel value of the network output image at pixel point $(i,j,k)$.
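This is the standard voxel-wise mean squared error; in PyTorch it reduces to a one-line sketch, assuming label and output tensors of identical shape (the function name is hypothetical):

```python
import torch.nn.functional as F

def correction_loss(output_volume, label_volume):
    """Mean squared error over all w*h*d pixel points, as defined above."""
    return F.mse_loss(output_volume, label_volume)
```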
This embodiment provides a metal artifact correction model based on deep learning. Its input is the CT image sequence to be corrected, and its output is the artifact-corrected CT image sequence. Correcting low-risk artifacts improves the quality of the original images and facilitates efficient subsequent preoperative planning; the model can be applied to a joint replacement surgical robot, raising its degree of automation and intelligence.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides an artifact correction apparatus for implementing the above artifact correction method. The implementation scheme provided by this apparatus is similar to that described for the method above, so for specific limitations in one or more embodiments of the artifact correction apparatus provided below, reference may be made to the limitations on the artifact correction method above, which are not repeated here.
In one embodiment, there is provided an artifact correction apparatus, including:
the preoperative image data acquisition module is used for acquiring preoperative image data.
And the artifact identification module is used for carrying out artifact identification on the preoperative image data through an artifact identification model obtained by pre-training so as to obtain a target artifact area.
And the attribute information acquisition module is used for acquiring the attribute information of the target artifact area.
And the artifact correction module is used for performing artifact correction on the target artifact region needing artifact correction through an artifact correction model obtained by pre-training when the target artifact region is determined to need artifact correction according to the attribute information.
In one embodiment, the artifact correction module is further configured to acquire attribute information of a corrected artifact region, use the corrected artifact region as a target artifact region, and continue to perform the step of determining whether the target artifact region needs to be subjected to artifact correction according to the attribute information until it is determined that the target artifact region does not need to be subjected to artifact correction according to the attribute information.
In one embodiment, the attribute information acquiring module is further configured to acquire at least one of the number of the target artifact regions, a region position, a region volume ratio, and a region gray value intensity distribution as the attribute information;
the device further comprises:
The judging module is used to determine the weight corresponding to each item of attribute information and the reference index value corresponding to each item of attribute information, calculate a correction index value of the target artifact region according to the weights and the reference index values, and determine whether the target artifact region needs artifact correction according to the correction index value.
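As a hedged illustration only, the correction index value could be computed as a weighted combination of per-attribute scores against their reference index values; this embodiment does not fix the exact formula, so the weighted sum, attribute names and threshold below are assumptions:

```python
def correction_index(attributes, weights, reference_values):
    """Weighted correction index over the attribute information."""
    index = 0.0
    for name, value in attributes.items():
        # score each attribute relative to its reference index value
        index += weights[name] * (value / reference_values[name])
    return index

# example usage (all names and numbers are invented for illustration)
attrs = {"region_count": 3, "volume_ratio": 0.02, "gray_intensity": 0.7}
w = {"region_count": 0.2, "volume_ratio": 0.5, "gray_intensity": 0.3}
ref = {"region_count": 5.0, "volume_ratio": 0.05, "gray_intensity": 1.0}
needs_correction = correction_index(attrs, w, ref) > 0.5  # assumed threshold
```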
In one embodiment, the artifact identification module is further configured to perform feature extraction on the preoperative image data through a feature extraction module of the pre-trained artifact identification model to obtain features to be processed; input the features to be processed respectively into a central point prediction module, a central point offset prediction module and a length, width and height prediction module of the artifact identification model for processing to obtain corresponding artifact region information; and calculate the target artifact region from the artifact region information.
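The structure described above — one shared feature extraction module feeding three prediction heads — can be sketched as follows; the layer sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ArtifactIdentificationNet(nn.Module):
    """Sketch of an identification model with a shared feature extractor and
    three heads: central point, central point offset, length-width-height."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.center_head = nn.Conv3d(channels, 1, kernel_size=1)  # central point heatmap
        self.offset_head = nn.Conv3d(channels, 3, kernel_size=1)  # central point offset
        self.size_head = nn.Conv3d(channels, 3, kernel_size=1)    # length, width, height

    def forward(self, volume: torch.Tensor):
        features = self.features(volume)  # the features to be processed
        return (self.center_head(features),
                self.offset_head(features),
                self.size_head(features))
```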
In one embodiment, the preoperative imaging data is a CT image sequence; the artifact identification model is a three-dimensional convolutional neural network model, and/or the artifact correction model is a three-dimensional convolutional neural network model.
In one embodiment, there is provided a target model training apparatus including:
The sample acquisition module is used to acquire sample medical image data, the sample medical image data carrying a label.
The model processing module is used to process the sample medical image data through an initial model to obtain a model processing result.
The loss calculation module is used to calculate the model loss according to the label and the model processing result.
The training module is used to optimize the network parameters of the initial model when the model loss does not meet the requirement, process the sample medical image data through the optimized model to obtain a new model processing result, and continue performing the step of calculating the model loss according to the label and the model processing result until the model loss meets the requirement, thereby obtaining a target model, where the target model is the artifact identification model or the artifact correction model of any one of the above embodiments.
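A minimal sketch of this optimize-until-the-loss-meets-the-requirement loop is given below; the optimizer choice, learning rate and stopping threshold are assumptions, not requirements of this embodiment:

```python
import torch

def train_target_model(model, loader, compute_loss, threshold=1e-3,
                       lr=1e-4, max_epochs=100):
    """Train an initial model until the model loss meets the requirement."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for sample, label in loader:
            result = model(sample)               # model processing result
            loss = compute_loss(label, result)   # model loss from label and result
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                     # optimize the network parameters
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < threshold:  # loss meets the requirement
            break
    return model
```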
In one embodiment, the model processing module is further configured to perform feature extraction on the sample medical image data through a feature extraction module of the initial model to obtain sample features, and to input the sample features respectively into a central point prediction module, a central point offset prediction module and a length, width and height prediction module of the initial model for processing to obtain artifact region information of the corresponding samples.
The loss calculation module is further used to obtain a first central point position and first length, width and height information according to the pre-labeled artifact region; calculate the artifact region information obtained by the central point prediction module against the first central point position of the pre-labeled artifact region to obtain a first loss value; calculate the artifact region information obtained by the central point offset prediction module against the first central point position of the pre-labeled artifact region to obtain a second loss value; calculate the artifact region information obtained by the length, width and height prediction module of the target object against the first length, width and height information of the pre-labeled artifact region to obtain a third loss value; and calculate the model loss according to the first loss value, the second loss value and the third loss value.
In one embodiment, the loss calculating module is further configured to determine a sample positive voxel and a sample negative voxel according to a pre-labeled artifact region, process, by the central point predicting module, the artifact region information to obtain a predicted positive voxel, a predicted negative voxel, and a second central point position, calculate a first loss of the sample positive voxel and the predicted positive voxel, calculate a second loss of the sample negative voxel and the predicted negative voxel, calculate a third loss of the first central point position and the second central point position, and calculate, according to the first loss, the second loss, and the third loss, a first loss value corresponding to the central point predicting module; determining a prediction deviation value through the central point deviation prediction module, determining a real deviation value according to the pre-marked artifact area, and calculating to obtain a second loss value corresponding to the central point deviation prediction module according to the prediction deviation value, the real deviation value and the number of central points; and determining second length, width and height information through the length, width and height prediction module of the target object, and calculating a third loss value corresponding to the length, width and height prediction module of the target object according to the first length, width and height information, the second length, width and height information and the number of central points.
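One hedged way to combine the three loss values described above is sketched below; the binary cross-entropy and L1 choices and the equal weighting are assumptions standing in for whatever concrete losses an implementation adopts:

```python
import torch
import torch.nn.functional as F

def identification_loss(pred_center, pred_offset, pred_size,
                        gt_center, gt_offset, gt_size, center_mask):
    """Combine central point, offset and length-width-height losses.

    `center_mask` marks the labeled central points; shapes are assumed to be
    (batch, 1, d, h, w) for the center tensors and (batch, 3, d, h, w) for
    the offset and size tensors.
    """
    n = center_mask.sum().clamp(min=1)  # number of central points
    # first loss value: positive/negative voxel classification of the heatmap
    l_center = F.binary_cross_entropy_with_logits(pred_center, gt_center)
    # second loss value: predicted vs. real offset, averaged over central points
    l_offset = (F.l1_loss(pred_offset, gt_offset, reduction="none")
                * center_mask).sum() / n
    # third loss value: predicted vs. labeled length, width and height
    l_size = (F.l1_loss(pred_size, gt_size, reduction="none")
              * center_mask).sum() / n
    return l_center + l_offset + l_size
```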
In one embodiment, the above-mentioned loss calculation module is further configured to determine corresponding pairs of pixel points of the model processing result and the matched artifact-free region; and calculating according to the pixel values of the pixel point pairs to obtain the model loss.
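The pixel-pair loss for the correction model can likewise be sketched; `pairs` is an assumed list of matched index pairs between the model processing result and the matched artifact-free region:

```python
def pixel_pair_loss(result, artifact_free, pairs):
    """Mean squared difference over corresponding pixel point pairs."""
    total = 0.0
    for p, q in pairs:
        # p indexes the model processing result, q the artifact-free region
        total += (result[p] - artifact_free[q]) ** 2
    return total / max(len(pairs), 1)
```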
The modules in the artifact correction device and the target model training device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 17. The computer device comprises a processor, a memory, a communication interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an artifact correction method, a target model training method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 17 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, or the like.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (11)

1. A method of artifact correction, the method comprising:
acquiring preoperative image data;
carrying out artifact identification on the preoperative image data through an artifact identification model obtained by pre-training to obtain a target artifact area;
acquiring attribute information of the target artifact area;
and when the target artifact region is determined to need artifact correction according to the attribute information, performing artifact correction on the target artifact region needing artifact correction through an artifact correction model obtained by pre-training.
2. The method of claim 1, wherein after the artifact correction model obtained by pre-training performs artifact correction on the target artifact region requiring artifact correction, the method comprises:
acquiring attribute information of the corrected artifact region, taking the corrected artifact region as a target artifact region, and continuing to execute the step of determining whether the target artifact region needs artifact correction according to the attribute information until the target artifact region is judged not to need artifact correction according to the attribute information.
3. The method of claim 1 or 2, wherein the obtaining of the attribute information of the target artifact region comprises:
acquiring at least one of the number, the area position, the area volume ratio and the area gray value intensity distribution of the target artifact areas as attribute information;
before the step of performing artifact correction on the target artifact region needing artifact correction by using an artifact correction model obtained by pre-training when it is determined that the target artifact region needs artifact correction according to the attribute information, the method further includes:
determining the weight corresponding to each attribute information and the reference index value corresponding to each attribute information;
calculating a correction index value of the target artifact region according to the weight and the reference index value;
and determining whether the target artifact area needs artifact correction or not according to the correction index value.
4. The method of claim 1, wherein performing artifact identification on the preoperative image data through a pre-trained artifact identification model to obtain a target artifact region comprises:
performing feature extraction on the preoperative image data through a feature extraction module of an artifact identification model obtained through pre-training to obtain features to be processed;
inputting the features to be processed into a central point prediction module, a central point offset prediction module and a length, width and height prediction module of the artifact identification model respectively for processing to obtain corresponding artifact region information;
and calculating to obtain a target artifact area according to the artifact area information.
5. The method of claim 1, wherein the preoperative imaging data is a CT image sequence; the artifact identification model is a three-dimensional convolutional neural network model, and/or the artifact correction model is a three-dimensional convolutional neural network model.
6. The method of claim 1, wherein the training method of the artifact identification model or the artifact correction model comprises:
acquiring sample medical image data, wherein the sample medical image data carries a label;
processing the sample medical image data through an initial model to obtain a model processing result;
calculating to obtain model loss according to the label and the model processing result;
and when the model loss does not meet the requirement, optimizing the network parameters of the initial model, processing the sample medical image data through the optimized model to obtain a model processing result, and continuously executing the step of calculating the model loss according to the label and the model processing result until the model loss meets the requirement to obtain an artifact identification model or an artifact correction model.
7. The method of claim 6, wherein the label is a pre-labeled artifact region; the processing of the sample medical image data through the initial model to obtain a model processing result includes:
performing feature extraction on the sample medical image data through a feature extraction module of an initial model to obtain sample features; respectively inputting the sample characteristics into a central point prediction module, a central point offset prediction module and a length, width and height prediction module of the initial model for processing to obtain artifact area information of corresponding samples;
the calculating according to the label and the model processing result to obtain the model loss comprises the following steps:
obtaining a first central point position and first length, width and height information according to a pre-marked artifact area;
calculating artifact region information obtained by the central point prediction module and a first central point position of a pre-marked artifact region to obtain a first loss value;
calculating the artifact region information obtained by the central point offset prediction module and the first central point position of the pre-labeled artifact region to obtain a second loss value;
calculating artifact region information obtained by the length, width and height prediction module of the target object and first length, width and height information of a pre-marked artifact region to obtain a third loss value;
and calculating to obtain model loss according to the first loss value, the second loss value and the third loss value.
8. The method of claim 7, wherein the calculating the artifact region information obtained by the center point prediction module and the first center point position of the pre-labeled artifact region to obtain the first loss value comprises:
determining a sample positive voxel and a sample negative voxel according to a pre-labeled artifact region, processing artifact region information through the central point prediction module to obtain a predicted positive voxel, a predicted negative voxel and a second central point position, calculating first losses of the sample positive voxel and the predicted positive voxel, calculating second losses of the sample negative voxel and the predicted negative voxel, calculating third losses of the first central point position and the second central point position, and calculating a first loss value corresponding to the central point prediction module according to the first losses, the second losses and the third losses;
the calculating the position of the first center point of the artifact region pre-labeled with the artifact region information obtained by the center point offset prediction module to obtain a second loss value includes:
determining a prediction deviation value through the central point deviation prediction module, determining a real deviation value according to the pre-labeled artifact area, and calculating to obtain a second loss value corresponding to the central point deviation prediction module according to the prediction deviation value, the real deviation value and the number of central points;
the calculating the artifact region information obtained by the length, width and height prediction module of the target object and the pre-labeled first length, width and height information of the artifact region to obtain a third loss value includes:
and determining second length, width and height information through the length, width and height prediction module of the target object, and calculating a third loss value corresponding to the length, width and height prediction module of the target object according to the first length, width and height information, the second length, width and height information and the number of central points.
9. The method of claim 6, wherein the label is a matching artifact-free region; the calculating according to the label and the model processing result to obtain the model loss comprises the following steps:
determining the model processing result and corresponding pixel point pairs of the matched artifact-free areas;
and calculating according to the pixel values of the pixel point pairs to obtain the model loss.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 9 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
CN202211030978.3A 2022-08-26 2022-08-26 Artifact correction method, computer device and readable storage medium Pending CN115375787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211030978.3A CN115375787A (en) 2022-08-26 2022-08-26 Artifact correction method, computer device and readable storage medium

Publications (1)

Publication Number Publication Date
CN115375787A true CN115375787A (en) 2022-11-22

Family

ID=84067479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211030978.3A Pending CN115375787A (en) 2022-08-26 2022-08-26 Artifact correction method, computer device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115375787A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117808718A (en) * 2024-02-29 2024-04-02 江西科技学院 Method and system for improving medical image data quality based on Internet
CN117808718B (en) * 2024-02-29 2024-05-24 江西科技学院 Method and system for improving medical image data quality based on Internet


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination