CN111932467A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111932467A
CN111932467A
Authority
CN
China
Prior art keywords
image data
neural network
loss
data
dose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010671009.0A
Other languages
Chinese (zh)
Inventor
逄岭
郑凌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Medical Systems Co Ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd filed Critical Neusoft Medical Systems Co Ltd
Priority to CN202010671009.0A priority Critical patent/CN111932467A/en
Publication of CN111932467A publication Critical patent/CN111932467A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present specification provides an image processing method and an image processing apparatus. The method includes: acquiring image data to be processed; denoising the image data with a denoising algorithm to obtain denoised data; and inputting the denoised data into a pre-trained neural network, which simulates and outputs a target image from the denoised data. The neural network is trained with low-dose image data as the network input and high-dose image data as the label data; the radiation dose used when acquiring the low-dose image data is lower than that used when acquiring the high-dose image data.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
During acquisition, conversion, and transmission, medical images are often subject to interference from the imaging equipment or the external environment and therefore contain noise. Noise greatly affects medical image analysis, so it is necessary to denoise the medical image.
Related denoising algorithms tend to leave the medical image looking unnatural and blurred after denoising.
Disclosure of Invention
At least one embodiment of the present specification provides an image processing method that makes the processed image more natural and clearer.
In a first aspect, an image processing method is provided, the method comprising:
acquiring image data to be processed;
denoising the image data by using a denoising algorithm to obtain denoised data;
inputting the denoised data into a pre-trained neural network, the neural network simulating and outputting a target image from the denoised data; the neural network is trained with low-dose image data as the network input and high-dose image data as the label data; the radiation dose used when acquiring the low-dose image data is lower than that used when acquiring the high-dose image data.
In a second aspect, there is provided an image processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring image data to be processed;
the denoising module is used for denoising the image data by utilizing a denoising algorithm to obtain denoised data;
the network module is used for inputting the denoised data into a pre-trained neural network, which simulates and outputs a target image from the denoised data; the neural network is trained with low-dose image data as the network input and high-dose image data as the label data; the radiation dose used when acquiring the low-dose image data is lower than that used when acquiring the high-dose image data.
In a third aspect, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the image processing method of any embodiment of the present specification when executing the program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the steps of the image processing method according to any one of the embodiments of the present specification.
In the technical solution of at least one embodiment of this specification, the image to be processed is first denoised with a denoising algorithm, the denoised result is used as the input of a neural network, and the pre-trained neural network simulates and outputs a higher-dose image, so that the resulting image is more natural and clearer.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
FIG. 1 is a flow diagram illustrating a method of image processing according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a neural network training method in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 4 is a schematic diagram of yet another image processing apparatus according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating yet another image processing apparatus according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The specific manner described in the following exemplary embodiments does not represent all aspects consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present specification. The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining", depending on the context.
Various noises often exist in medical images, and the analysis of the disease condition by a doctor is seriously interfered. Therefore, in the field of medical image processing, denoising of medical images is an important research direction.
Related denoising algorithms include the TV (Total Variation) algorithm, the NL-Means (Non-Local Means) algorithm, the BM3D (Block Matching 3D) algorithm, and so on. These algorithms can remove noise from an image to some extent, but they also leave the image unnatural and blurred.
To this end, the present specification provides an image processing method. By combining a denoising algorithm with a neural network, it addresses the unnatural appearance and blurring of images denoised by a denoising algorithm alone.
In order to make the image processing method provided in the present specification clearer, the following describes in detail the implementation procedure of the scheme provided in the present specification with reference to the drawings and specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart illustrating an image processing method according to an embodiment provided in the present specification. The method is used for processing medical images. As shown in fig. 1, the process includes:
step 101, acquiring image data to be processed.
For example, image data of a target object is acquired by an image acquisition device and used as the image data to be processed; alternatively, already existing image data is acquired directly. It will be appreciated that there are many other specific ways of obtaining the image data to be processed.
Specifically, image data of a patient's affected area may be collected by a hospital CT machine and used as the image data to be processed in this step. The affected area may be any region of the patient's head, lungs, waist, and so on where a lesion exists or may exist. After the image data of the affected area is collected, an image of the affected area can be constructed from it, so that a doctor can analyze the patient's condition from the image.
Step 102, denoising the image data with a denoising algorithm to obtain denoised data.
The images directly constructed according to the acquired image data often have a large amount of noise, which affects the analysis of the doctor on the patient's condition. Therefore, it is necessary to process the acquired image data to reduce noise in the image, i.e., to perform a denoising process on the image data.
A related denoising algorithm is used to denoise the image data, and the result is taken as the denoised data for further processing.
The denoising algorithm in this step may be a conventional denoising algorithm, including but not limited to the TV, NL-Means, and BM3D algorithms, or a combination of different algorithms; this embodiment is not limited in this respect. Denoising image data with these algorithms is routine in the art, and the detailed process is not described here.
Through the denoising in this step, noise in the image can be eliminated to some extent, the contrast enhanced, and artifacts, such as MPR artifacts and cone-angle artifacts, removed. In some implementations, this step can be executed repeatedly to further improve the denoising effect.
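As a hedged illustration of this denoising step, the sketch below uses a simple 3x3 mean filter as a stand-in for the TV, NL-Means, or BM3D algorithms, which in practice come from image-processing libraries; the function name and the two-pass setting are assumptions of the example, not part of the patent:

```python
import numpy as np

def mean_filter_denoise(image, passes=1):
    """Minimal stand-in for a conventional denoising step: a 3x3 mean
    filter applied with edge padding. Real pipelines would use TV,
    NL-Means, or BM3D instead."""
    out = image.astype(np.float64)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        acc = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += padded[1 + dy : 1 + dy + out.shape[0],
                              1 + dx : 1 + dx + out.shape[1]]
        out = acc / 9.0
    return out

# A noisy constant image: denoising should shrink the variance.
rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0.0, 10.0, size=(64, 64))
denoised = mean_filter_denoise(noisy, passes=2)
print(denoised.var() < noisy.var())  # → True
```

Running the step repeatedly, as the text suggests, corresponds to raising `passes`, each pass further smoothing the noise.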
Step 103, inputting the denoised data into a pre-trained neural network, the neural network simulating and outputting a target image from the denoised data; the neural network is trained with low-dose image data as the network input and high-dose image data as the label data; the radiation dose used when acquiring the low-dose image data is lower than that used when acquiring the high-dose image data.
After the image data is denoised by the denoising algorithm, if an image is constructed directly from the denoised data, the resulting image may be unnatural or blurred, and may contain artifacts in the noise morphology or the tissue structure. For example, after a head image is denoised by a denoising algorithm, the bone structure under the bone window may show a "plasticizing effect".
In this step, the pre-trained neural network is used to further process the denoised data, so that a more natural and clearer target image can be obtained.
In the field of medical imaging, the radiation dose can be set on the acquisition equipment when images are acquired, and images are divided, relatively, into low-dose and high-dose images. For example, an image acquired at a 50% radiation dose may be called a low-dose image and one acquired at more than 50% a high-dose image. The corresponding data are referred to as low-dose image data and high-dose image data, respectively.
Because of its larger radiation dose, a high-dose image is more natural and clearer than a low-dose image. In this embodiment, the neural network is trained in advance with low-dose image data as the network input and high-dose image data as the label data, so that it learns to simulate and output a high-dose image from a low-dose one. The specific training process is described in detail later.
Once the neural network is trained, the denoised data can be input into it, and the network simulates higher-dose image data from the denoised data, yielding a natural and clear target image. In some implementations, this step can be executed repeatedly to make full use of the trained network's simulation of higher-dose images, so that the target image is still more natural and clearer.
In the image processing method of this embodiment, the image data is first denoised with a denoising algorithm, which eliminates noise to some extent. The denoised data is then input into a pre-trained neural network, which simulates and outputs higher-dose image data, so that the target image is more natural and clearer, with smoother edges and better contrast.
On the other hand, the end-to-end training of a neural network is hard to control because of its black-box nature. In this embodiment, after the image data is processed by the denoising algorithm, the result is input into the neural network for secondary processing. The denoising step constrains the neural network's processing, making the result more controllable.
In one example, after the neural network in step 103 simulates and outputs the target image, a noise parameter of the target image may be obtained; if the noise parameter does not meet a preset noise condition, the target image is taken as the image data to be processed and the method is executed again from step 101. This loop may run for several iterations, until the noise parameter of the target image meets the preset noise condition.
In the above embodiment, the noise parameter reflects the degree of noise. For example, the variance of the image data may be used as the parameter, with a larger variance indicating stronger noise. The preset noise condition limits the number of loop iterations and may be set empirically; for example, when the variance of the target image data is used as the noise parameter, the preset noise condition may be a variance threshold or a variance range.
In this example, while the noise of the target image is still large, the image processing method provided in this specification is executed in a loop until the target image meets the preset noise condition, so that the final target image is more natural and clearer.
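The loop described in this example can be sketched as follows. The `denoise_step` and `network_step` placeholders and the variance threshold are illustrative assumptions; the trained network is replaced by the identity, since no weights are available in a sketch:

```python
import numpy as np

def denoise_step(image):
    """Stand-in for step 102 (the denoising algorithm): a 3x3 mean filter."""
    padded = np.pad(image, 1, mode="edge")
    acc = np.zeros_like(image, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += padded[1 + dy : 1 + dy + image.shape[0],
                          1 + dx : 1 + dx + image.shape[1]]
    return acc / 9.0

def network_step(image):
    """Stand-in for step 103 (the pre-trained neural network); the
    identity here, since a sketch has no trained weights."""
    return image

def process_until_quiet(image, variance_threshold, max_iters=10):
    """Repeat steps 102-103 until the noise parameter (here the
    variance of the image data, as in the example) meets the preset
    condition or the iteration budget is exhausted."""
    target = image.astype(np.float64)
    for _ in range(max_iters):
        if target.var() <= variance_threshold:
            break
        target = network_step(denoise_step(target))
    return target

rng = np.random.default_rng(1)
noisy = rng.normal(0.0, 5.0, size=(32, 32))
result = process_until_quiet(noisy, variance_threshold=1.0)
print(result.var() < noisy.var())  # → True
```

The `max_iters` cap plays the role the preset noise condition plays in the text: it bounds the number of loop iterations.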
The image processing method provided in the present specification requires pre-training to obtain a neural network that meets the requirement before executing step 103. As shown in fig. 2, the process of training the neural network includes:
step 201, inputting the low-dose image data into a neural network to be trained to obtain predicted image data output by the neural network.
In this embodiment, a large amount of low-dose image data needs to be collected in advance as training data and input into the neural network to be trained, which predicts image data from the low-dose image data.
The low-dose image data collected in advance must match the type of image to be processed.
For example, if the image to be processed is an image of a patient's head, the low-dose image data collected as training data is head image data from different patients; if the image to be processed is of a patient's lungs, the low-dose image data collected as training data is lung image data from different patients.
In this embodiment, the neural network to be trained may include, but is not limited to, VGG (from the Oxford Visual Geometry Group, runner-up of the ILSVRC 2014 competition), the UNet network, and the ResNet network.
In one example, the low-dose image data may be subjected to an augmentation operation to obtain augmented low-dose image data, and the augmented low-dose image data may then be input into the neural network to be trained.
The augmentation operations may include, but are not limited to, cropping, rotation, and scaling. Scaling in particular changes the image noise, introducing more noise patterns into training and making the model more robust. In addition, cropped and scaled images of various sizes can be used for training, so that the trained model can denoise CT images of various matrix sizes.
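A minimal sketch of these augmentation operations, using NumPy only; the crop fraction, the restriction to 90-degree rotations, and the nearest-neighbour scaling are assumptions of the example:

```python
import numpy as np

def augment(image, rng):
    """Illustrative augmentation: random crop, rotation by a multiple
    of 90 degrees, and nearest-neighbour scaling back to the original
    size. The crop/scale choices are placeholders for a sketch."""
    h, w = image.shape
    # Crop to a random 3/4-size window.
    ch, cw = 3 * h // 4, 3 * w // 4
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = image[top : top + ch, left : left + cw]
    # Rotate by 0/90/180/270 degrees.
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # Scale back up by nearest-neighbour indexing; this resampling
    # also alters the noise pattern, as the text notes.
    rows = (np.arange(h) * out.shape[0] / h).astype(int)
    cols = (np.arange(w) * out.shape[1] / w).astype(int)
    return out[np.ix_(rows, cols)]

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
aug = augment(img, rng)
print(aug.shape)  # → (64, 64)
```

In a real pipeline the intermediate crop sizes would also be fed to training directly, giving the model exposure to several matrix sizes.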
Step 202, adjusting network parameters of the neural network according to the error between the predicted image data and the label data.
In this embodiment, high-dose image data serves as the label data during training. The high-dose image data covers the same anatomical region as the low-dose image data; for example, if the image corresponding to the low-dose data is of a patient's head, the image corresponding to the high-dose data may be of another patient's head.
Network parameters are then adjusted according to the error between the predicted image data and the label data. The error can be computed with various loss functions, and the network parameters are adjusted according to the resulting loss values.
In the present embodiment, an image corresponding to predicted image data is referred to as a predicted image, and an image corresponding to tag data is referred to as a tag image.
In one example, the network parameters may be adjusted according to a difference loss between the predicted image and the label image, for example the difference between the pixel values at corresponding positions. The mean absolute error loss between the predicted image data and the label data can be computed with a mean absolute error loss function, and the network parameters adjusted according to it.
The mean absolute error loss is the average of the absolute differences between the pixel values at corresponding positions of the predicted image and the label image.
The mean absolute error loss function is:

$$\mathrm{L1Loss} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$

where $n$ is the number of pixels in the image, $y_i$ is the pixel value at the $i$-th position of the predicted image, and $\hat{y}_i$ is the pixel value at the $i$-th position of the label image.
In the above example, the network parameters are adjusted from the perspective of pixel-value error.
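The mean absolute error loss above can be sketched in a few lines; the function name is an assumption of the example:

```python
import numpy as np

def l1_loss(predicted, label):
    """Mean absolute error between predicted and label images,
    averaging the per-pixel absolute differences."""
    return float(np.mean(np.abs(predicted - label)))

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
lab = np.array([[1.0, 2.0], [5.0, 4.0]])
print(l1_loss(pred, lab))  # → 0.5
```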
In another example, network parameters of the neural network may be adjusted based on structural similarity between the predicted image and the tag image.
For example, a structural similarity loss between the predicted image data and the label data is obtained, and the network parameters of the neural network are adjusted according to it. The structural similarity loss can be obtained through a structural similarity loss function and measures the structural similarity between the predicted image and the label image.
The structural similarity loss function is:

$$\mathrm{SSIMLoss} = 1 - \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

where $\mu_x$ is the mean of the pixel values in the label image, $\mu_y$ is the mean of the pixel values in the predicted image, $\sigma_x^2$ is the variance of the pixel values in the label image, $\sigma_y^2$ is the variance of the pixel values in the predicted image, and $\sigma_{xy}$ is the covariance of the pixel values of the label image and the predicted image.
$c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ are constants used to maintain stability; $L$ is the dynamic range of the pixel values, and $k_1$, $k_2$ are preset constants, e.g. $k_1 = 0.01$, $k_2 = 0.03$.
By making structural information independent of brightness and contrast, the structural similarity loss function reflects the structure of objects in the scene from the perspective of image composition, which benefits learning of the image's anatomical structure.
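A sketch of the structural similarity loss, computed globally over the whole image for brevity; production implementations usually average SSIM over local windows, and taking the loss as 1 − SSIM is a common but assumed choice:

```python
import numpy as np

def ssim_loss(predicted, label, dynamic_range=255.0, k1=0.01, k2=0.03):
    """Global (single-window) structural similarity loss, 1 - SSIM,
    following the formula above."""
    c1 = (k1 * dynamic_range) ** 2
    c2 = (k2 * dynamic_range) ** 2
    mu_x, mu_y = label.mean(), predicted.mean()
    var_x, var_y = label.var(), predicted.var()
    cov_xy = ((label - mu_x) * (predicted - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim

# An image compared with itself has SSIM 1, so the loss vanishes.
img = np.linspace(0.0, 255.0, 64).reshape(8, 8)
print(ssim_loss(img, img) < 1e-9)  # → True
```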
Because restoration algorithms amplify noise, noise in the image strongly affects the result when an image is restored from image data. For this reason, regularization terms can be added to the neural network model to keep the image smooth. The total variation loss function is a commonly used regularization term and can make image grain finer.
The total variation loss function is:

$$\mathrm{TVLoss} = \sum_{(x, y) \in D_u} \sqrt{u_x^2 + u_y^2}$$

where $u_x$ is the gradient of the predicted image in the $x$ direction and $u_y$ is the gradient of the predicted image in the $y$ direction.
$D_u$ is the image domain, i.e., the spatial span of the image. For example, a two-dimensional image has coordinates $(x, y)$ ranging from $(0, 0)$ to $(512, 512)$, and a three-dimensional image has coordinates $(x, y, z)$ ranging from $(0, 0, 0)$ to $(512, 512, 512)$.
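The total variation loss can be sketched with forward differences standing in for the gradients $u_x$ and $u_y$; the choice of difference scheme is an assumption of the example:

```python
import numpy as np

def tv_loss(image):
    """Total variation of a 2-D image: sum over the domain of the
    gradient magnitude, with forward differences as u_x and u_y."""
    ux = np.diff(image, axis=1)[:-1, :]  # x-direction gradient
    uy = np.diff(image, axis=0)[:, :-1]  # y-direction gradient
    return float(np.sum(np.sqrt(ux ** 2 + uy ** 2)))

# A perfectly flat image has zero total variation, while any
# spatial structure or noise makes it positive.
flat = np.full((8, 8), 5.0)
print(tv_loss(flat))  # → 0.0
```

Minimizing this term during training penalizes rapid pixel-to-pixel fluctuations, which is how it keeps the image smooth and the grain fine.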
In one example, a first loss between the predicted image data and the label data can be obtained from the difference loss and the total variation loss, and the network parameters of the neural network adjusted according to the first loss.
Wherein the first loss is obtained from the difference loss and the total variation loss. For example, the difference loss and the total variation loss may be weighted separately and summed to obtain the first loss. The corresponding first loss function, as follows:
Loss1 = α*L1Loss + γ*TVLoss
wherein, α and γ are weights of the difference loss and the total variation loss, respectively, and can be adjusted according to the imaging effect of the predicted image.
In the above example, the total variation loss function serves as a regularization term for the mean absolute error loss function, so that the smoothness of the image is maintained while the network parameters are adjusted from the perspective of pixel-value error.
In another example, a second loss between the predicted image data and the label data can be obtained from the structural similarity loss and the total variation loss, and the network parameters of the neural network adjusted according to the second loss.
Wherein the second loss is obtained from the structural similarity loss and the total variation loss. For example, the structural similarity loss and the total variation loss may be weighted separately and summed to obtain the second loss. The corresponding second loss function, as follows:
Loss2 = β*SSIMLoss + γ*TVLoss
wherein, β and γ are weights of the structural similarity loss and the total variation loss, respectively, and can be adjusted according to the imaging effect of the predicted image.
In the above example, the total variation loss function is used as the regular term of the structure similarity loss function, and the smoothness of the image can be maintained while the network parameters are adjusted from the perspective of the image composition.
In another example, a third loss between the predicted image data and the label data can be obtained from the difference loss, the structural similarity loss, and the total variation loss, and the network parameters of the neural network adjusted according to the third loss.
Wherein the third loss is obtained from the difference loss, the structural similarity loss and the total variation loss. For example, the difference loss, the structural similarity loss, and the total variation loss may be weighted respectively and then summed to obtain a third loss. The corresponding third loss function, as follows:
Loss3 = α*L1Loss + β*SSIMLoss + γ*TVLoss
wherein, α, β, γ are weights of the difference loss, the structural similarity loss, and the total variation loss, respectively, and can be adjusted according to the imaging effect of the predicted image.
In the above example, the total variation loss function serves as a regularization term for both the mean absolute error loss function and the structural similarity loss function, so that the smoothness of the image is maintained while the network parameters are adjusted from the perspectives of both pixel error and image composition.
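Putting the pieces together, the third loss can be sketched as a weighted sum of the three terms above; the weight values here are arbitrary placeholders, to be tuned against the imaging effect as the text describes:

```python
import numpy as np

def l1_loss(p, y):
    # Mean absolute error between predicted and label images.
    return float(np.mean(np.abs(p - y)))

def ssim_loss(p, y, L=255.0, k1=0.01, k2=0.03):
    # Global (single-window) structural similarity loss, 1 - SSIM.
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = y.mean(), p.mean()
    vx, vy = y.var(), p.var()
    cxy = ((y - mx) * (p - my)).mean()
    return 1.0 - ((2 * mx * my + c1) * (2 * cxy + c2)) / \
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def tv_loss(p):
    # Total variation via forward differences.
    ux = np.diff(p, axis=1)[:-1, :]
    uy = np.diff(p, axis=0)[:, :-1]
    return float(np.sum(np.sqrt(ux ** 2 + uy ** 2)))

def loss3(p, y, alpha=1.0, beta=0.5, gamma=0.01):
    """Loss3 = alpha*L1Loss + beta*SSIMLoss + gamma*TVLoss, with
    illustrative weight values."""
    return alpha * l1_loss(p, y) + beta * ssim_loss(p, y) + gamma * tv_loss(p)

# A perfect prediction of a flat label image makes every term vanish.
pred = np.zeros((8, 8))
lab = np.zeros((8, 8))
print(loss3(pred, lab))  # → 0.0
```

Dropping a term (setting its weight to zero) recovers the first or second loss from the earlier examples.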
As shown in fig. 3, the present specification provides an image processing apparatus that can execute the image processing method according to any of the embodiments of the present specification. The apparatus may include an acquisition module 301, a denoising module 302, and a network module 303. Wherein:
an obtaining module 301, configured to obtain image data to be processed;
a denoising module 302, configured to perform denoising processing on the image data by using a denoising algorithm to obtain denoised data;
the network module 303 is configured to input the denoised data into a pre-trained neural network, which simulates and outputs a target image from the denoised data; the neural network is trained with low-dose image data as the network input and high-dose image data as the label data; the radiation dose used when acquiring the low-dose image data is lower than that used when acquiring the high-dose image data.
Optionally, as shown in fig. 4, the apparatus further includes:
a loop module 401, configured to obtain a noise parameter of the target image; and, if the noise parameter does not meet the preset noise condition, take the target image as the image data to be processed, return to the denoising step, and again input the data into the neural network to simulate and output a target image.
Optionally, as shown in fig. 5, the apparatus further includes:
the prediction module 501 is configured to input the low-dose image data into a neural network to be trained, so as to obtain predicted image data output by the neural network;
a parameter adjusting module 502, configured to adjust a network parameter of the neural network according to an error between the predicted image data and the tag data.
Optionally, the predicting module 501, when configured to input the low-dose image data into a neural network to be trained, includes:
carrying out augmentation operation on the low-dose image data to obtain augmented low-dose image data;
and inputting the augmented low-dose image data into the neural network to be trained.
Optionally, the parameter adjusting module 502 is specifically configured to:
acquiring a difference loss between the predicted image data and the label data;
and adjusting the network parameters of the neural network according to the difference loss.
Optionally, the parameter adjusting module 502 is specifically configured to:
acquiring a structural similarity loss between the predicted image data and the label data;
and adjusting network parameters of the neural network according to the structural similarity loss.
Optionally, the parameter adjusting module is specifically configured to:
obtaining a first loss between the predicted image data and the label data according to the difference loss and the total variation loss, and adjusting the network parameters of the neural network according to the first loss;
or obtaining a second loss between the predicted image data and the label data according to the structural similarity loss and the total variation loss, and adjusting the network parameters of the neural network according to the second loss;
or obtaining a third loss between the predicted image data and the label data according to the difference loss, the structural similarity loss, and the total variation loss, and adjusting the network parameters of the neural network according to the third loss.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of at least one embodiment of the present specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The present specification also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method of any embodiment of this specification.
The present specification also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method of any embodiment of this specification.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc., which is not limited in this application.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (16)

1. An image processing method, characterized in that the method comprises:
acquiring image data to be processed;
denoising the image data by using a denoising algorithm to obtain de-noised data;
inputting the de-noised data into a neural network obtained by pre-training, and simulating and outputting a target image by the neural network according to the de-noised data; wherein the neural network is trained with the low-dose image data as the network input and the high-dose image data as the label data, and the radiation dose used when acquiring the low-dose image data is lower than the radiation dose used when acquiring the high-dose image data.
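By way of non-limiting illustration, the pipeline of claim 1 can be sketched as follows. The claim fixes neither the denoising algorithm nor the network architecture, so a simple mean filter stands in for the former and an identity function for the latter; `denoise`, `network`, and `process` are placeholder names, not part of the claim.

```python
import numpy as np

def denoise(image, k=3):
    """Stand-in denoising algorithm (the claim does not specify one):
    a simple k x k mean filter with edge padding."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def network(denoised):
    """Placeholder for the pre-trained neural network that maps
    de-noised low-dose data toward high-dose image quality; the
    identity is used here purely so the sketch runs."""
    return denoised

def process(image_data):
    """Claim 1: denoise the image data to be processed, then feed
    the de-noised data to the network, which outputs the target image."""
    return network(denoise(image_data))
```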
2. The method of claim 1, wherein after inputting the de-noised data into a pre-trained neural network, and simulating an output target image by the neural network according to the de-noised data, the method further comprises:
acquiring a noise parameter of the target image;
and if the noise parameter does not meet a preset noise condition, taking the target image as the image data to be processed, returning to the denoising step, and again inputting the de-noised data into the neural network to simulate and output a target image.
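The loop of claim 2 can be sketched as below. The claim leaves both the noise parameter and the preset condition unspecified, so the standard deviation of row-to-row differences and a fixed threshold are used here as assumptions; the 5-point neighbourhood average and identity network are the same kind of stand-ins as before.

```python
import numpy as np

def denoise(image):
    """Stand-in denoising step: 5-point neighbourhood average
    (the claims do not fix a particular algorithm)."""
    return (image
            + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
            + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1)) / 5.0

def network(denoised):
    """Placeholder for the pre-trained neural network."""
    return denoised

def noise_parameter(image):
    """Assumed noise measure: std of row-to-row differences."""
    return float(np.std(np.diff(image, axis=0)))

def iterative_process(image, threshold=0.01, max_rounds=10):
    """Claim 2: if the target image's noise parameter does not meet
    the preset condition, the target image becomes the new image to
    be processed and the denoise -> network step is repeated."""
    target = network(denoise(image))
    rounds = 1
    while noise_parameter(target) > threshold and rounds < max_rounds:
        target = network(denoise(target))
        rounds += 1
    return target, rounds
```

A `max_rounds` guard is added so the sketch always terminates even if the threshold is never reached; the claim itself states only the condition-based loop.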
3. The method of claim 1, further comprising, before said inputting said de-noised data into a pre-trained neural network:
inputting the low-dose image data into a neural network to be trained to obtain predicted image data output by the neural network;
and adjusting network parameters of the neural network according to an error between the predicted image data and the label data.
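The predict / measure-error / adjust-parameters cycle of claim 3 can be illustrated with a deliberately minimal stand-in: a one-parameter "network" `pred = a * input` trained by gradient descent on a toy low-dose/label pair. The real claim covers an arbitrary neural network; everything below (the scalar parameter, the factor-of-2 label relationship, the learning rate) is an assumption made only so the cycle is concrete.

```python
import numpy as np

# Toy training pair: low-dose input and its high-dose label data.
rng = np.random.default_rng(0)
low_dose = rng.random((16, 16))
label_data = 2.0 * low_dose          # assumed toy relationship

a, lr = 0.5, 0.1                      # single "network parameter", step size
losses = []
for _ in range(50):
    predicted = a * low_dose                        # forward pass
    error = predicted - label_data                  # error vs. label data
    losses.append(float(np.mean(error ** 2)))       # MSE as the error measure
    grad = float(np.mean(2.0 * error * low_dose))   # dLoss/da
    a -= lr * grad                                  # adjust the parameter
```

Over the 50 iterations the loss shrinks and `a` approaches the true factor of 2, which is the whole content of the training step the claim describes.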
4. The method of claim 3, wherein inputting the low-dose image data into a neural network to be trained comprises:
performing an augmentation operation on the low-dose image data to obtain augmented low-dose image data;
and inputting the augmented low-dose image data into the neural network to be trained.
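Claim 4 does not enumerate the augmentation operations; flips and 90-degree rotations, a common choice for CT slices, are assumed in this sketch.

```python
import numpy as np

def augment(image):
    """Assumed augmentation operations: identity, horizontal and
    vertical flips, and 90/180/270-degree rotations. Each view of
    the low-dose slice becomes an extra training input."""
    return [
        image,
        np.fliplr(image),
        np.flipud(image),
        np.rot90(image, 1),
        np.rot90(image, 2),
        np.rot90(image, 3),
    ]
```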
5. The method of claim 3, wherein said adjusting network parameters of said neural network according to the error between said predicted image data and said label data comprises:
acquiring a difference loss between the predicted image data and the label data;
and adjusting the network parameters of the neural network according to the difference loss.
6. The method of claim 3, wherein said adjusting network parameters of said neural network according to the error between said predicted image data and said label data comprises:
acquiring a structural similarity loss between the predicted image data and the label data;
and adjusting network parameters of the neural network according to the structural similarity loss.
7. The method of claim 3, wherein said adjusting network parameters of said neural network according to the error between said predicted image data and said label data comprises:
obtaining a first loss between the predicted image data and the label data according to the difference loss and the total variation loss; adjusting network parameters of the neural network according to the first loss;
or obtaining a second loss between the predicted image data and the label data according to the structural similarity loss and the total variation loss; adjusting network parameters of the neural network according to the second loss;
or obtaining a third loss between the predicted image data and the label data according to the difference loss, the structural similarity loss and the total variation loss; and adjusting network parameters of the neural network according to the third loss.
8. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring image data to be processed;
the denoising module is used for denoising the image data by using a denoising algorithm to obtain de-noised data;
the network module is used for inputting the de-noised data into a neural network obtained by pre-training, the neural network simulating and outputting a target image according to the de-noised data; wherein the neural network is trained with the low-dose image data as the network input and the high-dose image data as the label data, and the radiation dose used when acquiring the low-dose image data is lower than the radiation dose used when acquiring the high-dose image data.
9. The apparatus of claim 8, further comprising:
the circulation module is used for acquiring a noise parameter of the target image; and if the noise parameter does not meet a preset noise condition, taking the target image as the image data to be processed, returning to the denoising step, and again inputting the de-noised data into the neural network to simulate and output a target image.
10. The apparatus of claim 8, further comprising:
the prediction module is used for inputting the low-dose image data into a neural network to be trained to obtain predicted image data output by the neural network;
and the parameter adjusting module is used for adjusting the network parameters of the neural network according to the error between the predicted image data and the label data.
11. The apparatus of claim 10, wherein, in inputting the low-dose image data into the neural network to be trained, the prediction module is specifically configured to:
performing an augmentation operation on the low-dose image data to obtain augmented low-dose image data;
and inputting the augmented low-dose image data into the neural network to be trained.
12. The apparatus of claim 10, wherein the parameter adjustment module is specifically configured to:
acquiring a difference loss between the predicted image data and the label data;
and adjusting the network parameters of the neural network according to the difference loss.
13. The apparatus of claim 10, wherein the parameter adjustment module is specifically configured to:
acquiring a structural similarity loss between the predicted image data and the label data;
and adjusting network parameters of the neural network according to the structural similarity loss.
14. The apparatus of claim 10, wherein the parameter adjustment module is specifically configured to:
obtaining a first loss between the predicted image data and the label data according to the difference loss and the total variation loss; adjusting network parameters of the neural network according to the first loss;
or obtaining a second loss between the predicted image data and the label data according to the structural similarity loss and the total variation loss; adjusting network parameters of the neural network according to the second loss;
or obtaining a third loss between the predicted image data and the label data according to the difference loss, the structural similarity loss and the total variation loss; and adjusting network parameters of the neural network according to the third loss.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image processing method according to any one of claims 1 to 7 when executing the program.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010671009.0A 2020-07-13 2020-07-13 Image processing method and device Pending CN111932467A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010671009.0A CN111932467A (en) 2020-07-13 2020-07-13 Image processing method and device


Publications (1)

Publication Number Publication Date
CN111932467A true CN111932467A (en) 2020-11-13

Family

ID=73312975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010671009.0A Pending CN111932467A (en) 2020-07-13 2020-07-13 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111932467A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600568A (en) * 2017-01-19 2017-04-26 Shenyang Neusoft Medical Systems Co., Ltd. Low-dose CT image denoising method and device
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN107610195A (en) * 2017-07-28 2018-01-19 Shanghai United Imaging Healthcare Co., Ltd. System and method for image conversion
CN108564553A (en) * 2018-05-07 2018-09-21 Southern Medical University Low-dose CT image noise suppression method based on convolutional neural networks
CN109472747A (en) * 2018-10-18 2019-03-15 Peking University Deep learning method for speckle noise reduction in microwave remote sensing images
CN110222717A (en) * 2019-05-09 2019-09-10 Huawei Technologies Co., Ltd. Image processing method and device
CN111047524A (en) * 2019-11-13 2020-04-21 Zhejiang University of Technology Low-dose CT lung image denoising method based on deep convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG, Hongyi: "Deep Learning with PyTorch: Object Detection in Practice" (《深度学习之PyTorch物体检测实战》), China Light Industry Press, pages 226-227 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949761A (en) * 2021-03-31 2021-06-11 Dongguan Cloud Computing Industry Technology Innovation and Incubation Center, Chinese Academy of Sciences Training method and device for three-dimensional image neural network model and computer equipment
CN113421191A (en) * 2021-06-28 2021-09-21 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method, device, equipment and storage medium
CN113539440A (en) * 2021-07-20 2021-10-22 Neusoft Medical Systems Co., Ltd. CT image reconstruction method and device, storage medium and computer equipment
CN113539440B (en) * 2021-07-20 2023-11-21 Neusoft Medical Systems Co., Ltd. CT image reconstruction method and device, storage medium and computer equipment

Similar Documents

Publication Publication Date Title
TWI754195B (en) Image processing method and device, electronic device and computer-readable storage medium
CN108537794B (en) Medical image data processing method, apparatus and computer readable storage medium
CN111932467A (en) Image processing method and device
Ye et al. Deep residual learning for model-based iterative CT reconstruction using plug-and-play framework
Ko et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module
CN112822982B (en) Image forming apparatus, image forming method, and method for forming learning model
CN107862665B (en) CT image sequence enhancement method and device
CN111166362B (en) Medical image display method and device, storage medium and electronic equipment
WO2023202265A1 (en) Image processing method and apparatus for artifact removal, and device, product and medium
CN111008943A (en) Low-dose DR image noise reduction method and system
CN112884792A (en) Lung image segmentation method and device, electronic equipment and storage medium
KR102036834B1 (en) Image processing method
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
JP2005020338A (en) Method, apparatus and program for detecting abnormal shadow
CN111798535B (en) CT image enhancement display method and computer readable storage medium
CN111080736B (en) Low-dose CT image reconstruction method based on sparse transformation
JP2015173923A (en) Image processing device, image processing method, and program
CN116894783A (en) Metal artifact removal method for countermeasure generation network model based on time-varying constraint
EP4343680A1 (en) De-noising data
CN110506294B (en) Detection of regions with low information content in digital X-ray images
CN111311531A (en) Image enhancement method and device, console equipment and medical imaging system
Yang et al. X-Ray Breast Images Denoising Method Based on the Convolutional Autoencoder
JP4571378B2 (en) Image processing method, apparatus, and program
Tzikas et al. Variational bayesian blind image deconvolution with student-t priors
US20220292742A1 (en) Generating Synthetic X-ray Images and Object Annotations from CT Scans for Augmenting X-ray Abnormality Assessment Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination