CN112215779B - Image processing method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112215779B
Authority
CN
China
Prior art keywords
value
variable
iteration
model
image
Prior art date
Legal status
Active
Application number
CN202011170966.1A
Other languages
Chinese (zh)
Other versions
CN112215779A (en)
Inventor
魏伟波
宋田田
潘振宽
王静
李青
葛林尧
董田田
Current Assignee
Qingdao University
Original Assignee
Qingdao University
Priority date
Filing date
Publication date
Application filed by Qingdao University
Priority to CN202011170966.1A
Publication of CN112215779A
Application granted
Publication of CN112215779B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/14 Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F 17/141 Discrete Fourier transforms
    • G06F 17/142 Fast Fourier transforms, e.g. using a Cooley-Tukey type algorithm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/15 Correlation function computation including computation of convolution operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation


Abstract

The application discloses an image processing method, apparatus, device and computer-readable storage medium. The method comprises: performing iterative computation on a higher-order variational model of an input image to be processed to obtain the current iteration value of each variable; calculating a combined residual from the current iteration value and the previous iteration value of the variables, and judging whether the combined residual is smaller than a preset value; if so, updating the step size and performing accelerated iteration with the updated step size; if not, setting the step size back to its initial value, taking the previous iteration value of the variables as the current iteration value, and restarting with this step size and the current iteration value of the variables; judging whether the energy value of the higher-order variational model has converged, and, if not, performing the next iteration and returning to the step of calculating the combined residual, or, if it has converged, outputting the processed image. With the technical scheme disclosed by the application, a restart is triggered whenever the combined residual is not smaller than the preset value, so that the oscillation phenomenon is avoided and the image processing efficiency is improved.

Description

Image processing method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technology, and more particularly, to an image processing method, apparatus, device, and computer readable storage medium.
Background
When an image is denoised, segmented or otherwise processed, variational models based on second-order and higher derivatives (that is, higher-order models) can effectively overcome the staircase effect caused by first-order derivative models while preserving the edges and the smooth features of the image, and they have therefore received wide attention and application in image processing.
However, because the regularization term of a variational model based on higher derivatives contains higher derivatives, that is, the regularization term is nonlinear, non-smooth or even non-convex, the optimal step size is difficult to estimate and calculate when such a higher-order model is used for image processing. As a result, an oscillation phenomenon easily occurs during the processing, which leads to low image processing efficiency.
In summary, how to improve the image processing efficiency is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the foregoing, it is an object of the present application to provide an image processing method, apparatus, device, and computer-readable storage medium for improving image processing efficiency.
In order to achieve the above object, the present application provides the following technical solutions:
an image processing method, comprising:
inputting an image to be processed into a high-order variational model, and performing iterative computation on the high-order variational model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variational model;
calculating a combined residual error according to the current iteration value and the previous iteration value of the variable, and judging whether the combined residual error is smaller than a preset value or not;
if the combined residual is smaller than the preset value, updating the step size according to the previous iteration value of the step size in the iterative computation and performing accelerated iteration with the updated step size; if the combined residual is not smaller than the preset value, setting the step size equal to its initial value, taking the previous iteration value of the variable as the current iteration value of the variable, and restarting according to the step size and the current iteration value of the variable;
and judging whether the energy value of the high-order variation model is converged, if not, performing the next iteration, executing the step of calculating a combined residual error according to the current iteration value and the previous iteration value of the variable, and if so, outputting the processed image.
Preferably, performing iterative computation on the higher-order variation model to obtain a current iteration value of each variable corresponding to an energy value of the higher-order variation model, including:
Introducing an auxiliary variable, a Lagrange multiplier and penalty parameters, and converting the high-order variation model by using the auxiliary variable, the Lagrange multiplier and the penalty parameters through a preset acceleration algorithm to obtain a corresponding sub-optimization problem;
and solving the corresponding sub-optimization problem through a soft threshold formula in a fast Fourier transform and analytic form to obtain the current iteration value of each variable corresponding to the energy value of the high-order variation model.
Preferably, when the higher-order variation model is a TL model, an auxiliary variable, a Lagrange multiplier and a penalty parameter are introduced, and the higher-order variation model is converted by using the auxiliary variable, the Lagrange multiplier and the penalty parameter and through a preset acceleration algorithm, and a corresponding sub-optimization problem is obtained, including:
introducing two auxiliary variables w and v, with w = ∇u and v = ∇·w (so that v = Δu), introducing two Lagrange multipliers β₁ and β₂ and two penalty parameters μ₁ and μ₂, and using the ADMM algorithm to transform the energy-functional minimization problem of the TL model into the saddle-point problem of the corresponding augmented Lagrangian E(u, w, v; β₁, β₂); the corresponding sub-optimization problems are: u^(k+1) = argmin_u E(u, w^k, v^k), w^(k+1) = argmin_w E(u^(k+1), w, v^k), v^(k+1) = argmin_v E(u^(k+1), w^(k+1), v);
wherein E(u) is the energy functional of the TL model, u is the processed image, f is the image to be processed, γ is a penalty parameter, k denotes the iteration step number, K is the iteration limit, and ∇ denotes the first derivative (gradient).
Preferably, calculating a combined residual according to the current iteration value and the previous iteration value of the variable includes:
calculating the combined residual c^(k+1) from the current and previous iteration values of the auxiliary variable and of the Lagrange multiplier.
Correspondingly, setting the step size of the iterative computation equal to its initial value and taking the previous iteration value of the variable as the current iteration value of the variable includes:
setting α^(k+1) = 1 and letting w^(k+1) ← w^k (the other accelerated variables are rolled back in the same way).
Updating the step size according to its previous iteration value and performing accelerated iteration with the updated step size includes:
updating the step size α^(k+1) from α^k, extrapolating w^(k+1) and the Lagrange multiplier with the inertial parameter θ, and performing the accelerated iteration with the updated w^(k+1) and multiplier; wherein α^k is the previous iteration value of the step size and θ is an inertial parameter.
Preferably, when the higher-order variational model is an EE model, introducing auxiliary variables, Lagrange multipliers and penalty parameters, and converting the higher-order variational model by using them through the preset acceleration algorithm to obtain the corresponding sub-optimization problems, includes:
introducing four auxiliary variables w, p, v and m, with w = ∇u and v = ∇·p, under the constraints |w| − m·w ≥ 0, m = p and |m| ≤ 1, introducing four Lagrange multipliers β₁, β₂, β₃ and β₄ and four penalty parameters μ₁, μ₂, μ₃ and μ₄, and using the ADMM algorithm to transform the energy-functional minimization problem of the EE model into the saddle-point problem of the corresponding augmented Lagrangian;
the corresponding sub-optimization problems are: u^(k+1) = argmin_u E(u, w^k, p^k, v^k, m^k), w^(k+1) = argmin_w E(u^(k+1), w, p^k, v^k, m^k), p^(k+1) = argmin_p E(u^(k+1), w^(k+1), p, v^k, m^k), v^(k+1) = argmin_v E(u^(k+1), w^(k+1), p^(k+1), v, m^k), m^(k+1) = argmin_m E(u^(k+1), w^(k+1), p^(k+1), v^(k+1), m);
wherein E(u) is the energy functional of the EE model, u is the processed image, f is the image to be processed, a and b are positive constant parameters, k denotes the iteration step number, K is the iteration limit, and ∇ denotes the first derivative (gradient).
Preferably, calculating a combined residual according to the current iteration value and the previous iteration value of the variable includes:
calculating the combined residual c^(k+1) from the current and previous iteration values of the auxiliary variable and of the Lagrange multipliers.
Correspondingly, setting the step size of the iterative computation equal to its initial value and taking the previous iteration value of the variable as the current iteration value of the variable includes:
setting α^(k+1) = 1 and letting w^(k+1) ← w^k (the other accelerated variables are rolled back in the same way).
Updating the step size according to its previous iteration value and performing accelerated iteration with the updated step size includes:
updating the step size α^(k+1) from α^k, extrapolating w^(k+1) and the Lagrange multipliers with the inertial parameter θ, and performing the accelerated iteration with the updated w^(k+1) and multipliers;
wherein α^k is the previous iteration value of the step size and θ is an inertial parameter.
An image processing apparatus comprising:
The first calculation module is used for inputting the image to be processed into a high-order variation model, and carrying out iterative calculation on the high-order variation model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variation model;
the second calculation module is used for calculating a combined residual according to the current iteration value and the previous iteration value of the variable and judging whether the combined residual is smaller than a preset value or not;
the restarting module is configured to, if the combined residual is smaller than the preset value, update the step size according to the previous iteration value of the step size in the iterative computation and perform accelerated iteration with the updated step size; and otherwise to set the step size of the iterative computation equal to its initial value, take the previous iteration value of the variable as the current iteration value of the variable, and restart according to the step size and the current iteration value of the variable;
and the judging module is used for judging whether the energy value of the high-order variation model is converged, if not, performing the next iteration, executing the step of calculating a combined residual according to the current iteration value and the previous iteration value of the variable, and if so, outputting the processed image.
Preferably, the first computing module includes:
Introducing an auxiliary variable, a Lagrange multiplier and penalty parameters, and converting the high-order variation model by using the auxiliary variable, the Lagrange multiplier and the penalty parameters through a preset acceleration algorithm to obtain a corresponding sub-optimization problem;
and solving the corresponding sub-optimization problem through a soft threshold formula in a fast Fourier transform and analytic form to obtain the current iteration value of each variable corresponding to the energy value of the high-order variation model.
An image processing apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image processing method according to any one of the preceding claims when executing the computer program.
A computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of the image processing method according to any of the preceding claims.
The application provides an image processing method, an image processing apparatus, an image processing device and a computer-readable storage medium. The method comprises: inputting an image to be processed into a higher-order variational model, and performing iterative computation on the higher-order variational model to obtain the current iteration value of each variable corresponding to the energy value of the model; calculating a combined residual from the current iteration value and the previous iteration value of the variables, and judging whether the combined residual is smaller than a preset value; if it is smaller, updating the step size according to the previous iteration value of the step size in the iterative computation and performing accelerated iteration with the updated step size; if it is not smaller, setting the step size equal to its initial value, taking the previous iteration value of the variables as the current iteration value, and restarting according to the step size and the current iteration value of the variables; and judging whether the energy value of the higher-order variational model has converged, performing the next iteration and returning to the step of calculating the combined residual if it has not converged, and outputting the processed image if it has converged.
According to the technical scheme disclosed by the application, after iterative computation is performed on the higher-order variational model of the input image to be processed and the current iteration value of each variable corresponding to the energy value of the model is obtained, a combined residual is calculated from the current iteration value of the variables and their previous iteration values, and it is judged whether the combined residual is smaller than a preset value. If the combined residual is smaller than the preset value, the change of the variables in this iteration is not larger than the change in the previous iteration, so no oscillation is occurring; in this case the step size is updated from its previous iteration value and the iteration is accelerated with the updated step size. If the combined residual is not smaller than the preset value, oscillation may occur; in this case the step size of the iterative computation is set back to its initial value, the previous iteration value of the variables is taken as the current iteration value, and the computation is restarted with this step size and the rolled-back variables. After each iteration it is judged whether the energy value of the higher-order variational model has converged; if not, the next iteration is performed and the combined residual is calculated again, and if so, the processed image is output. In this way the energy value never rises above the energy value of the previous iterative computation while the image is processed with the higher-order variational model, that is, the energy value decreases monotonically, the oscillation phenomenon is avoided, the energy value of the higher-order variational model converges quickly, and the image processing efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 (a) is a diagram of a "Lena" raw image provided by an embodiment of the present application;
FIG. 2 (b) is a "Lena" noise image provided by an embodiment of the present application;
FIG. 2 (c) is a view of a "Castle" raw image provided by an embodiment of the present application;
FIG. 2 (d) is a "Castle" noise image provided by an embodiment of the present application;
FIG. 2 (e) is a view of a "Texture" raw image provided by an embodiment of the present application;
FIG. 2 (f) is a "Texture" noise image provided by an embodiment of the present application;
FIG. 3 (a) is a diagram showing the effect of the ADMM algorithm for denoising a "Texture" image according to an embodiment of the present application;
FIG. 3 (b) is a diagram showing the effect of the fast ADMM algorithm on denoising a "Texture" image according to an embodiment of the present application;
FIG. 3 (c) is a diagram showing the effect of denoising a "Texture" image by the restarting fast ADMM algorithm provided by the embodiment of the present application;
FIG. 4 (a) is a graph of energy value changes for three algorithms iterated 40 times in the TL model with the "Lena" image as the image to be processed;
FIG. 4 (b) is a graph of energy value changes for three algorithms iterated 40 times in the TL model using the "Castle" image as the image to be processed;
FIG. 4 (c) is a graph of energy value changes for three algorithms iterated 40 times in the TL model with the "Texture" image as the image to be processed;
FIG. 5 (a) is a plot of the convergence of three algorithms in the TL model with the "Lena" image as the image to be processed and the relative energy error as the stopping criterion;
FIG. 5 (b) is a plot of the convergence of three algorithms in the TL model with the "Castle" image as the image to be processed and the relative energy error as the stopping criterion;
FIG. 5 (c) is a graph of convergence of three algorithms in the TL model with the "Texture" image as the image to be processed and the relative energy error as the stopping criterion;
FIG. 6 (a) is a graph of energy value changes for three algorithms iterated 40 times in the EE model with the "Lena" image as the image to be processed;
FIG. 6 (b) is a graph of energy value changes for three algorithms iterated 40 times in the EE model with the "Castle" image as the image to be processed;
FIG. 6 (c) is a graph of energy value changes for three algorithms iterated 40 times in the EE model with the "Texture" image as the image to be processed;
FIG. 7 (a) is a graph of the convergence of three algorithms in an EE model with the "Lena" image as the image to be processed and the relative energy error as the stopping criterion;
FIG. 7 (b) is a graph of the convergence of three algorithms in the EE model with the "Castle" image as the image to be processed and the relative energy error as the stopping criterion;
FIG. 7 (c) is a graph of the convergence of three algorithms in the EE model with the "Texture" image as the image to be processed and the relative energy error as the stopping criterion;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, a flowchart of an image processing method provided by an embodiment of the present application is shown, where the image processing method provided by the embodiment of the present application may include:
s11: inputting the image to be processed into a high-order variational model, and performing iterative computation on the high-order variational model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variational model.
Because the regularization term of a higher-order variational model is nonlinear, non-smooth or even non-convex, the optimal step size is difficult to estimate and calculate during image processing; as a result, an oscillation phenomenon is easy to produce in the processing, and the occurrence of oscillation lowers the image processing efficiency.
Specifically, the image to be processed is first acquired and input into the higher-order variational model; iterative computation can then be performed on the higher-order variational model, and the current iteration value of each variable corresponding to the energy value of the higher-order variational model is obtained.
Obtaining the current iteration value of each variable corresponding to the energy value of the higher-order variational model by iterative computation makes it convenient to reduce the order of the regularization term of the model and facilitates the calculation.
S12: calculating a combined residual according to the current iteration value and the previous iteration value of the variable, and judging whether the combined residual is smaller than a preset value; if it is not smaller, step S13 is performed, and if it is smaller, step S14 is performed.
After iterative computation has been performed on the higher-order variational model and the current iteration value of each variable corresponding to the energy value of the model has been obtained, a combined residual can be calculated from the current iteration value of the variable and the iteration value obtained in the previous iterative computation (that is, the previous iteration value of the variable), and it can be judged whether the combined residual is smaller than a preset value. The preset value may change with the magnitude of the combined residual; specifically, the preset value may be equal to η·c^k, where η is an attenuation coefficient with η ∈ (0, 1) and c^k is the combined residual calculated in the previous iterative computation. Alternatively, the preset value may be set in advance to a fixed value according to experience.
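As a small illustration of this test (not taken from the patent), with η = 0.99 as used in the experiments reported later and hypothetical variable names:

```python
# Hypothetical illustration of the adaptive restart test described above.
def should_restart(c_new: float, c_prev: float, eta: float = 0.99) -> bool:
    """True when the combined residual has not decayed enough
    (c_new >= eta * c_prev), which triggers the restart branch."""
    return c_new >= eta * c_prev
```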
S13: the step length is equal to the initial value, the last iteration value of the variable is used as the current iteration value of the variable, and restarting is carried out according to the step length and the current iteration value of the variable;
If the combined residual is not smaller than the preset value, the change of the variables in this iteration is larger than the change in the previous iteration; that is, in this iterative computation the energy value of the higher-order variational model is larger than the energy value of the previous iteration, and if the subsequent iterative computation were carried out directly with the current value of the variables and the current step size, oscillation could be caused. Therefore, in order to avoid the oscillation phenomenon, the step size of the iterative computation is set equal to its initial value, the iteration value of the variables from the previous iterative computation is taken as their current iteration value, and the computation is restarted with the step size equal to the initial value and the newly obtained current iteration value of the variables, that is, the iterative computation is continued from these values. In this way the energy value never rises above the energy value of the previous iterative computation while the image is processed with the higher-order variational model; when the combined residual is not smaller than the preset value, this procedure realizes a monotonic decrease of the energy value, which makes it easier for the energy value of the higher-order variational model to converge and further improves the image processing efficiency.
S14: updating the step length according to the iteration value of the last step of the step length, and accelerating iteration according to the updated step length.
When step S12 is executed to judge whether the combined residual is smaller than the preset value, if the result is that the combined residual is smaller than the preset value, the change of the variables in this iteration is not larger than the change in the previous iteration, so no oscillation occurs. In this case the step size can be updated according to the iteration value of the step size in the previous iterative computation (that is, the previous iteration value of the step size), and the current iteration value of the variables calculated in step S11 can be updated according to the updated step size so as to accelerate the iteration, which helps the energy value of the higher-order variational model converge faster and further improves the image processing efficiency.
S15: judging whether the energy value of the high-order variation model is converged or not; if not, executing step S16: performing the next iteration, returning to the execution step S12, and executing the step S17 if the convergence is performed;
s16: performing the next iteration;
s17: and outputting the processed image.
After each iterative computation it may be judged whether the energy value of the higher-order variational model has converged. If it has not converged, the processing of the image to be processed is not yet finished; in this case the next iteration may be performed according to the step size and the current iteration value of the variables, and step S12 may be executed again. If it has converged, the processing of the image to be processed is finished, and the processed image may be output.
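To make the control flow of steps S11 to S17 concrete, the following is a minimal sketch in Python. The callables solve_subproblems, combined_residual and energy stand for the model-specific computations described later (FFT solves, soft thresholding, multiplier updates) and, together with the Nesterov-type step-size rule, are assumptions for illustration rather than the patent's exact procedure.

```python
import numpy as np

def restart_fast_admm(state0, solve_subproblems, combined_residual, energy,
                      eta=0.99, tol=1e-3, max_iter=500):
    """Schematic outer loop of steps S11-S17.

    state0            : dict with initial values of u and all auxiliary variables
    solve_subproblems : maps the (extrapolated) iterate to the next iterate
    combined_residual : combined residual between two successive iterates
    energy            : energy value of the higher-order variational model
    The callables stand for the model-specific computations and are assumed."""
    state = dict(state0)      # last accepted iterate
    accel = dict(state0)      # extrapolated iterate fed to the sub-problem solves
    alpha = 1.0               # step size, initial value
    c_prev = np.inf           # combined residual of the previous iteration
    e_prev = np.inf           # energy value of the previous iteration

    for _ in range(max_iter):
        new_state = solve_subproblems(accel)          # S11: sweep of sub-problem solves
        c_new = combined_residual(new_state, state)   # S12: combined residual

        if c_new < eta * c_prev:                      # S14: no oscillation, accelerate
            alpha_new = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0  # assumed Nesterov-type rule
            theta = (alpha - 1.0) / alpha_new                          # inertial parameter
            accel = {key: new_state[key] + theta * (new_state[key] - state[key])
                     for key in new_state}
            state, alpha, c_prev = new_state, alpha_new, c_new

            e_new = energy(state)                     # S15: convergence test on the energy value
            if abs(e_new - e_prev) <= tol * max(abs(e_new), 1e-12):
                break
            e_prev = e_new
        else:                                         # S13: restart
            alpha = 1.0                               # step size back to its initial value
            accel = dict(state)                       # previous iterate becomes current iterate

    return state["u"]                                 # S17: output the processed image
```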
It should be noted that, the image processing mentioned herein may specifically refer to image denoising, and correspondingly, the image to be processed specifically refers to an image to be denoised, and the image after processing refers to an image after denoising. Of course, the above-mentioned image processing may also specifically refer to image segmentation or the like.
According to the technical scheme disclosed by the application, after iterative computation is performed on the higher-order variational model of the input image to be processed and the current iteration value of each variable corresponding to the energy value of the model is obtained, the combined residual is calculated from the current iteration value of the variables and their previous iteration values, and it is judged whether the combined residual is smaller than the preset value. If it is smaller, the change of the variables is not larger than the change in the previous iteration, so no oscillation is produced, and the step size is updated from its previous iteration value so that the iteration can be accelerated with the updated step size. If it is not smaller, oscillation may be caused, so the step size of the iterative computation is set equal to its initial value, the previous iteration value of the variables is taken as their current iteration value, and the computation is restarted with this step size and the rolled-back variables. If the energy value of the higher-order variational model has not converged, the processing is not yet finished and the next iteration is performed, after which the combined residual is calculated again; if it has converged, the processed image is output. The method and the device therefore avoid the situation in which the energy value is higher than the energy value of the previous iterative computation during processing with the higher-order variational model, that is, they realize a monotonic decrease of the energy value and avoid the oscillation phenomenon, so that the energy value of the higher-order variational model converges quickly and the image processing efficiency is improved.
The image processing method provided by the embodiment of the application carries out iterative computation on the high-order variation model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variation model, and can comprise the following steps:
introducing auxiliary variables, lagrange multipliers and penalty parameters, and converting the high-order variation model by using the auxiliary variables, the Lagrange multipliers and the penalty parameters through a preset acceleration algorithm to obtain corresponding sub-optimization problems;
and solving the corresponding sub-optimization problem through a soft threshold formula in a fast Fourier transform and analytic form to obtain the current iteration value of each variable corresponding to the energy value of the high-order variation model.
In the present application, the process of performing iterative computation on the higher-order variational model of the input image to be processed to obtain the current iteration value of each variable corresponding to the energy value of the model may specifically be as follows: auxiliary variables, Lagrange multipliers and penalty parameters are introduced; the higher-order variational model is converted with the introduced auxiliary variables, Lagrange multipliers and penalty parameters through a preset acceleration algorithm, so that the original complex optimization problem is converted into a series of simple sub-optimization problems; and the corresponding sub-optimization problems are then solved with the fast Fourier transform and with soft-thresholding formulas in analytical form, so as to obtain the current iteration value of each variable corresponding to the energy value of the higher-order variational model.
Introducing variables such as the auxiliary variables and converting the higher-order variational model through the preset acceleration algorithm reduces the order of the regularization term and thus facilitates the iterative computation of the higher-order variational model. In addition, it should be noted that the preset acceleration algorithm may be the ADMM (Alternating Direction Method of Multipliers) algorithm; of course, other acceleration algorithms equivalent to the ADMM algorithm may also be adopted. When the acceleration algorithm is the ADMM algorithm, this amounts to designing a restarting fast ADMM algorithm on the basis of the ADMM algorithm by combining an inertial acceleration method with the idea of avoiding the oscillation phenomenon, so that oscillation is avoided and the computational efficiency of the higher-order variational model is conveniently improved.
When the high-order variation model is a TL model, the image processing method provided by the embodiment of the present application introduces an auxiliary variable, a Lagrange multiplier and a penalty parameter, converts the high-order variation model by using the auxiliary variable, the Lagrange multiplier and the penalty parameter and through a preset acceleration algorithm, and obtains a corresponding sub-optimization problem, which may include:
Introducing two auxiliary variables w and v, with w = ∇u and v = ∇·w (so that v = Δu), introducing two Lagrange multipliers β₁ and β₂ and two penalty parameters μ₁ and μ₂, and using the ADMM algorithm to transform the energy-functional minimization problem of the TL model into the saddle-point problem of the corresponding augmented Lagrangian; the corresponding sub-optimization problems are: u^(k+1) = argmin_u E(u, w^k, v^k), w^(k+1) = argmin_w E(u^(k+1), w, v^k), v^(k+1) = argmin_v E(u^(k+1), w^(k+1), v);
wherein E(u) is the energy functional of the TL model, u is the processed image, f is the image to be processed, γ is a penalty parameter, k is the iteration step number, K is the iteration limit, and ∇ denotes the first derivative (gradient).
In the present application, when the higher-order variational model is a TL (Total Laplacian) model, whose regularization term is nonlinear and non-smooth, the minimization problem of the energy functional of the TL model is
min_u E(u) = ∫_Ω |Δu| dx + (γ/2) ∫_Ω (u − f)² dx    (1)
where |Δu| is the regularization term of the TL model and is a non-smooth convex functional, E(u) is the energy functional of the TL model, u(x): Ω → R is the processed image, f(x): Ω → R is the image to be processed, Ω denotes the image domain, R denotes the real numbers, and γ is a penalty parameter. Introducing the auxiliary variables, the Lagrange multipliers and the penalty parameters and converting the higher-order variational model through the preset acceleration algorithm to obtain the corresponding sub-optimization problems proceeds as follows.
Two auxiliary variables w and v are introduced, with w = ∇u and v = ∇·w (so that v = Δu), together with two Lagrange multipliers β₁ and β₂ and two penalty parameters μ₁ and μ₂, and formula (1) is converted into the saddle-point problem (2) that the ADMM algorithm solves, in which the augmented Lagrange function (3) replaces |Δu| by |v| and enforces the constraints w = ∇u and v = ∇·w through multiplier and quadratic penalty terms.
The three sub-optimization problems in equation (2) are:
u^(k+1) = argmin_u E(u, w^k, v^k)    (4)
w^(k+1) = argmin_w E(u^(k+1), w, v^k)    (5)
v^(k+1) = argmin_v E(u^(k+1), w^(k+1), v)    (6)
where k denotes the number of iteration steps, K is the iteration limit, and ∇ denotes the first derivative (gradient). Through the above process, the original complex optimization problem is converted into a series of simple, alternately optimized sub-optimization problems, which overcomes the computational difficulty caused by the variation of the TL regularization term and facilitates the iterative computation of the TL model.
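The augmented Lagrange function (3) is not reproduced above; the following is a plausible reconstruction in the standard augmented-Lagrangian form for the constraints w = ∇u and v = ∇·w, so the sign convention of the multiplier terms and the 1/2 weighting of the quadratic penalties should be treated as assumptions:

```latex
E(u, w, v;\, \beta_1, \beta_2)
  = \int_\Omega |v|\,dx
  + \frac{\gamma}{2}\int_\Omega (u-f)^2\,dx
  + \int_\Omega \beta_1\cdot(w-\nabla u)\,dx
  + \frac{\mu_1}{2}\int_\Omega |w-\nabla u|^2\,dx
  + \int_\Omega \beta_2\,(v-\nabla\cdot w)\,dx
  + \frac{\mu_2}{2}\int_\Omega (v-\nabla\cdot w)^2\,dx
```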
Corresponding to the above process, the corresponding sub-optimization problems are solved with the fast Fourier transform and with a soft-thresholding formula in analytical form, so as to obtain the current iteration value of each variable corresponding to the energy value of the higher-order variational model, as follows.
Equation (4) is solved with the standard variational method to obtain the Euler–Lagrange equation (7) for u. Equation (7) is linear with constant coefficients and can be solved with the fast Fourier transform, which gives the closed-form update (8), where F⁻¹(·) denotes the inverse fast Fourier transform of which only the real part is taken, (i, j) denotes the position of a pixel in the image (i and j can be regarded as its row and column coordinates), d is a collective symbol for a group of intermediate variables introduced for compactness of notation, s = 0, 1, …, N−1 indexes the rows and r = 0, 1, …, M−1 indexes the columns in the frequency domain.
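As a hedged, self-contained illustration of the FFT solve in (8), the sketch below solves a representative screened-Poisson-type equation (γ − μΔ)u = g under periodic boundary conditions; the form of the right-hand side g (built from f, w and β₁ in the patent) and the exact coefficients of (7) are assumptions for illustration, not the patent's formula.

```python
import numpy as np

def solve_screened_poisson_fft(g, gamma, mu):
    """Solve (gamma*I - mu*Laplacian) u = g with periodic boundary conditions.

    The equation is diagonal in the discrete Fourier basis, so one forward
    FFT, a pointwise division and one inverse FFT give the solution."""
    n, m = g.shape
    s = np.arange(n).reshape(-1, 1)      # row frequency index, 0 .. N-1
    r = np.arange(m).reshape(1, -1)      # column frequency index, 0 .. M-1
    # Symbol (eigenvalues) of the 5-point discrete Laplacian under the DFT.
    lap = 2.0 * np.cos(2.0 * np.pi * s / n) + 2.0 * np.cos(2.0 * np.pi * r / m) - 4.0
    u_hat = np.fft.fft2(g) / (gamma - mu * lap)   # denominator >= gamma > 0
    return np.real(np.fft.ifft2(u_hat))           # keep only the real part, as in (8)

# Tiny usage example on random data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = solve_screened_poisson_fft(rng.standard_normal((64, 64)), gamma=1.0, mu=5.0)
```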
Similarly, equation (5) is solved with the standard variational method to obtain the Euler–Lagrange equation (9) for w. Equation (9) can also be solved with the fast Fourier transform, which gives the update (10), where h₁ and h₂ denote two groups of intermediate variables introduced for convenience of notation, D denotes the determinant of the 2×2 coefficient matrix whose entries are a₁₁, a₁₂, a₂₁ and a₂₂, and F⁻¹(·) again denotes the inverse fast Fourier transform of which only the real part is taken.
For equation (6), its solution v can be expressed in analytical form as a soft-thresholding (shrinkage) formula.
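A minimal sketch of such a shrinkage operation follows; the argument q (built from ∇·w and β₂) and the threshold t (for example 1/μ₂) are assumptions for illustration.

```python
import numpy as np

def soft_threshold(q, t):
    """Elementwise soft-thresholding (shrinkage): the closed-form minimizer of
    |v| + (1/(2*t)) * (v - q)**2 is v = sign(q) * max(|q| - t, 0)."""
    return np.sign(q) * np.maximum(np.abs(q) - t, 0.0)
```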
the image processing method provided by the embodiment of the application calculates the combined residual error according to the current iteration value and the previous iteration value of the variable, and can comprise the following steps:
by means ofCalculating a combined residual c k+1
Accordingly, making the step length of iterative calculation equal to the initial value and taking the last iterative value of the variable as the current iterative value of the variable may include:
let alpha k+1 =1, let w k+1 ←w k
Updating the step length according to the iteration value of the last step of the step length, and accelerating iteration according to the updated step length, which can comprise:
by means ofUpdate step size and use +.>
For w k+1Updating to be according to the updated w k+1 、/>Performing acceleration iteration; wherein alpha is k And theta is an inertial parameter, which is the iteration value of the last step of the step length.
When the high-order variation model is a TL model, the concrete process for calculating the combined residual error according to the current iteration value and the previous iteration value of the variable is as follows:
by means ofCalculation of combined residual error c of TL model original variable and dual variable k+1 Correspondingly, the step length of iterative calculation is equal to the initial value, and the process of taking the last iterative value of the variable as the current iterative value of the variable is as follows: let alpha k+1 =1, let w k+1 ←w kUpdating the step length according to the iteration value of the last step of the step length, and accelerating the iteration according to the updated step length, namely utilizing +.>Update step size and use +.> For w k+1 、/>Updating to be according to the updated w k+1 、/>Performing acceleration iteration; wherein alpha is k And theta is an inertial parameter, which is the iteration value of the last step of the step length.
The restarting fast ADMM algorithm applied to the TL model alternates the sub-problem solves (4)–(6) with the restart test on the combined residual and the acceleration step described above.
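The concrete step-size and extrapolation formulas are not reproduced above; a hedged sketch of one acceleration/restart step, assuming the Nesterov-type rule α^(k+1) = (1 + √(1 + 4(α^k)²))/2 and θ = (α^k − 1)/α^(k+1) that are standard in fast ADMM methods, is:

```python
def accelerate(w_new, w_old, beta_new, beta_old, alpha):
    """One acceleration step after the restart test has passed.

    Assumed Nesterov-type rule; the patent's exact formulas are not
    reproduced here.  Works on scalars or numpy arrays."""
    alpha_new = (1.0 + (1.0 + 4.0 * alpha ** 2) ** 0.5) / 2.0
    theta = (alpha - 1.0) / alpha_new                      # inertial parameter
    w_hat = w_new + theta * (w_new - w_old)                # extrapolated auxiliary variable
    beta_hat = beta_new + theta * (beta_new - beta_old)    # extrapolated Lagrange multiplier
    return w_hat, beta_hat, alpha_new

def restart(w_old, beta_old):
    """Restart branch: step size back to its initial value, previous iterate reused."""
    return w_old, beta_old, 1.0
```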
when the higher-order variation model is an EE model, the image processing method provided by the embodiment of the application introduces an auxiliary variable, a Lagrange multiplier and penalty parameters, converts the higher-order variation model by using the auxiliary variable, the Lagrange multiplier and the penalty parameters and through a preset acceleration algorithm, and obtains a corresponding sub-optimization problem, and the method can comprise the following steps:
Introducing four auxiliary variables w, p, v and m, with w = ∇u and v = ∇·p, under the constraints |w| − m·w ≥ 0, m = p and |m| ≤ 1, introducing four Lagrange multipliers β₁, β₂, β₃ and β₄ and four penalty parameters μ₁, μ₂, μ₃ and μ₄, and using the ADMM algorithm to transform the minimization problem of the energy functional of the EE model into the saddle-point problem of the corresponding augmented Lagrangian;
the corresponding sub-optimization problems are: u^(k+1) = argmin_u E(u, w^k, p^k, v^k, m^k), w^(k+1) = argmin_w E(u^(k+1), w, p^k, v^k, m^k), p^(k+1) = argmin_p E(u^(k+1), w^(k+1), p, v^k, m^k), v^(k+1) = argmin_v E(u^(k+1), w^(k+1), p^(k+1), v, m^k), m^(k+1) = argmin_m E(u^(k+1), w^(k+1), p^(k+1), v^(k+1), m);
wherein E(u) is the energy functional of the EE model, u is the processed image, f is the image to be processed, a and b are positive constant parameters, k denotes the iteration step number, K is the iteration limit, and ∇ denotes the first derivative (gradient).
In the present application, when the higher-order variational model is an EE (Euler's elastica) model, whose regularization term is nonlinear, non-smooth and non-convex, the minimization problem of the energy functional of the EE model is
min_u E(u) = ∫_Ω (a + b·κ²) |∇u| dx + (1/2) ∫_Ω (u − f)² dx,  with κ = ∇·(∇u/|∇u|)    (13)
The EE model combines the TV regularization term with a curvature term and uses Euler's elastica term as the regularization term; in this model Euler's elastica term is non-convex and non-smooth, which is the source of the computational difficulty of the model (13). Here E(u) is the energy functional of the EE model, u(x): Ω → R is the processed image, f(x): Ω → R is the image to be processed, Ω denotes the image domain, and a and b are positive constant parameters. Introducing the auxiliary variables, the Lagrange multipliers and the penalty parameters and converting the higher-order variational model through the preset acceleration algorithm to obtain the corresponding sub-optimization problems proceeds as follows.
Auxiliary variables w, p and v are introduced, with w = ∇u, |w|·p = w and v = ∇·p, so that equation (13) of the EE model can be converted into the constrained problem (14). However, directly using the nonlinear, non-convex constraint makes the sub-optimization problems difficult to solve. Since |p| ≤ 1, by the Cauchy–Schwarz inequality this constraint can be relaxed to |p| ≤ 1 and |w| − p·w ≥ 0 (formula (15)).
To further simplify the objective function of each sub-optimization problem, a new auxiliary variable m is introduced to replace the variable p in formula (15); after m is introduced, the model has the following five constraints (16), a reconstruction of which is given below.
For convenience of expression, a characteristic (indicator) function is introduced to express the constraint |m| ≤ 1.
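Based on the roles of the variables described above, the five constraints in (16) can plausibly be written as follows; the grouping is a reconstruction and should be treated as an assumption:

```latex
w = \nabla u, \qquad
v = \nabla\cdot p, \qquad
|w| - m\cdot w \ge 0, \qquad
p = m, \qquad
|m| \le 1
```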
Four auxiliary variables w, p, v and m are thus introduced into the EE model, together with four Lagrange multipliers β₁, β₂, β₃, β₄ and four penalty parameters μ₁, μ₂, μ₃, μ₄, and formula (14) is converted with the ADMM algorithm into the saddle-point problem (17) of the corresponding augmented Lagrange function, which consists of the elastica term expressed in the auxiliary variables, the data term, and the multiplier and quadratic penalty terms enforcing the five constraints above.
The five sub-optimization problems in equation (17), numbered (19) to (23), are the alternating minimizations of E with respect to u, w, p, v and m in turn, where k denotes the number of iteration steps, K is the iteration limit, and ∇ denotes the first derivative (gradient). Through the above process, the original complex optimization problem is converted into a series of simple, alternately optimized sub-optimization problems, which overcomes the computational difficulty caused by the variation of the EE regularization term and facilitates the iterative computation of the EE model.
Corresponding to the above process, the corresponding sub-optimization problems are solved with the fast Fourier transform and with soft-thresholding formulas in analytical form, so as to obtain the current iteration value of each variable corresponding to the energy value of the higher-order variational model, as follows.
Equation (19) is solved with the standard variational method to obtain the Euler–Lagrange equation (24) for u; equation (24) is then solved with the fast Fourier transform to obtain its closed-form solution.
For equation (20), its solution w can be written in closed analytical form as a generalized soft-thresholding formula.
Equation (21) is likewise solved with the standard variational method to obtain the Euler–Lagrange equation (28) for p; equation (28) can also be solved with the fast Fourier transform.
Equation (22) is likewise solved with the standard variational method to obtain the Euler–Lagrange equation for v.
Equation (23) is likewise solved with the standard variational method to obtain the Euler–Lagrange equation for m. In order to satisfy the constraint |m| ≤ 1, a projection formula is applied on top of the solution of this equation.
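A minimal sketch of such a pointwise projection for a two-component field m = (m₁, m₂); the exact formula used in the patent is not reproduced here.

```python
import numpy as np

def project_to_unit_ball(m1, m2):
    """Pointwise projection of the vector field m = (m1, m2) onto |m| <= 1:
    points with |m| > 1 are scaled back to the unit circle, others are kept."""
    norm = np.sqrt(m1 ** 2 + m2 ** 2)
    scale = 1.0 / np.maximum(norm, 1.0)
    return m1 * scale, m2 * scale
```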
In the image processing method provided by the embodiment of the present application, calculating the combined residual according to the current iteration value and the previous iteration value of the variable may include: calculating the combined residual c^(k+1) from the current and previous iteration values of the auxiliary variable and of the Lagrange multipliers.
Correspondingly, setting the step size of the iterative computation equal to its initial value and taking the previous iteration value of the variable as the current iteration value of the variable may include: setting α^(k+1) = 1 and letting w^(k+1) ← w^k (and similarly rolling back the other accelerated variables).
Updating the step size according to its previous iteration value and performing accelerated iteration with the updated step size may include: updating the step size α^(k+1) from α^k, extrapolating w^(k+1) ← w^(k+1) + θ^(k+1)(w^(k+1) − w^k) and the Lagrange multipliers in the same way, and performing the accelerated iteration with the extrapolated values; wherein α^k is the previous iteration value of the step size and θ is an inertial parameter.
When the higher-order variational model is the EE model, the concrete procedure is the same: the combined residual c^(k+1) of the primal and dual variables of the EE model is calculated from their current and previous iteration values; on a restart, α^(k+1) = 1 is set and w^(k+1) ← w^k is taken; otherwise the step size is updated from α^k, w^(k+1) and the multipliers are extrapolated with the inertial parameter θ, and the accelerated iteration is performed.
The restarting fast ADMM algorithm applied to the EE model proceeds in the same way, alternating the sub-problem solves (19)–(23) with the restart test on the combined residual and the acceleration step described above.
It should be noted that the performance of the TL model and the EE model in terms of noise removal, edge preservation and so on has already been studied extensively; the focus of the present application is to compare, for fixed relevant penalty parameters and while keeping the denoising performance of the original models, the computational efficiency of three methods: the original ADMM, the fast ADMM and the restarting fast ADMM.
The three test images used in the experiments, "Lena", "Castle" and "Texture", all come from the database of the Signal and Image Processing Institute of the University of Southern California (USC-SIPI), as shown in FIG. 2 (a), FIG. 2 (c) and FIG. 2 (e), where FIG. 2 (a) is the "Lena" original image, FIG. 2 (c) is the "Castle" original image and FIG. 2 (e) is the "Texture" original image provided by the embodiments of the present application. The images are scaled to pixel values in the range 0–255 and then contaminated with Gaussian white noise of variance d = 0.01. The noise-contaminated image samples are shown in FIG. 2 (b), FIG. 2 (d) and FIG. 2 (f), where FIG. 2 (b) is the "Lena" noise image, FIG. 2 (d) is the "Castle" noise image and FIG. 2 (f) is the "Texture" noise image. Since it is desirable to restart the method as rarely as possible, a value of η close to 1 is recommended; in all numerical experiments presented in the present application, η takes the value 0.99.
Since the solutions of the TL and EE models are generally not unique, the RMSE cannot be compared; instead, the relative energy error |E^k − E^(k−1)| / E^k < 0.001 is used as the stopping criterion, where E^k denotes the original energy at the current k-th iteration and E^(k−1) the original energy at the previous (k−1)-th iteration. When the relative energy error cannot reach the stopping criterion, the algorithm stops automatically after 500 iterations. The experiments of the present application were carried out with a 3.19 GHz CPU under the Windows 10 (64-bit) operating system using Matlab R2016a.
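As a small illustration (not the patent's code), the stopping rule can be written as:

```python
def energy_converged(e_curr, e_prev, tol=1e-3):
    """Relative energy error |E_k - E_{k-1}| / E_k < tol (tol = 0.001 in the
    experiments); a small guard avoids division by zero."""
    return abs(e_curr - e_prev) / max(abs(e_curr), 1e-12) < tol
```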
For the TL model, f in formula (1) represents the noisy image, and the purpose of minimizing formula (1) is to find a denoised image u that is similar to the noisy image f while ensuring that the image is smooth, that edges are preserved and that no staircase effect appears at the edges. The parameter γ controls the trade-off between the data term and the regularization term and mainly influences the quality of the image; to show more clearly the relation between the penalty parameters μ₁ and μ₂ and the algorithm performance, γ is fixed at 1.
The goal of the fast algorithms is to improve the convergence rate, and thus the computational efficiency, while ensuring that the picture quality is not degraded. Taking the "Texture" image as an example, with the parameter values μ₁ = 5 and μ₂ = 0.001, the images output after applying the original ADMM, the fast ADMM and the restarting fast ADMM algorithms to the TL model are shown in FIG. 3 (a), (b) and (c), where FIG. 3 (a) is the denoising result of the "Texture" image with the ADMM algorithm, FIG. 3 (b) is the result with the fast ADMM algorithm and FIG. 3 (c) is the result with the restarting fast ADMM algorithm provided by the embodiments of the present application; the PSNR of FIG. 3 (a) is 26.518, the PSNR of FIG. 3 (b) is 26.542 and the PSNR of FIG. 3 (c) is 26.969.
Next, the performance of the three algorithms on the TL problem is compared, including the number of iterations of each algorithm, the total CPU running time and the PSNR. The results of the TL model are given in Table 1.
Table 1: Results of the TL model
The experiments show the following. First, when the penalty parameters are the same, the peak signal-to-noise ratios (PSNR) of the three algorithms for the same model are essentially identical, which means that the fast algorithms do not impair the performance of the original model and do not affect the image quality of the original ADMM algorithm. Second, over a large number of penalty-parameter choices for the different images, the fast ADMM algorithm can in most cases improve the computational efficiency over the original ADMM algorithm by 6%–33%; it should be noted that there are cases in which the acceleration of the fast ADMM algorithm is not effective, because the optimal step size of the fast ADMM algorithm depends on the convexity and smoothness of the objective function, while the TL model is non-smooth and the related smoothness and Lipschitz parameters are difficult to estimate in theory, so the fast ADMM algorithm cannot compute the optimal step size. Third, for the same model and the same image, the restarting fast ADMM algorithm needs the same number of iterations (3) no matter which parameters are taken, and its running time decreases correspondingly and markedly, which shows that the restarting fast ADMM algorithm is very robust; compared with the original ADMM algorithm, the restarting fast ADMM algorithm can improve the computational efficiency by 70%–81%. Finally, because each sub-problem is solved with the fast FFT and with soft-thresholding formulas in analytical form, the number of iteration steps is greatly reduced compared with the traditional ADMM method.
In order to see more clearly the changing trend of the energy values and of the relative energy errors of the three algorithms during the computation, FIG. 4 (a), FIG. 4 (b) and FIG. 4 (c) show the energy value changes of the three algorithms over 40 iterations in the TL model with the "Lena", "Castle" and "Texture" images, respectively, as the image to be processed, and FIG. 5 (a), FIG. 5 (b) and FIG. 5 (c) show the convergence curves of the three algorithms in the TL model, with the relative energy error as the stopping criterion, for the "Lena", "Castle" and "Texture" images, respectively; the parameters of the model are μ₁ = 5, μ₂ = 0.001 and γ = 1. In FIG. 4 the abscissa represents the number of iteration steps and the ordinate represents the energy value, and in FIG. 5 the abscissa represents the number of iteration steps and the ordinate represents the relative energy error.
As can be intuitively seen from fig. 4 and 5, the energy value and the relative energy error of the restarting fast ADMM algorithm are monotonically decreasing, and the energy value reaches a steady state at the fastest; the fast ADMM algorithm will oscillate. This is due to the non-smoothness of the TL model, the smoothness parameters and Lipschitz parameters are difficult to estimate, resulting in a very difficult estimation of the period of the oscillations. And the restarting rapid ADMM algorithm adjusts the step length in a self-adaptive manner according to the magnitude of the combined residual error in the calculation process, so that the oscillation phenomenon is eliminated, and the calculation efficiency is improved.
For the EE model, the EE model is shown in equation (13), where u, f E x are the unknown solution and noisy data, respectively,is the curvature of the horizontal curve u (x, y) =c.
The use of the fourth derivative can attenuate the high frequency components of the image faster than the second Partial Differential Equation (PDE) based approach, so equation (13) can reduce the ladder effect and better approximate the original image. In practice, it is also able to preserve image edges while reducing noise. The EE model is one of the most important second order models. Penalty parameter μ for clearer presentation 1 、μ 2 、μ 3 、μ 4 The relation with algorithm performance is fixed a=0.2, b=2.
The performance of the three algorithms in the EE problem is compared below, including the number of iterations of each algorithm, the CPU total run time, and the PSNR. Table 2 only shows the data results for the "Lena" image due to the excessive data.
Table 2 results of the EE model
/>
By analyzing the data in table 2, the following several conclusions can be drawn. First, when penalty parameters are the same, peak signal-to-noise ratios (PSNR) of three algorithms of the same model are basically consistent, which indicates that the quick algorithm does not affect the image quality of the original ADMM algorithm; second, most fast ADMM algorithms can improve the computational efficiency by 13% -35% for the original ADMM algorithm with a large number of penalty parameters being different. Similar to the TL model, the rapid ADMM algorithm has the condition of ineffective acceleration in the EE model, and the non-convexity and non-smoothness of the EE model increase the difficulty of calculating the optimal step length; thirdly, the same model has 3 iteration times of the restarting rapid ADMM algorithm no matter what parameters are taken, and the running time is correspondingly obviously reduced, which proves that the restarting rapid ADMM algorithm is very robust. For the original ADMM algorithm, restarting the fast ADMM algorithm can improve the computational efficiency by 50% -91%.
Also, in order to more clearly see the variation trend of the energy values and the relative energy errors of the three algorithms in the calculation process, fig. 6 (a) is an energy value variation graph of the three algorithms in the TL model with the "Lena" image as the image to be processed, fig. 6 (b) is an energy value variation graph of the three algorithms in the EE model with the "Castle" image as the image to be processed, the three algorithms in the EE model with the iteration 40 times, fig. 6 (c) is an energy value variation graph of the three algorithms in the EE model with the "Texture" image as the image to be processed, the three algorithms in the EE model with the iteration 40 times, fig. 7 (a) is a convergence graph of the three algorithms in the EE model with the "Lena" image as the image to be processed and the relative energy error as the stop criterion, fig. 7 (b) is a convergence graph of the three algorithms in the EE model with the "Castle" image as the image to be processed and the relative energy error as the stop criterion, and fig. 7 (c) is a convergence graph of the three algorithms in the EE model with the image to be processed and the relative energy error as the stop criterion, wherein values in fig. 2 = 0.2 1 =0.06,μ 2 =2,μ 3 =2000,μ 4 =400. Wherein in FIG. 6, a user sits sidewaysThe scale represents the number of iterative steps, the ordinate represents the energy value, and in fig. 7, the abscissa represents the number of iterative steps, and the ordinate represents the relative energy error.
Figure 6 shows the function values of the iteration 40 steps when three different algorithms are applied to different images in the EE model. First, the function value of the restarting fast ADMM can be more quickly stabilized near the function value curve than the other two algorithms. Second, the fast ADMM algorithm has the problem that the function value is not monotonically decreased in all three pictures, and the defect greatly reduces the algorithm efficiency. This is because the acceleration step of the fast ADMM is too large, which causes it to miss a minimum value and thus causes a non-monotonic decrease in the function value.
It is also observed from fig. 6 and 7 that the energy value and the relative energy error of restarting the fast ADMM algorithm are monotonically decreasing and the energy value reaches steady state at the fastest; and because the optimal step length cannot be calculated, the rapid ADMM algorithm misses the minimum value due to the overlarge acceleration step length, so that oscillation is generated.
The embodiment of the present application further provides an image processing apparatus, referring to fig. 8, which shows a schematic structural diagram of the image processing apparatus provided by the embodiment of the present application, and may include:
A first calculation module 21, configured to input an image to be processed into the higher-order variation model, and perform iterative calculation on the higher-order variation model to obtain a current iteration value of each variable corresponding to an energy value of the higher-order variation model;
the second calculation module 22 is configured to calculate a combined residual according to the current iteration value and the previous iteration value of the variable, and determine whether the combined residual is smaller than a preset value;
a restarting module 23, configured to update the step size according to the previous iteration value of the step size in the iterative computation and accelerate the iteration according to the updated step size if the step size is smaller than the previous iteration value, and if the step size is not smaller than the previous iteration value, make the step size of the iterative computation equal to the initial value, take the previous iteration value of the variable as the current iteration value of the variable, and restart the variable according to the step size and the current iteration value of the variable;
the judging module 24 is configured to judge whether the energy value of the higher-order variational model converges, if not, perform the next iteration, and perform the step of calculating the combined residual according to the current iteration value and the previous iteration value of the variable, and if so, output the processed image.
In the image processing apparatus provided in the embodiment of the present application, the first calculating module 21 may include:
The transformation unit is used for introducing auxiliary variables, lagrange multipliers and penalty parameters, transforming the high-order variation model by utilizing the auxiliary variables, the Lagrange multipliers and the penalty parameters through a preset acceleration algorithm, and obtaining corresponding sub-optimization problems;
and the solving unit is used for solving the corresponding sub-optimization problem through a soft threshold formula in a fast Fourier transform and analytic form to obtain the current iteration value of each variable corresponding to the energy value of the high-order variation model.
In the image processing apparatus provided by the embodiment of the present application, when the higher-order variation model is a TL model, the transformation unit may include:
a first converter subunit for introducing two auxiliary variables w and v and lettingAnd two Lagrange multipliers beta are introduced 1 And beta 2 And two penalty parameters mu are introduced 1 Sum mu 2 ADMM algorithm is used for solving the problem of energy functional minima of TL model>Is transformed into->Wherein the augmented Lagrange function is +.>The corresponding sub-optimization problem is: u (u) k+1 =argminE(u,w k ,v k )、w k+1 =argminE(u k+1 ,w,v k )、v k+1 =argminE(u k+1 ,w k+1 ,v k );
Wherein E (u) is the energy functional of the TL model, u is the processed image, f is the image to be processed, gamma is a penalty parameter, K is the iteration step number, K is the iteration limit, and V is the first derivative.
In one embodiment of the present application, the second computing module 22 may include:
A first computing unit for utilizing
Calculating a combined residual c k+1
Accordingly, the restart module 23 may include:
a first assignment unit for letting alpha k+1 =1, let w k+1 ←w k
The update module may include:
a first updating unit for utilizingUpdating step length and utilizing For w k+1 、/>Updating to be according to the updated w k+1 、/>Performing acceleration iteration; wherein alpha is k And theta is an inertial parameter, which is the iteration value of the last step of the step length.
In the image processing apparatus provided by the embodiment of the present application, when the higher-order variation model is an EE model, the conversion unit may include:
a second transformation subunit for introducing four auxiliary variables w, p, v and m, and letting m=p, |m|is less than or equal to 1, and four Lagrange multipliers beta are introduced 1 、β 2 、β 3 、β 4 And four penalty parameters μ are introduced 1 、μ 2 、μ 3 、μ 4 Problem of minimum value of energy functional of EE model by ADMM algorithmIs transformed into->Wherein the Lagrange function is augmented to
The corresponding sub-optimization problem is: u (u) k+1 =argminE(u,w k ,p k ,v k ,m k )、w k+1 =argminE(u k+1 ,w,p k ,v k ,m k )、p k+1 =argminE(u k+1 ,w k+1 ,p,v k ,m k ),v k+1 =argminE(u k+1 ,w k+1 ,p k+1 ,v,m k )、m k+1 =argminE(u k+1 ,w k+1 ,p k+1 ,v k+1 ,m);
Wherein E (u) is the energy functional of the EE model, u is the processed image, f is the image to be processed, a and b are normal quantity parameters, K represents the iteration step number, K is the iteration limit, v represents the first derivative,
in one embodiment of the present application, the second computing module 22 may include:
A second calculation unit for utilizing
Calculating a combined residual
Accordingly, the restart module 23 may include:
a second assignment unit for letting alpha k+1 =1, let w k+1 ←w k/>
The update module may include:
a second updating unit for utilizingUpdating step length and utilizing For w k+1Updating to be according to the updated w k+1 、/> Performing acceleration iteration; wherein alpha is k And theta is an inertial parameter, which is the iteration value of the last step of the step length.
The embodiment of the application also provides an image processing device, referring to fig. 9, which shows a schematic structural diagram of the image processing device provided by the embodiment of the application, and may include:
a memory 31 for storing a computer program;
the processor 32, when executing the computer program stored in the memory 31, may implement the following steps:
inputting the image to be processed into a high-order variational model, and carrying out iterative computation on the high-order variational model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variational model; calculating a combined residual error according to the current iteration value and the previous iteration value of the variable, and judging whether the combined residual error is smaller than a preset value or not; if the value is smaller than the initial value, updating the step length according to the last iteration value of the step length in the iterative calculation, accelerating iteration according to the updated step length, enabling the step length to be equal to the initial value, taking the last iteration value of the variable as the current iteration value of the variable, and restarting according to the step length and the current iteration value of the variable; judging whether the energy value of the high-order variation model is converged or not, if not, performing the next iteration, executing the step of calculating a combined residual according to the current iteration value and the previous iteration value of the variable, and if so, outputting the processed image.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the following steps can be realized:
inputting the image to be processed into a high-order variational model, and carrying out iterative computation on the high-order variational model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variational model; calculating a combined residual error according to the current iteration value and the previous iteration value of the variable, and judging whether the combined residual error is smaller than a preset value or not; if the value is smaller than the initial value, updating the step length according to the last iteration value of the step length in the iterative calculation, accelerating iteration according to the updated step length, enabling the step length to be equal to the initial value, taking the last iteration value of the variable as the current iteration value of the variable, and restarting according to the step length and the current iteration value of the variable; judging whether the energy value of the high-order variation model is converged or not, if not, performing the next iteration, executing the step of calculating a combined residual according to the current iteration value and the previous iteration value of the variable, and if so, outputting the processed image.
The computer readable storage medium may include: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The description of the related parts in the image processing apparatus, the device and the computer readable storage medium provided in the embodiments of the present application may refer to the detailed description of the corresponding parts in the image processing method provided in the embodiments of the present application, which is not repeated here.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements is inherent to. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. In addition, the parts of the above technical solutions provided in the embodiments of the present application, which are consistent with the implementation principles of the corresponding technical solutions in the prior art, are not described in detail, so that redundant descriptions are avoided.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image processing method, comprising:
inputting an image to be processed into a high-order variational model, and performing iterative computation on the high-order variational model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variational model;
calculating a combined residual error according to the current iteration value and the previous iteration value of the variable, and judging whether the combined residual error is smaller than a preset value or not;
if the step size is smaller than the initial value, updating the step size according to the iteration value of the last step of the step size in the iteration calculation, accelerating iteration according to the updated step size, if the step size is not smaller than the initial value, enabling the step size to be equal to the initial value, taking the iteration value of the last step of the variable as the current iteration value of the variable, and restarting according to the step size and the current iteration value of the variable;
And judging whether the energy value of the high-order variation model is converged, if not, performing the next iteration, executing the step of calculating a combined residual error according to the current iteration value and the previous iteration value of the variable, and if so, outputting the processed image.
2. The image processing method according to claim 1, wherein performing iterative computation on the higher-order variational model to obtain a current iteration value of each variable corresponding to an energy value of the higher-order variational model, comprises:
introducing an auxiliary variable, a Lagrange multiplier and penalty parameters, and converting the high-order variation model by using the auxiliary variable, the Lagrange multiplier and the penalty parameters through a preset acceleration algorithm to obtain a corresponding sub-optimization problem;
and solving the corresponding sub-optimization problem through a soft threshold formula in a fast Fourier transform and analytic form to obtain the current iteration value of each variable corresponding to the energy value of the high-order variation model.
3. The image processing method according to claim 2, wherein when the higher-order variation model is a TL model, introducing an auxiliary variable, a Lagrange multiplier, and a penalty parameter, converting the higher-order variation model by using the auxiliary variable, the Lagrange multiplier, and the penalty parameter and through a preset acceleration algorithm, and obtaining a corresponding sub-optimization problem, comprising:
Two auxiliary variables w and v are introduced, let w= u, v= v·w, and two Lagrange multipliers β are introduced 1 And beta 2 And two penalty parameters mu are introduced 1 Sum mu 2 Using ADMM algorithm to solve the problem of energy functional minimum of TL modelIs transformed into->Wherein the augmented Lagrange function is +.>The corresponding sub-optimization problem is: u (u) k+1 =arg min E(u,w k ,v k )、w k+1 =arg min E(u k+1 ,w,v k )、v k+1 =arg min E(u k+1 ,w k+1 ,v k );
Wherein E (u) is an energy functional of the TL model, u is the processed image, f is the image to be processed, gamma is a penalty parameter, K represents an iteration step number, K is an iteration limit, and V represents a first derivative.
4. The image processing method according to claim 3, wherein calculating a combined residual from the current iteration value and the previous iteration value of the variable comprises:
by means ofCalculating a combined residual c k+1
Correspondingly, making the step length of iterative calculation equal to the initial value, and taking the last iterative value of the variable as the current iterative value of the variable, comprising:
let alpha k+1 =1, let w k+1 ←w k
Updating the step length according to the iteration value of the last step of the step length, and performing acceleration iteration according to the updated step length, wherein the method comprises the following steps:
by means ofUpdate the step size and use +.>w k+1 ←w k+1k+1 (w k+1 -w k )、/>For w k+1 、/>Updating to be according to the updated w k+1 、/>Performing acceleration iteration; wherein alpha is k And theta is an inertial parameter for the iteration value of the last step of the step length.
5. The image processing method according to claim 2, wherein when the higher-order variation model is an EE model, an auxiliary variable, a Lagrange multiplier, and a penalty parameter are introduced, the higher-order variation model is converted by using the auxiliary variable, the Lagrange multiplier, and the penalty parameter and through a preset acceleration algorithm, and a corresponding sub-optimization problem is obtained, including:
four auxiliary variables w, p, v and m are introduced, w= u, |w| -m.w is greater than or equal to 0, v= v.p, m=p, |m| is less than or equal to 1, and four Lagrange multipliers beta are introduced 1 、β 2 、β 3 、β 4 And four penalty parameters μ are introduced 1 、μ 2 、μ 3 、μ 4 Using ADMM algorithm to solve the problem of minimum value of energy functional of EE modelConversion toWherein the Lagrange function is augmented toThe corresponding sub-optimization problem is: u (u) k+1 =arg min E(u,w k ,p k ,v k ,m k )、w k+1 =arg min E(u k+1 ,w,p k ,v k ,m k )、p k+1 =arg min E(u k+1 ,w k+1 ,p,v k ,m k ),v k+1 =arg min E(u k+1 ,w k+1 ,p k+1 ,v,m k )、m k+1 =arg min E(u k +1 ,w k+1 ,p k+1 ,v k+1 ,m);
Wherein E (u) is the energy functional of the EE model, u is the processed image, f is the image to be processed, a and b are normal parameters, K represents the number of iteration steps, K is the iteration limit, v represents the first derivative,
6. the image processing method according to claim 5, wherein calculating a combined residual from the current iteration value and the previous iteration value of the variable comprises:
By means ofCalculating a combined residual c k+1
Correspondingly, making the step length of iterative calculation equal to the initial value, and taking the last iterative value of the variable as the current iterative value of the variable, comprising:
let alpha k+1 =1, let w k+1 ←w k
Updating the step length according to the iteration value of the last step of the step length, and performing acceleration iteration according to the updated step length, wherein the method comprises the following steps:
by means ofUpdate the step size and use +.>w k+1 ←w k+1k+1 (w k+1 -w k )、/>For w k+1 、/>Updating to be according to the updated w k+1 、/>Performing acceleration iteration; wherein alpha is k And theta is an inertial parameter for the iteration value of the last step of the step length.
7. An image processing apparatus, comprising:
the first calculation module is used for inputting the image to be processed into a high-order variation model, and carrying out iterative calculation on the high-order variation model to obtain the current iterative value of each variable corresponding to the energy value of the high-order variation model;
the second calculation module is used for calculating a combined residual according to the current iteration value and the previous iteration value of the variable and judging whether the combined residual is smaller than a preset value or not;
the restarting module is used for updating the step length according to the iteration value of the last step of the step length in the iterative computation and accelerating iteration according to the updated step length if the step length is smaller than the initial value, enabling the step length of the iterative computation to be equal to the initial value, taking the iteration value of the last step of the variable as the current iteration value of the variable, and restarting according to the step length and the current iteration value of the variable;
And the judging module is used for judging whether the energy value of the high-order variation model is converged, if not, performing the next iteration, executing the step of calculating a combined residual according to the current iteration value and the previous iteration value of the variable, and if so, outputting the processed image.
8. The image processing apparatus of claim 7, wherein the first computing module comprises:
the transformation unit is used for introducing auxiliary variables, lagrange multipliers and penalty parameters, transforming the high-order variation model by using the auxiliary variables, the Lagrange multipliers and the penalty parameters through a preset acceleration algorithm, and obtaining corresponding sub-optimization problems;
and the solving unit is used for solving the corresponding sub-optimization problem through a soft threshold formula in a fast Fourier transform and analytic form to obtain the current iteration value of each variable corresponding to the energy value of the high-order variation model.
9. An image processing apparatus, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image processing method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 6.
CN202011170966.1A 2020-10-28 2020-10-28 Image processing method, device, equipment and computer readable storage medium Active CN112215779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011170966.1A CN112215779B (en) 2020-10-28 2020-10-28 Image processing method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011170966.1A CN112215779B (en) 2020-10-28 2020-10-28 Image processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112215779A CN112215779A (en) 2021-01-12
CN112215779B true CN112215779B (en) 2023-10-03

Family

ID=74057316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011170966.1A Active CN112215779B (en) 2020-10-28 2020-10-28 Image processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112215779B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017006764A1 (en) * 2015-07-08 2017-01-12 株式会社日立製作所 Image computing device, image computing method, and tomograph
CN107665494A (en) * 2017-10-11 2018-02-06 青岛大学 A kind of adaptive noisy full variation dividing method of SAR image
CN107945121A (en) * 2017-11-06 2018-04-20 上海斐讯数据通信技术有限公司 A kind of image recovery method and system based on full variation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9558712B2 (en) * 2014-01-21 2017-01-31 Nvidia Corporation Unified optimization method for end-to-end camera image processing for translating a sensor captured image to a display image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017006764A1 (en) * 2015-07-08 2017-01-12 株式会社日立製作所 Image computing device, image computing method, and tomograph
CN107665494A (en) * 2017-10-11 2018-02-06 青岛大学 A kind of adaptive noisy full variation dividing method of SAR image
CN107945121A (en) * 2017-11-06 2018-04-20 上海斐讯数据通信技术有限公司 A kind of image recovery method and system based on full variation

Also Published As

Publication number Publication date
CN112215779A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
Luo et al. Image restoration with mean-reverting stochastic differential equations
Yuan et al. $\ell _0 $ TV: A Sparse Optimization Method for Impulse Noise Image Restoration
Chan et al. Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers
Wohlberg ADMM penalty parameter selection by residual balancing
Matakos et al. Accelerated edge-preserving image restoration without boundary artifacts
Zhao et al. A new convex optimization model for multiplicative noise and blur removal
CN107705265B (en) SAR image variational denoising method based on total curvature
Yaghoobi et al. Noise aware analysis operator learning for approximately cosparse signals
Lampe et al. Large-scale Tikhonov regularization via reduction by orthogonal projection
TWI765264B (en) Device and method of handling image super-resolution
CN116721179A (en) Method, equipment and storage medium for generating image based on diffusion model
CN111142065A (en) Low-complexity sparse Bayesian vector estimation method and system
Chikin et al. Channel balancing for accurate quantization of winograd convolutions
CN114385964B (en) State space model calculation method, system and equipment of multi-element fractional order system
JP4945532B2 (en) Degraded image restoration method, degraded image restoration device, and program
CN112215779B (en) Image processing method, device, equipment and computer readable storage medium
WO2017088391A1 (en) Method and apparatus for video denoising and detail enhancement
CN111062878B (en) Image denoising method and device and computer readable storage medium
CN114066782B (en) Pneumatic optical effect correction method and device
Son et al. Iterative inverse halftoning based on texture-enhancing deconvolution and error-compensating feedback
US7592936B2 (en) Input distribution determination for denoising
Klockmann et al. Efficient nonparametric estimation of Toeplitz covariance matrices
Foi et al. Signal-dependent noise removal in pointwise shape-adaptive DCT domain with locally adaptive variance
Li et al. Efficient image completion method based on alternating direction theory
US10489675B2 (en) Robust region segmentation method and system using the same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant