CN111784733B - Image processing method, device, terminal and computer readable storage medium - Google Patents

Image processing method, device, terminal and computer readable storage medium

Info

Publication number
CN111784733B
Authority
CN
China
Prior art keywords
image
noise
algorithm
processed
preset
Prior art date
Legal status
Active
Application number
CN202010642524.6A
Other languages
Chinese (zh)
Other versions
CN111784733A (en)
Inventor
易浩平
叶超
成富平
Current Assignee
Shenzhen Angell Technology Co ltd
Original Assignee
Shenzhen Angell Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Angell Technology Co ltd
Priority to CN202010642524.6A
Publication of CN111784733A
Application granted
Publication of CN111784733B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Picture Signal Circuits (AREA)

Abstract

The application discloses an image processing method applied to the field of image processing, comprising the following steps: acquiring an image to be processed in real time, and determining a motion vector of the image to be processed through a preset search algorithm; if the motion vector is not a zero vector, converting the noise of the target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image; filtering the noise-processed image according to a preset filtering algorithm, and inverse-transforming the noise of the target type in the filtered image according to the inverse of the preset transformation algorithm to obtain and output a first result image; if the motion vector is a zero vector, performing a matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain and output a second result image. The application also discloses an image processing device, a terminal and a computer readable storage medium, which can process images in real time and improve image definition.

Description

Image processing method, device, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a computer readable storage medium.
Background
DR (Digital Radiography) equipment, i.e. a direct digital radiography system, is increasingly favored by hospital radiology departments because of its value in clinical auxiliary diagnosis. It has drawbacks, however: the Poisson-distributed noise in the fluoroscopic images acquired by the flat panel detector is very noticeable, which interferes with clinical auxiliary diagnosis. Therefore, image processing employs noise reduction algorithms to reduce image noise without affecting clinical diagnosis; such algorithms include Gaussian filtering, bilateral filtering, NLM (Non-Local Means) filtering, the BM3D (Block-Matching and 3D filtering) algorithm, and the like.
These image processing techniques reduce Gaussian noise well, but the full BM3D algorithm currently cannot process images in real time, and the other algorithms, while capable of real-time processing, perform poorly on non-Gaussian noise. As a result, X-ray fluoroscopic images processed with these techniques have low definition and cannot meet the requirements of clinical auxiliary diagnosis.
Disclosure of Invention
Embodiments of the invention provide an image processing method, an image processing apparatus, a terminal, and a computer readable storage medium, to solve the problems that images cannot be processed in real time and that image processing quality is low.
An embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring an image to be processed in real time, and determining a motion vector of the image to be processed through a preset search algorithm; if the motion vector is not a zero vector, converting the noise of the target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image; filtering the noise-processed image according to a preset filtering algorithm, and inverse-transforming the noise of the target type in the filtered image according to the inverse of the preset transformation algorithm to obtain and output a first result image; and if the motion vector is a zero vector, performing a matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain and output a second result image.
An embodiment of the invention also provides an image processing device, which comprises:
a motion estimation module, configured to acquire an image to be processed in real time and determine a motion vector of the image to be processed through a preset search algorithm;
a noise transformation module, configured to, if the motion vector is not a zero vector, convert the noise of the target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image;
a filtering module, configured to filter the noise-processed image according to a preset filtering algorithm; the noise transformation module is further configured to inverse-transform the noise of the target type in the filtered image according to the inverse of the preset transformation algorithm to obtain a first result image; an output module, configured to output the first result image; and a calculation module, configured to, if the motion vector is a zero vector, perform a matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain a second result image. The output module is further configured to output the second result image.
An embodiment of the invention also provides a terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above image processing method when executing the computer program.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method as described above.
In the embodiment of the invention, an image to be processed is acquired in real time and its motion vector is determined through a preset search algorithm. If the motion vector is not a zero vector, the noise of the target type in the image to be processed is converted into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image; the noise-processed image is filtered according to a preset filtering algorithm, the noise of the target type in the filtered image is inverse-transformed according to the inverse of the preset transformation algorithm, and a first result image is obtained and output. If the motion vector is a zero vector, a matching-point weighted average calculation is performed on the frame images of the two frames adjacent to the current frame to obtain and output a second result image. By first searching for the motion vector of the image and then applying different noise reduction treatments depending on whether the motion vector is a zero vector, the noise reduction effect is improved; the processing speed is high, so the image can be denoised in real time; and because the information of adjacent frames is fully used, the definition of the image is further improved.
Drawings
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic view of an application scenario of the image processing method of the present invention;
FIG. 3 is a flowchart illustrating an image processing method according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
To make the objects, features, and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. The image processing method can process DR fluoroscopic images in real time and can also process other images. Referring to fig. 2, fig. 2 is a schematic view of an application scenario of the image processing method, in which a flat panel detector 10 and a terminal 20 are connected in a wired or wireless manner for data transmission. The flat panel detector 10 is used to acquire fluoroscopic images, and the terminal 20 runs the image processing method to process the fluoroscopic images. The terminal 20 may be, for example, a PC. The image processing method mainly comprises the following steps:
s101, acquiring an image to be processed in real time, and determining a motion vector of the image to be processed through a preset search algorithm;
the image processing method in this embodiment can be used to process the perspective image acquired by the flat panel detector in real time. And acquiring an image to be processed in real time, wherein the image to be processed is a perspective image acquired by the flat panel detector in real time.
The motion vector, i.e. the motion vector, represents the coordinates of the position of the point of the motion estimation.
The preset searching algorithm can be a full searching method, a three-step searching method, a four-step searching method, a diamond searching method and the like.
Judging whether the motion vector is a 0 vector, if not, indicating that the image content of the adjacent frame has relatively moved, and executing step S102; if so, theoretically, it indicates that the image contents of the adjacent frames have not moved relatively, and step S104 is performed.
S102, if the motion vector is not a zero vector, converting the noise of the target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image;
The noise of the target type here is additive noise; the additive noise of a flat panel detector follows a Poisson distribution.
Specifically, according to a preset Anscombe transformation algorithm, the additive noise in the image to be processed is transformed from a Poisson distribution into Gaussian-distributed noise, and the noise-processed image is obtained. The transformation relation is as follows:
where x is the pixel data of the image to be processed and y is the pixel data of the noise-processed image.
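As a reference for this transformation step, the sketch below (Python with NumPy) applies the Anscombe transform in its standard textbook form, y = 2√(x + 3/8), together with its simple algebraic inverse. Whether the patented method uses exactly this normalization constant and this form of the inverse is an assumption of the sketch.

```python
import numpy as np

def anscombe(x: np.ndarray) -> np.ndarray:
    """Standard Anscombe transform: maps Poisson-distributed data to
    approximately Gaussian data with near-unit variance (assumed form)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y: np.ndarray) -> np.ndarray:
    """Simple algebraic inverse of the Anscombe transform; more accurate
    unbiased inverses exist, the algebraic form is used here for brevity."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Usage: Gaussianize a noisy frame, denoise it in the Gaussian domain, invert.
frame = np.random.poisson(lam=20.0, size=(512, 512)).astype(np.float64)
gaussianized = anscombe(frame)          # noise is now approximately N(0, 1)
# ... a Gaussian-noise filter would be applied here ...
restored = inverse_anscombe(gaussianized)
```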
S103, filtering the noise-processed image according to a preset filtering algorithm, and inverse-transforming the noise of the target type in the filtered image according to the inverse of the preset transformation algorithm to obtain and output a first result image;
The noise-processed image is filtered according to the preset filtering algorithm, and the additive noise in the filtered image is converted from a Gaussian distribution back to a Poisson distribution according to the inverse Anscombe transform, yielding the first result image. The first result image is output; in particular, it may be displayed on the screen of the terminal for the user to view.
S104, if the motion vector is a zero vector, performing a matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain and output a second result image.
If the motion vector is a zero vector, a reference pixel is determined in the current frame, and a matching point of the reference pixel is found in each of the preceding and following adjacent frames; the matching point is the point that minimizes var(M2/M1), where M1 is the neighborhood of the reference pixel, M2 is the neighborhood of the candidate matching point, and var denotes the variance (mean-square deviation) calculation. The second result image is then obtained by the matching-point weighted average calculation and is output; in particular, it may be displayed on the screen of the terminal for the user to view.
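Putting steps S101 to S104 together, the per-frame flow can be sketched as below. The function takes the three concrete operations (three-step search, Gaussian-domain filter, matched-point temporal average) as callables; their names and signatures are illustrative assumptions rather than identifiers from this disclosure.

```python
import numpy as np
from typing import Callable, Tuple

def process_frame(prev_frame: np.ndarray,
                  cur_frame: np.ndarray,
                  next_frame: np.ndarray,
                  estimate_motion: Callable[[np.ndarray, np.ndarray, np.ndarray], Tuple[int, int]],
                  gaussian_filter: Callable[[np.ndarray], np.ndarray],
                  temporal_average: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray]) -> np.ndarray:
    """Per-frame pipeline sketch: branch on the estimated motion vector."""
    mv = estimate_motion(prev_frame, cur_frame, next_frame)
    if mv != (0, 0):
        # Moving content: Gaussianize the Poisson noise, filter, invert (S102/S103).
        gaussianized = 2.0 * np.sqrt(cur_frame + 3.0 / 8.0)   # Anscombe transform (assumed form)
        filtered = gaussian_filter(gaussianized)
        return (filtered / 2.0) ** 2 - 3.0 / 8.0              # algebraic inverse
    # Static content: temporal weighted average over matched points (S104).
    return temporal_average(prev_frame, cur_frame, next_frame)
```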
In the embodiment of the invention, an image to be processed is acquired in real time and its motion vector is determined through a preset search algorithm. If the motion vector is not a zero vector, the noise of the target type in the image to be processed is converted into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image; the noise-processed image is filtered according to a preset filtering algorithm, the noise of the target type in the filtered image is inverse-transformed according to the inverse of the preset transformation algorithm, and a first result image is obtained and output. If the motion vector is a zero vector, a matching-point weighted average calculation is performed on the frame images of the two frames adjacent to the current frame to obtain and output a second result image. By first searching for the motion vector of the image and then applying different noise reduction treatments depending on whether the motion vector is a zero vector, the noise reduction effect is improved; the processing speed is high, so the image can be denoised in real time; and because the information of adjacent frames is fully used, the definition of the image is further improved.
Referring to fig. 3, fig. 3 is a flowchart of an image processing method according to another embodiment of the present invention, which comprises:
s201, acquiring an image to be processed in real time, and determining a motion vector of the image to be processed through a three-step search algorithm;
the three-step search method is a motion estimation algorithm, and is mainly characterized by comparing the central point of the square of the search area with eight search points around the square, calculating SAD (Sum of absolute difference, sum of absolute differences) values of the nine points, selecting the point with the smallest SAD value as the central point of the next search, then taking the point obtained in the last step as the center, reducing the step length of the current search to be half of the step length of the last search, then carrying out similar search, and tracking the point with the smallest block error, thus finding the best matching position in the third search, wherein the best matching position is the point of the motion estimation, and the coordinate is the motion vector.
Specifically, an image block of a preset size (e.g., 3×3) is taken as a reference block in the image of the current frame, which is 9 points in total, i.e., a center point and 8 points around. Further, in the two adjacent frames before and after the current frame, the pixel point which is the same as the center point of the reference block is taken as the origin point, the image block which takes the origin point as the center point and has the same size as the reference block is taken as the initial block, namely, the initial block is also composed of 9 points in total. The search block of the reference block is determined in the two adjacent frames with half of a preset maximum search length (e.g., 8) as a search step. Specifically, the pixel values of the corresponding positions of the reference block and the search block are subtracted, absolute values are obtained, and the absolute values are summed to obtain the SAD value, so that when the SAD value is minimum, the block error is minimum, and the search block is most similar to the reference block. The minimum block error point (MBD, minimum block distortion) corresponding to the minimum block error is taken as the center point of the next step.
Further, the step size is halved, namely, the step size is reduced from 4 to 2, the center point is moved to the MBD point of the previous step, 8 points which are 2 from the new center point step size are rearranged around the new center point, and the comparison is performed, so that the center point of the next step is obtained. Then, the step size is halved again, namely, the step size is reduced from 2 to 1, and the obtained MBD point is the point of motion estimation, and the coordinates of the MBD point are motion vectors.
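For concreteness, the following Python/NumPy sketch implements a three-step block-matching search with SAD as the block-error measure, using the 3×3 block and the initial step of 4 (half of a maximum search length of 8) mentioned above. Searching against a single reference frame and the boundary handling are assumptions of the sketch.

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

def three_step_search(ref_frame: np.ndarray, cur_frame: np.ndarray,
                      top: int, left: int, block: int = 3) -> tuple:
    """Three-step search for the block at (top, left) of cur_frame inside ref_frame.
    Returns the motion vector (dy, dx); (0, 0) means no detected motion."""
    template = cur_frame[top:top + block, left:left + block]
    best_dy, best_dx = 0, 0
    step = 4                                        # half of an assumed max search length of 8
    while step >= 1:
        best_cost = None
        # Compare the current center and the 8 points at distance `step` around it.
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = top + best_dy + dy, left + best_dx + dx
                if y < 0 or x < 0 or y + block > ref_frame.shape[0] or x + block > ref_frame.shape[1]:
                    continue
                cost = sad(template, ref_frame[y:y + block, x:x + block])
                if best_cost is None or cost < best_cost:
                    best_cost, cand = cost, (best_dy + dy, best_dx + dx)
        best_dy, best_dx = cand
        step //= 2                                  # 4 -> 2 -> 1, three passes in total
    return best_dy, best_dx
```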
It is then determined whether the motion vector is a zero vector, i.e. whether the coordinates of the MBD point are (0, 0). If so, step S205 is executed; if not, step S202 is executed.
S202, if the motion vector is not a zero vector, transforming the additive noise in the image to be processed into Gaussian-distributed noise according to the Anscombe transformation algorithm to obtain the noise-processed image;
S203, filtering the noise-processed image according to a block-matching filtering algorithm;
specifically, the filtering step includes: confirming reference blocks in the noise processing image, and confirming similar blocks of a plurality of reference blocks in the noise processing image according to a preset matching rule;
integrating a plurality of similar blocks in the noise processing image into a three-dimensional matrix Q (P), and scaling the three-dimensional matrix Q (P) by wiener filtering to realize filtering, wherein a scaling formula is as follows:
N(P)=Twein_inverse(wp·Twein(Q1(P)));
wherein N (P) represents a coefficient matrix for use as a coefficient when weighted; twaiin_reverse () represents the three-dimensional inverse transform, wp is the wiener filter coefficient; twain (Q (P)) represents a three-dimensional transformation of a three-dimensional matrix;
the three-dimensional matrix is transformed back to image estimation through three-dimensional inverse transformation, and a plurality of similar blocks in the noise processing image are restored to the original position of the noise processing image in a mode that the value of the similar block at each corresponding position is weighted with the coefficient matrix N (P) to obtain the gray value of each pixel.
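A compact Python/NumPy sketch of this grouping-and-scaling step, read in the spirit of BM3D-style collaborative Wiener filtering, is given below. The use of an FFT as the three-dimensional transform, the empirical Wiener coefficient wp = |T|^2 / (|T|^2 + sigma^2), and the aggregation weighting are assumptions of the sketch rather than details fixed by the text.

```python
import numpy as np

def wiener_scale_group(group: np.ndarray, sigma: float) -> tuple:
    """Apply the scaling N(P) = Twein_inverse(wp * Twein(Q(P))) to one group
    of similar blocks stacked as a 3-D array Q(P). The 3-D transform is taken
    to be an FFT and wp is the empirical Wiener coefficient (assumptions)."""
    spectrum = np.fft.fftn(group)                       # Twein(Q(P))
    power = np.abs(spectrum) ** 2
    wp = power / (power + sigma ** 2)                   # Wiener filter coefficients
    filtered = np.real(np.fft.ifftn(wp * spectrum))     # Twein_inverse(...)
    # Aggregation weight for this group (a common BM3D-style choice, assumed here).
    weight = 1.0 / (sigma ** 2 * np.sum(wp ** 2) + 1e-12)
    return filtered, weight

def aggregate(image_shape: tuple, groups: list, positions: list,
              weights: list, block: int) -> np.ndarray:
    """Weighted aggregation: return every filtered block to its original
    position, accumulating value * weight and normalizing at the end."""
    acc = np.zeros(image_shape)
    norm = np.zeros(image_shape)
    for group, pos_list, w in zip(groups, positions, weights):
        for blk, (y, x) in zip(group, pos_list):
            acc[y:y + block, x:x + block] += w * blk
            norm[y:y + block, x:x + block] += w
    return acc / np.maximum(norm, 1e-12)
```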
S204, inverse-transforming the additive noise in the filtered image according to the inverse of the Anscombe transformation algorithm, so that the additive noise in the filtered image is converted from a Gaussian distribution back to a Poisson distribution, and obtaining and outputting the first result image;
S205, if the motion vector is a zero vector, performing a matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain and output the second result image.
Specifically, a reference pixel Icur(x, y) is selected in the current frame and a neighborhood of the reference pixel is determined. The size of the neighborhood is a preset value and can be set as needed; for example, a 3×3 reference block centered on the reference pixel may be determined, and the pixels of that block other than the reference pixel itself form the neighborhood of the reference pixel. A matching point Ineighbor(x', y') of the reference pixel is then searched for in each of the preceding and following adjacent frames; the matching point is the point that minimizes var(M2/M1), where M1 is the neighborhood of the reference pixel, M2 is the neighborhood of the candidate matching point, and var denotes the variance (mean-square deviation) calculation;
A weighted average is first computed between the matching point in the preceding adjacent frame and the reference pixel of the current frame, and then a second weighted average is computed between the resulting gray value and the matching point in the following adjacent frame, giving the gray value of the target pixel;
Each weighted average over a matching point is computed according to the following formula:
I(x, y) = w * Ineighbor(x', y') + (1 - w) * Icur(x, y)
where w is a weight value, w = exp(-var(M2/M1)/sigma^2), sigma is a preset constant that can be set and adjusted according to the actual effect, and w is at most 0.5; I(x, y) is the gray value of the target pixel; Icur(x, y) is the gray value of the reference pixel or the previously calculated gray value (in the first weighted average, Icur(x, y) is the gray value of the reference pixel; in the second, it is the gray value obtained from the first calculation); Ineighbor(x', y') is the gray value of the matching-point pixel in the corresponding adjacent frame; and (x, y) and (x', y') are the coordinates of the corresponding pixels;
The image formed by the target pixels is taken as the second result image, and the second result image is output.
In this example, an image to be processed is acquired in real time and its motion vector is determined through a preset search algorithm. If the motion vector is not a zero vector, the noise of the target type in the image to be processed is converted into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image; the noise-processed image is filtered according to a preset filtering algorithm, the noise of the target type in the filtered image is inverse-transformed according to the inverse of the preset transformation algorithm, and a first result image is obtained and output. If the motion vector is a zero vector, a matching-point weighted average calculation is performed on the frame images of the two frames adjacent to the current frame to obtain and output a second result image. By first searching for the motion vector of the image and then applying different noise reduction treatments depending on whether the motion vector is a zero vector, the noise reduction effect is improved; the processing speed is high, so the image can be denoised in real time; and because the information of adjacent frames is fully used, the definition of the image is further improved.
Referring to fig. 4, an embodiment of the present invention further provides an image processing apparatus, which may be a terminal or a module in the terminal, and may implement the above image processing method, where the apparatus includes:
The motion estimation module 301 is configured to acquire an image to be processed in real time and determine a motion vector of the image to be processed through a preset search algorithm.
The noise transformation module 302 is configured to, if the motion vector is not a zero vector, convert the noise of the target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image.
The filtering module 303 is configured to filter the noise-processed image according to a preset filtering algorithm.
The noise transformation module 302 is further configured to inverse-transform the noise of the target type in the filtered image according to the inverse of the preset transformation algorithm to obtain a first result image.
The output module 304 is configured to output the first result image.
The calculation module 305 is configured to, if the motion vector is a zero vector, perform a matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain a second result image.
The output module 304 is further configured to output the second result image.
Further, the motion estimation module 301 is also configured to determine the two frames adjacent to the current frame of the image to be processed and to determine the motion vector within those adjacent frames using a three-step search algorithm.
The noise transformation module 302 is specifically configured to convert the additive noise in the image to be processed into Gaussian-distributed noise according to the Anscombe transformation algorithm, so as to obtain the noise-processed image.
The filtering module 303 is further configured to perform filtering preprocessing on the noise-processed image to obtain a preliminary filtered image of the noise-processed image;
to determine a reference block in the noise-processed image and to find a plurality of similar blocks of the reference block in the noise-processed image according to a preset matching rule;
to stack the similar blocks into a three-dimensional matrix and to scale the three-dimensional matrix by Wiener filtering to perform the filtering, where the scaling formula is:
N(P) = Twein_inverse(wp · Twein(Q(P)));
where N(P) denotes a coefficient matrix; Twein_inverse() denotes the three-dimensional inverse transform; wp is the Wiener filter coefficient; and Twein(Q(P)) denotes the three-dimensional transform of the three-dimensional matrix;
and to transform the three-dimensional matrix back into image estimates through the three-dimensional inverse transform and return the similar blocks to their original positions in the noise-processed image, weighting the values of the similar blocks at each corresponding position with the coefficient matrix N(P) to obtain the gray value of each pixel.
The noise transformation module 302 is further specifically configured to convert the additive noise in the filtered image from a Gaussian distribution back to a Poisson distribution according to the inverse of the Anscombe transformation algorithm.
The calculation module 305 is further configured to select a reference pixel in the current frame and determine a neighborhood of the reference pixel;
to search for a matching point of the reference pixel in each of the preceding and following adjacent frames, where each matching point minimizes var(M2/M1), M1 is the neighborhood of the reference pixel, M2 is the neighborhood of the matching point, and var denotes the variance (mean-square deviation) calculation;
to compute a weighted average between the matching point in the preceding adjacent frame and the reference pixel of the current frame, and then compute a second weighted average between the resulting gray value and the matching point in the following adjacent frame to obtain the gray value of the target pixel;
each weighted average over a matching point being computed according to the following formula:
I(x, y) = w * Ineighbor(x', y') + (1 - w) * Icur(x, y)
where w is a weight value, w = exp(-var(M2/M1)/sigma^2), sigma is a preset constant that can be set and adjusted according to the actual effect, and w is at most 0.5; I(x, y) is the gray value of the target pixel; Icur(x, y) is the gray value of the reference pixel or the previously calculated gray value (in the first weighted average, Icur(x, y) is the gray value of the reference pixel; in the second, it is the gray value obtained from the first calculation); Ineighbor(x', y') is the gray value of the matching-point pixel in the corresponding adjacent frame; and (x, y) and (x', y') are the coordinates of the corresponding pixels;
and to take the image formed by the target pixels as the second result image and output the second result image.
In this embodiment, an image to be processed is acquired in real time and its motion vector is determined through a preset search algorithm. If the motion vector is not a zero vector, the noise of the target type in the image to be processed is converted into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image; the noise-processed image is filtered according to a preset filtering algorithm, the noise of the target type in the filtered image is inverse-transformed according to the inverse of the preset transformation algorithm, and a first result image is obtained and output. If the motion vector is a zero vector, a matching-point weighted average calculation is performed on the frame images of the two frames adjacent to the current frame to obtain and output a second result image. By first searching for the motion vector of the image and then applying different noise reduction treatments depending on whether the motion vector is a zero vector, the noise reduction effect is improved; the processing speed is high, so the image can be denoised in real time; and because the information of adjacent frames is fully used, the definition of the image is further improved.
Referring to fig. 5, the embodiment of the present invention further provides a terminal 4, including a memory 401, a processor 402, and a computer program stored in the memory 401 and executable on the processor 402, where the processor 402 implements the steps of the image processing method in the embodiments shown in the foregoing fig. 1 to 3 when executing the computer program.
The processor 402 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
Memory 401 may include read-only memory and random access memory, and may also include nonvolatile random access memory.
The embodiment of the invention also provides a computer readable storage medium, which stores a computer program, and the computer program realizes the steps of the image processing method in the embodiment shown in fig. 1 to 3 when being executed by a processor.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like.
The above computer readable storage medium may include: any entity or device capable of carrying the above computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The foregoing describes an image processing method, an image processing device, a terminal, and a computer readable storage medium according to the present invention. Those skilled in the art may modify the specific embodiments and the application scope according to the ideas of the embodiments of the invention; therefore, this disclosure should not be construed as limiting the invention.

Claims (5)

1. An image processing method, comprising:
acquiring an image to be processed in real time, and determining a motion vector of the image to be processed through a preset search algorithm;
if the motion vector is not a zero vector, converting noise of a target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm to obtain a noise-processed image;
filtering the noise-processed image according to a preset filtering algorithm, inverse-transforming the noise of the target type in the filtered image according to an inverse transformation algorithm of the preset transformation algorithm, and obtaining and outputting a first result image;
if the motion vector is a zero vector, performing a matching-point weighted average calculation on frame images of the two frames adjacent to the current frame to obtain and output a second result image;
wherein determining the motion vector of the image to be processed through the preset search algorithm comprises:
determining the two frames adjacent to the current frame of the image to be processed;
determining the motion vector within the two adjacent frames through a three-step search algorithm;
wherein converting the noise of the target type in the image to be processed into Gaussian-distributed noise according to the preset transformation algorithm to obtain the noise-processed image comprises:
transforming additive noise in the image to be processed into Gaussian-distributed noise according to an Anscombe transformation algorithm to obtain the noise-processed image;
wherein filtering the noise-processed image according to the preset filtering algorithm comprises:
determining a reference block in the noise-processed image, and finding a plurality of similar blocks of the reference block in the noise-processed image according to a preset matching rule;
stacking the plurality of similar blocks in the noise-processed image into a three-dimensional matrix;
and scaling the three-dimensional matrix by Wiener filtering to perform the filtering, wherein the scaling formula is:
N(P) = Twein_inverse(wp · Twein(Q(P)));
wherein N(P) denotes a coefficient matrix; Twein_inverse() denotes a three-dimensional inverse transform; wp is a Wiener filter coefficient; and Twein(Q(P)) denotes a three-dimensional transform of the three-dimensional matrix;
transforming the three-dimensional matrix back into image estimates through the three-dimensional inverse transform, and restoring the plurality of similar blocks to their original positions in the noise-processed image, wherein the gray value of each pixel is obtained by weighting the values of the similar blocks at each corresponding position with the coefficient matrix;
wherein performing the matching-point weighted average calculation on the frame images of the two frames adjacent to the current frame to obtain and output the second result image comprises:
selecting a reference pixel in the current frame, and determining a neighborhood of the reference pixel;
searching for a matching point of the reference pixel in each of the two adjacent frames, wherein each matching point minimizes var(M2/M1), M1 is the neighborhood of the reference pixel, M2 is the neighborhood of the matching point, and var denotes a variance (mean-square deviation) calculation;
performing a weighted average calculation between the matching point of the preceding adjacent frame and the reference pixel of the current frame, and then performing a weighted average calculation between the resulting gray value and the matching point of the following adjacent frame to obtain a gray value of a target pixel:
wherein the formula of the two weighted average calculations is:
I(x, y) = w * Ineighbor(x', y') + (1 - w) * Icur(x, y)
wherein w is a weight value, w = exp(-var(M2/M1)/sigma^2), sigma is a constant, and w is at most 0.5; I(x, y) is the gray value of the target pixel; Icur(x, y) is the gray value of the reference pixel or the previously calculated gray value; Ineighbor(x', y') is the gray value of the matching-point pixel; and (x, y) and (x', y') are the coordinates of the corresponding pixel points;
and taking the image formed by the target pixels as the second result image, and outputting the second result image.
2. The method of claim 1, wherein inverse-transforming the noise of the target type in the filtered image according to the inverse transformation algorithm of the preset transformation algorithm comprises:
converting the additive noise in the filtered image from a Gaussian distribution to a Poisson distribution according to an inverse transformation algorithm of the Anscombe transformation algorithm.
3. An image processing apparatus, comprising:
the motion estimation module is used for acquiring an image to be processed in real time and determining a motion vector of the image to be processed through a preset search algorithm;
the noise transformation module is used for converting the noise of the target type in the image to be processed into Gaussian-distributed noise according to a preset transformation algorithm if the motion vector is not a zero vector, so as to obtain a noise-processed image;
the filtering module is used for filtering the noise-processed image according to a preset filtering algorithm;
the noise transformation module is further used for inverse-transforming the noise of the target type in the filtered image according to an inverse transformation algorithm of the preset transformation algorithm to obtain a first result image;
the output module is used for outputting the first result image;
the calculation module is used for performing a matching-point weighted average calculation on frame images of the two frames adjacent to the current frame if the motion vector is a zero vector, so as to obtain a second result image;
the output module is further used for outputting the second result image;
the motion estimation module is further used for determining the two frames adjacent to the current frame of the image to be processed, and determining the motion vector within the two adjacent frames through a three-step search algorithm;
the noise transformation module is further used for converting additive noise in the image to be processed into Gaussian-distributed noise according to an Anscombe transformation algorithm to obtain the noise-processed image; the filtering module is further used for determining a reference block in the noise-processed image and finding a plurality of similar blocks of the reference block in the noise-processed image according to a preset matching rule;
stacking the plurality of similar blocks in the noise-processed image into a three-dimensional matrix;
and scaling the three-dimensional matrix by Wiener filtering to perform the filtering, wherein the scaling formula is:
N(P) = Twein_inverse(wp · Twein(Q(P)));
wherein N(P) denotes a coefficient matrix; Twein_inverse() denotes a three-dimensional inverse transform; wp is a Wiener filter coefficient; and Twein(Q(P)) denotes a three-dimensional transform of the three-dimensional matrix;
transforming the three-dimensional matrix back into image estimates through the three-dimensional inverse transform, and restoring the plurality of similar blocks to their original positions in the noise-processed image, wherein the gray value of each pixel is obtained by weighting the values of the similar blocks at each corresponding position with the coefficient matrix;
the calculation module is further used for selecting a reference pixel in the current frame and determining a neighborhood of the reference pixel; searching for a matching point of the reference pixel in each of the two adjacent frames, wherein each matching point minimizes var(M2/M1), M1 is the neighborhood of the reference pixel, M2 is the neighborhood of the matching point, and var denotes a variance (mean-square deviation) calculation; performing a weighted average calculation between the matching point of the preceding adjacent frame and the reference pixel of the current frame, and then performing a weighted average calculation between the resulting gray value and the matching point of the following adjacent frame to obtain a gray value of a target pixel, wherein the formula of the two weighted average calculations is:
I(x, y) = w * Ineighbor(x', y') + (1 - w) * Icur(x, y)
wherein w is a weight value, w = exp(-var(M2/M1)/sigma^2), sigma is a constant, and w is at most 0.5; I(x, y) is the gray value of the target pixel; Icur(x, y) is the gray value of the reference pixel or the previously calculated gray value; Ineighbor(x', y') is the gray value of the matching-point pixel; and (x, y) and (x', y') are the coordinates of the corresponding pixel points; and taking an image formed by the target pixels as the second result image, and outputting the second result image.
4. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the image processing method according to any one of claims 1 to 2 when executing the computer program.
5. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 2.
CN202010642524.6A 2020-07-06 2020-07-06 Image processing method, device, terminal and computer readable storage medium Active CN111784733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010642524.6A CN111784733B (en) 2020-07-06 2020-07-06 Image processing method, device, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010642524.6A CN111784733B (en) 2020-07-06 2020-07-06 Image processing method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111784733A CN111784733A (en) 2020-10-16
CN111784733B true CN111784733B (en) 2024-04-16

Family

ID=72758083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642524.6A Active CN111784733B (en) 2020-07-06 2020-07-06 Image processing method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111784733B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311147A (en) * 2021-05-06 2022-11-08 影石创新科技股份有限公司 Image processing method, image processing device, electronic equipment and storage medium


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102201113A (en) * 2010-03-23 2011-09-28 索尼公司 Image processing apparatus, image processing method, and program
CN102014240A (en) * 2010-12-01 2011-04-13 深圳市蓝韵实业有限公司 Real-time medical video image denoising method
WO2014082441A1 (en) * 2012-11-30 2014-06-05 华为技术有限公司 Noise elimination method and apparatus
CN104424628A (en) * 2013-09-02 2015-03-18 南京理工大学 CCD-image-based method for reducing noise by using frame-to-frame correlation
CN111353948A (en) * 2018-12-24 2020-06-30 Tcl集团股份有限公司 Image noise reduction method, device and equipment
CN110738612A (en) * 2019-09-27 2020-01-31 深圳市安健科技股份有限公司 Method for reducing noise of X-ray perspective image and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Down-sampling-based fast block-matching search algorithm and its noise-reduction application; Zhang Sha; Tian Fengchun; Tan Hongtao; Journal of Computer Applications (计算机应用); 2010-10-01 (10); full text *

Also Published As

Publication number Publication date
CN111784733A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
US11151690B2 (en) Image super-resolution reconstruction method, mobile terminal, and computer-readable storage medium
CN108694705B (en) Multi-frame image registration and fusion denoising method
CN110163237B (en) Model training and image processing method, device, medium and electronic equipment
Liu et al. Robust multi-frame super-resolution based on spatially weighted half-quadratic estimation and adaptive BTV regularization
EP2164040B1 (en) System and method for high quality image and video upscaling
Zeng et al. A generalized DAMRF image modeling for superresolution of license plates
CN106780336B (en) Image reduction method and device
Jeong et al. Multi-frame example-based super-resolution using locally directional self-similarity
CN113658085B (en) Image processing method and device
Pok et al. Efficient block matching for removing impulse noise
CN113012061A (en) Noise reduction processing method and device and electronic equipment
CN111784733B (en) Image processing method, device, terminal and computer readable storage medium
CN113793272B (en) Image noise reduction method and device, storage medium and terminal
CN110958363A (en) Image processing method and device, computer readable medium and electronic device
CN112801879B (en) Image super-resolution reconstruction method and device, electronic equipment and storage medium
CN113935934A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Zhang et al. Nonlocal edge-directed interpolation
Goto et al. Learning-based super-resolution image reconstruction on multi-core processor
CN110689486A (en) Image processing method, device, equipment and computer storage medium
CN111062279B (en) Photo processing method and photo processing device
US9064190B2 (en) Estimating pixel values in digital image processing
Jiang et al. Learning in-place residual homogeneity for single image detail enhancement
CN110738612B (en) Method for reducing noise of X-ray perspective image and computer readable storage medium
JP2020181402A (en) Image processing system, image processing method and program
Kishan et al. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant