WO2021017809A1 - Video denoising method, device and computer-readable storage medium - Google Patents
- Publication number: WO2021017809A1 (PCT/CN2020/101806)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06T5/70 — Denoising; Smoothing (under G06T5/00, Image enhancement or restoration)
- H04N5/21 — Circuitry for suppressing or minimising disturbance, e.g. moiré or halo (under H04N5/14, Picture signal circuitry for video frequency region)
- G06T7/248 — Analysis of motion using feature-based methods involving reference images or patches (under G06T7/20, Analysis of motion)
- H04N21/845 — Structuring of content, e.g. decomposing content into time segments (under H04N21/00, Selective content distribution)
- H04N5/145 — Movement estimation (under H04N5/144, Movement detection)
- G06T2207/10016 — Video; Image sequence
- G06T2207/20021 — Dividing image into blocks, subimages or windows
- G06T2207/20076 — Probabilistic image processing
Definitions
- This application relates to the field of video processing technology, for example, to a video denoising method, device, and computer-readable storage medium.
- Image denoising has always been a very important direction in the field of image processing.
- In recent years, photography technology has undergone dramatic changes, from professional digital single-lens reflex (SLR) cameras at the beginning to the simpler point-and-shoot cameras of smartphones. Limited by aperture and sensor size, a smartphone generates more noise than an SLR, so the captured image or video is less clear than the original scene. This not only degrades the visual effect, but also, when a moving target must be acquired or recognized from the image or video, reduces the accuracy of that acquisition or recognition.
- A better denoising algorithm is therefore needed to improve image quality.
- Adaptive denoising algorithms estimate the noise intensity and then dynamically adjust the denoising-related parameters, aiming to leave no residual noise while preserving image detail as much as possible.
- However, adaptive denoising algorithms suffer from low accuracy when estimating the image noise intensity of the current frame.
- Noise estimation algorithms mainly fall into the following two categories:
- the first category estimates noise intensity from the current image frame alone.
- the steps are as follows: 1) divide the image or video frame to be estimated into sub-image blocks of the same size; 2) compute the variance of each sub-image block; 3) according to the variance values of the sub-image blocks, select a certain proportion of the blocks with the smallest variances to estimate the noise intensity, thereby obtaining the noise intensity of the current image frame.
- This algorithm produces relatively large errors on images rich in detail, because detail is easily mistaken for noise.
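The first-category estimator described above can be sketched as follows; the 8×8 block size and the 10% keep ratio are illustrative assumptions, not values fixed by the application:

```python
import numpy as np

def estimate_noise_spatial(frame, block=8, keep_ratio=0.1):
    """Estimate the noise variance of a single frame from its low-variance blocks."""
    h, w = frame.shape
    # 1) split the frame into equal-sized sub-image blocks
    blocks = (frame[:h - h % block, :w - w % block]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    # 2) per-block variance
    variances = blocks.var(axis=1)
    # 3) average the smallest fraction of the variances as the noise estimate
    k = max(1, int(len(variances) * keep_ratio))
    return float(np.sort(variances)[:k].mean())
```

On a flat region corrupted by Gaussian noise, the estimate approaches the true noise variance; on detailed content the low-variance blocks bias it, which is exactly the weakness the text notes.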
- the second category estimates noise intensity from the current frame and the previous frame.
- the steps are as follows: 1) divide the current frame and the previous frame of the video to be estimated into one-to-one corresponding sub-image blocks of the same size; 2) compute the difference between each pair of corresponding sub-image blocks and obtain the variance of each difference block; 3) according to the variance values, select a certain proportion of the blocks with the smallest variances to estimate the noise intensity, thereby obtaining the noise intensity of the current image frame.
- This algorithm easily misjudges when the brightness changes between consecutive video frames or when there is large-scale motion between them.
- the resulting unreasonable denoising parameters cause flicker: one frame is clear and the next blurred, or one frame has residual noise and the next does not.
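For comparison, a minimal sketch of the second-category (frame-difference) estimator; the block size, keep ratio, and the final halving (compensating for the fact that the difference of two noisy frames doubles the noise variance) are assumptions of this sketch:

```python
import numpy as np

def estimate_noise_temporal(cur, prev, block=8, keep_ratio=0.1):
    """Estimate noise variance from the per-block variance of the frame difference."""
    diff = cur.astype(np.float64) - prev.astype(np.float64)
    h, w = diff.shape
    blocks = (diff[:h - h % block, :w - w % block]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    variances = blocks.var(axis=1)
    k = max(1, int(len(variances) * keep_ratio))
    # differencing two independent noisy frames doubles the noise variance
    return float(np.sort(variances)[:k].mean() / 2)
```

A brightness change or large motion inflates the difference variances, which is why this category misjudges in those cases.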
- VBM3D: video block-matching and 3D filtering
- VBM4D: video block-matching and 4D filtering
- the embodiments of the present invention provide a video denoising method and device, and a computer-readable storage medium, which can improve the accuracy of noise intensity estimation.
- the embodiment of the present invention provides a video denoising method, including:
- the embodiment of the present invention also provides a computer-readable storage medium, which stores one or more programs that can be executed by one or more processors to implement the video denoising method described above.
- the embodiment of the present invention also provides a video denoising device, including a processor and a memory, the processor and the memory being electrically coupled, and the processor being configured to execute a program stored in the memory to implement the video denoising method described above.
- the embodiment of the present invention also provides a video denoising device, including a noise statistics module, a noise estimation module, and a video denoising module;
- the noise statistics module is configured to divide each video frame in the input video frame sequence into sub-image blocks, and calculate the block variance of each sub-image block;
- the noise estimation module is configured to calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determine the noise intensity of the current video frame according to the calculated average variance, and select the filtering strength and noise characteristic curve that match the noise intensity;
- the video denoising module is configured to filter the current video frame according to the filtering strength and the noise characteristic curve.
- FIG. 1 is a schematic flowchart of an exemplary video denoising method provided by an embodiment of the present invention
- FIG. 2 is a schematic diagram of the principle of smoothing noise intensity by a first-in first-out queue provided by an embodiment of the present invention
- FIG. 3 is a schematic flowchart of an exemplary video denoising process provided by an embodiment of the present invention.
- FIG. 4 is a schematic flowchart of an exemplary process of spatial-domain denoising provided by an embodiment of the present invention
- FIG. 5 is a schematic diagram of a calculation principle of a motion vector based on motion compensation according to an embodiment of the present invention
- FIG. 6 is a schematic diagram of a mapping relationship between motion intensity and mixing coefficient provided by an embodiment of the present invention.
- FIG. 7 is a schematic diagram of an exemplary structure of a video denoising device provided by an embodiment of the present invention.
- an embodiment of the present invention provides a video denoising method, which includes the following steps:
- Step 101 Perform sub-image block division on each video frame in the input video frame sequence, and calculate the block variance of each sub-image block.
- the calculating the block variance of each sub-image block includes:
- Calculate the spatial variance of each sub-image block; calculate the temporal variance between each sub-image block in the current video frame and the sub-image block at the corresponding position in the previous video frame; and take the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
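A minimal sketch of this block-variance computation, assuming 8×8 blocks and single-channel frames:

```python
import numpy as np

def block_variances(cur, prev, block=8):
    """Per-block variance: the smaller of the spatial variance and the
    temporal (frame-difference) variance, as in step 101."""
    cur = cur.astype(np.float64)
    prev = prev.astype(np.float64)
    h, w = cur.shape

    def to_blocks(img):
        return (img[:h - h % block, :w - w % block]
                .reshape(h // block, block, w // block, block)
                .swapaxes(1, 2)
                .reshape(-1, block * block))

    spatial = to_blocks(cur).var(axis=1)
    temporal = to_blocks(cur - prev).var(axis=1)
    return np.minimum(spatial, temporal)
```

Taking the minimum lets static detailed regions fall back on the low temporal variance, while moving regions fall back on the spatial variance, mitigating the weaknesses of both single-category estimators.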
- Step 102 Calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determine the noise intensity of the current video frame according to the calculated average variance, and select the filtering strength and noise characteristic curve that match the noise intensity.
- the calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variance includes:
- the first n sub-image blocks may be the first N% of all sub-image blocks in the current video frame; for example, N may be set to 10, i.e., the first 10% of the sub-image blocks after sorting.
- the determining the noise intensity of the current video frame according to the calculated average variance includes:
- If the calculated average variance is less than the preset variance value, the noise intensity of the current video frame is recorded as 0; otherwise, the calculated average variance is used as the noise intensity of the current video frame.
- the method before selecting the filter intensity and noise characteristic curve matching the noise intensity, the method further includes:
- the filtering strength includes spatial filtering strength and temporal filtering strength.
- the sub-image blocks are sorted by variance from small to large, the variances of the first K sub-image blocks (K being the second preset value) are accumulated, and the average variance is computed from the accumulated sum and K. If this average variance is less than the third preset value, 0 is written into the first-in first-out (FIFO) queue; otherwise the average variance itself is written into the FIFO. As shown in Figure 2, the depth of the FIFO may be 16, i.e., the noise intensity data of the last 16 frames are stored.
- Averaging all the data in the FIFO yields the smoothed noise intensity (noise level) of the current video frame; then, according to the magnitude of this noise intensity, the matching spatial filtering strength (spatial denoise strength), temporal filtering strength (temporal denoise strength), and corresponding noise characteristic curve (noise curve) are selected.
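The FIFO smoothing can be sketched as follows; the threshold parameter stands in for the unspecified "third preset value":

```python
from collections import deque

class NoiseSmoother:
    """Depth-limited FIFO of per-frame noise intensities, averaged each frame."""

    def __init__(self, depth=16, min_variance=1.0):
        # min_variance plays the role of the "third preset value"
        self.fifo = deque(maxlen=depth)
        self.min_variance = min_variance

    def update(self, avg_variance):
        # below the threshold the frame is treated as noise-free (write 0)
        self.fifo.append(0.0 if avg_variance < self.min_variance else avg_variance)
        # the smoothed noise intensity is the mean of the stored values
        return sum(self.fifo) / len(self.fifo)
```

Because the queue holds the last 16 frames, a single misestimated frame shifts the smoothed value only slightly, which is what suppresses the frame-to-frame flicker described earlier.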
- Step 103 Filter the current video frame according to the filter strength and noise characteristic curve.
- the step 103 includes:
- In an embodiment, the spatial filtering algorithm is the BM3D denoising algorithm, and in its Wiener filtering operation the Wiener coefficient is scaled in proportion to the brightness value of the pixel according to the noise characteristic curve.
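The application does not give the exact scaling rule, so the sketch below only illustrates the idea: the noise power entering the Wiener shrinkage factor is looked up from a brightness-dependent noise characteristic curve (here a hypothetical 256-entry table):

```python
import numpy as np

def wiener_gain(signal_power, brightness, noise_curve):
    """Wiener shrinkage S^2 / (S^2 + sigma^2), with sigma^2 taken from a
    per-brightness noise characteristic curve (256-entry lookup table)."""
    sigma2 = noise_curve[np.clip(brightness, 0, 255).astype(int)]
    return signal_power / (signal_power + sigma2)
```

With sensor noise typically varying with brightness, making sigma² brightness-dependent filters dark and bright regions with different strengths instead of one global value.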
- Step 103 includes five related operations: spatial denoising (spatial denoise), motion estimation, motion detection, mixing coefficient mapping (motion2α), and blending. The inputs include the current video frame to be filtered f_in(n), the filtered previous video frame f_out(n-1), and the spatial filtering strength, temporal filtering strength, noise characteristic curve, and noise intensity output in step 102.
- the spatial denoising operation of this application may use the BM3D denoising algorithm; this application improves the algorithm so that it better matches the characteristics of the noise introduced by the video capture terminal.
- In the Wiener filtering operation, the Wiener coefficient is scaled according to the brightness value of the pixel and its noise characteristic curve.
- the spatial denoising operation does not have to use the BM3D algorithm; filtering algorithms such as guided filtering and bilateral filtering are also possible, though their processing effect is slightly worse than BM3D.
- Motion estimation operation: the current video frame to be filtered f_in(n) is divided into sub-image blocks according to a preset block size; the sub-image blocks may overlap. For each sub-image block, a minimum mean-squared-error (MSE) operation is performed on all candidate blocks within a certain search range centered on the corresponding position in the filtered previous video frame f_out(n-1). The candidate block with the smallest MSE is taken as the best matching block of the current sub-image block, and the motion vector is the coordinates of the best matching block in the previous frame minus the coordinates of the current sub-image block, as shown in Figure 5.
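The block-matching search can be sketched as follows; the search radius is an assumption:

```python
import numpy as np

def find_motion_vector(cur_block, prev_frame, y, x, radius=4):
    """Full search in a (2*radius+1)^2 window around (y, x) in the previous
    frame; return the (dy, dx) of the minimum-MSE match."""
    b = cur_block.shape[0]
    h, w = prev_frame.shape
    best = (0, 0)
    best_mse = np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + b <= h and 0 <= xx and xx + b <= w:
                cand = prev_frame[yy:yy + b, xx:xx + b]
                mse = float(np.mean((cur_block - cand) ** 2))
                if mse < best_mse:
                    best_mse, best = mse, (dy, dx)
    return best
```

The returned displacement is exactly "best-match coordinates minus current-block coordinates" from the text; a production implementation would use a faster search than this exhaustive one.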
- Motion detection operation: after motion estimation, each sub-image block in the current video frame to be filtered has a best matching block in the previous video frame. Each sub-image block and its corresponding best matching block undergo a sum-of-absolute-differences (SAD) operation, and the SAD value of each sub-image block is taken as its motion intensity value, where (i, j) is the two-dimensional coordinate of the pixel to be filtered, 0 ≤ i < M, 0 ≤ j < N.
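The SAD-based motion intensity is then simply:

```python
import numpy as np

def motion_intensity(cur_block, matched_block):
    """Sum of absolute differences between a block and its best match."""
    return float(np.abs(cur_block.astype(np.float64)
                        - matched_block.astype(np.float64)).sum())
```

A block whose best match still differs strongly from it is moving (or changing), so its SAD serves directly as the motion intensity fed to the mixing-coefficient mapping.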
- the mixing coefficient α can be obtained by the mapping shown in Figure 6, where the abscissa is the motion intensity value and the ordinate is the mixing coefficient value.
- the mapping relationship in Figure 6 is determined by three preset values: Base_motion, blend_slope, and Top_motion.
- Blending operation: from the mixing coefficient α obtained in the mixing coefficient mapping operation, the motion vector obtained in the motion estimation operation, and the spatially filtered image obtained in the spatial denoising operation, the final output image is obtained by weighted averaging, calculated as follows:
- f_out(n,i,j) = f_in_spa(n,i,j) * (1-α) + f_out(n-1, i+mvi, j+mvj) * α
- (i, j) is the two-dimensional coordinates of the pixel to be filtered
- (mvi, mvj) is the motion vector of the pixel to be filtered
- n is the n-th video frame in the video frame sequence
- f_in_spa(n,i,j) is the pixel to be filtered of the n-th video frame after spatial filtering
- f_out(n-1, i+mvi, j+mvj) is the pixel of the (n-1)-th video frame after filtering.
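The blending formula can be written in vectorized form; treating the motion vectors as per-pixel displacement fields is an assumption of this sketch:

```python
import numpy as np

def blend(f_in_spa, f_out_prev, alpha, mvi, mvj):
    """f_out(n) = f_in_spa(n)*(1-alpha) + f_out(n-1)[i+mvi, j+mvj]*alpha,
    with mvi/mvj given as per-pixel integer displacement fields."""
    h, w = f_in_spa.shape
    i, j = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ii = np.clip(i + mvi, 0, h - 1)   # clamp warped coordinates to the frame
    jj = np.clip(j + mvj, 0, w - 1)
    warped = f_out_prev[ii, jj]
    return f_in_spa * (1 - alpha) + warped * alpha
```

With α near 1 the output leans on the motion-compensated previous frame (strong temporal filtering); with α near 0 it leans on the spatially filtered current frame.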
- the video denoising method provided by the embodiment of the present invention divides each video frame in the input video frame sequence into sub-image blocks and calculates the block variance of each sub-image block; calculates the average variance of all sub-image blocks in the current video frame from the calculated block variances; determines the noise intensity of the current video frame from the average variance; selects the filtering strength and noise characteristic curve that match the noise intensity; and filters the current video frame accordingly. This effectively improves the accuracy of the noise intensity estimation. Because the filtering strength and noise characteristic curve are matched to the estimated noise intensity, noise can be removed effectively while preventing an overestimated noise intensity from destroying image detail, thereby achieving better overall denoising performance.
- the embodiment of the present invention also provides a computer-readable storage medium, which stores one or more programs that can be executed by one or more processors to perform the following operations:
- Divide each video frame in the input video frame sequence into sub-image blocks and calculate the block variance of each sub-image block; calculate the average variance of all sub-image blocks in the current video frame from the calculated block variances; determine the noise intensity of the current video frame from the average variance; select the filtering strength and noise characteristic curve that match the noise intensity; and filter the current video frame according to the filtering strength and noise characteristic curve.
- the calculating the block variance of each sub-image block includes:
- Calculate the spatial variance of each sub-image block; calculate the temporal variance between each sub-image block in the current video frame and the sub-image block at the corresponding position in the previous frame; and take the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
- the calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variance includes:
- the determining the noise intensity of the current video frame according to the calculated average variance includes:
- If the calculated average variance is less than the preset variance value, the noise intensity of the current video frame is recorded as 0; otherwise, the calculated average variance is used as the noise intensity of the current video frame.
- the operation before selecting the filter intensity and noise characteristic curve matching the noise intensity, the operation further includes:
- the filtering strength includes spatial filtering strength and temporal filtering strength.
- the spatial filtering algorithm is the block-matching and 3D filtering (BM3D) denoising algorithm, and in its Wiener filtering operation the Wiener coefficient is scaled in proportion to the brightness value of the pixel according to the noise characteristic curve.
- An embodiment of the present invention also provides a video denoising device, including a processor and a memory, wherein: the processor is used to execute a program stored in the memory to implement the following operations:
- Divide each video frame in the input video frame sequence into sub-image blocks and calculate the block variance of each sub-image block; calculate the average variance of all sub-image blocks in the current video frame from the calculated block variances; determine the noise intensity of the current video frame from the average variance; select the filtering strength and noise characteristic curve that match the noise intensity; and filter the current video frame according to the filtering strength and noise characteristic curve.
- the calculating the block variance of each sub-image block includes:
- Calculate the spatial variance of each sub-image block; calculate the temporal variance between each sub-image block in the current video frame and the sub-image block at the corresponding position in the previous frame; and take the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
- the calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variance includes:
- the determining the noise intensity of the current video frame according to the calculated average variance includes:
- If the calculated average variance is less than the preset variance value, the noise intensity of the current video frame is recorded as 0; otherwise, the calculated average variance is used as the noise intensity of the current video frame.
- the operation before selecting the filter intensity and noise characteristic curve matching the noise intensity, the operation further includes:
- the filtering strength includes spatial filtering strength and temporal filtering strength.
- the spatial filtering algorithm is the block-matching and 3D filtering (BM3D) denoising algorithm, and in its Wiener filtering operation the Wiener coefficient is scaled in proportion to the brightness value of the pixel according to the noise characteristic curve.
- an embodiment of the present invention also provides a video denoising device, including a noise statistics module 701, a noise estimation module 702, and a video denoising module 703, wherein:
- the noise statistics module 701 is configured to divide each video frame in the input video frame sequence into sub-image blocks, calculate the block variance of each sub-image block, and output the calculated block variance of the sub-image blocks to the noise estimation module 702 .
- the noise estimation module 702 is configured to calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variance, determine the noise intensity of the current video frame according to the calculated average variance, and select a filter that matches the noise intensity Intensity and noise characteristic curve.
- the video denoising module 703 is configured to filter the current video frame according to the filtering strength and the noise characteristic curve.
- the calculating the block variance of each sub-image block includes:
- Calculate the spatial variance of each sub-image block; calculate the temporal variance between each sub-image block in the current video frame and the sub-image block at the corresponding position in the previous frame; and take the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
- the outputting the calculated block variance of the sub-image block to the noise estimation module 702 includes:
- the current noisy video frame f_in(n) and its previous video frame f_in(n-1) are input to the noise statistics module 701. According to the first preset value, the noise statistics module 701 divides f_in(n) and f_in(n-1) into sub-image blocks of the same size. For each sub-image block in f_in(n), its spatial variance σs is calculated, together with the temporal variance between its pixel values and those of the corresponding block in f_in(n-1); the final variance of each sub-image block in f_in(n) is the smaller of the two.
- the determining the noise intensity of the current video frame according to the calculated average variance includes:
- If the calculated average variance is less than the preset variance value, the noise intensity of the current video frame is recorded as 0; otherwise, the calculated average variance is used as the noise intensity of the current video frame.
- the noise estimation module 702 is further configured to:
- the filtering strength includes spatial filtering strength and temporal filtering strength.
- the noise estimation module 702 receives the variance sum and the number of sub-image blocks output by the noise statistics module 701 and calculates the average variance per sub-image block. If this average variance is less than the third preset value, 0 is written into the FIFO; otherwise the average variance itself is written into the FIFO.
- the depth of the FIFO may be 16, i.e., the noise intensity data of the latest 16 frames are stored. After summing and averaging all the data in the FIFO, the smoothed noise intensity of the current video frame is obtained; then, according to the magnitude of the noise intensity, the matching spatial filtering strength, temporal filtering strength, and corresponding noise characteristic curve are selected.
- the video denoising module 703 is set to:
- the current video frame is spatially filtered. From the current video frame and the previous video frame, the motion intensity and motion vector of each sub-image block of the current video frame are estimated. The weight of each pixel in the current video frame is obtained from the estimated motion intensity, and the position of the pixel in the previous video frame that participates in temporal filtering is obtained from the estimated motion vector. Each spatially filtered pixel in the current video frame is then combined, by weighted average filtering, with the pixel in the previous video frame pointed to by its motion vector, yielding the filtered pixel.
- the spatial filtering algorithm may be the BM3D denoising algorithm, and in the Wiener filtering operation of the BM3D denoising algorithm, the Wiener coefficient is scaled correspondingly according to the brightness value of the pixel and the noise characteristic curve.
- the inputs of the video denoising module 703 are: the current video frame to be filtered f_in(n), the filtered previous video frame f_out(n-1), and the denoising parameters output by the noise estimation module 702.
- the video denoising module 703 includes five submodules: spatial denoising submodule, motion estimation submodule, motion detection submodule, mixing coefficient mapping submodule and mixing submodule.
- the spatial denoising submodule spatially filters f_in(n) according to the spatial filtering strength and noise characteristic curve delivered by the noise estimation module 702, obtaining the spatially filtered image f_in_spa(n). The motion estimation submodule calculates the motion vector value of each sub-image block in f_in(n) from the two input frames. The motion detection submodule performs block-based motion detection on all sub-image blocks of the current video frame f_in(n) to obtain the motion intensity of each sub-image block. The spatially filtered image f_in_spa(n), the motion vectors output by the motion estimation submodule, and the motion intensity information output by the motion detection submodule are passed to the temporal filter (comprising the mixing coefficient mapping submodule and the mixing submodule) for temporal filtering.
- the temporal filter first obtains the weight of each pixel participating in temporal filtering from the motion intensity information, and the position of that pixel in the previous video frame from the motion vector information; then the pixels of the current video frame and the pixels pointed to by the motion vectors in the previous video frame undergo weighted average filtering to obtain the final filtered pixels.
- the working principles of the submodules are as follows:
- Spatial denoising submodule: as shown in Figure 4, the spatial denoising submodule of this application uses the BM3D denoising algorithm, improved so that it better matches the characteristics of the noise introduced by the video capture terminal: in the Wiener filtering operation, the Wiener coefficient is scaled according to the brightness value of the pixel and its noise characteristic curve.
- the spatial denoising submodule does not necessarily have to use the BM3D algorithm; guided filtering, bilateral filtering, and other filtering algorithms are all possible, though their processing effect is slightly worse.
- Motion estimation submodule: the current video frame to be filtered f_in(n) is divided into sub-image blocks according to the preset value; the sub-image blocks may overlap. For each sub-image block, an MSE operation is performed on all candidate blocks within a certain search range centered on the corresponding position in the filtered previous video frame f_out(n-1). The candidate block with the smallest MSE is taken as the best matching block of the current sub-image block, and the motion vector is the coordinates of the best matching block in the previous frame minus the coordinates of the current sub-image block, as shown in Figure 5.
- Motion detection submodule: after motion estimation, each sub-image block in the current video frame to be filtered has a best matching block in the previous video frame. Each sub-image block and its corresponding best matching block perform a SAD operation, and the SAD value of each sub-image block is taken as its motion intensity value, where (i, j) are the two-dimensional coordinates of the pixel to be filtered, 0 ≤ i < M, 0 ≤ j < N.
- Mixing coefficient mapping submodule: the mixing coefficient α is obtained by mapping the motion intensity value calculated by the motion detection submodule, as shown in Figure 6, where the abscissa is the motion intensity value and the ordinate is the mixing coefficient value.
- Base_motion, blend_slope, and Top_motion are three preset values that determine the mapping relationship. The three preset values must ensure that the slope of the line segment is negative, i.e., the stronger the motion, the smaller the mixing coefficient; otherwise motion blur and even smearing will occur.
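This piecewise-linear mapping can be sketched as follows; the preset values below are illustrative, not taken from the application:

```python
def mixing_coefficient(motion, base_motion=10.0, top_motion=100.0,
                       alpha_max=0.9, blend_slope=None):
    """Piecewise-linear motion-to-alpha mapping with a negative slope
    between Base_motion and Top_motion (Figure 6)."""
    if blend_slope is None:
        # default slope reaches alpha = 0 exactly at top_motion
        blend_slope = -alpha_max / (top_motion - base_motion)
    if motion <= base_motion:
        return alpha_max      # little motion: blend strongly with the past frame
    if motion >= top_motion:
        return 0.0            # strong motion: rely on the spatial result only
    return alpha_max + blend_slope * (motion - base_motion)
```

The negative slope enforces exactly the constraint in the text: the stronger the motion, the smaller the mixing coefficient, so fast-moving content is not smeared by temporal averaging.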
- Blending sub-module: given the blending coefficient β from the blending coefficient mapping sub-module, the motion vector from the motion estimation sub-module, and the spatially filtered image from the spatial denoising sub-module, the final output image is obtained by weighted averaging. The calculation formula is as follows:
- f_out(n, i, j) = f_in_spa(n, i, j) * (1 - β) + f_out(n-1, i+mvi, j+mvj) * β
- (i, j) are the two-dimensional coordinates of the pixel to be filtered
- (mvi, mvj) is the motion vector of the pixel to be filtered
- n is the index of the n-th video frame in the video frame sequence
- f_in_spa(n, i, j) is the pixel to be filtered in the n-th video frame after spatial filtering
- f_out(n-1, i+mvi, j+mvj) is the corresponding pixel of the (n-1)-th video frame after filtering.
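The weighted-average formula above can be sketched as follows. For simplicity this illustration applies a single motion vector and a single β to the whole image, whereas the patent applies them per block/pixel; function and variable names are assumptions.

```python
import numpy as np

def temporal_blend(f_in_spa, f_out_prev, beta, mv):
    """Blend the spatially filtered current frame with the
    motion-compensated previous output frame:
    f_out = f_in_spa * (1 - beta) + f_out_prev(shifted by mv) * beta.
    """
    mvi, mvj = mv
    # comp[i, j] = f_out_prev[i + mvi, j + mvj] (motion compensation;
    # np.roll wraps at the border, which a real implementation would clamp)
    comp = np.roll(f_out_prev, shift=(-mvi, -mvj), axis=(0, 1))
    return f_in_spa * (1.0 - beta) + comp * beta
```

With β = 0.5 and a zero motion vector this reduces to a plain average of the two frames.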
- The video denoising method, device and computer-readable storage medium provided by the embodiments of the present invention combine an image noise estimation method with a video image denoising method, thereby solving the problems in the related art that image noise estimation is inaccurate and that denoising performance and image quality cannot both be achieved.
- The video denoising solution proposed in the embodiment of the present invention includes three modules: a noise statistics module 701, a noise estimation module 702 and a video denoising module 703.
- The noise statistics module 701 divides the input video frame sequence into blocks and gathers statistics related to the noise intensity of the current video frame. The noise estimation module 702 preprocesses the noise intensity information (mainly block variance information) calculated by the noise statistics module 701, selects denoising-related parameters that are adjusted in real time (including spatial denoising strength, temporal denoising strength, noise characteristic curve and noise intensity), and sends them to the video denoising module 703. The video denoising module 703 performs video denoising in real time according to the denoising-related parameters issued by the noise estimation module 702.
- The noise intensity is estimated according to the statistical characteristics of the noise. There are two ways to estimate it: one is based on two consecutive video frames, and the other estimates the noise from the current video frame alone. The two algorithms verify each other, which yields higher accuracy.
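The two-estimator cross-check can be illustrated as below. Both estimators here are generic stand-ins chosen for illustration, not the patent's algorithms: a temporal estimate from the difference of two consecutive frames (for static content the difference carries twice the noise variance), and a crude spatial estimate from a high-pass residual of the current frame.

```python
import numpy as np

def estimate_noise(cur, prev):
    """Return (temporal, spatial) noise-variance estimates.

    If the two values land in the same ballpark, the estimate is more
    trustworthy; a large disagreement suggests motion or structured
    content contaminated one of them.
    """
    # temporal: variance of the frame difference, halved because the
    # difference of two independent noise fields doubles the variance
    temporal = np.var(cur.astype(np.float64) - prev) / 2.0
    # spatial: residual after subtracting a local 2x2 mean (crude high-pass);
    # for white noise Var(residual) = 0.75 * sigma^2, left uncorrected here
    blur = (cur[:-1, :-1] + cur[1:, :-1] + cur[:-1, 1:] + cur[1:, 1:]) / 4.0
    resid = cur[:-1, :-1].astype(np.float64) - blur
    spatial = np.var(resid)
    return temporal, spatial
```

On pure Gaussian noise of standard deviation 10, the temporal estimate is close to 100 and the uncorrected spatial estimate close to 75.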
- The noise intensity calculated by this application is averaged over (m+1) frames, so the resulting estimate is smoother and does not jump sharply between frames; this avoids the flicker phenomenon of one frame being clear while the next is blurred, or one frame retaining residual noise while the next does not.
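The (m+1)-frame averaging can be sketched with a small running-mean helper; the names here are illustrative, not from the patent.

```python
from collections import deque

def make_noise_smoother(m):
    """Return a function that averages the per-frame noise intensity over
    the current frame and the previous m frames ((m+1) frames in total),
    so the denoising strength changes gradually and adjacent frames are
    not filtered with wildly different parameters (no visible flicker).
    """
    history = deque(maxlen=m + 1)

    def smooth(noise_intensity):
        history.append(noise_intensity)
        return sum(history) / len(history)

    return smooth
```

For example, with m = 2 the smoothed value for each new frame is the mean of the last three per-frame estimates.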
- This application uses a combination of the spatial-domain BM3D denoising algorithm, temporal motion compensation and motion intensity detection. The algorithm achieves a good effect while keeping the complexity moderate, striking a good balance between effect and complexity.
- In addition, this application introduces a noise characteristic curve and dynamically adjusts the denoising strength according to pixel brightness, thereby achieving a better denoising effect.
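A brightness-dependent adjustment can be sketched as a lookup into a curve. The curve below is a hypothetical two-entry table used only for illustration; the patent's actual noise characteristic curve and scaling rule are not specified here.

```python
import numpy as np

def scale_strength(luma, noise_curve, base_strength):
    """Scale the denoising strength by a noise-characteristic curve,
    modeled here as a simple LUT indexed by luminance (0..255).
    Darker regions often show more visible noise, so a curve can boost
    the strength there and relax it in bright regions.
    """
    idx = int(np.clip(luma, 0, 255)) * (len(noise_curve) - 1) // 255
    return base_strength * noise_curve[idx]
```

With the hypothetical curve [2.0, 1.0], dark pixels get double the base strength and bright pixels keep it unchanged.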
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Picture Signal Circuits (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (10)
- 1. A video denoising method, comprising: dividing each video frame in an input video frame sequence into sub-image blocks and calculating the block variance of each sub-image block; calculating the average variance of all sub-image blocks in the current video frame from the calculated block variances, determining the noise intensity of the current video frame from the calculated average variance, and selecting a filtering strength and a noise characteristic curve matching the noise intensity; and filtering the current video frame according to the filtering strength and the noise characteristic curve.
- 2. The method according to claim 1, wherein calculating the block variance of each sub-image block comprises: calculating the spatial variance of each sub-image block; calculating the temporal variance between the sub-image block in the current video frame and the sub-image block at the corresponding position in the video frame preceding the current video frame; and selecting the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
- 3. The method according to claim 1, wherein calculating the average variance of all sub-image blocks in the current video frame from the calculated block variances comprises: sorting the block variances of all sub-image blocks in the current video frame in ascending order; and accumulating the block variances of the first n sub-image blocks after sorting, and taking the ratio of the accumulated block variance sum to n as the average variance of all sub-image blocks in the current video frame, where n is a natural number greater than 1.
- 4. The method according to claim 1, wherein determining the noise intensity of the current video frame from the calculated average variance comprises: determining the noise intensity of the current video frame to be 0 when the calculated average variance is less than a preset variance value; and taking the calculated average variance as the noise intensity of the current video frame when the calculated average variance is greater than or equal to the preset variance value.
- 5. The method according to claim 1, further comprising, after determining the noise intensity of the current video frame from the calculated average variance and before selecting the filtering strength and noise characteristic curve matching the noise intensity: calculating the average of the noise intensities of the current video frame and the m video frames preceding the current video frame, where m is a natural number greater than 1; and taking the calculated average noise intensity as the smoothed noise intensity of the current video frame.
- 6. The method according to claim 1, wherein the filtering strength comprises a spatial filtering strength and a temporal filtering strength, and filtering the current video frame according to the filtering strength and the noise characteristic curve comprises: performing spatial filtering on the current video frame according to the spatial filtering strength and the noise characteristic curve; estimating the motion intensity and motion vector of each sub-image block of the current video frame from the current video frame and the video frame preceding it; and obtaining the weight of each pixel in the current video frame from the estimated motion intensity, obtaining the positions of the pixels in the preceding video frame that participate in temporal filtering from the estimated motion vectors, and performing weighted-average filtering between each pixel in the spatially filtered current video frame and the pixel in the preceding video frame pointed to by that pixel's motion vector, to obtain the filtered pixel.
- 7. The method according to claim 6, wherein the spatial filtering algorithm is the block-matching and 3D filtering (BM3D) denoising algorithm, and in the Wiener filtering operation of the BM3D denoising algorithm the Wiener coefficients are scaled proportionally according to the luminance value of the pixel and the noise characteristic curve.
- 8. A computer-readable storage medium storing at least one program, the at least one program being executable by at least one processor to implement the video denoising method according to any one of claims 1 to 7.
- 9. A video denoising device, comprising a processor and a memory, the processor and the memory being connected by electrical coupling, the processor being configured to execute a program stored in the memory to implement the video denoising method according to any one of claims 1 to 7.
- 10. A video denoising device, comprising a noise statistics module, a noise estimation module and a video denoising module, wherein: the noise statistics module is configured to divide each video frame in an input video frame sequence into sub-image blocks and calculate the block variance of each sub-image block; the noise estimation module is configured to calculate the average variance of all sub-image blocks in the current video frame from the calculated block variances, determine the noise intensity of the current video frame from the calculated average variance, and select a filtering strength and a noise characteristic curve matching the noise intensity; and the video denoising module is configured to filter the current video frame according to the filtering strength and the noise characteristic curve.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/624,237 US20220351335A1 (en) | 2019-07-29 | 2020-07-14 | Video denoising method and device, and computer readable storage medium |
JP2021564231A JP7256902B2 (ja) | 2019-07-29 | 2020-07-14 | ビデオノイズ除去方法、装置及びコンピュータ読み取り可能な記憶媒体 |
KR1020217034597A KR102605747B1 (ko) | 2019-07-29 | 2020-07-14 | 비디오 노이즈 제거 방법, 장치 및 컴퓨터 판독 가능 저장 매체 |
EP20848408.9A EP3944603A4 (en) | 2019-07-29 | 2020-07-14 | METHOD AND APPARATUS FOR DE-NOISED VIDEO, AND COMPUTER READABLE INFORMATION MEDIA |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910691413.1 | 2019-07-29 | ||
CN201910691413.1A CN112311962B (zh) | 2019-07-29 | 2019-07-29 | 一种视频去噪方法和装置、计算机可读存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021017809A1 true WO2021017809A1 (zh) | 2021-02-04 |
Family
ID=74228194
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/101806 WO2021017809A1 (zh) | 2019-07-29 | 2020-07-14 | 视频去噪方法、装置及计算机可读存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220351335A1 (zh) |
EP (1) | EP3944603A4 (zh) |
JP (1) | JP7256902B2 (zh) |
KR (1) | KR102605747B1 (zh) |
CN (1) | CN112311962B (zh) |
WO (1) | WO2021017809A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113438386A (zh) * | 2021-05-20 | 2021-09-24 | 珠海全志科技股份有限公司 | 一种应用于视频处理的动静判定方法及装置 |
CN114742727A (zh) * | 2022-03-31 | 2022-07-12 | 南通电博士自动化设备有限公司 | 一种基于图像平滑的噪声处理方法及*** |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230059035A1 (en) * | 2021-08-23 | 2023-02-23 | Netflix, Inc. | Efficient encoding of film grain noise |
CN114648469B (zh) * | 2022-05-24 | 2022-09-27 | 上海齐感电子信息科技有限公司 | 视频图像去噪方法及其***、设备和存储介质 |
CN116016807B (zh) * | 2022-12-30 | 2024-04-19 | 广东中星电子有限公司 | 一种视频处理方法、***、可存储介质和电子设备 |
CN116523765B (zh) * | 2023-03-13 | 2023-09-05 | 湖南兴芯微电子科技有限公司 | 一种实时视频图像降噪方法、装置及存储器 |
CN116342891B (zh) * | 2023-05-24 | 2023-08-15 | 济南科汛智能科技有限公司 | 一种适用于自闭症儿童结构化教学监控数据管理*** |
CN116634284B (zh) * | 2023-07-20 | 2023-10-13 | 清华大学 | Raw域视频去噪方法、装置、电子设备及存储介质 |
CN117615146A (zh) * | 2023-11-13 | 2024-02-27 | 书行科技(北京)有限公司 | 视频处理方法及装置、电子设备及计算机可读存储介质 |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050094889A1 (en) * | 2003-10-30 | 2005-05-05 | Samsung Electronics Co., Ltd. | Global and local statistics controlled noise reduction system |
CN101489034A (zh) * | 2008-12-19 | 2009-07-22 | 四川虹微技术有限公司 | 一种视频图像噪声估计与去除方法 |
CN102118546A (zh) * | 2011-03-22 | 2011-07-06 | 上海富瀚微电子有限公司 | 一种视频图像噪声估计算法的快速实现方法 |
CN102164278A (zh) * | 2011-02-15 | 2011-08-24 | 杭州海康威视软件有限公司 | 用于去除i帧闪烁的视频编码方法及其装置 |
CN102238316A (zh) * | 2010-04-29 | 2011-11-09 | 北京科迪讯通科技有限公司 | 一种3d数字视频图像的自适应实时降噪方案 |
CN102436646A (zh) * | 2011-11-07 | 2012-05-02 | 天津大学 | 基于压缩感知的ccd噪声估计方法 |
CN102769722A (zh) * | 2012-07-20 | 2012-11-07 | 上海富瀚微电子有限公司 | 时域与空域结合的视频降噪装置及方法 |
CN103414845A (zh) * | 2013-07-24 | 2013-11-27 | 中国航天科工集团第三研究院第八三五七研究所 | 一种自适应的视频图像降噪方法及降噪*** |
CN103491282A (zh) * | 2013-09-23 | 2014-01-01 | 华为技术有限公司 | 图像消噪方法与装置 |
CN104021533A (zh) * | 2014-06-24 | 2014-09-03 | 浙江宇视科技有限公司 | 一种实时图像降噪方法及装置 |
CN104134191A (zh) * | 2014-07-11 | 2014-11-05 | 三星电子(中国)研发中心 | 图像去噪方法及其装置 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7983501B2 (en) * | 2007-03-29 | 2011-07-19 | Intel Corporation | Noise detection and estimation techniques for picture enhancement |
US20080316364A1 (en) * | 2007-06-25 | 2008-12-25 | The Hong Kong University Of Science And Technology | Rate distortion optimization for video denoising |
US8149336B2 (en) * | 2008-05-07 | 2012-04-03 | Honeywell International Inc. | Method for digital noise reduction in low light video |
CN104680483B (zh) * | 2013-11-25 | 2016-03-02 | 浙江大华技术股份有限公司 | 图像的噪声估计方法、视频图像去噪方法及装置 |
US9123103B2 (en) * | 2013-12-26 | 2015-09-01 | Mediatek Inc. | Method and apparatus for image denoising with three-dimensional block-matching |
US20170178309A1 (en) * | 2014-05-15 | 2017-06-22 | Wrnch Inc. | Methods and systems for the estimation of different types of noise in image and video signals |
WO2016185708A1 (ja) | 2015-05-18 | 2016-11-24 | 日本電気株式会社 | 画像処理装置、画像処理方法、および、記憶媒体 |
CN105208376B (zh) * | 2015-08-28 | 2017-09-12 | 青岛中星微电子有限公司 | 一种数字降噪方法和装置 |
EP3154021A1 (en) * | 2015-10-09 | 2017-04-12 | Thomson Licensing | Method and apparatus for de-noising an image using video epitome |
CN107645621A (zh) * | 2016-07-20 | 2018-01-30 | 阿里巴巴集团控股有限公司 | 一种视频处理的方法和设备 |
CN107016650B (zh) * | 2017-02-27 | 2020-12-29 | 苏州科达科技股份有限公司 | 视频图像3d降噪方法及装置 |
US10674045B2 (en) * | 2017-05-31 | 2020-06-02 | Google Llc | Mutual noise estimation for videos |
CN109859126B (zh) * | 2019-01-17 | 2021-02-02 | 浙江大华技术股份有限公司 | 一种视频降噪方法、装置、电子设备及存储介质 |
TWI703864B (zh) * | 2019-07-04 | 2020-09-01 | 瑞昱半導體股份有限公司 | 基於雜訊比的去雜訊方法 |
US20230351582A1 (en) * | 2020-10-07 | 2023-11-02 | Crest Solutions Limited | A line clearance system |
US20230377104A1 (en) * | 2022-05-20 | 2023-11-23 | GE Precision Healthcare LLC | System and methods for filtering medical images |
-
2019
- 2019-07-29 CN CN201910691413.1A patent/CN112311962B/zh active Active
-
2020
- 2020-07-14 EP EP20848408.9A patent/EP3944603A4/en active Pending
- 2020-07-14 KR KR1020217034597A patent/KR102605747B1/ko active IP Right Grant
- 2020-07-14 US US17/624,237 patent/US20220351335A1/en active Pending
- 2020-07-14 WO PCT/CN2020/101806 patent/WO2021017809A1/zh unknown
- 2020-07-14 JP JP2021564231A patent/JP7256902B2/ja active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050094889A1 (en) * | 2003-10-30 | 2005-05-05 | Samsung Electronics Co., Ltd. | Global and local statistics controlled noise reduction system |
CN101489034A (zh) * | 2008-12-19 | 2009-07-22 | 四川虹微技术有限公司 | 一种视频图像噪声估计与去除方法 |
CN102238316A (zh) * | 2010-04-29 | 2011-11-09 | 北京科迪讯通科技有限公司 | 一种3d数字视频图像的自适应实时降噪方案 |
CN102164278A (zh) * | 2011-02-15 | 2011-08-24 | 杭州海康威视软件有限公司 | 用于去除i帧闪烁的视频编码方法及其装置 |
CN102118546A (zh) * | 2011-03-22 | 2011-07-06 | 上海富瀚微电子有限公司 | 一种视频图像噪声估计算法的快速实现方法 |
CN102436646A (zh) * | 2011-11-07 | 2012-05-02 | 天津大学 | 基于压缩感知的ccd噪声估计方法 |
CN102769722A (zh) * | 2012-07-20 | 2012-11-07 | 上海富瀚微电子有限公司 | 时域与空域结合的视频降噪装置及方法 |
CN103414845A (zh) * | 2013-07-24 | 2013-11-27 | 中国航天科工集团第三研究院第八三五七研究所 | 一种自适应的视频图像降噪方法及降噪*** |
CN103491282A (zh) * | 2013-09-23 | 2014-01-01 | 华为技术有限公司 | 图像消噪方法与装置 |
CN104021533A (zh) * | 2014-06-24 | 2014-09-03 | 浙江宇视科技有限公司 | 一种实时图像降噪方法及装置 |
CN104134191A (zh) * | 2014-07-11 | 2014-11-05 | 三星电子(中国)研发中心 | 图像去噪方法及其装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3944603A4 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113438386A (zh) * | 2021-05-20 | 2021-09-24 | 珠海全志科技股份有限公司 | 一种应用于视频处理的动静判定方法及装置 |
CN113438386B (zh) * | 2021-05-20 | 2023-02-17 | 珠海全志科技股份有限公司 | 一种应用于视频处理的动静判定方法及装置 |
CN114742727A (zh) * | 2022-03-31 | 2022-07-12 | 南通电博士自动化设备有限公司 | 一种基于图像平滑的噪声处理方法及*** |
CN114742727B (zh) * | 2022-03-31 | 2023-05-05 | 南通电博士自动化设备有限公司 | 一种基于图像平滑的噪声处理方法及*** |
Also Published As
Publication number | Publication date |
---|---|
KR102605747B1 (ko) | 2023-11-23 |
JP7256902B2 (ja) | 2023-04-12 |
CN112311962A (zh) | 2021-02-02 |
JP2022542334A (ja) | 2022-10-03 |
US20220351335A1 (en) | 2022-11-03 |
KR20210141697A (ko) | 2021-11-23 |
EP3944603A1 (en) | 2022-01-26 |
EP3944603A4 (en) | 2022-06-01 |
CN112311962B (zh) | 2023-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021017809A1 (zh) | 视频去噪方法、装置及计算机可读存储介质 | |
US9202263B2 (en) | System and method for spatio video image enhancement | |
EP3099044B1 (en) | Multi-frame noise reduction method and terminal | |
US20170280073A1 (en) | Systems and Methods for Reducing Noise in Video Streams | |
US8233062B2 (en) | Image processing apparatus, image processing method, and imaging apparatus | |
KR102182695B1 (ko) | 영상 잡음 제거 장치 및 방법 | |
EP2164040B1 (en) | System and method for high quality image and video upscaling | |
KR20040098162A (ko) | 프레임 레이트 변환시의 프레임 보간 방법 및 그 장치 | |
CN106412441B (zh) | 一种视频防抖控制方法以及终端 | |
CN110418065B (zh) | 高动态范围图像运动补偿方法、装置及电子设备 | |
TWI536319B (zh) | 去雜訊方法以及影像系統 | |
Jin et al. | Quaternion-based impulse noise removal from color video sequences | |
CN104320575A (zh) | 一种用于便携式终端的图像处理方法及图像处理装置 | |
CN106791279B (zh) | 基于遮挡检测的运动补偿方法及*** | |
WO2021232963A1 (zh) | 视频去噪方法、装置、移动终端和存储介质 | |
KR20150035315A (ko) | 하이 다이나믹 레인지 영상 생성 방법 및, 그에 따른 장치, 그에 따른 시스템 | |
TW201525940A (zh) | 影像雜訊估測的方法與裝置 | |
KR101517233B1 (ko) | 움직임 추정을 이용한 잡음 제거장치 | |
JP6570304B2 (ja) | 映像処理装置、映像処理方法およびプログラム | |
WO2012172728A1 (ja) | 画像処理システム | |
JP3948616B2 (ja) | 画像のマッチング装置 | |
JP2021044652A (ja) | 動きベクトル検出装置及び動きベクトル検出方法 | |
PASHIKANTI | Contemporary ACE Algorithm on Mobile Media Visual Quality Encoder-Integrated Demising & Deploring Stabilization | |
CN117544735A (zh) | 一种视频降噪方法、装置、设备、存储介质及产品 | |
JP2015133532A (ja) | 撮像装置及び画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20848408 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217034597 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021564231 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2020848408 Country of ref document: EP Effective date: 20211020 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |