WO2018153150A1 - Video image 3D denoising method and device - Google Patents

Video image 3D denoising method and device

Info

Publication number
WO2018153150A1
Authority
WO
WIPO (PCT)
Prior art keywords
current
image data
image
noise reduction
area
Prior art date
Application number
PCT/CN2017/117164
Other languages
French (fr)
Chinese (zh)
Inventor
熊超
章勇
曹李军
陈卫东
Original Assignee
苏州科达科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州科达科技股份有限公司
Publication of WO2018153150A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering

Definitions

  • The present invention relates to the field of video image processing, and in particular to a video image 3D noise reduction method and apparatus.
  • Video and images attract users because of their strong visual impact, and the quality of a video image directly affects its usefulness; noise reduction plays a key role in that quality. Good image noise reduction makes it possible to clearly identify moving objects in low-illumination scenes, and within video image noise reduction, 3D noise reduction has become a research hotspot. In general, denoising a video image with only a spatial-domain or only a temporal-domain method tends to produce over-smoothing, loss of detail and inter-frame flicker noise, whereas a 3D noise reduction method that combines the temporal and spatial domains can largely avoid these artifacts.
  • Among existing video image 3D noise reduction techniques, one class of methods is based on motion estimation and compensation: macroblock-based motion estimation is performed between two frames to obtain motion information of the image, motion compensation is then applied using that information, and finally temporal filtering is carried out with an FIR (Finite Impulse Response) filter and the noise reduction result is output.
  • The other class is motion-adaptive 3D noise reduction: the motion intensity of image pixels or macroblocks is analyzed between frames and, according to that motion intensity, temporal noise reduction is applied as far as possible when the motion intensity is large and spatial noise reduction as far as possible otherwise.
  • The main drawback of these methods is that in low-illumination scenes, such as at night, it is difficult for either inter-frame macroblock matching or inter-frame motion intensity calculation to obtain relatively accurate motion information.
  • Inaccurate motion information easily leads to misclassification of background and foreground pixels, so background noise is not suppressed and foreground moving objects show severe trailing.
  • In addition, FIR filtering in the time domain requires storing multiple frames of historical image data, which increases the storage overhead of the device and is detrimental to the real-time performance of the 3D noise reduction method in a video imaging system.
  • The technical problem to be solved by the present invention is therefore to overcome these defects of prior-art 3D noise reduction: in low-illumination scenes it is difficult to obtain relatively accurate motion information, which easily leads to misclassification of background and foreground pixels, so that background noise is not suppressed and foreground moving objects trail severely; and FIR filtering in the time domain requires storing multiple frames of historical image data, which increases device storage overhead and harms the real-time performance of the video imaging system. To that end, a video image 3D noise reduction method and apparatus are provided.
  • An embodiment of the present invention provides a video image 3D noise reduction method, including: collecting current first image data from a video image; performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; obtaining a binary image from the current second image data, the binary image including a background area and a foreground area; acquiring a temporal filtering intensity coefficient for each pixel of the current second image data; and performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient.
  • Obtaining the binary image from the current second image data includes: determining whether the current pixel belongs to the background area or to the foreground area; when the current pixel belongs to the background area, counting the pixels belonging to the foreground area within a first predetermined area near the current pixel; and, when that count is greater than a first threshold, setting the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
  • The second predetermined area includes a neighborhood window centered on the current pixel whose radius is a second threshold.
  • Obtaining the binary image from the current second image data further includes: when the current pixel belongs to the foreground area, acquiring the motion intensity information of the current second image data and, when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is less than a fourth threshold, resetting the current pixel to belong to the background area;
  • where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame; and/or, when the current pixel belongs to the background area, acquiring the motion intensity information of the current second image data and, when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, resetting the current pixel to belong to the foreground area.
  • Acquiring the motion intensity information of the current second image data includes calculating it with an SAD (Sum of Absolute Differences) algorithm.
  • Performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient includes obtaining the result with the following formula:
  • cur_3D = α*pre_3D + (1-α)*cur_2D
  • where cur_3D is the 3D noise reduction output for the current second image data,
  • cur_2D is the 2D noise reduction result of the current first image data,
  • pre_3D is the 3D noise reduction result of the previous frame of the current first image data,
  • and α is the temporal filtering intensity coefficient.
  • An embodiment of the present invention further provides a video image 3D noise reduction device, including: an acquisition module configured to collect current first image data from a video image; a first noise reduction module configured to perform spatial-domain 2D noise reduction on the first image data to obtain current second image data; a first obtaining module configured to obtain a binary image from the current second image data, the binary image including a background area and a foreground area; a second obtaining module configured to acquire a temporal filtering intensity coefficient for each pixel of the current second image data; and a second noise reduction module configured to perform 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient.
  • The first obtaining module includes: a determining unit configured to determine whether the current pixel belongs to the background area or to the foreground area; an acquiring unit configured to, when the current pixel belongs to the background area, count the pixels belonging to the foreground area within a first predetermined area near the current pixel; and a setting unit configured to, when that count is greater than a first threshold, set the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
  • The second predetermined area includes a neighborhood window centered on the current pixel whose radius is a second threshold.
  • The first obtaining module further includes: a first processing unit configured to, when the current pixel belongs to the foreground area, acquire the motion intensity information of the current second image data and, when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is less than a fourth threshold, reset the current pixel to belong to the background area;
  • where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame; and/or a second processing unit configured to, when the current pixel belongs to the background area, acquire the motion intensity information of the current second image data and, when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, reset the current pixel to belong to the foreground area.
  • The first processing unit or the second processing unit is further configured to calculate the motion intensity information of the current second image data with an SAD algorithm.
  • The second noise reduction module is further configured to obtain the result of the 3D noise reduction of the current second image data with the following formula:
  • cur_3D = α*pre_3D + (1-α)*cur_2D
  • where cur_3D is the 3D noise reduction output for the current second image data,
  • cur_2D is the 2D noise reduction result of the current first image data,
  • pre_3D is the 3D noise reduction result of the previous frame of the current first image data,
  • and α is the temporal filtering intensity coefficient.
  • Embodiments of the present invention thus provide a video image 3D noise reduction method and apparatus in which current first image data is collected from a video image, spatial-domain 2D noise reduction is applied to obtain current second image data, a binary image with background and foreground areas is derived from it, a per-pixel temporal filtering intensity coefficient is acquired, and 3D noise reduction is performed using the previous frame's 3D result and that coefficient, avoiding both the inaccurate motion analysis and the multi-frame storage overhead of existing 3D noise reduction approaches.
  • FIG. 1 is a flow chart of a video image 3D noise reduction method according to an embodiment of the present invention;
  • FIG. 2 is a filter intensity coefficient parameter table according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a video image 3D noise reduction method according to an embodiment of the present invention;
  • FIG. 4 is another flow chart of a video image 3D noise reduction method according to an embodiment of the present invention;
  • FIG. 5 is a structural block diagram of a video image 3D noise reduction apparatus according to an embodiment of the present invention;
  • FIG. 6 is a structural block diagram of a first obtaining module according to an embodiment of the present invention;
  • FIG. 7 is another structural block diagram of a first obtaining module according to an embodiment of the present invention.
  • Terms such as "connected" are to be understood broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal between two components; and it may be wireless or wired.
  • FIG. 1 is a flowchart of a video image 3D noise reduction method according to an embodiment of the present invention. As shown in FIG. 1, the process includes the following steps:
  • Step S101: collect current first image data from the video image, for example by acquiring one frame of image data of the input video.
  • Step S102: perform spatial-domain 2D noise reduction on the first image data to obtain current second image data.
  • 2D noise reduction based on spatial relationships is applied to the YUV data of the current frame; the 2D method used in this implementation may be a 2D-DCT noise reduction method, which is a practical choice for spatial denoising, as sketched below.
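  • The patent names 2D-DCT as one practical choice for the spatial stage but does not spell out its internals. The sketch below is a minimal, hypothetical block-DCT hard-threshold denoiser in Python, shown only to illustrate the shape of step S102; the function name, block size and threshold are assumptions, not values taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2d_denoise(y_plane: np.ndarray, block: int = 8, thresh: float = 20.0) -> np.ndarray:
    """Hypothetical block-DCT hard-threshold denoiser for one luma plane (uint8).

    This is a generic stand-in, not the patent's 2D-DCT method: each 8x8 block
    is transformed, small coefficients are zeroed, and the block is inverted.
    """
    src = y_plane.astype(np.float32)
    out = src.copy()                      # border remainder keeps original values
    h, w = src.shape
    for by in range(0, h - h % block, block):
        for bx in range(0, w - w % block, block):
            blk = src[by:by + block, bx:bx + block]
            coef = dctn(blk, norm="ortho")
            dc = coef[0, 0]
            coef[np.abs(coef) < thresh] = 0.0   # drop small (presumably noise) coefficients
            coef[0, 0] = dc                     # always keep the DC (block mean) term
            out[by:by + block, bx:bx + block] = idctn(coef, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)
```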
  • Step S103: obtain a binary image from the current second image data, where the binary image includes a background area and a foreground area.
  • The ViBe (Visual Background Extractor) moving-target detection method, which is based on background modelling, is applied to the second image obtained from the 2D noise reduction result to perform moving-target detection and analysis, yielding a binary image containing a background (still) region and a foreground (motion) region; ViBe is one of the better background-modelling moving-target detection methods.
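  • The patent relies on ViBe for this step; ViBe itself is not reproduced here. The sketch below is only a simple running-average stand-in (not ViBe) that produces the same kind of output, a binary mask in which 1 marks foreground (motion) pixels and 0 marks background pixels; the class name, learning rate and threshold are assumptions.

```python
import numpy as np

class RunningAverageBackground:
    """Simplified stand-in for a background-modelling detector (NOT ViBe).

    Keeps an exponentially weighted running average of the luma plane and
    marks pixels that deviate strongly from it as foreground.
    """

    def __init__(self, learn_rate: float = 0.05, fg_thresh: float = 25.0):
        self.learn_rate = learn_rate
        self.fg_thresh = fg_thresh
        self.background = None   # running estimate of the static background

    def apply(self, y_plane: np.ndarray) -> np.ndarray:
        y = y_plane.astype(np.float32)
        if self.background is None:
            self.background = y.copy()
        mask = (np.abs(y - self.background) > self.fg_thresh).astype(np.uint8)
        bg = mask == 0
        # Update the model only where the pixel looks like background, so that
        # moving objects are not absorbed into it too quickly.
        self.background[bg] += self.learn_rate * (y[bg] - self.background[bg])
        return mask   # 1 = foreground, 0 = background
```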
  • Step S104: acquire the temporal filtering intensity coefficient of each pixel in the current second image data.
  • Simulation tests show that the temporal filtering intensity coefficient can be tied to several properties of the video scene, such as the scene's noise standard deviation or its digital image gain; this implementation computes the coefficient from the digital image gain.
  • For a given image gain value, a fixed value between 0 and 1 is assigned to the foreground region and to the background region respectively, according to the finally generated binary image.
  • Taking a digital image gain range of 0 to 60 dB as an example, reference may be made to FIG. 2; the data values in FIG. 2 were obtained through extensive experiments, and the parameters may be adjusted for specific equipment and application scenarios. A sketch of the lookup mechanism follows.
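  • Since the actual parameter table of FIG. 2 is not reproduced in the text, the numbers in the sketch below are placeholders only; the sketch merely illustrates the mechanism described above: for a given digital gain in the 0-60 dB range, each pixel receives a fixed coefficient between 0 and 1, one value for foreground pixels and another for background pixels. Following the description, the foreground value is taken as the larger of the two under a given gain.

```python
import numpy as np

# Hypothetical gain breakpoints (dB) and per-class coefficients; the real
# numbers come from the patent's FIG. 2 and are not given in the text.
GAIN_STEPS_DB = [0, 12, 24, 36, 48, 60]
ALPHA_FOREGROUND = [0.30, 0.40, 0.50, 0.60, 0.70, 0.80]   # placeholder values
ALPHA_BACKGROUND = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]   # placeholder values

def alpha_map(binary_mask: np.ndarray, gain_db: float) -> np.ndarray:
    """Per-pixel temporal filter strength in [0, 1] from the binary mask and gain."""
    idx = int(np.searchsorted(GAIN_STEPS_DB, gain_db, side="right")) - 1
    idx = max(0, min(idx, len(GAIN_STEPS_DB) - 1))
    alpha = np.where(binary_mask == 1, ALPHA_FOREGROUND[idx], ALPHA_BACKGROUND[idx])
    return alpha.astype(np.float32)
```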
  • Step S105: perform 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient.
  • Using an IIR filter, the previous frame's 3D noise reduction result, the current frame's 2D noise reduction result and the filtering intensity coefficient are taken as the filter's inputs, and the filter's output is the 3D noise reduction result.
  • With reference to FIG. 3, the current frame's data is temporally filtered with the IIR filter, whose formula is:
  • cur_3D = α*pre_3D + (1-α)*cur_2D
  • where cur_3D is the 3D noise reduction output of the current frame,
  • cur_2D is the 2D noise reduction result of the current frame,
  • pre_3D is the 3D noise reduction result of the previous frame,
  • and α is the temporal filtering intensity coefficient.
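  • In code form the recursion of step S105 is a simple per-pixel blend. The sketch below follows the formula as given, with array names chosen for illustration; note that only the previous frame's 3D output needs to be kept, so no multi-frame history is stored.

```python
import numpy as np

def temporal_iir_3d(cur_2d: np.ndarray, pre_3d: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """One step of the recursive (IIR) temporal filter:
        cur_3D = alpha * pre_3D + (1 - alpha) * cur_2D
    cur_2d : 2D (spatial) noise reduction result of the current frame
    pre_3d : 3D noise reduction output of the previous frame
    alpha  : per-pixel temporal filter strength in [0, 1]
    """
    out = alpha * pre_3d.astype(np.float32) + (1.0 - alpha) * cur_2d.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```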
  • Through the above steps, current first image data is collected from the video image; spatial-domain 2D noise reduction is performed on it to obtain current second image data; a binary image including a background area and a foreground area is obtained from the current second image data; the temporal filtering intensity coefficient of each pixel in the current second image data is acquired; and 3D noise reduction is performed on the current second image data according to the previous frame's 3D noise reduction result and the filtering intensity coefficient.
  • The above step S103 concerns obtaining a binary image from the current second image data. In an optional embodiment:
  • it is determined whether the current pixel belongs to the background region or to the foreground region; when the current pixel belongs to the background region, the number of pixels belonging to the foreground region within a first predetermined area near the current pixel is counted, and when that number is greater than a first threshold, the pixels within a second predetermined area near the current pixel are set as pixels belonging to the foreground region.
  • The second predetermined area includes a neighborhood window centered on the current pixel whose radius is a second threshold.
  • Specifically, if the current pixel is judged to belong to the background region, the numbers of pixels belonging to the foreground region are counted separately in the horizontal and vertical directions within a cross-shaped window centered on the current pixel with a radius of 7 in the up, down, left and right directions.
  • If the number of foreground pixels obtained in the horizontal or vertical direction is greater than a preset threshold Th1, the current pixel is filled by setting all values within a neighborhood window of radius 2 centered on it to belong to the foreground region; if the current pixel is judged to belong to the foreground region, no processing is performed. The threshold Th1 is set to half of the window radius. A sketch of this fill step follows.
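  • A minimal sketch of this fill rule, assuming a cross-shaped counting window of radius 7, a fill neighborhood of radius 2 and Th1 equal to half the window radius, as stated above; the function name is illustrative.

```python
import numpy as np

def fill_isolated_background(mask: np.ndarray, radius: int = 7, fill_radius: int = 2) -> np.ndarray:
    """For each background pixel, count foreground pixels along the horizontal
    and vertical arms of a cross of the given radius; if either count exceeds
    Th1 (half the radius), fill a (2*fill_radius+1)^2 neighborhood with foreground."""
    h, w = mask.shape
    th1 = radius // 2
    out = mask.copy()
    for y in range(h):
        for x in range(w):
            if mask[y, x] != 0:                     # foreground pixels are left untouched
                continue
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            horiz = int(mask[y, x0:x1].sum())       # foreground count on the horizontal arm
            vert = int(mask[y0:y1, x].sum())        # foreground count on the vertical arm
            if horiz > th1 or vert > th1:
                fy0, fy1 = max(0, y - fill_radius), min(h, y + fill_radius + 1)
                fx0, fx1 = max(0, x - fill_radius), min(w, x + fill_radius + 1)
                out[fy0:fy1, fx0:fx1] = 1
    return out
```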
  • Step S103 further involves obtaining the binary image from the current second image data. In another optional embodiment:
  • when the current pixel belongs to the foreground region, the motion intensity information of the current second image data is acquired; when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground region within a third predetermined area near the same coordinate position is less than a fourth threshold, the current pixel is reset to belong to the background region, where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame.
  • When the current pixel belongs to the background region, the motion intensity information of the current second image data is acquired; when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground region within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, the current pixel is reset to belong to the foreground region, the same coordinate position being defined as above. Specifically, the saved binary images of the previous frame and of the frame before it are obtained and, together with the computed SAD values, are read and analyzed pixel by pixel to refine the binary image produced by the morphological processing.
  • If the current pixel belongs to the foreground region, the SAD value is greater than or equal to a preset threshold Th2, and the number of pixels belonging to the foreground region within the radius-2 neighborhood windows of the two previous frames' binary images at the same coordinate position is less than a preset threshold Th3, the pixel is re-judged as belonging to the background region; if the current pixel belongs to the background region, the SAD value is less than or equal to Th2, and the number of pixels belonging to the foreground region within the radius-2 neighborhood windows of the two previous frames' binary images at the same coordinate position is greater than or equal to a preset threshold Th4, the pixel is re-judged as belonging to the foreground region.
  • The threshold Th2 is set to 50, and the thresholds Th3 and Th4 are set to the window radius. A sketch of this re-judgment step follows.
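  • The re-judgment step can be sketched as follows, taking per-pixel SAD values already mapped to 0-255 and the binary masks of the two previous frames as inputs. Th2 = 50 and Th3/Th4 equal to the window radius follow the values given above; counting the foreground support over both previous masks together, and the function name, are assumptions.

```python
import numpy as np

def refine_mask_with_sad(mask: np.ndarray, sad: np.ndarray,
                         prev_mask: np.ndarray, prev2_mask: np.ndarray,
                         th2: float = 50.0, radius: int = 2) -> np.ndarray:
    """Re-judge pixels with the SAD motion intensity and the masks of the two
    previous frames, following the rule as stated in the text:
      * foreground pixel, SAD >= Th2, little foreground support around the same
        coordinate in the two previous masks -> reset to background
      * background pixel, SAD <= Th2, enough foreground support around the same
        coordinate in the two previous masks -> reset to foreground
    """
    h, w = mask.shape
    th3 = th4 = radius          # Th3 and Th4 are set to the window radius
    out = mask.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            support = int(prev_mask[y0:y1, x0:x1].sum() + prev2_mask[y0:y1, x0:x1].sum())
            if mask[y, x] == 1 and sad[y, x] >= th2 and support < th3:
                out[y, x] = 0   # spurious foreground, reset to background
            elif mask[y, x] == 0 and sad[y, x] <= th2 and support >= th4:
                out[y, x] = 1   # missed foreground, reset to foreground
    return out
```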
  • The above steps involve acquiring the motion intensity information of the current second image data.
  • The motion intensity information of the current second image data is calculated with the SAD algorithm. Specifically, the image data obtained from the spatial-domain 2D noise reduction and the 3D noise reduction output of the previous frame are differenced between frames within a spatial neighborhood window of a certain size and the absolute values are summed, the so-called SAD (Sum of Absolute Differences); the SAD value is used as the motion intensity information of the current image and is linearly mapped to the interval 0-255. With the neighborhood window radius set to 1, the SAD is computed as
  • SAD = Σ_{(i,j) ∈ W} |cur_y_2D(i, j) - pre_y_3D(i, j)|
  • where the sum runs over the radius-1 neighborhood window W around the current pixel, pre_y_3D is the 3D noise reduction result of the Y component of the previous frame, cur_y_2D is the 2D noise reduction result of the Y component of the current frame,
  • and i, j are the horizontal and vertical coordinates of a pixel.
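  • A sketch of the SAD motion-intensity computation as described (radius-1 window, Y component only, absolute inter-frame differences summed and mapped to 0-255). The normalization by the window size is an assumption, since the text only says the values are linearly mapped to that interval.

```python
import numpy as np

def sad_motion_intensity(cur_y_2d: np.ndarray, pre_y_3d: np.ndarray, radius: int = 1) -> np.ndarray:
    """Per-pixel SAD between the current frame's 2D result and the previous
    frame's 3D result over a (2*radius+1)^2 window, mapped to 0-255."""
    diff = np.abs(cur_y_2d.astype(np.float32) - pre_y_3d.astype(np.float32))
    h, w = diff.shape
    padded = np.pad(diff, radius, mode="edge")
    sad = np.zeros_like(diff)
    # Sum the absolute differences over the neighborhood by shifting and adding.
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            sad += padded[dy:dy + h, dx:dx + w]
    sad /= float((2 * radius + 1) ** 2)   # assumed linear mapping back into 0-255
    return np.clip(sad, 0, 255).astype(np.uint8)
```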
  • Step S105 relates to performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient. The result is obtained with the formula
  • cur_3D = α*pre_3D + (1-α)*cur_2D
  • where cur_3D is the 3D noise reduction output for the current second image data,
  • cur_2D is the 2D noise reduction result of the current first image data,
  • pre_3D is the 3D noise reduction result of the previous frame of the current first image data,
  • and α is the temporal filtering intensity coefficient.
  • The current second image data is thus subjected to 3D noise reduction according to the previous frame's 3D noise reduction result and the filtering intensity coefficient: the previous frame's 3D result, the current frame's 2D result and the filtering intensity coefficient are used as the inputs of the IIR filter, and the IIR filter's output is the 3D noise reduction output.
  • If the temporal filter coefficient is large, the pixel is likely in a foreground motion region and more of the previous frame's 3D noise reduction result is carried into the final 3D result; if the coefficient is small, the pixel is likely in a still background region and more of the 2D noise reduction result is carried into the final 3D result.
  • FIG. 4 is another flow chart of a video image 3D noise reduction method according to an embodiment of the present invention; the specific steps are as follows:
  • The image input information is acquired. In step 1, the video image data undergoes 2D noise reduction. In step 2, the result of step 1 is processed with the background-modelling moving-target detection method to obtain a binary image containing a background (still) region and a foreground (motion) region.
  • In step 3, the binary image from step 2 is analyzed: for each pixel initially judged to belong to the background region, the spatial neighborhood information, that is, the distribution of foreground pixels in the up, down, left and right directions, is examined; if the number of foreground pixels in these four directions meets the given threshold, a neighborhood window of a certain size around the pixel is filled with foreground pixels, otherwise no processing is performed.
  • In step 4, the result of step 3 is processed with the morphological dilation and erosion operations, respectively, to remove pseudo background points and pseudo foreground points and obtain a cleaner binary image; a sketch of this clean-up follows.
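  • Step 4's clean-up can be sketched with standard binary morphology (an opening built from erosion and dilation, followed by a closing); scipy's ndimage routines are used for brevity and the structuring-element size is an assumption.

```python
import numpy as np
from scipy import ndimage

def morphological_cleanup(mask: np.ndarray, size: int = 3) -> np.ndarray:
    """Remove pseudo foreground points (opening) and fill pseudo background
    holes (closing) in the binary mask."""
    structure = np.ones((size, size), dtype=bool)
    opened = ndimage.binary_opening(mask.astype(bool), structure=structure)
    closed = ndimage.binary_closing(opened, structure=structure)
    return closed.astype(np.uint8)
```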
  • In step 5, the result of step 1 and the previous frame's 3D noise reduction output are differenced between frames within a spatial neighborhood window of a certain size and the absolute values are summed, the so-called SAD (Sum of Absolute Differences) calculation, and the SAD value is used as the motion intensity information of the current image. In step 6, the result of step 5 is combined with the binary image information of the previous frame to obtain the final binary image of the current scene image. In step 7, based on the binary image from step 6, the temporal filtering intensity coefficient of each pixel of the current image is calculated. In step 8, using the IIR filter, the previous frame's 3D noise reduction result, the result of step 1 and the filtering intensity coefficient from step 7 are taken as the IIR filter's inputs, and the IIR filter's output is the 3D noise reduction result: if the temporal filter coefficient is large, the pixel is likely a foreground motion region and more of the previous frame's 3D result is carried into the final output; if the coefficient is small, the pixel is likely a still background region and more of the result of step 1 is carried into the final output. Finally, the 3D noise reduction result is output. An end-to-end sketch of this flow follows.
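  • Putting the steps together, one frame of the flow might look like the sketch below. The helpers are the illustrative functions sketched earlier in this document (spatial denoiser, background model, mask fill, morphology, SAD, mask refinement, alpha lookup, IIR blend), not functions defined by the patent; `state` would be created once before the first frame, e.g. as {"bg_model": RunningAverageBackground()}.

```python
import numpy as np

def denoise_frame_3d(cur_y: np.ndarray, state: dict, gain_db: float) -> np.ndarray:
    """One iteration of the described 3D noise reduction flow for a luma plane.
    `state` carries what persists between frames: the previous frame's 3D output
    and the binary masks of the two previous frames."""
    cur_2d = dct2d_denoise(cur_y)                      # step 1: spatial 2D denoise
    mask = state["bg_model"].apply(cur_2d)             # step 2: background modelling -> binary mask
    mask = fill_isolated_background(mask)              # step 3: fill isolated background pixels
    mask = morphological_cleanup(mask)                 # step 4: dilation/erosion clean-up
    pre_3d = state.get("pre_3d", cur_2d)
    sad = sad_motion_intensity(cur_2d, pre_3d)         # step 5: SAD motion intensity
    mask = refine_mask_with_sad(mask, sad,             # step 6: re-judge with previous masks
                                state.get("prev_mask", mask),
                                state.get("prev2_mask", mask))
    alpha = alpha_map(mask, gain_db)                   # step 7: per-pixel filter strength
    cur_3d = temporal_iir_3d(cur_2d, pre_3d, alpha)    # step 8: recursive temporal (IIR) blend
    state["pre_3d"] = cur_3d
    state["prev2_mask"] = state.get("prev_mask", mask)
    state["prev_mask"] = mask
    return cur_3d
```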
  • The modules described below may be implemented as a combination of software and/or hardware with predetermined functions. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
  • FIG. 5 is a structural block diagram of a video image 3D noise reduction apparatus according to an embodiment of the present invention.
  • The apparatus includes: an acquisition module 51 configured to collect current first image data from a video image;
  • a first noise reduction module 52 configured to perform spatial-domain 2D noise reduction on the first image data to obtain current second image data;
  • a first obtaining module 53 configured to obtain a binary image from the current second image data,
  • the binary image including a background area and a foreground area; a second obtaining module 54 configured to acquire the temporal filtering intensity coefficient of each pixel in the current second image data; and a second noise reduction module 55 configured to perform 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient.
  • FIG. 6 is a structural block diagram of a first obtaining module according to an embodiment of the present invention.
  • The first obtaining module 53 further includes: a determining unit 531 configured to determine whether the current pixel belongs to the background area or to the foreground area; an acquiring unit 532 configured to, when the current pixel belongs to the background area, count the pixels belonging to the foreground area within a first predetermined area near the current pixel; and a setting unit 533 configured to, when that count is greater than a first threshold, set the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
  • The second predetermined area includes a neighborhood window centered on the current pixel whose radius is a second threshold.
  • FIG. 7 is another structural block diagram of a first obtaining module according to an embodiment of the present invention.
  • The first obtaining module 53 further includes: a first processing unit 534 configured to, when the current pixel belongs to the foreground area, acquire the motion intensity information of the current second image data and, when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is less than a fourth threshold,
  • reset the current pixel to belong to the background area, where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame; and/or a second processing unit 535 configured to, when the current pixel belongs to the background area, acquire the motion intensity information of the current second image data and, when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, reset the current pixel to belong to the foreground area.
  • In the apparatus, the first processing unit 534 or the second processing unit 535 is further configured to calculate the motion intensity information of the current second image data with an SAD algorithm.
  • The second noise reduction module 55 of the apparatus is further configured to obtain the result of the 3D noise reduction of the current second image data with the following formula:
  • cur_3D = α*pre_3D + (1-α)*cur_2D
  • where cur_3D is the 3D noise reduction output for the current second image data,
  • cur_2D is the 2D noise reduction result of the current first image data,
  • pre_3D is the 3D noise reduction result of the previous frame of the current first image data,
  • and α is the temporal filtering intensity coefficient.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

A video image 3D denoising method and device, the method comprising: collecting, from video images, current first image data (S101); performing spatial-domain 2D denoising on the first image data to obtain current second image data (S102); acquiring, from the current second image data, a binary image comprising a background area and a foreground area (S103); acquiring the temporal filtering intensity coefficient of each pixel of the current second image data (S104); and, according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient, performing 3D denoising of the current second image data (S105). The invention thus addresses the difficulty of obtaining relatively accurate motion information with current 3D denoising techniques, as well as the increased device storage overhead caused by temporal FIR filtering, and further improves the denoising effect of video images so that 3D denoising can be applied more widely.

Description

Video image 3D noise reduction method and device
Technical Field
The present invention relates to the field of video image processing, and in particular to a video image 3D noise reduction method and apparatus.
Background Art
Video and images attract users because of their strong visual impact, and the quality of a video image directly affects its usefulness; noise reduction plays a key role in that quality. Good image noise reduction makes it possible to clearly identify moving objects in low-illumination scenes, and within video image noise reduction, 3D noise reduction has become a research hotspot. In general, denoising a video image with only a spatial-domain or only a temporal-domain method tends to produce over-smoothing, loss of detail and inter-frame flicker noise, whereas a 3D noise reduction method combining the temporal and spatial domains can largely avoid these artifacts. Among existing video image 3D noise reduction techniques, one class of methods is based on motion estimation and compensation: macroblock-based motion estimation is performed between two frames to obtain motion information of the image, motion compensation is then applied using that information, and finally temporal filtering is carried out with an FIR (Finite Impulse Response) filter and the noise reduction result is output. The other class is motion-adaptive 3D noise reduction: the motion intensity of image pixels or macroblocks is analyzed between frames and, according to that motion intensity, temporal noise reduction is applied as far as possible when the motion intensity is large and spatial noise reduction as far as possible otherwise.
The main drawback of these methods is that in low-illumination scenes, such as at night, it is difficult for either inter-frame macroblock matching or inter-frame motion intensity calculation to obtain relatively accurate motion information, which easily leads to misclassification of background and foreground pixels, so that background noise is not suppressed and foreground moving objects show severe trailing. At the same time, FIR filtering in the time domain requires storing multiple frames of historical image data, which increases the storage overhead of the device and is detrimental to the real-time performance of the 3D noise reduction method in a video imaging system.
Summary of the Invention
In view of this, the technical problem to be solved by the present invention is to overcome the defects of prior-art 3D noise reduction, namely that in low-illumination scenes it is difficult to obtain relatively accurate motion information, which easily leads to misclassification of background and foreground pixels, so that background noise is not suppressed and foreground moving objects trail severely, and that FIR filtering in the time domain requires storing multiple frames of historical image data, which increases device storage overhead and harms the real-time performance of the 3D noise reduction method in a video imaging system. A video image 3D noise reduction method and apparatus are therefore provided.
To this end, the embodiments of the present invention provide the following technical solutions:
An embodiment of the present invention provides a video image 3D noise reduction method, including: collecting current first image data from a video image; performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; obtaining a binary image from the current second image data, the binary image including a background area and a foreground area; acquiring a temporal filtering intensity coefficient for each pixel of the current second image data; and performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient.
Optionally, obtaining the binary image from the current second image data includes: determining whether the current pixel belongs to the background area or to the foreground area; when the current pixel belongs to the background area, counting the pixels belonging to the foreground area within a first predetermined area near the current pixel; and, when that count is greater than a first threshold, setting the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
Optionally, the second predetermined area includes a neighborhood window centered on the current pixel whose radius is a second threshold.
Optionally, obtaining the binary image from the current second image data includes: when the current pixel belongs to the foreground area, acquiring the motion intensity information of the current second image data and, when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is less than a fourth threshold, resetting the current pixel to belong to the background area, where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame; and/or,
when the current pixel belongs to the background area, acquiring the motion intensity information of the current second image data and, when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, resetting the current pixel to belong to the foreground area, where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame.
Optionally, acquiring the motion intensity information of the current second image data includes calculating it with an SAD algorithm.
Optionally, performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient includes obtaining the result with the following formula:
cur_3D = α*pre_3D + (1-α)*cur_2D
where cur_3D is the 3D noise reduction output for the current second image data, cur_2D is the 2D noise reduction result of the current first image data, pre_3D is the 3D noise reduction result of the previous frame of the current first image data, and α is the temporal filtering intensity coefficient.
An embodiment of the present invention further provides a video image 3D noise reduction device, including: an acquisition module configured to collect current first image data from a video image; a first noise reduction module configured to perform spatial-domain 2D noise reduction on the first image data to obtain current second image data; a first obtaining module configured to obtain a binary image from the current second image data, the binary image including a background area and a foreground area; a second obtaining module configured to acquire a temporal filtering intensity coefficient for each pixel of the current second image data; and a second noise reduction module configured to perform 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of the current first image data and the filtering intensity coefficient.
Optionally, the first obtaining module includes: a determining unit configured to determine whether the current pixel belongs to the background area or to the foreground area; an acquiring unit configured to, when the current pixel belongs to the background area, count the pixels belonging to the foreground area within a first predetermined area near the current pixel; and a setting unit configured to, when that count is greater than a first threshold, set the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
Optionally, the second predetermined area includes a neighborhood window centered on the current pixel whose radius is a second threshold.
Optionally, the first obtaining module includes: a first processing unit configured to, when the current pixel belongs to the foreground area, acquire the motion intensity information of the current second image data and, when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is less than a fourth threshold, reset the current pixel to belong to the background area, where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame; and/or a second processing unit configured to, when the current pixel belongs to the background area, acquire the motion intensity information of the current second image data and, when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, reset the current pixel to belong to the foreground area, where the same coordinate position means the same position in the previous frame of the current first image and in the frame before the previous frame.
Optionally, the first processing unit or the second processing unit is further configured to calculate the motion intensity information of the current second image data with an SAD algorithm.
Optionally, the second noise reduction module is further configured to obtain the result of the 3D noise reduction of the current second image data with the following formula:
cur_3D = α*pre_3D + (1-α)*cur_2D
where cur_3D is the 3D noise reduction output for the current second image data, cur_2D is the 2D noise reduction result of the current first image data, pre_3D is the 3D noise reduction result of the previous frame of the current first image data, and α is the temporal filtering intensity coefficient.
The technical solutions of the embodiments of the present invention have the following advantages:
Embodiments of the present invention provide a video image 3D noise reduction method and apparatus in which current first image data is collected from a video image; spatial-domain 2D noise reduction is performed on the first image data to obtain current second image data; a binary image including a background area and a foreground area is obtained from the current second image data; the temporal filtering intensity coefficient of each pixel in the current second image data is acquired; and 3D noise reduction is performed on the current second image data according to the previous frame's 3D noise reduction result and the filtering intensity coefficient. Existing 3D noise reduction techniques find it difficult, in low-illumination scenes, to obtain relatively accurate motion information, which easily leads to misclassification of background and foreground pixels, unsuppressed background noise and severe trailing of foreground moving objects, and their temporal FIR filtering requires storing multiple frames of historical image data, increasing device storage overhead and harming the real-time performance of 3D noise reduction in a video imaging system; the embodiments of the present invention solve these problems of the prior art.
Brief Description of the Drawings
In order to explain the specific embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the specific embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a video image 3D noise reduction method according to an embodiment of the present invention;
FIG. 2 is a filter intensity coefficient parameter table according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a video image 3D noise reduction method according to an embodiment of the present invention;
FIG. 4 is another flow chart of a video image 3D noise reduction method according to an embodiment of the present invention;
FIG. 5 is a structural block diagram of a video image 3D noise reduction apparatus according to an embodiment of the present invention;
FIG. 6 is a structural block diagram of a first obtaining module according to an embodiment of the present invention;
FIG. 7 is another structural block diagram of a first obtaining module according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
In the description of the present invention, it should be noted that orientation or position terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" are based on the orientations or positions shown in the drawings, are used only to simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be understood as limiting the invention. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance.
It should also be noted that, unless otherwise explicitly specified and limited, the terms "installed", "connected" and "coupled" are to be understood broadly: a connection may, for example, be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal between two components; and it may be wireless or wired. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
Furthermore, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict.
Embodiment 1
本发明实施例提供了一种视频图像3D降噪方法,图1是根据本发明实施例的视频图像3D降噪方法的流程图,如图1所示,该流程包括如下步骤:The embodiment of the present invention provides a video image 3D noise reduction method. FIG. 1 is a flowchart of a video image 3D noise reduction method according to an embodiment of the present invention. As shown in FIG. 1 , the process includes the following steps:
步骤S101,从视频图像中采集的当前第一图像数据;例如获取输入视频图像的一帧图像数据;Step S101, current first image data collected from the video image; for example, acquiring one frame of image data of the input video image;
步骤S102,对该第一图像数据进行基于空域的2D降噪,得到当前第二图像数据。对当前帧的YUV进行基于空域关系的2D降噪处理,其中本实施方法所用到的2D降噪方法可以为2D-DCT降噪方法,该方法属于较实用的2D降噪方法;Step S102, performing spatial 2D noise reduction on the first image data to obtain current second image data. The 2D noise reduction processing based on the spatial domain relationship is performed on the YUV of the current frame, wherein the 2D noise reduction method used in the implementation method may be a 2D-DCT noise reduction method, and the method belongs to a more practical 2D noise reduction method;
步骤S103,根据该当前第二图像数据获取二值图像;其中,该二值图像包括背景区域和前景区域。利用基于背景建模的ViBe(Visual Background Extractor)运动目标检测方法对该2D降噪结果得到的第二图像进行运动目标检测分析,得到包含背景静止区域和背景运动区域的二值图像,其中ViBe属于基于背景建模类运动目标检测方法较好的一种方法。Step S103: Acquire a binary image according to the current second image data; wherein the binary image includes a background area and a foreground area. Using the ViBe (Visual Background Extractor) moving object detection method based on background modeling, the second image obtained by the 2D noise reduction result is subjected to moving target detection and analysis, and a binary image including the background still region and the background motion region is obtained, wherein ViBe belongs to A better method based on background modeling class moving target detection method.
步骤S104,获取该当前第二图像数据中各个像素点在时域滤波上的滤波强度系数。计算时域滤波强度系数。通过实践仿真测试验证,时域滤波强度系数的计算可以跟视频图像场景的多个相关信息进行关联,比如场景 的噪声标准差、场景的数字图像增益等,本实施方法采用基于数字图像增益来计算时域滤波强度系数。比如,在某一图像增益数值下,依据最终生成的二值图像结果,分别对前景区域和背景区域赋予一个固定的0~1之间的数值。以数字图像增益范围从0~60dB为例,具体可参考附图中的图2,其中图2的数据值通过大量实验获取,也可以根据具体设备和应用场景进行图2的参数修改。Step S104: Acquire a filter strength coefficient of each pixel in the current second image data on the time domain filter. Calculate the time domain filter intensity factor. Through the practice simulation test, the calculation of the time domain filter intensity coefficient can be associated with multiple related information of the video image scene, such as a scene. The noise standard deviation, the digital image gain of the scene, etc., the present method uses a digital image gain to calculate the time domain filtering intensity coefficient. For example, under a certain image gain value, a fixed value between 0 and 1 is assigned to the foreground region and the background region respectively according to the result of the binary image finally generated. For example, the digital image gain range is from 0 to 60 dB. For details, refer to FIG. 2 in the drawing. The data value of FIG. 2 is obtained through a large number of experiments, and the parameter modification of FIG. 2 can also be performed according to specific equipment and application scenarios.
步骤S105,根据该当前第一图像数据的上一帧图像数据的3D降噪结果和该滤波强度系数对该当前第二图像数据进行3D降噪处理。利用IIR滤波器,以上一帧的3D降噪结果和当前帧的2D降噪结果、滤波强度系数作为IIR滤波器的输入,IIR滤波器的输出作为3D降噪结果的输出。即如果时域滤波系数较大,则说明该像素点可能为前景运动区域,更多的上一帧3D降噪结果被引用到最终的3D降噪结果中;如果时域滤波系数较小,则说明该像素点可能为背景静止区域,更多的2D降噪的结果被引用的最终的3D降噪结果中。结合附图中的图3,对当前帧的数据进行基于IIR滤波器的时域滤波,其中IIR滤波公式如下:Step S105: Perform 3D noise reduction processing on the current second image data according to the 3D noise reduction result of the previous frame image data of the current first image data and the filter intensity coefficient. Using the IIR filter, the 3D noise reduction result of the previous frame and the 2D noise reduction result of the current frame, the filter intensity coefficient are used as the input of the IIR filter, and the output of the IIR filter is used as the output of the 3D noise reduction result. That is, if the time domain filter coefficient is large, it indicates that the pixel may be a foreground motion region, and more previous 3D noise reduction results are referenced to the final 3D noise reduction result; if the time domain filter coefficient is small, then Note that the pixel may be the background still region, and more 2D noise reduction results are referenced in the final 3D noise reduction result. Referring to FIG. 3 in the drawing, the data of the current frame is subjected to time domain filtering based on IIR filter, wherein the IIR filtering formula is as follows:
cur_3D=α*pre_3D+(1-α)*cur_2D
where cur_3D is the 3D noise reduction output of the current frame, cur_2D is the 2D noise reduction result of the current frame, pre_3D is the 3D noise reduction result of the previous frame, and α is the temporal filtering strength coefficient.
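Applied per pixel, the blend is a one-liner; a minimal sketch, assuming all inputs are arrays of the same shape:

```python
import numpy as np

def iir_temporal_blend(pre_3d: np.ndarray, cur_2d: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """cur_3D = alpha * pre_3D + (1 - alpha) * cur_2D, evaluated element-wise."""
    return alpha * pre_3d.astype(np.float32) + (1.0 - alpha) * cur_2d.astype(np.float32)
```

Each frame's output is fed back as pre_3d for the next frame, which is why only a single previous result needs to be stored.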
Through the above steps, current first image data is captured from the video image; spatial-domain 2D noise reduction is applied to the first image data to obtain current second image data; a binary image comprising a background area and a foreground area is obtained from the current second image data; the temporal filtering strength coefficient of each pixel in the current second image data is obtained; and 3D noise reduction is applied to the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filtering strength coefficient. This addresses the problem that, in low-illumination scenes, accurate motion information is difficult to obtain, which easily leads to misclassification of background and foreground pixels, leaving background noise unsuppressed and causing severe trailing of moving foreground objects. At the same time, temporal FIR filtering requires storing multiple frames of historical image data, which increases the device's storage overhead and harms the real-time performance of 3D noise reduction in a video imaging system, whereas the IIR approach here needs only the previous frame's 3D result.
Step S103 above involves obtaining a binary image according to the current second image data. In an optional embodiment, it is determined whether the current pixel belongs to the background area or to the foreground area. If it belongs to the background area, the number of foreground pixels in a first predetermined area near the current pixel is counted; when that number exceeds a first threshold, the pixels in a second predetermined area near the current pixel are set as foreground pixels. The second predetermined area is a neighborhood window centered on the current pixel with a radius equal to a second threshold. Specifically, if the current pixel is judged to be background, the numbers of foreground pixels in the horizontal and vertical directions are counted within a cross-shaped window of radius 7 centered on the current pixel and extending up, down, left and right. If the count in either the horizontal or the vertical direction exceeds a preset threshold Th1, the current pixel is filled by setting all values in the radius-2 neighborhood window centered on it to foreground. If the current pixel is judged to be foreground, no processing is performed. The threshold Th1 is set to half the window radius.
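A direct sketch of this fill rule, using the radius-7 cross window, the Th1 = half-radius threshold and the radius-2 fill window stated above (mask convention: 1 = foreground, 0 = background); the loop structure is an assumption about how the per-pixel rule would be applied:

```python
import numpy as np

def fill_isolated_background(mask: np.ndarray, cross_radius: int = 7, fill_radius: int = 2) -> np.ndarray:
    """Promote background pixels surrounded by enough foreground in a cross-shaped window."""
    th1 = cross_radius / 2                       # Th1 = half of the window radius
    h, w = mask.shape
    out = mask.copy()
    ys, xs = np.where(mask == 0)                 # only background pixels are examined
    for y, x in zip(ys, xs):
        horiz = mask[y, max(0, x - cross_radius):x + cross_radius + 1]
        vert = mask[max(0, y - cross_radius):y + cross_radius + 1, x]
        if horiz.sum() > th1 or vert.sum() > th1:
            y0, y1 = max(0, y - fill_radius), min(h, y + fill_radius + 1)
            x0, x1 = max(0, x - fill_radius), min(w, x + fill_radius + 1)
            out[y0:y1, x0:x1] = 1                # fill the radius-2 neighbourhood with foreground
    return out
```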
Step S103 involves obtaining a binary image according to the current second image data. In an optional embodiment, when the current pixel belongs to the foreground area, the motion intensity information of the current second image data is obtained; if the motion intensity is greater than or equal to a third threshold and the number of foreground pixels in a third predetermined area near the same coordinate position is smaller than a fourth threshold, the current pixel is reclassified as background. Here the same coordinate position means the same position in the previous frame of the current first image and in the frame before that. In another optional embodiment, when the current pixel belongs to the background area, the motion intensity information of the current second image data is obtained; if the motion intensity is less than or equal to a fifth threshold and the number of foreground pixels in a fourth predetermined area near the same coordinate position is greater than a sixth threshold, the current pixel is reclassified as foreground, with the same coordinate position defined as above. Specifically, the saved binary images of the previous frame and of the frame before it, together with the computed SAD values, are read and analyzed pixel by pixel against the more complete binary image produced by the morphological processing. If the current pixel is foreground, its SAD value is greater than or equal to a preset threshold Th2, and the numbers of foreground pixels in the radius-2 neighborhood windows at the same coordinate position in both the previous frame's and the frame-before-last's binary images are smaller than a preset threshold Th3, the pixel is reclassified as background. If the current pixel is background, its SAD value is less than or equal to Th2, and the corresponding neighborhood counts in both frames are greater than or equal to a preset threshold Th4, the pixel is reclassified as foreground. Th2 is set to 50, and Th3 and Th4 are set to the window radius. The final binary image of the current scene is obtained in this way.
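A sketch of this SAD-guided relabeling with the stated constants (Th2 = 50, radius-2 neighborhood windows, Th3 = Th4 = window radius); the explicit per-pixel loop is an assumption about how the rule would be applied:

```python
import numpy as np

def refine_mask_with_sad(mask: np.ndarray, sad: np.ndarray,
                         prev_mask: np.ndarray, prev2_mask: np.ndarray,
                         th2: float = 50.0, radius: int = 2) -> np.ndarray:
    """Re-label pixels whose SAD motion strength disagrees with the current binary mask."""
    th34 = radius                                   # Th3 and Th4 are set to the window radius
    h, w = mask.shape
    out = mask.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            n_prev = prev_mask[y0:y1, x0:x1].sum()      # foreground count, previous frame
            n_prev2 = prev2_mask[y0:y1, x0:x1].sum()    # foreground count, frame before last
            if mask[y, x] == 1 and sad[y, x] >= th2 and n_prev < th34 and n_prev2 < th34:
                out[y, x] = 0                           # demote to background
            elif mask[y, x] == 0 and sad[y, x] <= th2 and n_prev >= th34 and n_prev2 >= th34:
                out[y, x] = 1                           # promote to foreground
    return out
```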
The above steps involve obtaining the motion intensity information of the current second image data. In an optional embodiment, the motion intensity information of the current second image data is computed with the SAD algorithm. Specifically, the result of the spatial-domain 2D noise reduction and the previous frame's 3D noise reduction output are differenced frame-to-frame within a spatial neighborhood window of a certain size and the absolute values are summed, i.e. the SAD (Sum of Absolute Differences) is computed, and this SAD value serves as the motion intensity information of the current image. The SAD values are linearly mapped to the range 0-255, and the neighborhood window radius is set to 1. The SAD formula is:
SAD(i,j) = Σ_{m=-1}^{1} Σ_{n=-1}^{1} |cur_y_2D(i+m, j+n) - pre_y_3D(i+m, j+n)|
where pre_y_3D is the 3D noise reduction result of the Y component of the previous frame, cur_y_2D is the 2D noise reduction result of the Y component of the current frame, and i and j are the horizontal and vertical coordinates of the pixel.
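A sketch of the SAD computation with window radius 1 follows. The text says the values are linearly mapped to 0-255 but does not spell out the mapping, so the max-based scaling below is an assumption.

```python
import numpy as np

def sad_motion_strength(cur_y_2d: np.ndarray, pre_y_3d: np.ndarray, radius: int = 1) -> np.ndarray:
    """Per-pixel SAD over a (2*radius+1)^2 neighbourhood, then linearly mapped to 0-255."""
    diff = np.abs(cur_y_2d.astype(np.float32) - pre_y_3d.astype(np.float32))
    k = 2 * radius + 1
    pad = np.pad(diff, radius, mode='edge')
    sad = np.zeros_like(diff)
    # Box-sum the absolute differences over the neighbourhood window.
    for dy in range(k):
        for dx in range(k):
            sad += pad[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
    if sad.max() > 0:
        sad = sad * (255.0 / sad.max())          # assumed form of the linear 0-255 mapping
    return sad
```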
Step S105 involves performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filtering strength coefficient. In an optional embodiment, the result of the 3D noise reduction of the current second image data is obtained by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
where cur_3D is the 3D noise reduction output of the current second image data, cur_2D is the 2D noise reduction result of the current first image data, pre_3D is the 3D noise reduction result of the previous frame of image data of the current first image data, and α is the temporal filtering strength coefficient.
Specifically, 3D noise reduction is applied to the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filtering strength coefficient. An IIR filter is used: the previous frame's 3D noise reduction result, the current frame's 2D noise reduction result and the filtering strength coefficient are its inputs, and its output is the 3D noise reduction result. That is, if the temporal filtering coefficient is large, the pixel is likely in a foreground motion region and more of the previous frame's 3D result is carried into the final 3D result; if the coefficient is small, the pixel is likely in a static background region and more of the 2D result is carried into the final 3D result.
FIG. 4 is another flowchart of the video image 3D noise reduction method according to an embodiment of the present invention. The specific steps are as follows:
First, the image input information is acquired. Step 1: apply 2D noise reduction to the video image data. Step 2: process the result of step 1 with the background-modeling-based moving-object detection method to obtain a binary image containing a static background region and a moving foreground region. Step 3: for each pixel of the binary image of step 2 initially judged as background, analyze its spatial neighborhood, i.e. the distribution of foreground pixels in the four directions up, down, left and right; if the number of foreground pixels in these directions meets the given threshold, fill a neighborhood window of a certain size around the pixel with foreground pixels; otherwise do nothing. Step 4: process the result of step 3 with morphological dilation and erosion to remove pseudo background and pseudo foreground points and obtain a more complete binary image. Step 5: difference the result of step 1 against the previous frame's 3D noise reduction output within a spatial neighborhood window of a certain size and sum the absolute values, i.e. compute the SAD (Sum of Absolute Differences), and use the SAD value as the motion intensity information of the current image. Step 6: combine the result of step 5 with the binary images of the previous frame and of the frame before it to further analyze the binary image of step 4 and obtain the final binary image of the current scene. Step 7: based on the binary image of step 6, compute the temporal filtering strength coefficient of each pixel of the current image. Step 8: use an IIR filter with the previous frame's 3D noise reduction result, the result of step 1 and the filtering strength coefficient of step 7 as inputs, and take the filter's output as the 3D noise reduction result; that is, if the temporal filtering coefficient is large, the pixel is likely in a foreground motion region and more of the previous frame's 3D result is carried into the final 3D result, and if the coefficient is small, the pixel is likely in a static background region and more of the result of step 1 is carried into the final result. Finally, the 3D noise reduction result is output.
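For orientation only, a hypothetical per-frame driver for steps 1-8 is sketched below. It assumes the illustrative helper functions from earlier in this section (dct_denoise_2d, VibeLikeDetector, fill_isolated_background, sad_motion_strength, refine_mask_with_sad, temporal_strength, iir_temporal_blend) are available, and morphological_cleanup stands in for the dilation and erosion of step 4; none of this reproduces the patented implementation.

```python
import numpy as np
from scipy import ndimage

def morphological_cleanup(mask: np.ndarray) -> np.ndarray:
    """Step 4 stand-in: one dilation then one erosion to remove pseudo points."""
    m = ndimage.binary_dilation(mask, iterations=1)
    m = ndimage.binary_erosion(m, iterations=1)
    return m.astype(np.uint8)

def denoise_frame(cur_y, pre_y_3d, prev_mask, prev2_mask, detector, gain_db):
    cur_2d = dct_denoise_2d(cur_y)                                   # step 1: spatial 2D denoise
    raw_mask = detector.apply(cur_2d)                                # step 2: ViBe-style detection
    filled = fill_isolated_background(raw_mask)                      # step 3: cross-window fill
    clean = morphological_cleanup(filled)                            # step 4: dilation + erosion
    sad = sad_motion_strength(cur_2d, pre_y_3d)                      # step 5: SAD motion strength
    final_mask = refine_mask_with_sad(clean, sad, prev_mask, prev2_mask)  # step 6: mask refinement
    alpha = temporal_strength(final_mask, gain_db)                   # step 7: per-pixel alpha
    cur_3d = iir_temporal_blend(pre_y_3d, cur_2d, alpha)             # step 8: IIR temporal blend
    return cur_3d, final_mask
```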
Embodiment 2
This embodiment also provides a video image 3D noise reduction device, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware implementing a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
FIG. 5 is a structural block diagram of a video image 3D noise reduction device according to an embodiment of the present invention. As shown in FIG. 5, the device comprises: an acquisition module 51, configured to collect current first image data from the video image; a first noise reduction module 52, configured to perform spatial-domain 2D noise reduction on the first image data to obtain current second image data; a first obtaining module 53, configured to obtain, from the current second image data, a binary image comprising a background area and a foreground area; a second obtaining module 54, configured to obtain the temporal filtering strength coefficient of each pixel in the current second image data; and a second noise reduction module 55, configured to perform 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filtering strength coefficient.
FIG. 6 is a structural block diagram of the first obtaining module according to an embodiment of the present invention. As shown in FIG. 6, the first obtaining module 53 further comprises: a judging unit 531, configured to judge whether the current pixel belongs to the background area or to the foreground area; an obtaining unit 532, configured to obtain, when the current pixel belongs to the background area, the number of foreground pixels in a first predetermined area near the current pixel; and a setting unit 533, configured to set, when that number exceeds a first threshold, the pixels in a second predetermined area near the current pixel as foreground pixels.
Optionally, the second predetermined area is a neighborhood window centered on the current pixel with a radius equal to a second threshold.
FIG. 7 is another structural block diagram of the first obtaining module according to an embodiment of the present invention. As shown in FIG. 7, the first obtaining module 53 further comprises: a first processing unit 534, configured to obtain, when the current pixel belongs to the foreground area, the motion intensity information of the current second image data, and to reclassify the current pixel as background when the motion intensity is greater than or equal to a third threshold and the number of foreground pixels in a third predetermined area near the same coordinate position is smaller than a fourth threshold, the same coordinate position being the same position in the previous frame of the current first image and in the frame before it; and/or a second processing unit 535, configured to obtain, when the current pixel belongs to the background area, the motion intensity information of the current second image data, and to reclassify the current pixel as foreground when the motion intensity is less than or equal to a fifth threshold and the number of foreground pixels in a fourth predetermined area near the same coordinate position is greater than a sixth threshold, with the same coordinate position defined as above.
Optionally, the first processing unit 534 or the second processing unit 535 is further configured to compute the motion intensity information of the current second image data with the SAD algorithm.
Optionally, the second noise reduction module 55 is further configured to obtain the result of the 3D noise reduction of the current second image data by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
where cur_3D is the 3D noise reduction output of the current second image data, cur_2D is the 2D noise reduction result of the current first image data, pre_3D is the 3D noise reduction result of the previous frame of image data of the current first image data, and α is the temporal filtering strength coefficient.
Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementations. Those of ordinary skill in the art can make other changes or modifications in different forms on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here. Obvious changes or modifications derived therefrom remain within the protection scope of the present invention.

Claims (12)

  1. A video image 3D noise reduction method, characterized by comprising:
    collecting current first image data from a video image;
    performing spatial-domain-based 2D noise reduction on the first image data to obtain current second image data;
    obtaining a binary image according to the current second image data, wherein the binary image comprises a background area and a foreground area;
    obtaining a filtering strength coefficient of each pixel in the current second image data for temporal filtering;
    performing 3D noise reduction processing on the current second image data according to a 3D noise reduction result of a previous frame of image data of the current first image data and the filtering strength coefficient.
  2. The method according to claim 1, characterized in that obtaining a binary image according to the current second image data comprises:
    judging whether a current pixel belongs to the background area or to the foreground area;
    when the current pixel belongs to the background area, obtaining the number of pixels belonging to the foreground area within a first predetermined area near the current pixel;
    when the number is greater than a first threshold, setting the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
  3. The method according to claim 2, characterized in that the second predetermined area comprises a neighborhood window centered on the current pixel and having a second threshold as its radius.
  4. The method according to claim 1, characterized in that obtaining a binary image according to the current second image data comprises:
    when a current pixel belongs to the foreground area, obtaining motion intensity information of the current second image data, and when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is smaller than a fourth threshold, re-setting the current pixel as belonging to the background area, wherein the same coordinate position comprises the same position in the previous frame image of the current first image and in the frame image preceding that previous frame; and/or,
    when a current pixel belongs to the background area, obtaining motion intensity information of the current second image data, and when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, re-setting the current pixel as belonging to the foreground area, wherein the same coordinate position comprises the same position in the previous frame image of the current first image and in the frame image preceding that previous frame.
  5. The method according to claim 4, characterized in that obtaining the motion intensity information of the current second image data comprises:
    calculating the motion intensity information of the current second image data by means of an SAD algorithm.
  6. The method according to any one of claims 1 to 5, characterized in that performing 3D noise reduction processing on the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filtering strength coefficient comprises:
    obtaining a result of the 3D noise reduction processing on the current second image data by the following formula:
    cur_3D=α*pre_3D+(1-α)*cur_2D
    wherein cur_3D represents the 3D noise reduction output result of the current second image data, cur_2D represents the 2D noise reduction result of the current first image data, pre_3D represents the 3D noise reduction result of the previous frame of image data of the current first image data, and α represents the temporal filtering strength coefficient.
  7. A video image 3D noise reduction device, characterized by comprising:
    an acquisition module, configured to collect current first image data from a video image;
    a first noise reduction module, configured to perform spatial-domain-based 2D noise reduction on the first image data to obtain current second image data;
    a first obtaining module, configured to obtain a binary image according to the current second image data, wherein the binary image comprises a background area and a foreground area;
    a second obtaining module, configured to obtain a filtering strength coefficient of each pixel in the current second image data for temporal filtering;
    a second noise reduction module, configured to perform 3D noise reduction processing on the current second image data according to a 3D noise reduction result of a previous frame of image data of the current first image data and the filtering strength coefficient.
  8. The device according to claim 7, characterized in that the first obtaining module comprises:
    a judging unit, configured to judge whether a current pixel belongs to the background area or to the foreground area;
    an obtaining unit, configured to obtain, when the current pixel belongs to the background area, the number of pixels belonging to the foreground area within a first predetermined area near the current pixel;
    a setting unit, configured to set, when the number is greater than a first threshold, the pixels within a second predetermined area near the current pixel as pixels belonging to the foreground area.
  9. The device according to claim 8, characterized in that the second predetermined area comprises a neighborhood window centered on the current pixel and having a second threshold as its radius.
  10. The device according to claim 7, characterized in that the first obtaining module comprises:
    a first processing unit, configured to obtain, when a current pixel belongs to the foreground area, motion intensity information of the current second image data, and to re-set the current pixel as belonging to the background area when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground area within a third predetermined area near the same coordinate position is smaller than a fourth threshold, wherein the same coordinate position comprises the same position in the previous frame image of the current first image and in the frame image preceding that previous frame; and/or,
    a second processing unit, configured to obtain, when a current pixel belongs to the background area, motion intensity information of the current second image data, and to re-set the current pixel as belonging to the foreground area when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground area within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, wherein the same coordinate position comprises the same position in the previous frame image of the current first image and in the frame image preceding that previous frame.
  11. The device according to claim 10, characterized in that the first processing unit or the second processing unit is further configured to calculate the motion intensity information of the current second image data by means of an SAD algorithm.
  12. The device according to any one of claims 7 to 11, characterized in that the second noise reduction module is further configured to obtain a result of the 3D noise reduction processing on the current second image data by the following formula:
    cur_3D=α*pre_3D+(1-α)*cur_2D
    wherein cur_3D represents the 3D noise reduction output result of the current second image data, cur_2D represents the 2D noise reduction result of the current first image data, pre_3D represents the 3D noise reduction result of the previous frame of image data of the current first image data, and α represents the temporal filtering strength coefficient.
PCT/CN2017/117164 2017-02-27 2017-12-19 Video image 3d denoising method and device WO2018153150A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710107692.3 2017-02-27
CN201710107692.3A CN107016650B (en) 2017-02-27 2017-02-27 3D noise reduction method and device for video image

Publications (1)

Publication Number Publication Date
WO2018153150A1 true WO2018153150A1 (en) 2018-08-30

Family

ID=59440606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117164 WO2018153150A1 (en) 2017-02-27 2017-12-19 Video image 3d denoising method and device

Country Status (2)

Country Link
CN (1) CN107016650B (en)
WO (1) WO2018153150A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538255A (en) * 2021-05-31 2021-10-22 浙江大华技术股份有限公司 Motion fusion noise reduction method and device and computer readable storage medium
EP3944603A4 (en) * 2019-07-29 2022-06-01 ZTE Corporation Video denoising method and apparatus, and computer-readable storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016650B (en) * 2017-02-27 2020-12-29 苏州科达科技股份有限公司 3D noise reduction method and device for video image
CN111754437B (en) * 2020-06-24 2023-07-14 成都国科微电子有限公司 3D noise reduction method and device based on motion intensity
CN113628138B (en) * 2021-08-06 2023-10-20 北京爱芯科技有限公司 Hardware multiplexing image noise reduction device
CN114331899A (en) * 2021-12-31 2022-04-12 上海宇思微电子有限公司 Image noise reduction method and device
CN115937013B (en) * 2022-10-08 2023-08-11 上海为旌科技有限公司 Luminance denoising method and device based on airspace

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070040943A1 (en) * 2005-08-19 2007-02-22 Kabushiki Kaisha Toshiba Digital noise reduction apparatus and method and video signal processing apparatus
CN103369209A (en) * 2013-07-31 2013-10-23 上海通途半导体科技有限公司 Video noise reduction device and video noise reduction method
CN103679196A (en) * 2013-12-05 2014-03-26 河海大学 Method for automatically classifying people and vehicles in video surveillance
CN104915655A (en) * 2015-06-15 2015-09-16 西安电子科技大学 Multi-path monitor video management method and device
CN107016650A (en) * 2017-02-27 2017-08-04 苏州科达科技股份有限公司 Video image 3 D noise-reduction method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448077B (en) * 2008-12-26 2010-06-23 四川虹微技术有限公司 Self-adapting video image 3D denoise method
US20110149040A1 (en) * 2009-12-17 2011-06-23 Ilya Klebanov Method and system for interlacing 3d video
CN102238316A (en) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 Self-adaptive real-time denoising scheme for 3D digital video image
CN101964863B (en) * 2010-05-07 2012-10-24 镇江唐桥微电子有限公司 Self-adaptive time-space domain video image denoising method
CN103108109B (en) * 2013-01-31 2016-05-11 深圳英飞拓科技股份有限公司 A kind of digital video noise reduction system and method

Also Published As

Publication number Publication date
CN107016650B (en) 2020-12-29
CN107016650A (en) 2017-08-04

Similar Documents

Publication Publication Date Title
WO2018153150A1 (en) Video image 3d denoising method and device
US10521885B2 (en) Image processing device and image processing method
JP6336117B2 (en) Building height calculation method, apparatus and storage medium
US8508605B2 (en) Method and apparatus for image stabilization
CN108541374B (en) Image fusion method and device and terminal equipment
CN107316326B (en) Edge-based disparity map calculation method and device applied to binocular stereo vision
CN112311962B (en) Video denoising method and device and computer readable storage medium
CN107481271B (en) Stereo matching method, system and mobile terminal
KR20090062049A (en) Video compression method and system for enabling the method
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
US10269099B2 (en) Method and apparatus for image processing
TW201742001A (en) Method and device for image noise estimation and image capture apparatus
TW201929521A (en) Method, apparatus, and circuitry of noise reduction
CN110866882A (en) Layered joint bilateral filtering depth map restoration algorithm based on depth confidence
WO2023169281A1 (en) Image registration method and apparatus, storage medium, and electronic device
CN110689565A (en) Depth map determination method and device and electronic equipment
Shen et al. Depth map enhancement method based on joint bilateral filter
KR101907451B1 (en) Filter based high resolution color image restoration and image quality enhancement apparatus and method
CN109001674B (en) WiFi fingerprint information rapid acquisition and positioning method based on continuous video sequence
CN114219845B (en) Residential unit area judgment method and device based on deep learning
Sonawane et al. Image quality assessment techniques: An overview
KR100925794B1 (en) Global contrast enhancement using block based local contrast improvement
Chai et al. Fpga-based ROI encoding for HEVC video bitrate reduction
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
KR20130044793A (en) Video processing apparatus and method for removing rain from video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17898107

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17898107

Country of ref document: EP

Kind code of ref document: A1