CN113709324A - Video noise reduction method, video noise reduction device and video noise reduction terminal - Google Patents


Publication number
CN113709324A
Authority
CN
China
Prior art keywords
frame
video
noise reduction
video frame
result
Prior art date
Legal status
Pending
Application number
CN202010437187.7A
Other languages
Chinese (zh)
Inventor
滕健
刘阳兴
张传昊
林染染
Current Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd filed Critical Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202010437187.7A
Publication of CN113709324A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The present application is applicable to the field of computer technology and provides a video noise reduction method, a video noise reduction device, and a video noise reduction terminal. The method includes: performing pre-noise reduction on each video frame of a video to be processed to obtain a first noise reduction result for each frame; performing Kalman filtering fusion on the first noise reduction result of each non-first video frame and the first noise reduction result of its adjacent frame to obtain a fusion result for that non-first frame; determining a second noise reduction result for the non-first frame based on the fusion result; and determining the noise-reduced video corresponding to the video to be processed from the first noise reduction result of the first video frame and the second noise reduction results of the non-first video frames. By subjecting the video frames to pre-noise reduction and Kalman filtering fusion, this approach reduces the time spent on noise reduction, improves computational efficiency, and improves the quality of the denoised video.

Description

Video noise reduction method, video noise reduction device and video noise reduction terminal
Technical Field
The application belongs to the technical field of computers, and particularly relates to a video denoising method, a video denoising device and a video denoising terminal.
Background
In order to ensure that video shot by a terminal device in a dim-light environment is sufficiently clear and of good quality, real-time noise reduction is generally applied to the captured video. However, conventional video denoising schemes such as temporal-domain video denoising perform motion estimation and compensation on moving objects using methods such as optical flow estimation and block matching, which makes denoising time-consuming and computationally inefficient, and yields poor denoised video quality.
Disclosure of Invention
In view of this, embodiments of the present application provide a video denoising method, a video denoising device, and a video denoising terminal, so as to solve the problems of long denoising time, low computational efficiency, and poor quality of denoised video caused by the conventional video denoising scheme.
A first aspect of an embodiment of the present application provides a video denoising method, including:
carrying out pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
performing Kalman filtering fusion processing on a first noise reduction result of each frame of non-first frame video frame in the video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame;
determining a second noise reduction result of the non-first frame video frame based on the fusion result;
and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
In a possible implementation manner, performing pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame includes:
performing Gaussian noise reduction processing on each frame of video frame to obtain a Gaussian noise reduction result of each frame of video frame;
and carrying out bilateral filtering noise reduction processing on each frame of video frame to obtain a bilateral filtering noise reduction result of each frame of video frame.
In a possible implementation manner, an adjacent frame of a non-first frame video frame is a previous frame video frame adjacent to the non-first frame video frame, and performing kalman filtering fusion processing on a first noise reduction result of the non-first frame video frame and a first noise reduction result of the adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame includes:
calculating Kalman filtering gain of the non-first frame video frame according to the Gaussian noise reduction result of the non-first frame video frame;
determining a system state of the non-leading frame video frame based on adjacent frames of the non-leading frame video frame;
determining a state observation value of the non-first frame video frame based on the system state;
and fusing the Kalman filtering gain, the system state, the state observation value and the bilateral filtering noise reduction result of the non-first frame video frame to obtain a fusion result of the non-first frame video frame.
In a possible implementation manner, the calculating a kalman filtering gain of the non-leading frame video frame according to the gaussian noise reduction result of the non-leading frame video frame includes:
calculating a system error corresponding to the non-first frame video frame based on the Gaussian noise reduction result of the non-first frame video frame and the Gaussian noise reduction result of the adjacent frame of the non-first frame video frame;
calculating the observation error corresponding to the non-first frame video frame based on the observation error corresponding to the adjacent frame of the non-first frame video frame and the Kalman filtering gain of the adjacent frame of the non-first frame video frame;
calculating a state covariance matrix of the non-leading frame video frame based on the state covariance matrix of the adjacent frame of the non-leading frame video frame and the system error;
and calculating the Kalman filtering gain of the non-first frame video frame based on the state covariance matrix of the non-first frame video frame and the observation error corresponding to the non-first frame video frame.
In one possible implementation, the present application further includes: and correcting the state covariance matrix of the non-first frame video frame based on the Kalman filtering gain of the non-first frame video frame and the state covariance matrix of the non-first frame video frame.
In one possible implementation, the present application further includes: and initializing parameters of the first frame of video frame, and determining Kalman filtering gain, observation error and state covariance matrix of the first frame of video frame.
In one possible implementation, determining the second noise reduction result of the non-first frame video frame based on the fusion result includes:
and carrying out single-frame noise reduction processing on the fusion result of the non-first-frame video frame to obtain a second noise reduction result of the non-first-frame video frame.
A second aspect of an embodiment of the present application provides a video noise reduction apparatus, including:
the first processing unit is used for carrying out pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
the second processing unit is used for performing Kalman filtering fusion processing on a first noise reduction result of a non-first frame video frame and a first noise reduction result of an adjacent frame of the non-first frame video frame aiming at each non-first frame video frame in the video to be processed to obtain a fusion result of the non-first frame video frame;
a third processing unit, configured to determine a second denoising result of the non-first frame video frame based on the fusion result;
and the determining unit is used for determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
Further, the first processing unit is specifically configured to:
performing Gaussian noise reduction processing on each frame of video frame to obtain a Gaussian noise reduction result of each frame of video frame;
and carrying out bilateral filtering noise reduction processing on each frame of video frame to obtain a bilateral filtering noise reduction result of each frame of video frame.
Further, the adjacent frame of the non-leading frame video frame is a previous frame video frame adjacent to the non-leading frame video frame, and the second processing unit includes:
the computing unit is used for computing Kalman filtering gain of the non-first frame video frame according to the Gaussian noise reduction result of the non-first frame video frame;
the system state determining unit is used for determining the system state of the non-head frame video frame based on the adjacent frame of the non-head frame video frame;
a state observation value determining unit, configured to determine a state observation value of the non-first-frame video frame based on the system state;
and the fusion unit is used for fusing the Kalman filtering gain, the system state, the state observation value and the bilateral filtering noise reduction result of the non-first frame video frame to obtain a fusion result of the non-first frame video frame.
Further, the computing unit is specifically configured to:
calculating a system error corresponding to the non-first frame video frame based on the Gaussian noise reduction result of the non-first frame video frame and the Gaussian noise reduction result of the adjacent frame of the non-first frame video frame;
calculating the observation error corresponding to the non-first frame video frame based on the observation error corresponding to the adjacent frame of the non-first frame video frame and the Kalman filtering gain of the adjacent frame of the non-first frame video frame;
calculating a state covariance matrix of the non-leading frame video frame based on the state covariance matrix of the adjacent frame of the non-leading frame video frame and the system error;
and calculating the Kalman filtering gain of the non-first frame video frame based on the state covariance matrix of the non-first frame video frame and the observation error corresponding to the non-first frame video frame.
Further, the video noise reduction apparatus further includes:
and the correcting unit is used for correcting the state covariance matrix of the non-first frame video frame based on the Kalman filtering gain of the non-first frame video frame and the state covariance matrix of the non-first frame video frame.
Further, the video noise reduction apparatus further includes:
and the initialization unit is used for carrying out parameter initialization on the first frame of video frame and determining Kalman filtering gain, observation error and state covariance matrix of the first frame of video frame.
Further, the third processing unit is specifically configured to:
and carrying out single-frame noise reduction processing on the fusion result of the non-first-frame video frame to obtain a second noise reduction result of the non-first-frame video frame.
A third aspect of the embodiments of the present application provides another video noise reduction terminal, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program that supports the terminal to execute the above method, where the computer program includes program instructions, and the processor is configured to call the program instructions to perform the following steps:
carrying out pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
performing Kalman filtering fusion processing on a first noise reduction result of each frame of non-first frame video frame in the video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame;
determining a second noise reduction result of the non-first frame video frame based on the fusion result;
and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of:
carrying out pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
performing Kalman filtering fusion processing on a first noise reduction result of each frame of non-first frame video frame in the video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame;
determining a second noise reduction result of the non-first frame video frame based on the fusion result;
and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
The video denoising method, the video denoising device and the video denoising terminal provided by the embodiment of the application have the following beneficial effects:
in the embodiment of the application, the video denoising terminal performs denoising pre-processing on each frame of video frame in a video to be processed to obtain a first denoising result of each frame of video frame; performing Kalman filtering fusion processing on a first noise reduction result of a non-first frame video frame in a video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame aiming at each non-first frame video frame in the video to be processed to obtain a fusion result of the non-first frame video frame; determining a second noise reduction result of the non-first frame video frame based on the fusion result; and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame. When the video denoising terminal performs denoising processing on a video to be processed, motion estimation and compensation are not needed to be performed on an object in the video, so that the denoising operation efficiency is improved, and the operation time is reduced; and moreover, the video frames in the video to be processed are subjected to pre-denoising processing and Kalman filtering fusion processing, so that the quality of the denoised video is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a video denoising method according to an embodiment of the present application;
fig. 2 is a flowchart of an implementation of a video denoising method according to another embodiment of the present application;
FIG. 3 is a reference diagram of a video frame without noise reduction provided by the present application;
FIG. 4 shows the video frame of FIG. 3 after noise reduction, as provided by the present application;
fig. 5 is a schematic diagram of a video noise reduction apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a video denoising terminal according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart of a video denoising method according to an embodiment of the present disclosure. In this embodiment, the main execution body of the video denoising method is a video denoising terminal, and the video denoising terminal includes, but is not limited to, a mobile terminal such as a smart phone, a tablet computer, a Personal Digital Assistant (PDA), and the like, and may also include a terminal such as a desktop computer. The video denoising method as shown in fig. 1 may include:
s101: and carrying out pre-noise reduction processing on each frame of video frame in the video to be processed to obtain a first noise reduction result of each frame of video frame.
The video to be processed refers to the video needing noise reduction processing. For example, the video to be processed may be a video obtained by real-time shooting, such as a monitoring video shot by a real-time monitoring device, a real-time video during a video call, and the like; the video to be processed may also be a video shot in advance. The pre-noise reduction process may include a gaussian noise reduction process and/or a bilateral filtering noise reduction process, or other processing algorithms that reduce noise in the image.
After obtaining the video to be processed, the video denoising terminal may perform pre-noise reduction on each video frame to obtain a first noise reduction result for each frame. For example, when the video to be processed is a pre-recorded video, the terminal may first detect whether the video is in a YUV color space format; if not, it converts the video to YUV format and then performs pre-noise reduction on each frame to obtain the first noise reduction results. Because real-time video is usually already in YUV format, when the video to be processed is shot in real time, pre-noise reduction can be applied directly to each frame. To reduce computation time and workload, frames can be converted and pre-denoised one at a time, rather than converting all frames in advance.
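As a sketch of the format conversion described above, a frame can be mapped from RGB to YUV before pre-noise reduction. The BT.601 analog coefficients used here are an assumption for illustration; the patent only requires a YUV format without naming the exact variant:

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (an assumption; the patent only
# requires "a YUV format" and does not specify the coefficient set).
RGB2YUV = np.array([
    [ 0.299,    0.587,    0.114  ],   # Y  (luma)
    [-0.14713, -0.28886,  0.436  ],   # U  (blue-difference chroma)
    [ 0.615,   -0.51499, -0.10001],   # V  (red-difference chroma)
])

def rgb_to_yuv(frame_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB frame (floats in [0, 1]) to YUV."""
    return frame_rgb @ RGB2YUV.T

# A pure-white frame maps to Y = 1 with near-zero chroma.
white = np.ones((2, 2, 3))
yuv = rgb_to_yuv(white)
```

After conversion, pre-noise reduction would typically be applied to the Y (luma) channel, where noise is most visible.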
Illustratively, taking the pre-noise reduction processing including gaussian noise reduction processing and bilateral filtering noise reduction processing as an example, the above S101 may include S1011 to S1012, specifically as follows:
s1011: and carrying out Gaussian noise reduction processing on each frame of video frame to obtain a Gaussian noise reduction result of each frame of video frame.
The video noise reduction terminal performs spatial-domain Gaussian noise reduction on each video frame to obtain a Gaussian noise reduction result for each frame. The spatial domain, also called image space, refers to processing defined directly over the pixels of an image: operating on the gray level of, or filtering, each pixel as a function of the distances between pixels in image space is called spatial-domain processing.
Exemplarily, the video noise reduction terminal determines a Gaussian template (the Gaussian filtering kernel size) for each video frame, scans each pixel in the frame, computes the weighted average of the gray values of the pixels in the neighborhood of that pixel using the Gaussian template, and takes the weighted average as the pixel's new value. Applying this processing to every pixel of every frame yields the Gaussian noise reduction result for each frame. Alternatively, each frame may be input into a preset Gaussian noise reduction model, which outputs the Gaussian noise reduction result for that frame. In this application, F_i denotes each input video frame to be processed, and G_i denotes the Gaussian noise reduction result output after Gaussian noise reduction of that frame.
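The template-weighted averaging described above can be sketched with a fixed 3x3 Gaussian template; the kernel size, weights, and reflect-padding at the borders are illustrative choices, not values fixed by the patent:

```python
import numpy as np

# 3x3 Gaussian template, normalized so the weights sum to 1
# (an illustrative kernel; the patent leaves the template size open).
KERNEL = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]]) / 16.0

def gaussian_denoise(frame: np.ndarray) -> np.ndarray:
    """Replace each pixel with the template-weighted average of the gray
    values in its 3x3 neighborhood (edges handled by reflected padding)."""
    padded = np.pad(frame, 1, mode="reflect")
    out = np.zeros_like(frame, dtype=float)
    h, w = frame.shape
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

frame_i = np.full((4, 4), 0.5)        # a flat gray frame F_i
g_i = gaussian_denoise(frame_i)       # its Gaussian result G_i
```

Because the kernel weights sum to 1, a constant region passes through unchanged; only high-frequency variation (noise, but also fine detail) is attenuated.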
S1012: and carrying out bilateral filtering noise reduction processing on each frame of video frame to obtain a bilateral filtering noise reduction result of each frame of video frame.
The video noise reduction terminal performs spatial-domain bilateral filtering on each video frame to obtain a bilateral filtering noise reduction result for each frame. Specifically, each frame may be input into a preset bilateral filtering algorithm to obtain the corresponding bilateral filtering noise reduction result. In this application, F_i denotes each input video frame to be processed, and B_i denotes the bilateral filtering noise reduction result output after bilateral filtering of that frame.
In this application, each video frame of the video to be processed is subjected to both Gaussian noise reduction and bilateral filtering. Gaussian noise reduction removes Gaussian noise in the spatial domain but blurs details and edges, whereas bilateral filtering preserves the detail texture of each frame well. The two noise reduction results are complementary, which further improves the quality of the denoised video.
S102: and performing Kalman filtering fusion processing on a first noise reduction result of the non-first frame video frame and a first noise reduction result of an adjacent frame of the non-first frame video frame aiming at each non-first frame video frame in the video to be processed to obtain a fusion result of the non-first frame video frame.
In the embodiment of the present application, every video frame from the second frame onward in the video to be processed is a non-first video frame. The adjacent frame of a non-first video frame is the previous frame adjacent to it; for example, the 1st frame is the adjacent frame of the 2nd frame, the 2nd frame is the adjacent frame of the 3rd frame, and so on. The adjacent frame of a non-first video frame may also be the next frame adjacent to it; for example, the 4th frame is the adjacent frame of the 3rd frame, the 3rd frame is the adjacent frame of the 2nd frame, and so on.
Exemplarily, when the adjacent frame of a non-first video frame is the next adjacent frame, S102 may consist of the video denoising terminal performing Kalman filtering fusion on the first noise reduction result of the 4th frame and the first noise reduction result of the 3rd frame to obtain the fusion result of the 4th frame.
For example, when the adjacent frame of the non-top frame video frame is a previous frame video frame adjacent to the non-top frame video frame, the above S102 may include S1021 to S1024, specifically as follows:
s1021: and calculating Kalman filtering gain of the non-first frame video frame according to the Gaussian noise reduction result of the non-first frame video frame.
And the video noise reduction terminal calculates the Kalman filtering gain of the non-first frame video frame according to the Gaussian noise reduction result of the non-first frame video frame. Specifically, the video denoising terminal may calculate the kalman filtering gain of the non-first frame video frame according to the gaussian denoising result of the non-first frame video frame, the gaussian denoising result of the adjacent frame of the non-first frame video frame, the observation error corresponding to the adjacent frame of the non-first frame video frame, the kalman filtering gain of the adjacent frame of the non-first frame video frame, the state covariance matrix of the adjacent frame of the non-first frame video frame, and the system error.
For example, suppose the Kalman filtering gain of the 3rd video frame in the video to be processed needs to be calculated. The video noise reduction terminal may calculate the system error of the 3rd frame from the Gaussian noise reduction results of the 3rd and 2nd frames; calculate the observation error of the 3rd frame from the observation error and the Kalman filtering gain of the 2nd frame; calculate the state covariance matrix of the 3rd frame based on the state covariance matrix of the 2nd frame and the system error; and calculate the Kalman filtering gain of the 3rd frame based on the state covariance matrix and the observation error of the 3rd frame.
It should be noted that when the Kalman filtering gain of the 2nd video frame needs to be calculated, the observation error, Kalman filtering gain, and state covariance matrix of the 1st frame that are needed are determined by the video denoising terminal when the parameters of the 1st frame are initialized in advance.
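Putting the steps above together for the 3rd frame with scalar (per-pixel) quantities: the system error, observation error, and predicted covariance follow the formulas detailed below, while the final gain here uses the standard Kalman form K = P⁻ / (P⁻ + R). That gain expression, like all of the numeric values, is an illustrative assumption on our part; the patent only states that the gain is computed from the state covariance matrix and the observation error:

```python
# Illustrative scalar (per-pixel) values; q and the frame-2 quantities are
# assumptions, not values fixed by the patent.
q = 0.01           # preset scale factor
G2, G3 = 0.40, 0.42    # Gaussian results of frames 2 and 3 at one pixel
R2 = 1.0           # observation error of frame 2
K2 = 0.5           # Kalman filtering gain of frame 2
P2 = 0.2           # state covariance of frame 2

Q3 = q * (G2 - G3) ** 2        # system error of frame 3
R3 = (1.0 - K2) * R2           # observation error of frame 3
P3_minus = P2 + Q3             # predicted covariance (A is all-ones)
K3 = P3_minus / (P3_minus + R3)  # standard Kalman gain form (assumption)
```

The recursion only ever needs the previous frame's quantities, which is why no motion estimation or multi-frame buffering is required.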
Illustratively, S1021 may include S10211-S10214, as follows:
s10211: and calculating a system error corresponding to the non-first frame video frame based on the Gaussian noise reduction result of the non-first frame video frame and the Gaussian noise reduction result of the adjacent frame of the non-first frame video frame.
Specifically, the system error corresponding to the non-first frame video frame can be calculated by the following formula:
Q_i = q · (G_{i-1} - G_i)^2
where Q_i denotes the system error of the non-first video frame in the video to be processed; q is a preset scale factor representing the error difference between two adjacent video frames, generally set to a small value such as 0.01 or 0.005 (exemplary only, not limiting); G_i denotes the Gaussian noise reduction result of the non-first frame; and G_{i-1} denotes the Gaussian noise reduction result of the adjacent previous video frame.
For example, when the system error of the 3rd video frame in the video to be processed needs to be calculated, substituting the Gaussian noise reduction result G_3 of the 3rd frame and the Gaussian noise reduction result G_2 of the 2nd frame into the formula gives: Q_3 = q · (G_2 - G_3)^2.
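The system error is a per-pixel quantity; a minimal numpy sketch, using q = 0.01, one of the exemplary values mentioned above:

```python
import numpy as np

def system_error(g_prev: np.ndarray, g_cur: np.ndarray,
                 q: float = 0.01) -> np.ndarray:
    """Q_i = q * (G_{i-1} - G_i)^2, computed element-wise per pixel."""
    return q * (g_prev - g_cur) ** 2

g2 = np.zeros((2, 2))         # Gaussian result of frame 2
g3 = np.ones((2, 2))          # Gaussian result of frame 3
q3 = system_error(g2, g3)     # 0.01 at every pixel
```

Pixels that changed a lot between the two Gaussian results get a large system error, which (via the covariance update) raises the gain there and weakens temporal smoothing on moving content.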
S10212: and calculating the observation error corresponding to the non-first frame video frame based on the observation error corresponding to the adjacent frame of the non-first frame video frame and the Kalman filtering gain of the adjacent frame of the non-first frame video frame.
Specifically, the observation error corresponding to the non-first frame video frame can be calculated by the following formula:
R_i = (I - K_{i-1}) · R_{i-1}
where R_i denotes the observation error of the non-first video frame in the video to be processed; R_{i-1} denotes the observation error of the adjacent previous video frame; K_{i-1} denotes the Kalman filtering gain of the adjacent previous video frame; and I is a constant.
For example, when the observation error of the 3rd frame video frame in the video to be processed needs to be calculated, the observation error of the 2nd frame video frame and the Kalman filtering gain of the 2nd frame video frame are substituted into the above formula to obtain:

R_3 = (I - K_2) ∘ R_2
when the observation error of the 2 nd frame video frame in the video to be processed is calculated, the observation error of the 1 st frame video frame and the Kalman filtering gain of the 1 st frame video frame which are needed to be used can be determined by the video noise reduction terminal when the parameters of the 1 st frame video frame are initialized in advance.
S10213: and calculating the state covariance matrix of the non-head frame video frame based on the state covariance matrix of the adjacent frame of the non-head frame video frame and the system error.
Specifically, the state covariance matrix corresponding to the non-first frame video frame can be calculated by the following formulas:

P_i^- = A ∘ P_{i-1} ∘ A^T + Q_i

P_i^- = P_{i-1} + Q_i

where P_i^- represents the state covariance matrix of the non-first frame video frame; P_{i-1} represents the state covariance matrix of the previous frame video frame adjacent to the non-first frame video frame; A represents the system state transition matrix, and throughout the video frames of the video to be processed, A is a matrix whose elements are all 1, so the first formula can be abbreviated as the second formula; A^T represents the transpose of the system state transition matrix; Q_i represents the system error of the non-first frame video frame in the video to be processed; and ∘ denotes element-by-element multiplication.
For example, when the state covariance matrix of the 3rd frame video frame in the video to be processed needs to be calculated, the state covariance matrix of the 2nd frame video frame and the system error of the 3rd frame video frame are substituted into the above formula to obtain:

P_3^- = P_2 + Q_3
when the state covariance matrix of the 2 nd frame video frame in the video to be processed is calculated, the state covariance matrix of the 1 st frame video frame required to be used can be determined by the video noise reduction terminal when the parameter initialization is performed on the 1 st frame video frame in advance.
S10214: and calculating the Kalman filtering gain of the non-first frame video frame based on the state covariance matrix of the non-first frame video frame and the observation error corresponding to the non-first frame video frame.
Specifically, the Kalman filtering gain corresponding to the non-first frame video frame can be calculated by the following formulas:

K_i = P_i^- ∘ H^T / (H ∘ P_i^- ∘ H^T + R_i)

K_i = P_i^- / (P_i^- + R_i)

where K_i represents the Kalman filtering gain of the non-first frame video frame (the video frame currently being processed); P_i^- represents the state covariance matrix of the non-first frame video frame; R_i represents the observation error of the non-first frame video frame in the video to be processed; ∘ denotes element-by-element multiplication; H represents the observation matrix; H^T represents the transpose of the observation matrix; H is a matrix whose elements are all 1, so the first formula can be abbreviated as the second formula (the division is likewise performed element by element).
For example, when the Kalman filtering gain of the 3rd frame video frame in the video to be processed needs to be calculated, the state covariance matrix of the 3rd frame video frame and the observation error of the 3rd frame video frame are substituted into the above formula to obtain:

K_3 = P_3^- / (P_3^- + R_3)
according to the method and the device, the Kalman filtering gain of the non-first frame video frame is accurately calculated based on the Gaussian noise reduction result of the non-first frame video frame, the Gaussian noise reduction result of the adjacent frame of the non-first frame video frame, the observation error corresponding to the adjacent frame of the non-first frame video frame and the like, so that the relation between the non-first frame video frame and the adjacent frame of the non-first frame video frame is established, the follow-up video noise reduction terminal can conveniently perform Kalman filtering fusion processing on the adjacent frame of the non-first frame video frame and the adjacent frame of the non-first frame video frame, and the video quality after noise reduction is improved.
S1022: and determining the system state of the non-head frame video frame based on the adjacent frame of the non-head frame video frame.
The system state of the non-first frame video frame may be determined based on the fusion result corresponding to the adjacent frame of the non-first frame video frame. Specifically, the system state of the non-first frame video frame can be determined by the following formulas:

X_i^- = A ∘ X_{i-1}

X_i^- = X_{i-1}

where X_i^- represents the system state of the non-first frame video frame; X_{i-1} represents the fusion result of the previous frame video frame adjacent to the non-first frame video frame; ∘ denotes element-by-element multiplication; and A is a matrix whose elements are all 1, so the first formula can be abbreviated as the second formula.
For example, when the system state of the 3rd frame video frame in the video to be processed needs to be calculated, the fusion result of the 2nd frame video frame is used as the input for the 3rd frame video frame, that is, as the system state of the 3rd frame video frame. When the system state of the 2nd frame video frame in the video to be processed is calculated, the required fusion result of the 1st frame video frame is determined when the video noise reduction terminal initializes the 1st frame video frame in advance; the fusion result of the 1st frame video frame can also be preset.
S1023: determining a state observation for the non-leading frame of video frames based on the system state.
Specifically, the state observation value of the non-first frame video frame can be determined by the following formula:

X_i^(w,h) = Z_i^(w,h)

where X_i^(w,h) represents the system state of the non-first frame video frame; (w,h) denotes the width and height of the non-first frame video frame (that is, the equality holds at every pixel position); and Z_i^(w,h) represents the state observation value of the non-first frame video frame.
For example, when it is necessary to calculate the state observation value of the 3 rd frame of video frame in the video to be processed, it can be understood that the system state of the 3 rd frame of video frame is taken as the state observation value of the current 3 rd frame of video frame.
S1024: and fusing the Kalman filtering gain, the system state, the state observation value and the bilateral filtering noise reduction result of the non-first frame video frame to obtain a fusion result of the non-first frame video frame.
Specifically, the fusion result can be determined by the following formula:

X_i = (I - K_i) ∘ X_i^- + K_i ∘ B_i

where X_i represents the fusion result corresponding to the non-first frame video frame; X_i^- represents the system state of the non-first frame video frame (equal to the state observation value Z_i); K_i represents the Kalman filtering gain of the non-first frame video frame; I is a constant; Z_i represents the state observation value of the non-first frame video frame; B_i represents the bilateral filtering noise reduction result of the non-first frame video frame; and ∘ denotes element-by-element multiplication.
For example, when the fusion result of the 3 rd frame of video frame in the video to be processed needs to be calculated, the kalman filter gain of the 3 rd frame of video frame, the system state of the 3 rd frame of video frame, the state observation value of the 3 rd frame of video frame, and the bilateral filter noise reduction result of the 3 rd frame of video frame may be substituted into the above formula, and the fusion result of the 3 rd frame of video frame may be calculated.
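A per-pixel sketch of the fusion step, assuming a Kalman-style update in which the system state (equal to the state observation value) is blended with the bilateral filtering result by the gain; the function name and sample values are hypothetical:

```python
def fuse(x_pred: float, k_i: float, b_i: float) -> float:
    """Per-pixel fusion X_i = (I - K_i) * X_i^- + K_i * B_i, where the state
    observation Z_i equals the system state X_i^- (so x_pred stands for both)."""
    return (1.0 - k_i) * x_pred + k_i * b_i

# Fusion of frame 3 at one pixel: previous fusion result 100 (used as X_3^-),
# gain 0.5, bilateral filtering result 110:
x3 = fuse(x_pred=100.0, k_i=0.5, b_i=110.0)  # 105.0
```

With the gain between 0 and 1, the fused value always lies between the temporal estimate and the spatial (bilateral) estimate, which is what suppresses frame-to-frame flicker.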
It can be understood that, by performing S1021-S1024 to obtain the fusion result of the non-first-frame video frame, when the fusion result of the next-frame video frame adjacent to the non-first-frame video frame is obtained, the fusion result of the non-first-frame video frame can be used as its input parameter, and the above steps are repeated in this way until each video frame in the video to be processed is processed. For example, when the video denoising terminal performs parameter initialization on the 1 st frame of video frame in the video to be processed in advance, the fusion result of the 1 st frame of video frame is determined, and the fusion result of the 1 st frame of video frame is used as one of the parameters for calculating the fusion result of the 2 nd frame of video frame. And after the fusion result of the 2 nd frame of video frame is obtained, taking the fusion result of the 2 nd frame of video frame as one of the parameters for calculating the fusion result of the 3 rd frame of video frame, and processing all video frames in the video to be processed by analogy.
When Kalman filtering fusion is performed, the information of the non-first frame video frame that needs to be used is calculated based on the information of its adjacent frame. For example, when Kalman filtering fusion is performed on a non-first frame video frame, what is actually fused includes the Gaussian noise reduction result of the non-first frame video frame, the Gaussian noise reduction result of its adjacent frame, the bilateral filtering noise reduction result of the non-first frame video frame, the state observation value of the non-first frame video frame, and so on. This realizes fusion noise reduction of the non-first frame video frame at both the spatial domain level (Gaussian noise reduction processing and bilateral filtering noise reduction processing) and the time domain level (the non-first frame video frame and its adjacent frame). Because the spatial domain and the time domain are associated, video frame skipping does not occur during noise reduction, and the visual effect is very good. For moving objects in the video, motion estimation and compensation are not needed, which improves noise reduction operation efficiency, reduces operation time, and further improves the quality of the denoised video.
S103: and determining a second noise reduction result of the non-first frame video frame based on the fusion result.
The terminal performs noise reduction processing on the fusion result of the non-first frame video frame to obtain a second noise reduction result of the non-first frame video frame. Specifically, when the noise reduction processing is single-frame noise reduction processing, step S103 specifically includes: performing single-frame noise reduction processing on the fusion result of the non-first frame video frame to obtain the second noise reduction result of the non-first frame video frame.
The single-frame noise reduction processing may be Gaussian noise reduction processing, bilateral filtering noise reduction processing, median filtering processing, mean filtering processing, or the like. That is, the video denoising terminal can apply any one of the above single-frame denoising processes to the fusion result corresponding to the non-first frame video frame to obtain the second denoising result of the non-first frame video frame.
Specifically, the video denoising terminal performs gaussian denoising processing in a spatial domain on the fusion result of the non-first frame video frame, and the specific processing process refers to the description in S1011, which is not repeated here. According to the method and the device, after Kalman filtering fusion is carried out, single-frame denoising processing is carried out on the fusion result again, a small amount of residual noise in the fusion result can be further removed, the denoising effect of the output second denoising result is better, and the denoising video quality generated finally is remarkably improved.
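Mean filtering is among the listed single-frame options. As a minimal stand-in for the post-fusion single-frame denoising step, a 3x3 mean filter over a grayscale frame could look like the sketch below (pure Python, with border pixels handled by clamping; a practical implementation would use an optimized library routine):

```python
def mean_filter_3x3(frame):
    """3x3 mean filtering of a grayscale frame (list of rows), one of the
    listed single-frame noise reduction options; borders are clamped."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp row index
                    xx = min(max(x + dx, 0), w - 1)  # clamp column index
                    acc += frame[yy][xx]
            out[y][x] = acc / 9.0
    return out

noisy = [
    [10.0, 10.0, 10.0],
    [10.0, 19.0, 10.0],  # one residual noisy pixel after fusion
    [10.0, 10.0, 10.0],
]
smoothed = mean_filter_3x3(noisy)  # smoothed[1][1] == 11.0
```

This illustrates why a light single-frame pass after fusion removes the small amount of residual noise: the isolated spike is averaged toward its neighborhood.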
S104: and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
The video denoising terminal splices the first noise reduction result corresponding to the first frame video frame of the video to be processed and the second noise reduction result corresponding to each non-first frame video frame, according to the processing order of each video frame in the video to be processed, to generate the noise-reduced video corresponding to the video to be processed. Since the first frame video frame of the video to be processed has undergone pre-noise reduction processing, the Gaussian noise reduction result and/or the bilateral filtering noise reduction result corresponding to the first frame video frame can be obtained as its first noise reduction result.
In the embodiment of the application, the video denoising terminal performs denoising pre-processing on each frame of video frame in a video to be processed to obtain a first denoising result of each frame of video frame; performing Kalman filtering fusion processing on a first noise reduction result of a non-first frame video frame in a video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame aiming at each non-first frame video frame in the video to be processed to obtain a fusion result of the non-first frame video frame; determining a second noise reduction result of the non-first frame video frame based on the fusion result; and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame. When the video denoising terminal performs denoising processing on a video to be processed, motion estimation and compensation are not needed to be performed on an object in the video, so that the denoising operation efficiency is improved, and the operation time is reduced; and moreover, the video frames in the video to be processed are subjected to pre-denoising processing, Kalman filtering fusion processing and single-frame denoising processing, so that the quality of the denoised video is further improved. 
Furthermore, the method and the device realize the fusion noise reduction of the video frames on the level of a space domain (Gaussian noise reduction processing and bilateral filtering noise reduction processing) and the level of a time domain (non-first frame video frame and adjacent frames of the non-first frame video frame); because the spatial domain and the time domain are associated, the situation of video frame skipping can not occur during noise reduction, and the visual effect is very good.
Referring to fig. 2, fig. 2 is a schematic flow chart of a video denoising method according to another embodiment of the present application. In this embodiment, the main execution body of the video denoising method is a video denoising terminal, and the video denoising terminal includes, but is not limited to, a mobile terminal such as a smart phone, a tablet computer, a personal digital assistant, and the like, and may also include a terminal such as a desktop computer.
The difference between the present embodiment and the previous embodiment is S202 and S204, and S201, S203, S205, and S206 in the present embodiment are completely the same as S101, S102, S103, and S104 in the previous embodiment, and specific reference is made to the description related to S101, S102, S103, and S104 in the previous embodiment, which is not repeated herein.
For example, in order to facilitate Kalman filtering fusion processing on the 2nd frame video frame in the video to be processed, S202 may be further included after S201 and before S203, specifically as follows:
S202: initializing parameters of the first frame video frame, and determining the Kalman filtering gain, observation error, and state covariance matrix of the first frame video frame.
The video denoising terminal initializes parameters of a first frame video frame in a video to be processed, and can be understood as presetting parameters such as Kalman filtering gain, observation errors, state covariance matrix and the like of the first frame video frame. Specifically, the kernel size of gaussian filtering, the kernel size of bilateral filtering, kalman filtering gain, the initial system state, the initial state covariance matrix, the state observation value, the system error, the observation error, the system transition matrix, the observation transition matrix, the width and height of the video frame, the fusion result, and the like of the first frame of video frame may be preset.
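The initialization described here might be sketched as a parameter table. Every value below is an illustrative assumption, not a value prescribed by this application:

```python
def init_first_frame_params():
    """Hypothetical per-pixel parameter initialization for the 1st frame;
    all defaults are placeholders for the preset values described above."""
    return {
        "kalman_gain": 0.5,        # K_1
        "observation_error": 1.0,  # R_1
        "state_covariance": 1.0,   # P_1
        "scale_factor_q": 0.01,    # preset scale factor for the system error
        "gaussian_kernel": 5,      # kernel size of Gaussian filtering
        "bilateral_kernel": 5,     # kernel size of bilateral filtering
    }

params = init_first_frame_params()
```

In practice these scalars would be broadcast to per-pixel arrays of the frame's width and height, and the fusion result of the 1st frame would typically be its pre-noise-reduction output.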
For example, to obtain a better fusion result, S204 may be further included after S203, specifically as follows:
s204: and correcting the state covariance matrix of the non-first frame video frame based on the Kalman filtering gain of the non-first frame video frame and the state covariance matrix of the non-first frame video frame.
The video noise reduction terminal acquires the Kalman filtering gain of the non-first frame video frame and the state covariance matrix of the non-first frame video frame, and corrects (updates) the state covariance matrix of the non-first frame video frame accordingly. Specifically, the corrected state covariance matrix can be determined by the following formulas:

P_i = (I - K_i ∘ H) ∘ P_i^-

P_i = (I - K_i) ∘ P_i^-

where P_i represents the corrected state covariance matrix of the non-first frame video frame; P_i^- represents the state covariance matrix of the non-first frame video frame before correction; I is a constant; H is a matrix whose elements are all 1, so the first formula can be abbreviated as the second formula; K_i represents the Kalman filtering gain of the non-first frame video frame; and ∘ denotes element-by-element multiplication.
For example, when the state covariance matrix of the 3rd frame video frame in the video to be processed needs to be corrected, the state covariance matrix of the 3rd frame video frame and the Kalman filtering gain of the 3rd frame video frame are substituted into the above formula to obtain:

P_3 = (I - K_3) ∘ P_3^-
based on the Kalman filtering gain of each frame of non-first frame video frame in the video to be processed and the state covariance matrix of the non-first frame video frame, the state covariance matrix of the non-first frame video frame is corrected, so that Kalman filtering fusion processing is conveniently performed on the next frame video frame adjacent to the non-first frame video frame, the fusion effect of the non-first frame video frame is better, and the quality of the video subjected to noise reduction is improved. For example, the state covariance matrix of the 3 rd frame video frame is modified, so that the 4 th frame video frame is convenient for Kalman filtering fusion processing.
For ease of understanding, the above scheme will now be described by taking an application scenario as an example. When the video to be processed is a real-time video that is being shot, the video noise reduction terminal acquires the 1st frame video frame and performs pre-noise reduction processing on it to obtain the Gaussian noise reduction result and the bilateral filtering noise reduction result corresponding to the 1st frame video frame. Because this video frame is the first frame video frame, the video noise reduction terminal performs parameter initialization on it, determining the kernel size of Gaussian filtering, the kernel size of bilateral filtering, the Kalman filtering gain, the initial system state, the initial state covariance matrix, the state observation value, the system error, the observation error, the system transition matrix, the observation transition matrix, the width and height of the video frame, the fusion result, and so on, of the 1st frame video frame.
The terminal then acquires the 2nd frame video frame and performs pre-noise reduction processing on it to obtain the Gaussian noise reduction result and the bilateral filtering noise reduction result corresponding to the 2nd frame video frame. It calculates the system error of the 2nd frame video frame from the Gaussian noise reduction results of the 2nd and 1st frame video frames; calculates the observation error of the 2nd frame video frame based on the observation error and the Kalman filtering gain of the 1st frame video frame; calculates the state covariance matrix of the 2nd frame video frame from the state covariance matrix of the 1st frame video frame and the system error of the 2nd frame video frame; calculates the Kalman filtering gain of the 2nd frame video frame based on the state covariance matrix and the observation error of the 2nd frame video frame; determines the system state of the 2nd frame video frame based on the fusion result of the 1st frame video frame; determines the state observation value of the 2nd frame video frame based on its system state; and fuses the Kalman filtering gain, system state, state observation value, and bilateral filtering noise reduction result of the 2nd frame video frame to obtain the fusion result of the 2nd frame video frame. It then corrects the state covariance matrix of the 2nd frame video frame based on the Kalman filtering gain and the state covariance matrix of the 2nd frame video frame.
Single-frame noise reduction processing is then performed on the fusion result of the 2nd frame video frame to obtain the second noise reduction result of the 2nd frame video frame.
The terminal continues to acquire the 3rd frame video frame; the processing of the 3rd frame video frame is the same as that of the 2nd frame video frame, and so on, until every video frame in the video to be processed has been processed. Then, according to the processing order of each video frame in the video to be processed, the first noise reduction result corresponding to the first frame video frame and the second noise reduction result corresponding to each non-first frame video frame are spliced to generate the noise-reduced video corresponding to the video to be processed.
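Putting the scenario together, the per-frame recursion can be sketched per pixel as follows. Here gauss and bilateral stand in for the pre-noise-reduction results of each frame at one pixel, the scalar form mirrors the element-wise operations, and all initial values are illustrative assumptions rather than prescribed values:

```python
def denoise_sequence(gauss, bilateral, q=0.01, r0=1.0, p0=1.0, k0=0.5):
    """gauss[i] / bilateral[i]: per-pixel pre-noise-reduction results of
    frame i. Returns the fusion result of every frame; frame 0 uses its
    initialized parameters and its Gaussian result as the fusion result."""
    fused = [gauss[0]]          # initialization of the 1st frame
    r, p, k = r0, p0, k0
    for i in range(1, len(gauss)):
        q_i = q * (gauss[i - 1] - gauss[i]) ** 2     # system error
        r = (1.0 - k) * r                            # observation error
        p_pred = p + q_i                             # predicted covariance
        k = p_pred / (p_pred + r)                    # Kalman gain
        x_pred = fused[i - 1]                        # system state = prev fusion
        x_i = (1.0 - k) * x_pred + k * bilateral[i]  # fusion with bilateral result
        p = (1.0 - k) * p_pred                       # corrected covariance
        fused.append(x_i)
    return fused

out = denoise_sequence(gauss=[100.0, 101.0, 99.0],
                       bilateral=[100.0, 102.0, 98.0])
```

Each fused value stays between the previous fusion result and the current bilateral result, illustrating the frame-to-frame smoothing; the single-frame post-denoising step would then be applied to each element of the returned sequence.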
It is worth noting that the video noise reduction method in the application has a more significant noise reduction effect on videos shot under dim light or backlight conditions. Referring to fig. 3 and 4, fig. 3 is a certain video frame in a video to be processed, which is shot under a dim light condition and is not subjected to noise reduction, and fig. 4 is a video frame obtained by performing noise reduction on the video frame in fig. 3 by using the noise reduction method in the present application.
In the embodiment of the application, the video denoising terminal performs denoising pre-processing on each frame of video frame in a video to be processed to obtain a first denoising result of each frame of video frame; performing Kalman filtering fusion processing on a first noise reduction result of a non-first frame video frame in a video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame aiming at each non-first frame video frame in the video to be processed to obtain a fusion result of the non-first frame video frame; determining a second noise reduction result of the non-first frame video frame based on the fusion result; and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame. When the video denoising terminal performs denoising processing on a video to be processed, motion estimation and compensation are not needed to be performed on an object in the video, so that the denoising operation efficiency is improved, and the operation time is reduced; and moreover, the video frames in the video to be processed are subjected to pre-denoising processing, Kalman filtering fusion processing and single-frame denoising processing, so that the quality of the denoised video is further improved. 
Furthermore, the method and the device realize the fusion noise reduction of the video frames on the level of a space domain (Gaussian noise reduction processing and bilateral filtering noise reduction processing) and the level of a time domain (non-first frame video frame and adjacent frames of the non-first frame video frame); because the spatial domain and the time domain are associated, the situation of video frame skipping can not occur during noise reduction, and the visual effect is very good.
Referring to fig. 5, fig. 5 is a schematic diagram of a video denoising apparatus according to an embodiment of the present disclosure. The video noise reduction apparatus includes units for performing the steps in the embodiments corresponding to fig. 1 and 2. Please refer to fig. 1 and fig. 2 for the corresponding embodiments. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 5, it includes:
the first processing unit 310 is configured to perform pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
the second processing unit 320 is configured to perform kalman filtering fusion processing on the first noise reduction result of the non-first frame video frame and the first noise reduction result of an adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame for each non-first frame video frame in the video to be processed;
a third processing unit 330, configured to determine a second denoising result of the non-first frame video frame based on the fusion result;
the determining unit 340 is configured to determine a noise-reduced video corresponding to the video to be processed according to a first noise-reduction result of a first frame of the video to be processed and a second noise-reduction result corresponding to each non-first frame of the video to be processed.
Further, the first processing unit 310 is specifically configured to:
performing Gaussian noise reduction processing on each frame of video frame to obtain a Gaussian noise reduction result of each frame of video frame;
and carrying out bilateral filtering noise reduction processing on each frame of video frame to obtain a bilateral filtering noise reduction result of each frame of video frame.
Further, the adjacent frame of the non-first frame video frame is a previous frame video frame adjacent to the non-first frame video frame, and the second processing unit 320 includes:
the computing unit is used for computing Kalman filtering gain of the non-first frame video frame according to the Gaussian noise reduction result of the non-first frame video frame;
the system state determining unit is used for determining the system state of the non-first frame video frame based on the adjacent frame of the non-first frame video frame;
a state observation value determining unit, configured to determine a state observation value of the non-first-frame video frame based on the system state;
and the fusion unit is used for fusing the Kalman filtering gain, the system state, the state observation value and the bilateral filtering noise reduction result of the non-first frame video frame to obtain a fusion result of the non-first frame video frame.
Further, the computing unit is specifically configured to:
calculating a system error corresponding to the non-first frame video frame based on the Gaussian noise reduction result of the non-first frame video frame and the Gaussian noise reduction result of the adjacent frame of the non-first frame video frame;
calculating the observation error corresponding to the non-first frame video frame based on the observation error corresponding to the adjacent frame of the non-first frame video frame and the Kalman filtering gain of the adjacent frame of the non-first frame video frame;
calculating a state covariance matrix of the non-first frame video frame based on the state covariance matrix of the adjacent frame of the non-first frame video frame and the system error;
and calculating the Kalman filtering gain of the non-first frame video frame based on the state covariance matrix of the non-first frame video frame and the observation error corresponding to the non-first frame video frame.
Further, the video noise reduction apparatus further includes:
and the correcting unit is used for correcting the state covariance matrix of the non-first frame video frame based on the Kalman filtering gain of the non-first frame video frame and the state covariance matrix of the non-first frame video frame.
Further, the video noise reduction apparatus further includes:
and the initialization unit is used for carrying out parameter initialization on the first frame of video frame and determining Kalman filtering gain, observation error and state covariance matrix of the first frame of video frame.
Further, the third processing unit 330 is specifically configured to:
and carrying out single-frame noise reduction processing on the fusion result of the non-first-frame video frame to obtain a second noise reduction result of the non-first-frame video frame.
Referring to fig. 6, fig. 6 is a schematic diagram of a video denoising terminal according to another embodiment of the present application. As shown in fig. 6, the video noise reduction terminal 4 of this embodiment includes: a processor 40, a memory 41, and computer readable instructions 42 stored in the memory 41 and executable on the processor 40. The processor 40, when executing the computer readable instructions 42, implements the steps in the various video denoising method embodiments described above, such as S101-S104 shown in fig. 1. Alternatively, the processor 40, when executing the computer readable instructions 42, implements the functions of the units in the embodiments described above, such as the functions of the units 310 to 340 shown in fig. 5.
Illustratively, the computer readable instructions 42 may be divided into one or more units, which are stored in the memory 41 and executed by the processor 40 to accomplish the present application. The one or more units may be a series of computer readable instruction segments capable of performing specific functions, which are used to describe the execution process of the computer readable instructions 42 in the video noise reduction terminal 4. For example, the computer readable instructions 42 may be divided into a first processing unit, a second processing unit, a third processing unit, and a determination unit, each unit having the specific functions as described above.
The video noise reduction terminal may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 6 is merely an example of a video noise reduction terminal 4 and does not constitute a limitation of video noise reduction terminal 4 and may include more or fewer components than shown, or combine certain components, or different components, e.g., the video noise reduction terminal may also include an input-output terminal, a network access terminal, a bus, etc.
The Processor 40 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the video noise reduction terminal 4, such as a hard disk or a memory of the video noise reduction terminal 4. The memory 41 may also be an external storage device of the video noise reduction terminal 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the video noise reduction terminal 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the video noise reduction terminal 4. The memory 41 is used for storing the computer readable instructions and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting them; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. A method for video denoising, comprising:
carrying out pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
performing Kalman filtering fusion processing on a first noise reduction result of each frame of non-first frame video frame in the video to be processed and a first noise reduction result of an adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame;
determining a second noise reduction result of the non-first frame video frame based on the fusion result;
and determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
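The pipeline of claim 1 (spatial pre-denoising of every frame, then temporal Kalman fusion of each non-first frame with its predecessor) can be sketched per pixel as follows. This is a minimal illustration under stated assumptions, not the patented implementation: a box blur stands in for the pre-denoising step, and the process/observation noise levels `q` and `r` are fixed constants chosen for the sketch, whereas the patent derives the corresponding quantities per frame.

```python
import numpy as np

def pre_denoise(frame, k=3):
    # Stand-in for the pre-noise-reduction step: a simple k-by-k box blur.
    # (Claim 2 uses Gaussian and bilateral filters; any spatial denoiser
    # fits this slot for illustration.)
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def denoise_video(frames, q=1e-2, r=1e-1):
    """Pixel-wise temporal Kalman fusion over pre-denoised frames.

    q and r are illustrative noise levels, not values from the patent.
    Returns one denoised frame per input frame."""
    first = pre_denoise(frames[0])           # first frame: pre-denoise only
    out = [first]
    x = first                                # fused state estimate
    p = np.ones_like(first)                  # per-pixel state covariance
    for frame in frames[1:]:
        z = pre_denoise(frame)               # "first noise reduction result"
        p_pred = p + q                       # predict: covariance grows by q
        k_gain = p_pred / (p_pred + r)       # Kalman filter gain
        x = x + k_gain * (z - x)             # fuse prediction and observation
        p = (1.0 - k_gain) * p_pred          # corrected covariance
        out.append(x)
    return out
```

On a static noisy sequence the fused frames converge toward the underlying scene, which is the intended effect of combining spatial and temporal denoising.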
2. The method of claim 1, wherein the pre-denoising each frame of video frames in the video to be processed to obtain the first denoising result of each frame of video frames comprises:
performing Gaussian noise reduction processing on each frame of video frame to obtain a Gaussian noise reduction result of each frame of video frame;
and carrying out bilateral filtering noise reduction processing on each frame of video frame to obtain a bilateral filtering noise reduction result of each frame of video frame.
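The two pre-denoising results named in claim 2 are standard spatial filters. A minimal numpy sketch of both is given below; the kernel sizes and sigma values are illustrative assumptions, since the claim does not fix them.

```python
import numpy as np

def gaussian_kernel(k, sigma):
    # Separable 1-D Gaussian, expanded to a normalised k-by-k kernel.
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kern = np.outer(g, g)
    return kern / kern.sum()

def gaussian_denoise(frame, k=5, sigma=1.0):
    # Gaussian noise reduction result of claim 2 (direct convolution,
    # edge-replicated borders).
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    kern = gaussian_kernel(k, sigma)
    out = np.zeros_like(frame, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += kern[dy, dx] * padded[dy:dy + frame.shape[0],
                                         dx:dx + frame.shape[1]]
    return out

def bilateral_denoise(frame, k=5, sigma_s=1.0, sigma_r=25.0):
    # Bilateral filtering noise reduction result of claim 2: spatial
    # weights multiplied by range (intensity-difference) weights,
    # normalised per pixel, so edges are preserved.
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    spatial = gaussian_kernel(k, sigma_s)
    num = np.zeros_like(frame, dtype=np.float64)
    den = np.zeros_like(frame, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            shifted = padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
            w = spatial[dy, dx] * np.exp(-((shifted - frame) ** 2)
                                         / (2 * sigma_r ** 2))
            num += w * shifted
            den += w
    return num / den
```

In practice a library implementation (e.g. OpenCV's `GaussianBlur` and `bilateralFilter`) would replace these loops; the sketch only shows what each result contains.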
3. The method of claim 2, wherein the frame adjacent to the non-first frame video frame is a previous frame video frame adjacent to the non-first frame video frame, and performing Kalman filtering fusion processing on the first noise reduction result of the non-first frame video frame and the first noise reduction result of the frame adjacent to the non-first frame video frame, for each non-first frame video frame in the video to be processed, to obtain the fusion result of the non-first frame video frame comprises:
calculating Kalman filtering gain of the non-first frame video frame according to the Gaussian noise reduction result of the non-first frame video frame;
determining a system state of the non-first frame video frame based on adjacent frames of the non-first frame video frame;
determining a state observation value of the non-first frame video frame based on the system state;
and fusing the Kalman filtering gain, the system state, the state observation value and the bilateral filtering noise reduction result of the non-first frame video frame to obtain a fusion result of the non-first frame video frame.
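The four quantities fused in claim 3 correspond to one standard Kalman update step. The sketch below is one plausible reading of the claim, not the patent's exact formula: the bilateral-filter result is taken as the state observation, the previous fused frame supplies the system state, and the noise levels `q` and `r` are fixed illustrative constants (claim 4 instead derives the gain from the Gaussian results).

```python
import numpy as np

def kalman_fuse(prev_state, prev_cov, bilateral_obs, q=1e-2, r=1e-1):
    """One per-pixel fusion step in the spirit of claim 3.

    prev_state / prev_cov carry over from the adjacent (previous) frame;
    bilateral_obs is the current frame's bilateral-filter result."""
    x_pred = prev_state                      # system state from adjacent frame
    p_pred = prev_cov + q                    # predicted state covariance
    k_gain = p_pred / (p_pred + r)           # Kalman filter gain
    fused = x_pred + k_gain * (bilateral_obs - x_pred)   # fusion result
    new_cov = (1.0 - k_gain) * p_pred        # corrected covariance
    return fused, new_cov
```

A large gain trusts the current observation (fast-changing content); a small gain trusts the temporal prediction (static content), which is what makes the fusion adaptive.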
4. The method of claim 3, wherein the calculating the Kalman filter gain of the non-first frame video frame based on the Gaussian noise reduction result of the non-first frame video frame comprises:
calculating a system error corresponding to the non-first frame video frame based on the Gaussian noise reduction result of the non-first frame video frame and the Gaussian noise reduction result of the adjacent frame of the non-first frame video frame;
calculating the observation error corresponding to the non-first frame video frame based on the observation error corresponding to the adjacent frame of the non-first frame video frame and the Kalman filtering gain of the adjacent frame of the non-first frame video frame;
calculating a state covariance matrix of the non-leading frame video frame based on the state covariance matrix of the adjacent frame of the non-leading frame video frame and the system error;
and calculating the Kalman filtering gain of the non-first frame video frame based on the state covariance matrix of the non-first frame video frame and the observation error corresponding to the non-first frame video frame.
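The quantities enumerated in claim 4 map onto the standard discrete Kalman recursion in scalar (per-pixel) form. The patent text does not publish its exact formulas, so the first line below, which estimates the system error from consecutive Gaussian results, is an assumption; the remaining lines are the textbook relations between the named quantities.

```latex
% G_t denotes the Gaussian noise-reduction result of frame t.
\begin{aligned}
Q_t   &\approx \operatorname{Var}\!\left(G_t - G_{t-1}\right)
        && \text{system error from consecutive Gaussian results (assumed estimator)}\\
R_t   &= \left(1 - K_{t-1}\right) R_{t-1}
        && \text{observation error from the adjacent frame's error and gain}\\
P_t^- &= P_{t-1} + Q_t
        && \text{state covariance of frame } t \text{ from the adjacent frame's covariance}\\
K_t   &= \frac{P_t^-}{P_t^- + R_t}
        && \text{Kalman filter gain of frame } t
\end{aligned}
```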
5. The method of claim 1, wherein the performing Kalman filtering fusion processing on the first noise reduction result of the non-first frame video frame and the first noise reduction result of the adjacent frame of the non-first frame video frame to obtain the fusion result of the non-first frame video frame further comprises:
and correcting the state covariance matrix of the non-first frame video frame based on the Kalman filtering gain of the non-first frame video frame and the state covariance matrix of the non-first frame video frame.
6. The method of claim 1, wherein, before performing Kalman filtering fusion processing on the first noise reduction result of the non-first frame video frame and the first noise reduction result of the adjacent frame of the non-first frame video frame, for each non-first frame video frame in the video to be processed, to obtain the fusion result of the non-first frame video frame, the method further comprises:
and initializing parameters of the first frame of video frame, and determining Kalman filtering gain, observation error and state covariance matrix of the first frame of video frame.
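Claim 6 only states that the gain, observation error, and state covariance matrix are initialised for the first frame; it does not give the starting values. A trivial sketch with assumed defaults:

```python
import numpy as np

def init_kalman(frame_shape, k0=0.0, r0=1.0, p0=1.0):
    """First-frame parameter initialisation per claim 6.

    k0, r0, p0 are illustrative starting values (zero gain, unit
    observation error, unit covariance); the patent does not fix them."""
    gain = np.full(frame_shape, k0)      # Kalman filter gain
    obs_err = np.full(frame_shape, r0)   # observation error
    cov = np.full(frame_shape, p0)       # state covariance (per pixel)
    return gain, obs_err, cov
```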
7. The video denoising method of any one of claims 1-6, wherein the determining a second noise reduction result of the non-first frame video frame based on the fusion result comprises:
and carrying out single-frame noise reduction processing on the fusion result of the non-first-frame video frame to obtain a second noise reduction result of the non-first-frame video frame.
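Claim 7 applies one more single-frame (spatial) pass to the fused frame. The patent does not name the filter used for this pass; the sketch below uses a median filter as an illustrative choice, since it removes residual impulse artefacts the temporal fusion may leave behind.

```python
import numpy as np

def single_frame_denoise(fused, k=3):
    # Second noise-reduction result (claim 7): a k-by-k median filter
    # over the fusion result, with edge-replicated borders.
    # The specific filter is an assumption, not taken from the patent.
    pad = k // 2
    padded = np.pad(fused, pad, mode="edge")
    windows = [padded[dy:dy + fused.shape[0], dx:dx + fused.shape[1]]
               for dy in range(k) for dx in range(k)]
    return np.median(np.stack(windows), axis=0)
```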
8. A video noise reduction apparatus, comprising:
the first processing unit is used for carrying out pre-noise reduction processing on each frame of video frame in a video to be processed to obtain a first noise reduction result of each frame of video frame;
the second processing unit is used for performing Kalman filtering fusion processing, for each non-first frame video frame in the video to be processed, on a first noise reduction result of the non-first frame video frame and a first noise reduction result of an adjacent frame of the non-first frame video frame to obtain a fusion result of the non-first frame video frame;
a third processing unit, configured to determine a second noise reduction result of the non-first frame video frame based on the fusion result;
and the determining unit is used for determining the noise reduction video corresponding to the video to be processed according to the first noise reduction result of the first frame video frame of the video to be processed and the second noise reduction result corresponding to each frame of non-first frame video frame.
9. A video denoising terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010437187.7A 2020-05-21 2020-05-21 Video noise reduction method, video noise reduction device and video noise reduction terminal Pending CN113709324A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010437187.7A CN113709324A (en) 2020-05-21 2020-05-21 Video noise reduction method, video noise reduction device and video noise reduction terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010437187.7A CN113709324A (en) 2020-05-21 2020-05-21 Video noise reduction method, video noise reduction device and video noise reduction terminal

Publications (1)

Publication Number Publication Date
CN113709324A true CN113709324A (en) 2021-11-26

Family

ID=78645889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010437187.7A Pending CN113709324A (en) 2020-05-21 2020-05-21 Video noise reduction method, video noise reduction device and video noise reduction terminal

Country Status (1)

Country Link
CN (1) CN113709324A (en)

Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050074158A1 (en) * 2003-10-06 2005-04-07 Kaufhold John Patrick Methods and apparatus for visualizing low contrast moveable objects
US20060056724A1 (en) * 2004-07-30 2006-03-16 Le Dinh Chon T Apparatus and method for adaptive 3D noise reduction
US20060139494A1 (en) * 2004-12-29 2006-06-29 Samsung Electronics Co., Ltd. Method of temporal noise reduction in video sequences
EP1681849A2 (en) * 2005-01-18 2006-07-19 LG Electronics, Inc. Apparatus for removing noise from a video signal
US20060232710A1 (en) * 2005-04-19 2006-10-19 Samsung Electronics Co., Ltd. Method and apparatus of bidirectional temporal noise reduction
CN1856990A (en) * 2003-09-23 2006-11-01 皇家飞利浦电子股份有限公司 Video de-noising algorithm using inband motion-compensated temporal filtering
US20070070250A1 (en) * 2005-09-27 2007-03-29 Samsung Electronics Co., Ltd. Methods for adaptive noise reduction based on global motion estimation
CN101448077A (en) * 2008-12-26 2009-06-03 四川虹微技术有限公司 Self-adapting video image 3D denoise method
CN101887580A (en) * 2010-07-23 2010-11-17 扬州万方电子技术有限责任公司 Image noise reducing method of non-down sampling contourlet transformation domain
KR20110080371A (en) * 2010-01-05 2011-07-13 엘지전자 주식회사 Reducing noise in digital television
CN102238316A (en) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 Self-adaptive real-time denoising scheme for 3D digital video image
CN103108109A (en) * 2013-01-31 2013-05-15 深圳英飞拓科技股份有限公司 Digital video noise reduction system and method
CN103177423A (en) * 2011-10-07 2013-06-26 伊姆普斯封闭式股份有限公司 Method of noise reduction in digital x-ray frames series
CN103533214A (en) * 2013-10-01 2014-01-22 中国人民解放军国防科学技术大学 Video real-time denoising method based on kalman filtering and bilateral filtering
CN103873743A (en) * 2014-03-24 2014-06-18 中国人民解放军国防科学技术大学 Video de-noising method based on structure tensor and Kalman filtering
CN104853064A (en) * 2015-04-10 2015-08-19 海视英科光电(苏州)有限公司 Electronic image-stabilizing method based on infrared thermal imager
KR20160056729A (en) * 2014-11-12 2016-05-20 고려대학교 산학협력단 Video quality enhancement device and method for extremely low-light video
US20170084007A1 (en) * 2014-05-15 2017-03-23 Wrnch Inc. Time-space methods and systems for the reduction of video noise
CN106550187A (en) * 2015-09-16 2017-03-29 韩华泰科株式会社 For the apparatus and method of image stabilization
CN106780542A (en) * 2016-12-29 2017-05-31 北京理工大学 A kind of machine fish tracking of the Camshift based on embedded Kalman filter
CN106803265A (en) * 2017-01-06 2017-06-06 重庆邮电大学 Multi-object tracking method based on optical flow method and Kalman filtering
US20170195591A1 (en) * 2016-01-05 2017-07-06 Nvidia Corporation Pre-processing for video noise reduction
CN108334885A (en) * 2018-02-05 2018-07-27 湖南航升卫星科技有限公司 A kind of video satellite image space object detection method
CN108438004A (en) * 2018-03-05 2018-08-24 长安大学 Lane departure warning system based on monocular vision
CN109544469A (en) * 2018-11-07 2019-03-29 南京信息工程大学 A kind of discrete Kalman's self-adapting image denoising system based on FPGA
CN110445951A (en) * 2018-05-02 2019-11-12 腾讯科技(深圳)有限公司 Filtering method and device, storage medium, the electronic device of video
CN110838088A (en) * 2018-08-15 2020-02-25 Tcl集团股份有限公司 Multi-frame noise reduction method and device based on deep learning and terminal equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
唐权华; 雷金娥; 周艳; 金炜东: "A spatio-temporal joint video denoising method", Computer Engineering and Applications, no. 06 *
张金丽; 张淑芳: "Research on a temporal-domain enhancement algorithm based on adaptive Kalman filtering", Information Technology, no. 08 *
石龙伟; 邓欣; 王进; 陈乔松: "Multi-object tracking based on optical flow and Kalman filtering", Journal of Computer Applications, no. 1 *
谭洪涛; 田逢春; 张莎; 张静; 邱宇: "A sphere bilateral filtering video denoising algorithm combined with motion compensation", Systems Engineering and Electronics, no. 12 *

Similar Documents

Publication Publication Date Title
CN107278314B (en) Device, mobile computing platform and method for denoising non-local mean image
US9747514B2 (en) Noise filtering and image sharpening utilizing common spatial support
JP6469678B2 (en) System and method for correcting image artifacts
US9514525B2 (en) Temporal filtering for image data using spatial filtering and noise history
US9413951B2 (en) Dynamic motion estimation and compensation for temporal filtering
US20160253787A1 (en) Methods and systems for denoising images
US9852353B2 (en) Structure aware image denoising and noise variance estimation
CN107077721B (en) Global matching of multiple images
CN109214996B (en) Image processing method and device
US9014503B2 (en) Noise-reduction method and apparatus
WO2020001164A1 (en) Image enhancement method and apparatus
CN111223061A (en) Image correction method, correction device, terminal device and readable storage medium
CN113327193A (en) Image processing method, image processing apparatus, electronic device, and medium
US11823352B2 (en) Processing video frames via convolutional neural network using previous frame statistics
CN113344801A (en) Image enhancement method, system, terminal and storage medium applied to gas metering facility environment
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
US20140092116A1 (en) Wide dynamic range display
CN114943649A (en) Image deblurring method, device and computer readable storage medium
CN110689496A (en) Method and device for determining noise reduction model, electronic equipment and computer storage medium
CN111833262A (en) Image noise reduction method and device and electronic equipment
CN113658050A (en) Image denoising method, denoising device, mobile terminal and storage medium
CN113709324A (en) Video noise reduction method, video noise reduction device and video noise reduction terminal
KR102585573B1 (en) Content-based image processing
US8577180B2 (en) Image processing apparatus, image processing system and method for processing image
CN114119377A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination