CN110189271B - Method for removing noise of reflection background and related product - Google Patents

Method for removing noise of reflection background and related product

Info

Publication number
CN110189271B
CN110189271B (application CN201910441068.6A)
Authority
CN
China
Prior art keywords
pictures
frames
noise
picture
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910441068.6A
Other languages
Chinese (zh)
Other versions
CN110189271A (en)
Inventor
危平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN CLOUDROOM TECHNOLOGY Co.,Ltd.
Original Assignee
Shenzhen Cloudroom Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cloudroom Technology Co ltd filed Critical Shenzhen Cloudroom Technology Co ltd
Priority to CN201910441068.6A priority Critical patent/CN110189271B/en
Publication of CN110189271A publication Critical patent/CN110189271A/en
Application granted
Publication of CN110189271B publication Critical patent/CN110189271B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a method for removing noise of a reflection background and a related product, the method comprising: a terminal collects a first short video file and extracts multiple frames of pictures from the first short video file; the terminal identifies the multiple frames of pictures to determine n frames of pictures with noise; the terminal removes noise from the noise areas of the n frames of pictures to obtain n filtered frames of pictures, and replaces the n frames of pictures in the first short video file with the n filtered frames of pictures to obtain a noise-removed second video file; n is an integer greater than or equal to 1. The technical solution provided by the present application has the advantage of improving user experience.

Description

Method for removing noise of reflection background and related product
Technical Field
The invention relates to the field of video technology, and in particular to a method for removing noise of a reflection background and a related product.
Background
A short video is a form of internet content distribution, generally video content of less than one minute distributed on new internet media. A reflective background is a background against which a video is shot that consists of a reflective material, such as building glass or a curtain wall.
Shooting against an existing reflective background introduces a certain amount of noise, which affects the quality of the video and therefore the user experience.
Disclosure of Invention
Embodiments of the invention provide a method for removing noise of a reflection background and a related product, which can replace the noise of the reflection background and thereby improve user experience.
In a first aspect, an embodiment of the present invention provides a method for removing noise of a reflection background, where the method includes the following steps:
a terminal collects a first short video file and extracts multiple frames of pictures from the first short video file;
the terminal identifies the multiple frames of pictures to determine n frames of pictures with noise;
the terminal removes noise from the noise areas of the n frames of pictures to obtain n filtered frames of pictures, and replaces the n frames of pictures in the first short video file with the n filtered frames of pictures to obtain a noise-removed second video file;
where n is an integer greater than or equal to 1.
Optionally, the identifying, by the terminal, of the multiple frames of pictures to determine the n frames of pictures with noise specifically includes:
the terminal performs a noise identification operation on a first picture in the multiple frames of pictures, where the noise identification operation specifically includes: determining two edge areas of the first picture and performing human figure contour recognition on the two edge areas to determine whether a human figure contour exists; if a human figure contour exists, determining that the first picture has noise; if no human figure contour exists, determining that the first picture has no noise; and performing the noise identification operation on the multiple frames of pictures in sequence to determine the n frames of pictures with noise.
Optionally, the removing, by the terminal, of the noise in the noise areas of the n frames of pictures to obtain the n filtered frames of pictures specifically includes:
the terminal acquires a first frame picture of the n frames of pictures and performs a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square according to the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix; replacing the values of the C-value matrices of the squares in the left edge area and the right edge area with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures;
where C-value = R-value + G-value + B-value.
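As an illustrative aside (not taken from the patent text), the C-value matrix of a square can be computed as the element-wise sum of its R, G and B channels; the function name below is an assumption introduced for illustration:

```python
import numpy as np

def c_value_matrix(square_rgb: np.ndarray) -> np.ndarray:
    """Collapse an (h, w, 3) RGB square into its 2-D C-value matrix,
    where C = R + G + B at each pixel position."""
    square = square_rgb.astype(np.int32)  # widen to avoid uint8 overflow
    return square[..., 0] + square[..., 1] + square[..., 2]
```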
Optionally, the removing, by the terminal, of the noise in the noise areas of the n frames of pictures to obtain the n filtered frames of pictures specifically includes:
the terminal acquires a first frame picture of the n frames of pictures and performs a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square according to the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix; acquiring a first person contour in a middle area of the first frame picture, acquiring the first person contour in a second frame picture, and determining the motion direction of the first person according to the first person contour area in the first frame picture and the first person contour area in the second frame picture; determining the C-value matrices in the left edge area and the right edge area whose motion direction is the same as that of the first person as non-filtering areas; replacing the values of the C-value matrices of the squares outside the non-filtering areas with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures.
In a second aspect, a terminal is provided, which includes: a processor, a camera and a display screen,
the camera is used for collecting a first short video file;
the processor is used for extracting multiple frames of pictures from the first short video file; identifying the multiple frames of pictures to determine n frames of pictures with noise; removing noise from the noise areas of the n frames of pictures to obtain n filtered frames of pictures; and replacing the n frames of pictures in the first short video file with the n filtered frames of pictures to obtain a noise-removed second video file;
and n is an integer greater than or equal to 1.
Optionally, the processor is specifically configured to perform a noise identification operation on a first picture in the multiple frames of pictures, where the noise identification operation specifically includes: determining two edge areas of the first picture and performing human figure contour recognition on the two edge areas to determine whether a human figure contour exists; if a human figure contour exists, determining that the first picture has noise; if no human figure contour exists, determining that the first picture has no noise; and performing the noise identification operation on the multiple frames of pictures in sequence to determine the n frames of pictures with noise.
Optionally, the processor is specifically configured to acquire a first frame picture of the n frames of pictures and perform a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square according to the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix; replacing the values of the C-value matrices of the squares in the left edge area and the right edge area with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures;
where C-value = R-value + G-value + B-value.
Optionally, the processor is specifically configured to acquire a first frame picture of the n frames of pictures and perform a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square according to the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix; acquiring a first person contour in a middle area of the first frame picture, acquiring the first person contour in a second frame picture, and determining the motion direction of the first person according to the first person contour area in the first frame picture and the first person contour area in the second frame picture; determining the C-value matrices in the left edge area and the right edge area whose motion direction is the same as that of the first person as non-filtering areas; replacing the values of the C-value matrices of the squares outside the non-filtering areas with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, after the first short video file is collected, the multi-frame pictures in the first short video file are extracted, the multi-frame pictures are identified to determine n noisy pictures, n noise areas in the n noise pictures are determined, and the n noise areas are subjected to image noise removal to obtain the filtered n noise pictures, so that the video quality of the short video file can be improved, and the user experience degree is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 1a is a schematic diagram of a reflected background noise picture.
Fig. 2 is a flow chart of a method for removing noise of a reflection background.
Fig. 2a is a square-division diagram of a reflection-background noise picture.
FIG. 2b is a schematic diagram of a C value matrix.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a terminal, which may specifically be a smart phone, a tablet computer, a computer or a server; the smart phone may run an iOS or Android system. The terminal may specifically include a processor, a memory, a camera and a display screen, and these components may be connected through a bus or in other ways.
Video generally refers to the various techniques for capturing, recording, processing, storing, transmitting and reproducing a series of still images as electrical signals. When the continuous images change at more than 24 frames per second, the human eye, by the persistence-of-vision principle, can no longer distinguish individual still pictures; the sequence appears as a smooth, continuous visual effect, and such a continuous sequence of pictures is called a video.
In short video applications such as Douyin (TikTok) and Weishi, users publish videos shot by themselves or by others on the platform for sharing. For scenes shot by the user, and especially for scenes with a reflective background, the noise differs from what is visible at shooting time: there may be no such noise in the scene in front of the camera, yet some noise may appear in the reflective background of the shot video because of the difference in reflection angle.
As shown in fig. 1a, area A in fig. 1a is reflection-background noise, which is produced by the reflective background: there is no noisy object in front of the camera, but because the background is reflective, what lies behind the shot subject is captured in the reflected background image, producing reflection noise that degrades the quality of the shot video.
Referring to fig. 2, fig. 2 provides a method for removing noise of a reflection background. The method is performed by the terminal shown in fig. 1 and includes the following steps:
step S201, a terminal collects a first short video file and extracts a plurality of frames of pictures in the first short video file;
step S202, the terminal identifies a plurality of frames of pictures to determine n frames of pictures with noise;
and step S203, the terminal removes the noise of the noise area of the n frames of pictures to obtain filtered n frames of pictures, and replaces the n frames of pictures in the first short video file with the filtered n frames of pictures to obtain a second video file with the noise removed.
N is an integer of 1 or more, and in practical use, n is generally a large number.
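A minimal sketch, not part of the patent, of the frame-extraction stage of step S201, assuming OpenCV (cv2) is available; the file name and the extract_frames helper are assumptions introduced for illustration:

```python
import cv2

def extract_frames(video_path: str):
    """Read every frame of a short video file into a list of BGR images."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:          # end of file or read error
            break
        frames.append(frame)
    cap.release()
    return frames

frames = extract_frames("first_short_video.mp4")  # hypothetical file name
```

Each returned frame can then be passed to the noise identification of step S202 and, if noisy, to the filtering of step S203.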
According to the above technical solution, after the first short video file is collected, the multiple frames of pictures in the first short video file are extracted and identified to determine n frames of pictures with noise; the n noise areas in those n frames are determined and denoised to obtain n filtered frames of pictures. This improves the video quality of the short video file and thereby the user experience.
The identifying, by the terminal, of the multiple frames of pictures to determine the n frames of pictures with noise may specifically include:
the terminal performs a noise identification operation on a first picture in the multiple frames of pictures, where the noise identification operation specifically includes: determining two edge areas of the first picture and performing human figure contour recognition on the two edge areas to determine whether a human figure contour exists; if a human figure contour exists, determining that the first picture has noise; if no human figure contour exists, determining that the first picture has no noise; and performing the noise identification operation on the multiple frames of pictures in sequence to determine the n frames of pictures with noise.
The human figure contour recognition may specifically use a small-sample contour feature extraction and recognition method. Of course, in practical applications, other person contour recognition methods may also be used.
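The patent does not fix a concrete contour detector, so the following sketch substitutes OpenCV's stock HOG people detector for the edge-area check; the edge-width fraction and the frame_has_edge_noise helper are assumptions introduced for illustration:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frame_has_edge_noise(frame, edge_fraction=0.2):
    """Return True if a person-like contour is detected in the left or
    right edge area of the frame, i.e. the frame is treated as noisy."""
    h, w = frame.shape[:2]
    edge_w = int(w * edge_fraction)
    for region in (frame[:, :edge_w], frame[:, w - edge_w:]):
        rects, _ = hog.detectMultiScale(region, winStride=(8, 8))
        if len(rects) > 0:
            return True
    return False

# noisy = [i for i, f in enumerate(frames) if frame_has_edge_noise(f)]
```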
The removing, by the terminal, of the noise in the noise areas of the n frames of pictures to obtain the n filtered frames of pictures may specifically include:
the terminal acquires a first frame picture of the n frames of pictures and performs a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square from the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix (as shown in fig. 2b, each square on the left represents the C value and the corresponding position of one pixel of the image); replacing the values of the C-value matrices of the squares in the left edge area and the right edge area with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures.
The implementation principle is as follows. Taking the left edge area of fig. 1a as an example, the size of this edge area may be set by the user; in practical applications the size or range of the left or right edge area may also be selected by the user through the touch screen. For a reflective background such as that of fig. 1a, the image over most of its area is uniform, so after the picture is divided into a plurality of squares as shown in fig. 2a, squares of the same colour have identical C-value matrices, where C = R (value) + G (value) + B (value). Working directly with RGB values would require three-dimensional data and increase the amount of computation, whereas the C-value matrix still reflects changes in the RGB values while remaining two-dimensional. For a noise area, however, such as area A in fig. 1a, the C-value matrix differs from that of the reflective background. The simplest approach is to replace area A directly with the reflective-background matrix, i.e. to replace the C-value matrices of the squares of area A with the first C-value matrix; this keeps the reflective background consistent and filters out the reflective-background noise data.
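A minimal sketch, under assumed parameters, of the square-wise filtering just described: the frame is reduced to its C-value representation, the most frequent square C-value matrix in the edge areas is taken as the reflective background, and every differing edge square is overwritten with it. The square size, the edge fraction and the helper name are assumptions, not taken from the patent:

```python
import numpy as np
from collections import Counter

def filter_edge_noise(frame_rgb, square=16, edge_fraction=0.2):
    """Square-wise edge filtering on the C-value representation.
    Returns the filtered C-value matrix of the frame."""
    frame = frame_rgb.astype(np.int32)
    c = frame[..., 0] + frame[..., 1] + frame[..., 2]   # C = R + G + B
    h, w = c.shape
    edge_w = int(w * edge_fraction)

    # Collect the C-value matrix of every full square in both edge areas.
    xs = list(range(0, edge_w, square)) + list(range(w - edge_w, w, square))
    blocks = []
    for x0 in xs:
        for y0 in range(0, h, square):
            blk = c[y0:y0 + square, x0:x0 + square]
            if blk.shape == (square, square):
                blocks.append(((y0, x0), blk))

    # The identical matrix occurring most often is the first C-value matrix.
    counts = Counter(blk.tobytes() for _, blk in blocks)
    bg_bytes, _ = counts.most_common(1)[0]
    background = np.frombuffer(bg_bytes, dtype=np.int32).reshape(square, square)

    # Overwrite every edge square that differs from the background matrix.
    out = c.copy()
    for (y0, x0), blk in blocks:
        if blk.tobytes() != bg_bytes:
            out[y0:y0 + square, x0:x0 + square] = background
    return out
```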
The above filtering does not consider the case where some of the data in the reflective background is actually wanted. For example, in a short video of a person walking, the user may wish to capture not only the person walking but also that person's reflection, which should remain in the reflective background. Of course, in practical applications such cases may be identified from certain characteristics of the video in order to decide whether to perform the replacement.
In analysing captured video, the applicant found that the moving person in the picture and that person's image in the reflective background are in a synchronous motion state, so by comparing the motion states of the data in the two different areas it can be determined whether a region of the reflective background is part of the picture carried along by the person's motion. Most noise data, by contrast, is in a disordered state, i.e. its motion state is unrelated to the motion state of the photographed person, and it can therefore also be identified by judging its motion state.
The removing, by the terminal, of the noise in the noise areas of the n frames of pictures to obtain the n filtered frames of pictures may specifically include:
the terminal acquires a first frame picture of the n frames of pictures and performs a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square according to the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix; acquiring a first person contour in a middle area of the first frame picture, acquiring the first person contour in a second frame picture, and determining the motion direction of the first person according to the first person contour area in the first frame picture and the first person contour area in the second frame picture; determining the C-value matrices in the left edge area and the right edge area whose motion direction is the same as that of the first person as non-filtering areas; replacing the values of the C-value matrices of the squares outside the non-filtering areas with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures.
It should be noted that the determining manner of the motion direction of the C-value matrix may specifically include:
determining the first square index of a given C-value matrix (one that is not the first C-value matrix) in the first frame picture, determining the second square index of the same C-value matrix in the second frame picture, and, if the second square index lies to the left of the first square index, determining that the motion direction of that C-value matrix is leftward.
For a moving image, a given C-value matrix moves from one square to another, so from its movement across two adjacent frame pictures it can be determined whether its motion direction is consistent with that of the person in the middle area, and hence whether it belongs to a non-filtering area.
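A minimal sketch of this motion-direction check between two adjacent frames, operating on the same C-value representation as above; matching squares by exact equality and the helper name are assumptions made for illustration:

```python
import numpy as np

def block_motion_direction(c_prev, c_curr, row, col, square=16):
    """Locate the square C-value matrix at (row, col) of the previous frame
    within the same row of the current frame and report its direction."""
    target = c_prev[row*square:(row+1)*square, col*square:(col+1)*square]
    n_cols = c_curr.shape[1] // square
    for col2 in range(n_cols):
        block = c_curr[row*square:(row+1)*square, col2*square:(col2+1)*square]
        if np.array_equal(block, target):
            if col2 < col:
                return "left"
            if col2 > col:
                return "right"
            return "static"
    return None  # matrix not found in the current frame
```

An edge square whose direction matches that of the person contour in the middle area would then be marked as a non-filtering area, while all other edge squares are replaced with the first C-value matrix as described above.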
The application provides a terminal, including: a processor, a camera and a display screen,
the camera is used for collecting a first short video file;
the processor is used for extracting multiple frames of pictures from the first short video file; identifying the multiple frames of pictures to determine n frames of pictures with noise; removing noise from the noise areas of the n frames of pictures to obtain n filtered frames of pictures; and replacing the n frames of pictures in the first short video file with the n filtered frames of pictures to obtain a noise-removed second video file;
and n is an integer greater than or equal to 1.
Embodiments of the present invention also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any of the above-described embodiments of the method for removing noise of a reflection background.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any of the above-described embodiments of the method for removing noise of a reflection background.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division into units is only a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (6)

1. A method for removing noise from a reflecting background, the method comprising the steps of:
a terminal collects a first short video file and extracts multiple frames of pictures from the first short video file;
the terminal identifies the multiple frames of pictures to determine n frames of pictures with noise;
the terminal removes noise from the noise areas of the n frames of pictures to obtain n filtered frames of pictures, and replaces the n frames of pictures in the first short video file with the n filtered frames of pictures to obtain a noise-removed second video file;
n is an integer greater than or equal to 1;
wherein the removing, by the terminal, of the noise of the noise areas of the n frames of pictures to obtain the n filtered frames of pictures specifically comprises:
the terminal acquires a first frame picture of the n frames of pictures and performs a filtering operation on the first frame picture, wherein the filtering operation specifically comprises: dividing a left edge area and a right edge area of the first frame picture into a plurality of squares of equal area; determining the C-value matrix of each square according to the RGB values and pixel positions of that square; determining the identical C-value matrix that occurs most often among the C-value matrices as a first C-value matrix, and taking the first C-value matrix as the reflection-background C-value matrix; acquiring a first person contour in a middle area of the first frame picture, acquiring the first person contour in a second frame picture, and determining a motion direction of the first person according to the first person contour area in the first frame picture and the first person contour area in the second frame picture; determining the C-value matrices in the left edge area and the right edge area whose motion direction is the same as that of the first person as non-filtering areas; replacing the values of the C-value matrices of the squares outside the non-filtering areas with the values of the first C-value matrix to obtain a filtered first frame picture; and performing the filtering operation on each of the n frames of pictures to obtain the n filtered frames of pictures;
wherein C-value = R-value + G-value + B-value.
2. The method according to claim 1, wherein the identifying, by the terminal, of the plurality of frames of pictures to determine n frames of pictures with noise comprises:
the terminal performs a noise identification operation on a first picture in the multiple frames of pictures, wherein the noise identification operation specifically comprises: determining two edge areas of the first picture and performing human figure contour recognition on the two edge areas to determine whether a human figure contour exists; if a human figure contour exists, determining that the first picture has noise; if no human figure contour exists, determining that the first picture has no noise; and performing the noise identification operation on the multiple frames of pictures in sequence to determine the n frames of pictures with noise.
3. A terminal, the terminal comprising: a processor, a camera and a display screen, which is characterized in that,
the camera is used for collecting a first short video file;
the processor is used for extracting multiple frames of pictures from the first short video file; identifying the multiple frames of pictures to determine n frames of pictures with noise; removing noise from the noise areas of the n frames of pictures to obtain n filtered frames of pictures; and replacing the n frames of pictures in the first short video file with the n filtered frames of pictures to obtain a noise-removed second video file;
n is an integer greater than or equal to 1;
the processor is specifically configured to acquire a first frame picture of the n frame pictures, and perform a filtering operation on the first frame picture, where the filtering operation specifically includes: dividing a left edge area and a right edge area of a first frame of picture into a plurality of square blocks with equal areas, determining C value matrixes of the square blocks according to RGB values of the square blocks and pixel positions of the square blocks, determining the same C value matrixes with the largest number from the C value matrixes as a first C value matrix, and determining the first C value matrix as a reflection background C value matrix; acquiring a first person contour of a middle area of a first frame of picture, acquiring a first person contour of a second frame of picture, and determining the motion direction of a first person according to the first person contour area of the first frame of picture and the first person contour area of the second frame of picture; determining C value matrixes in the left edge area and the right edge area, which are the same as the motion direction, as non-filtering areas, replacing the values of the C value matrixes of square blocks outside the non-filtering areas with the values of the first C value matrixes to obtain filtered first frame pictures, and performing the filtering operation on the n frame pictures to obtain filtered n frame pictures; c-value = R-value + G-value + B-value.
4. The terminal of claim 3,
the processor is specifically configured to perform a noise identification operation on a first picture in the multiple frames of pictures, wherein the noise identification operation specifically comprises: determining two edge areas of the first picture and performing human figure contour recognition on the two edge areas to determine whether a human figure contour exists; if a human figure contour exists, determining that the first picture has noise; if no human figure contour exists, determining that the first picture has no noise; and performing the noise identification operation on the multiple frames of pictures in sequence to determine the n frames of pictures with noise.
5. A terminal according to any of claims 3-4,
wherein the terminal is a smart phone or a tablet computer.
6. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-2.
CN201910441068.6A 2019-05-24 2019-05-24 Method for removing noise of reflection background and related product Active CN110189271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910441068.6A CN110189271B (en) 2019-05-24 2019-05-24 Method for removing noise of reflection background and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910441068.6A CN110189271B (en) 2019-05-24 2019-05-24 Method for removing noise of reflection background and related product

Publications (2)

Publication Number Publication Date
CN110189271A CN110189271A (en) 2019-08-30
CN110189271B true CN110189271B (en) 2021-06-01

Family

ID=67717731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910441068.6A Active CN110189271B (en) 2019-05-24 2019-05-24 Method for removing noise of reflection background and related product

Country Status (1)

Country Link
CN (1) CN110189271B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8565525B2 (en) * 2005-12-30 2013-10-22 Telecom Italia S.P.A. Edge comparison in segmentation of video sequences
US9501839B1 (en) * 2015-05-27 2016-11-22 The Boeing Company Methods and systems for detecting moving objects in a sequence of image frames produced by sensors with inconsistent gain, offset, and dead pixels
CN105844256B (en) * 2016-04-07 2019-07-05 广州盈可视电子科技有限公司 A kind of panoramic video frame image processing method and device
CN106204426A (en) * 2016-06-30 2016-12-07 广州华多网络科技有限公司 A kind of method of video image processing and device
CN106372602A (en) * 2016-08-31 2017-02-01 华平智慧信息技术(深圳)有限公司 Method and device for processing video file
CN109697703A (en) * 2018-11-22 2019-04-30 深圳艺达文化传媒有限公司 The background stacking method and Related product of video
CN109587556B (en) * 2019-01-03 2021-10-15 腾讯科技(深圳)有限公司 Video processing method, video playing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110189271A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
US20180048820A1 (en) Pixel readout of a charge coupled device having a variable aperture
CN109993824B (en) Image processing method, intelligent terminal and device with storage function
CN104205826A (en) Apparatus and method for reconstructing high density three-dimensional image
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
EP2460359A1 (en) Adjusting perspective and disparity in stereoscopic image pairs
CN108776800B (en) Image processing method, mobile terminal and computer readable storage medium
CN112511859A (en) Video processing method, device and storage medium
CN108764040B (en) Image detection method, terminal and computer storage medium
CN107578372B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN111192286A (en) Image synthesis method, electronic device and storage medium
CN110189271B (en) Method for removing noise of reflection background and related product
CN110223219B (en) 3D image generation method and device
CN109598195B (en) Method and device for processing clear face image based on monitoring video
CN109034059B (en) Silence type face living body detection method, silence type face living body detection device, storage medium and processor
CN111881734A (en) Method and device for automatically intercepting target video
CN116168045A (en) Method and system for dividing sweeping lens, storage medium and electronic equipment
CN107770446B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109640170B (en) Speed processing method of self-shooting video, terminal and storage medium
CN105467741A (en) Panoramic shooting method and terminal
CN112991419B (en) Parallax data generation method, parallax data generation device, computer equipment and storage medium
CN112055255B (en) Shooting image quality optimization method and device, smart television and readable storage medium
CN114387157A (en) Image processing method and device and computer readable storage medium
CN109671138B (en) Double overlapping method for head portrait background of self-photographing video and related product
CN113014905A (en) Image frame generation method and device, storage medium and electronic equipment
CN108924405B (en) Photographing focus correction and image processing method and device based on distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210513

Address after: 518000 a1-1201a, building a, Kexing Science Park, 15 Keyuan Road, Science Park community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: SHENZHEN CLOUDROOM TECHNOLOGY Co.,Ltd.

Address before: Room 242, 2 / F, C33 Innovation Industrial Park, 33 Cuizhu North Road, Dongxiao street, Luohu District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ZIYU JIEEN TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Noise removal method of reflected background and related products

Effective date of registration: 20230112

Granted publication date: 20210601

Pledgee: Bank of Jiangsu Limited by Share Ltd. Shenzhen branch

Pledgor: SHENZHEN CLOUDROOM TECHNOLOGY CO.,LTD.

Registration number: Y2023440020005

PE01 Entry into force of the registration of the contract for pledge of patent right