CN113284075B - Image denoising method and device, electronic device and storage medium - Google Patents


Info

Publication number: CN113284075B
Application number: CN202110810575.XA
Authority: CN (China)
Prior art keywords: pixel, point, sampling, processed, pixel value
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113284075A
Inventor: 杨弢
Current Assignee: Beike Technology Co Ltd
Original Assignee: Beike Technology Co Ltd
Application filed by Beike Technology Co Ltd
Priority to CN202110810575.XA
Publication of CN113284075A
Application granted
Publication of CN113284075B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image denoising method, an image denoising device, an electronic device and a storage medium, and relates to the technical field of image processing. The method comprises the following steps: determining a first sampling point and a first weight in a first direction; performing first filtering processing in the first direction according to the current pixel value of the pixel point to be processed, the pixel value of the first sampling point and the first weight, to obtain a first pixel value of the pixel point to be processed; determining a second sampling point and a second weight in a second direction, and performing second filtering processing in the second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight, to obtain a second pixel value of the pixel point to be processed, which is taken as the current pixel value of the pixel point to be processed. The method, the device, the electronic device and the storage medium improve image denoising efficiency, reduce computational complexity, strengthen image denoising capability and improve the quality of images processed by the ISP.

Description

Image denoising method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image denoising method and apparatus, an electronic device, and a storage medium.
Background
At present, in order to give customers a realistic sense of a property, "VR house viewing" is provided: the three-dimensional scene of a property listing is faithfully reconstructed with VR technology, and the user can roam through it from multiple angles using the keyboard and mouse. Indoor scene data can be acquired with devices such as panoramic cameras, depth cameras and laser radars, and the three-dimensional scene of the listing is constructed from it. "VR house viewing" needs to integrate and apply a variety of technologies; the photographs of the property presented to the customer must undergo ISP (Image Signal Processing), and in the ISP pipeline the image needs to be denoised. Since VR house viewing influences the customer's decision and has a certain effect on whether a deal is closed, the picture generated after ISP processing should be as good and as realistic as possible. However, existing denoising algorithms suffer from drawbacks such as poor denoising effect and high power consumption, which affect the result of ISP processing, so a new image denoising technical scheme is needed.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides an image denoising method and device, an electronic device and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an image denoising method, including: in a first direction, acquiring a first sampling point corresponding to a pixel point to be processed in an image, and determining a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed; calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block, and determining a first weight based on the first pixel difference; according to the current pixel value of the pixel point to be processed, the pixel value of the first sampling point and the first weight, performing first filtering processing on the pixel value of the pixel point to be processed in a first direction to obtain a first pixel value of the pixel point to be processed; in a second direction, acquiring a second sampling point corresponding to a pixel point to be processed in the image, and determining a pixel block of the second sampling point corresponding to the second sampling point and a second pixel block corresponding to the pixel point to be processed; calculating second pixel difference information between the second sampling point pixel block and each pixel point corresponding to the second pixel block, and determining a second weight based on the second pixel difference; and according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight, carrying out second filtering processing on the pixel value of the pixel point to be processed in a second direction to obtain a second pixel value of the pixel point to be processed, wherein the second pixel value is used as the current pixel value of the pixel point to be processed.
Optionally, the first direction is an X direction of an image coordinate system; the acquiring of the first sampling point corresponding to the pixel point to be processed in the image comprises: determining a sampling step length in the X direction, and selecting two X-direction sampling points as two first sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point; the determining a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed comprises: respectively acquiring two adjacent areas with two X-direction sampling points as centers, and taking the two adjacent areas as two first sampling point pixel blocks; and acquiring a first adjacent area taking the pixel point to be processed as a center as the first pixel block.
Optionally, the calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block, and determining a first weight based on the first pixel difference information, includes: respectively calculating the squares of the pixel differences between each pair of corresponding pixel points of the two first sampling point pixel blocks and the first pixel block; respectively calculating the sum of all squared pixel differences corresponding to each first sampling point pixel block, as the two first cumulative distances corresponding to the two X-direction sampling points; and determining the two first weights based on the two first cumulative distances.
Optionally, the first pixel value of the pixel point to be processed is obtained according to the following relationship (given in the original filing as an equation image): the first pixel value is computed from the current pixel value p_0 of the pixel point to be processed and the pixel values p_x1 and p_x2 of the first and second X-direction sampling points, weighted by the two first weights w_x1 and w_x2; w_x1 and w_x2 are determined from D_x1 and D_x2, the first cumulative distances corresponding to the first and second X-direction sampling points, together with a parameter composed of the image noise level variance parameter and the sampling step length i.
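For illustration, a minimal single-pixel sketch of this X-direction step is given below, assuming 3x3 pixel blocks, a symmetric pair of X-direction sampling points at distance i, exponential weights of the form exp(-D/sigma^2), and a normalization that gives the pixel point to be processed itself a weight of 1. The exponential form, the value of sigma and the normalization are assumptions standing in for the equation image of the original filing, not the patent's exact formula.

    import numpy as np

    def x_direction_filter_at(img, y, x, i=2, sigma=10.0):
        """Sketch of the first (X-direction) filtering pass for one pixel.

        The exponential weight form and sigma are illustrative assumptions.
        Assumes an interior pixel so that all 3x3 blocks fit inside the image.
        """
        img = img.astype(np.float64)
        # first pixel block: 3x3 neighborhood centered on the pixel point to be processed
        c = img[y - 1:y + 2, x - 1:x + 2]
        # first sampling point pixel blocks: 3x3 neighborhoods centered on the two
        # X-direction sampling points chosen symmetrically at distance i
        a = img[y - 1:y + 2, x + i - 1:x + i + 2]
        b = img[y - 1:y + 2, x - i - 1:x - i + 2]
        # first cumulative distances: sums of squared point-wise pixel differences
        d1 = np.sum((a - c) ** 2)
        d2 = np.sum((b - c) ** 2)
        # first weights: a smaller cumulative distance gives a larger weight (assumed form)
        w1 = np.exp(-d1 / sigma ** 2)
        w2 = np.exp(-d2 / sigma ** 2)
        p0, p1, p2 = img[y, x], img[y, x + i], img[y, x - i]
        # weighted average of the pixel point to be processed and its two sampling points
        return (p0 + w1 * p1 + w2 * p2) / (1.0 + w1 + w2)

    img = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(np.float64)
    print(x_direction_filter_at(img, 16, 16))

The same structure is reused in the Y direction, with the first pixel value taking the place of the current pixel value.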
Optionally, the second direction is a Y direction of an image coordinate system; the acquiring of the second sampling point corresponding to the pixel point to be processed in the image comprises: in the Y direction, two Y-direction sampling points are selected as two second sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point; the determining of a second sampling point pixel block corresponding to the second sampling point and a second pixel block corresponding to the pixel to be processed includes: respectively acquiring two adjacent areas with two Y-direction sampling points as centers, and taking the two adjacent areas as two second sampling point pixel blocks; and acquiring a second adjacent area taking the pixel point to be processed as a center to serve as the second pixel block.
Optionally, the calculating second pixel difference information between the second sampling point pixel block and each pixel point corresponding to the second pixel block, and determining a second weight based on the second pixel difference information, includes: respectively calculating the squares of the pixel differences between each pair of corresponding pixel points of the two second sampling point pixel blocks and the second pixel block; respectively calculating the sum of all squared pixel differences corresponding to each second sampling point pixel block, as the two second cumulative distances corresponding to the two Y-direction sampling points; and determining the two second weights based on the two second cumulative distances.
Optionally, the second pixel value of the pixel point to be processed is obtained according to the following relationship (given in the original filing as an equation image): the second pixel value is computed from the first pixel value p_x of the pixel point to be processed and the pixel values p_y1 and p_y2 of the first and second Y-direction sampling points, weighted by the two second weights w_y1 and w_y2; w_y1 and w_y2 are determined from D_y1 and D_y2, the second cumulative distances corresponding to the first and second Y-direction sampling points, together with a parameter composed of the image noise level variance parameter and the sampling step length i.
Optionally, iteration processing is performed based on a preset iteration number, so as to obtain a first pixel value and a second pixel value of the pixel point to be processed in each iteration, and the second pixel value is set as the current pixel value of the pixel point to be processed.
Optionally, the image comprises: a RAW image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image denoising apparatus, including: the first sampling processing module is used for acquiring a first sampling point corresponding to a pixel point to be processed in an image in a first direction, and determining a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed; the first weight acquisition module is used for calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block and determining a first weight based on the first pixel difference; a first pixel value obtaining module, configured to perform a first filtering process on a pixel value of the to-be-processed pixel point in a first direction according to the current pixel value of the to-be-processed pixel point, the pixel value of the first sampling point, and the first weight, so as to obtain a first pixel value of the to-be-processed pixel point; the second sampling processing module is used for acquiring a second sampling point corresponding to a pixel point to be processed in the image in a second direction, and determining a pixel block of the second sampling point corresponding to the second sampling point and a second pixel block corresponding to the pixel point to be processed; the second weight obtaining module is used for calculating second pixel difference information between the second sampling point pixel block and each pixel point corresponding to the second pixel block and determining a second weight based on the second pixel difference; and the second pixel value obtaining module is used for performing second filtering processing on the pixel value of the pixel point to be processed in a second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight to obtain a second pixel value of the pixel point to be processed as the current pixel value of the pixel point to be processed.
Optionally, the first sampling processing module is specifically configured to determine a sampling step length in an X direction, and select two X-direction sampling points as two first sampling points based on the sampling step length and with the pixel point to be processed as a symmetric point; respectively acquiring two adjacent areas with two X-direction sampling points as centers, and taking the two adjacent areas as two first sampling point pixel blocks; and acquiring a first adjacent area taking the pixel point to be processed as a center as the first pixel block.
Optionally, the first weight obtaining module is specifically configured to calculate a pixel difference square between each pixel point corresponding to two first sampling point pixel blocks and the first pixel block respectively; respectively calculating the sum of the squares of all pixel differences corresponding to each first sampling point pixel block as two first accumulated distances corresponding to the two X-direction sampling points; two of the first weights are determined based on two first cumulative distances.
Optionally, the first pixel value obtaining module is configured to obtain the first pixel value of the pixel point to be processed according to the same relationship as in the method above (given in the original filing as an equation image): the first pixel value is computed from the current pixel value p_0 of the pixel point to be processed and the pixel values p_x1 and p_x2 of the first and second X-direction sampling points, weighted by the two first weights w_x1 and w_x2, which are determined from the first cumulative distances D_x1 and D_x2 corresponding to the two X-direction sampling points together with a parameter composed of the image noise level variance parameter and the sampling step length i.
Optionally, the second sampling processing module is specifically configured to select, in the Y direction, two Y-direction sampling points as two second sampling points based on the sampling step length and with the pixel point to be processed as a symmetric point; respectively acquiring two adjacent areas with two Y-direction sampling points as centers, and taking the two adjacent areas as two second sampling point pixel blocks; and acquiring a second adjacent area taking the pixel point to be processed as a center to serve as the second pixel block.
Optionally, the second weight obtaining module is specifically configured to calculate a pixel difference square between each pixel point corresponding to two second sampling point pixel blocks and the second pixel block, respectively; respectively calculating the sum of the squares of all pixel differences corresponding to each second sampling point pixel block as two second accumulated distances corresponding to the two Y-direction sampling points; two of the second weights are determined based on two second cumulative distances.
Optionally, the second pixel value obtaining module is configured to obtain the second pixel value of the pixel point to be processed according to the same relationship as in the method above (given in the original filing as an equation image): the second pixel value is computed from the first pixel value p_x of the pixel point to be processed and the pixel values p_y1 and p_y2 of the first and second Y-direction sampling points, weighted by the two second weights w_y1 and w_y2, which are determined from the second cumulative distances D_y1 and D_y2 corresponding to the two Y-direction sampling points together with a parameter composed of the image noise level variance parameter and the sampling step length i.
Optionally, the iterative processing module is configured to perform iterative processing based on a preset number of iterations, so as to obtain a first pixel value and a second pixel value of the to-be-processed pixel point in each iteration, and set the second pixel value as the current pixel value of the to-be-processed pixel point.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-mentioned method.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is used for executing the method.
Based on the image denoising method and device, the electronic device, and the storage medium provided by the embodiments of the present disclosure, a first sampling point and a first weight are determined in a first direction, and a first filtering process is performed in the first direction according to a current pixel value of a pixel point to be processed, a pixel value of the first sampling point, and the first weight, so as to obtain a first pixel value of the pixel point to be processed; determining a second sampling point and a second weight in a second direction, and performing second filtering processing in the second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight to obtain a second pixel value of the pixel point to be processed as the current pixel value of the pixel point to be processed; the adaptability of denoising processing can be improved, and the image denoising efficiency is improved; the method has the advantages of reducing the operation complexity, reducing the power consumption, improving the image denoising capability and improving the image quality processed by the ISP.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a flow chart of one embodiment of an image denoising method of the present disclosure;
FIG. 2 is a flow chart of determining a first weight in one embodiment of an image denoising method of the present disclosure;
FIG. 3 is a schematic diagram of lateral sampling and aggregation in an embodiment of an image denoising method according to the present disclosure;
FIG. 4 is a flow chart of determining second weights in an embodiment of the image denoising method of the present disclosure;
FIG. 5 is a schematic diagram of longitudinal sampling and aggregation in an embodiment of an image denoising method according to the present disclosure;
FIG. 6 is a schematic structural diagram of an embodiment of an image denoising apparatus according to the present disclosure;
FIG. 7 is a schematic structural diagram of another embodiment of an image denoising apparatus according to the present disclosure;
FIG. 8 is a block diagram of one embodiment of an electronic device of the present disclosure.
Detailed Description
Example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor is the necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more than two and "at least one" may refer to one, two or more than two.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure describes only an association relationship between associated objects and indicates that three kinds of relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the present disclosure may be implemented in electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with an electronic device such as a terminal device, computer system, or server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks may be performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
In the process of implementing the present disclosure, the inventor finds that the existing image denoising method has the disadvantages of poor denoising effect, high power consumption, and the like, and affects the processing effect of ISP processing, so a new image denoising technical scheme is needed.
The image denoising method provided by the disclosure determines a first sampling point and a first weight in a first direction, and performs first filtering processing in the first direction according to a current pixel value of a pixel point to be processed, the pixel value of the first sampling point and the first weight to obtain a first pixel value of the pixel point to be processed; determining a second sampling point and a second weight in a second direction, and performing second filtering processing in the second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight to obtain a second pixel value of the pixel point to be processed as the current pixel value of the pixel point to be processed; the image denoising efficiency is improved, the operation complexity is reduced, the image denoising capability is improved, the image quality processed by an ISP (Internet service provider) can be improved, and the customer experience is effectively improved.
Fig. 1 is a flowchart of an embodiment of an image denoising method according to the present disclosure, where the method shown in fig. 1 includes the steps of: S101-S106. The following describes each step.
S101, in a first direction, acquiring a first sampling point corresponding to a pixel point to be processed in an image, and determining a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed.
In one embodiment, the image is a RAW image or the like acquired by the image acquisition device for the target room. A RAW image is the unprocessed raw data obtained when a CMOS or CCD image sensor converts the captured light signal into a digital signal. An RGB image, by contrast, has been lossy-compressed and contains far less detail than the RAW image. The first direction may be the X direction (X coordinate direction) of the image coordinate system, or another direction in the image coordinate system. The first sampling point pixel block and the first pixel block may be 3 × 3 or 5 × 5 pixel matrices, or the like.
S102, calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block, and determining a first weight based on the first pixel difference.
In one embodiment, corresponding pixel points between the first sampling point pixel block and the first pixel block are determined, and first pixel difference information between the corresponding pixel points is calculated, wherein the first pixel difference information can be pixel difference square and the like.
S103, according to the current pixel value of the pixel point to be processed, the pixel value of the first sampling point and the first weight, first filtering processing is carried out on the pixel value of the pixel point to be processed in the first direction, and the first pixel value of the pixel point to be processed is obtained.
And S104, acquiring a second sampling point corresponding to the pixel point to be processed in the image in a second direction, and determining a second sampling point pixel block corresponding to the second sampling point and a second pixel block corresponding to the pixel point to be processed.
In one embodiment, the second direction may be a Y direction (Y coordinate direction) of the image coordinate system, or may be another direction on the image coordinate system. The second sample point pixel block and the second pixel block may be a 3 × 3, 5 × 5 pixel matrix, or the like. The first pixel block and the second pixel block corresponding to the pixel point to be processed may be the same.
And S105, calculating second pixel difference information of each corresponding pixel point between the second sampling point pixel block and the second pixel block, and determining a second weight based on the second pixel difference.
In one embodiment, the corresponding pixel points between the pixel block of the second sampling point and the second pixel block are determined, and second pixel difference information between the corresponding pixel points is calculated, where the second pixel difference information may be a pixel difference square or the like.
And S106, according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight, performing second filtering processing on the pixel value of the pixel point to be processed in the second direction to obtain a second pixel value of the pixel point to be processed, wherein the second pixel value is used as the current pixel value of the pixel point to be processed. The first filtering process and the second filtering process may both be cost aggregated filtering processes.
The sampling points can be set in a self-adaptive manner, and multiple iterations of the steps from the step S101 to the step S106 are performed based on preset iteration times, so that a first pixel value and a second pixel value of the pixel point to be processed are obtained in each iteration, and the second pixel value is set as the current pixel value of the pixel point to be processed. The number of iterations may be set, for example, to 5,6, etc.
In an embodiment, the image denoising method in the above embodiment may be applied to a Denoise step of an ISP processing flow, where the periphery of a pixel to be processed in each image is sampled, then the pixel to be processed and surrounding sampling points are weighted and averaged, and then a neighborhood around the point is weighted and aggregated in different regions.
Fig. 2 is a flowchart of determining a first weight in an embodiment of the image denoising method of the present disclosure, where the first direction is an X direction of an image coordinate system, and the method shown in fig. 2 includes the steps of: S201-S205. The following describes each step.
S201, determining a sampling step length in the X direction, and selecting two X-direction sampling points as two first sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point.
In one embodiment, the sampling step length may be 1, 3, 5, etc. pixels. Taking the pixel point to be processed as the symmetric point, two X-direction sampling points, one on the left and one on the right of the pixel point to be processed, are selected in the X direction based on the sampling step length, to serve as the two first sampling points.
S202, two adjacent areas with two X-direction sampling points as centers are respectively obtained and are used as pixel blocks of two first sampling points.
S203, acquiring a first adjacent area with the pixel point to be processed as a center, and using the first adjacent area as a first pixel block.
For example, two adjacent regions centered on two X-direction sampling points may be two 3 × 3, 5 × 5 pixel matrices centered on two X-direction sampling points, respectively, or the like; the first adjacent area with the pixel point to be processed as the center may be a 3 × 3, 5 × 5 pixel matrix with the pixel point to be processed as the center, or the like.
S204, respectively calculating the squares of the pixel differences between each pair of corresponding pixel points of the two first sampling point pixel blocks and the first pixel block.
S205, two first weights respectively corresponding to the two X-direction sample points are determined according to the pixel difference squared.
In one embodiment, pixel difference squares between pixel points corresponding to two first sample point pixel blocks and the first pixel block are respectively calculated, a sum of all pixel difference squares corresponding to the first sample point pixel blocks is respectively calculated as two first cumulative distances corresponding to the two X-direction sample points, and two first weights are determined based on the two first cumulative distances.
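For illustration, a small helper that extracts such neighborhood blocks with NumPy slicing might look as follows; the block size and the boundary handling are not specified by the disclosure, so the sketch simply assumes an odd block size and an interior pixel. The blocks A, B and C in the example below are exactly such neighborhoods.

    import numpy as np

    def neighborhood_block(img, y, x, size=3):
        """Return the size x size pixel block centered at (y, x).

        Sketch only: assumes an odd size and an interior pixel, i.e. no border
        padding, which the disclosure does not specify.
        """
        r = size // 2
        return img[y - r:y + r + 1, x - r:x + r + 1]

    img = np.arange(49, dtype=np.float64).reshape(7, 7)
    i = 2                                    # sampling step length
    C = neighborhood_block(img, 3, 3)        # first pixel block around the center pixel
    A = neighborhood_block(img, 3, 3 + i)    # block around the right X-direction sampling point
    B = neighborhood_block(img, 3, 3 - i)    # block around the left X-direction sampling point
    print(C.shape, A.shape, B.shape)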
For example, let the two first sampling point pixel blocks be A and B and let the first pixel block be C:

A = [a1 a2 a3; a4 a5 a6; a7 a8 a9], B = [b1 b2 b3; b4 b5 b6; b7 b8 b9], C = [c1 c2 c3; c4 c5 c6; c7 c8 c9];

where a1-a9, b1-b9 and c1-c9 are all pixel values.

The squares of the pixel differences between corresponding positions of A and C, and of B and C, are computed one by one: (a1-c1)^2, (a2-c2)^2, ..., (a9-c9)^2 and (b1-c1)^2, (b2-c2)^2, ..., (b9-c9)^2.

The sum of all squared pixel differences corresponding to A, D1 = (a1-c1)^2 + (a2-c2)^2 + ... + (a9-c9)^2, is taken as the first cumulative distance corresponding to one X-direction sampling point; the sum of all squared pixel differences corresponding to B, D2 = (b1-c1)^2 + (b2-c2)^2 + ... + (b9-c9)^2, is taken as the first cumulative distance corresponding to the other X-direction sampling point. The two first weights are then determined based on these two first cumulative distances.
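The same computation expressed with NumPy, using small made-up pixel values for A, B and C; the exponential weight at the end is only an assumed illustration of determining the two first weights from the two first cumulative distances, and sigma is a hypothetical smoothing parameter.

    import numpy as np

    # two first sampling point pixel blocks A, B and the first pixel block C (made-up values)
    A = np.array([[10, 12, 11], [13, 12, 10], [11, 10, 12]], dtype=np.float64)
    B = np.array([[30, 28, 31], [29, 30, 32], [31, 30, 29]], dtype=np.float64)
    C = np.array([[11, 12, 10], [12, 13, 11], [10, 11, 12]], dtype=np.float64)

    # first cumulative distances: sums of squared point-wise pixel differences
    D1 = np.sum((A - C) ** 2)   # corresponds to one X-direction sampling point
    D2 = np.sum((B - C) ** 2)   # corresponds to the other X-direction sampling point

    # assumed weight form: the smaller the cumulative distance, the larger the weight
    sigma = 10.0
    w1, w2 = np.exp(-D1 / sigma ** 2), np.exp(-D2 / sigma ** 2)
    print(D1, D2)   # A is much closer to C than B is, so D1 << D2
    print(w1, w2)   # hence w1 >> w2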
In one embodiment, any sampling interval can be adopted for the X-direction (transverse) sampling points; the greater the number of sampling points, the greater the amount of computation, which grows roughly linearly with the number of sampling points, so an appropriate number of sampling points should be adopted according to the actual situation. As shown in fig. 3, 1 is the center point and 2 are the sampling points. X-direction aggregation (lateral aggregation) means addition row by row, with the result stored in the center shaded column 3.
A pair of first sampling points is sampled in the X direction (the number of first sampling points, their step length, and whether their positions are symmetrical are all arbitrary); the sampling step length can be set to 1, 2, 4, 7 or another number of pixels. Sampling in the X direction of the image with sampling step length i, one point is sampled to the left and one to the right of the center point (the pixel point to be processed); these are the first sampling points. When the left and right points are symmetrical, only one offset actually needs to be processed in the X direction. The point-by-point pixel difference in the X direction is calculated as

diff_x(x, y) = p_s(x, y) - p_0(x, y),

where p_s(x, y) is the pixel value of the X-direction sampling point at horizontal offset i and p_0(x, y) is the current pixel value of the pixel point to be processed.

The weight of an X-direction sampling point is calculated as follows. First the squares of the point-by-point pixel differences, diff_x(x, y)^2, are summed over a 3x3 neighborhood. This yields, for each pixel, the cumulative sum-of-squares distance between the 3x3 first pixel block centered on the pixel point to be processed and the 3x3 first sampling point pixel block centered on the sampling point at offset i, i.e. a first cumulative distance. Denoting the resulting distance matrix by D and shifting its elements by i positions to the left, the cumulative sum-of-squares distance between the 3x3 first pixel block centered on the pixel point to be processed and the 3x3 first sampling point pixel block centered on the sampling point on the opposite side is obtained without recomputation.

For example, take the first pixel block centered on the pixel point to be processed as a first 3x3 pixel matrix and the first sampling point pixel block centered on the X-direction sampling point as a second 3x3 pixel matrix; the squares of the pixel differences between the 9 pixels of the first 3x3 pixel matrix and the 9 pixels of the second 3x3 pixel matrix are computed one by one and added up to obtain a first cumulative distance. The first cumulative distance corresponding to the other X-direction sampling point is obtained in the same way.
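A vectorized NumPy sketch of this distance computation over the whole image is shown below: the pixel differences to the sampling point at offset i are formed with a roll of the image, squared, box-summed over the 3x3 neighborhood, and the resulting distance matrix is then shifted by i so that the distance to the symmetric sampling point on the other side is obtained without recomputation. The wrap-around border handling of np.roll and the sign convention of the shift are assumptions of the sketch; the disclosure does not specify border treatment.

    import numpy as np

    def box_sum_3x3(a):
        """Sum each value with its 3x3 neighborhood (wrap-around borders)."""
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(a, shift=(dy, dx), axis=(0, 1))
        return out

    def x_cumulative_distances(img, i):
        """First cumulative distances for every pixel, in the X direction.

        Returns (d_plus, d_minus): the 3x3 block sum-of-squares distance to the
        sampling point at x + i and at x - i respectively.  d_minus is obtained
        from d_plus by a shift of i positions, as described above (sign
        convention assumed).
        """
        img = img.astype(np.float64)
        diff = np.roll(img, shift=-i, axis=1) - img      # I(x + i, y) - I(x, y)
        d_plus = box_sum_3x3(diff ** 2)                  # block SSD to the right sample
        d_minus = np.roll(d_plus, shift=i, axis=1)       # reuse: block SSD to the left sample
        return d_plus, d_minus

    img = np.random.default_rng(1).integers(0, 256, size=(16, 16)).astype(np.float64)
    d_plus, d_minus = x_cumulative_distances(img, i=2)
    print(d_plus.shape, d_minus.shape)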
Cost aggregation filtering is then performed in the X direction to obtain the first pixel value of the pixel point to be processed. The first pixel value is a weighted average (given in the original filing as an equation image) of the current pixel value p_0 of the pixel point to be processed and the pixel values p_x1 and p_x2 of the first and second X-direction sampling points, using the two first weights w_x1 and w_x2; w_x1 and w_x2 are determined from D_x1 and D_x2, the first cumulative distances corresponding to the first and second X-direction sampling points, together with a parameter composed of the image noise level variance parameter and the sampling step length i.

In one embodiment, the first weight used in the weighted average for a given X-direction sampling point is derived from the sum of squares of the pixel value differences, computed one by one, between the 3x3 pixel block centered on the pixel point to be processed and the 3x3 pixel block centered on that sampling point. The more similar the two 3x3 pixel blocks are, the smaller this cumulative distance and the larger the corresponding weight. The larger the weight obtained in the weighted average, the more the information carried by the corresponding pixel point is extracted and fused.
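Continuing the vectorized sketch, the X-direction cost aggregation filtering over the whole image could then look as follows; the exponential weight exp(-D/sigma^2) and the normalization by 1 + w1 + w2 are assumptions standing in for the equation image above, with sigma playing the role of the parameter composed of the image noise level variance parameter and the sampling step length i.

    import numpy as np

    def box_sum_3x3(a):
        # same helper as in the previous sketch: 3x3 neighborhood sum, wrap-around borders
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(a, shift=(dy, dx), axis=(0, 1))
        return out

    def aggregate_x(img, i=2, sigma=10.0):
        """X-direction cost aggregation filtering (assumed exponential weight form)."""
        img = img.astype(np.float64)
        diff = np.roll(img, shift=-i, axis=1) - img
        d_plus = box_sum_3x3(diff ** 2)              # first cumulative distance, right sample
        d_minus = np.roll(d_plus, shift=i, axis=1)   # first cumulative distance, left sample
        w_plus = np.exp(-d_plus / sigma ** 2)        # first weights: similar blocks -> large weight
        w_minus = np.exp(-d_minus / sigma ** 2)
        p_plus = np.roll(img, shift=-i, axis=1)      # pixel values of the two X-direction samples
        p_minus = np.roll(img, shift=i, axis=1)
        # weighted average of the pixel point to be processed and its two sampling points
        return (img + w_plus * p_plus + w_minus * p_minus) / (1.0 + w_plus + w_minus)

    img = np.random.default_rng(2).integers(0, 256, size=(16, 16)).astype(np.float64)
    first_pixel_values = aggregate_x(img, i=2)
    print(first_pixel_values.shape)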
FIG. 4 is a flow chart of determining second weights in an embodiment of the image denoising method of the present disclosure; the second direction is the Y direction of the image coordinate system, and the method shown in fig. 4 includes the steps of: S401-S405. The following describes each step.
S401, in the Y direction, two Y-direction sampling points are selected as two second sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point.
In one embodiment, the sampling step length may be 1, 3, 5, etc. pixels. Taking the pixel point to be processed as the symmetric point, two Y-direction sampling points, one above and one below the pixel point to be processed, are selected in the Y direction based on the sampling step length, to serve as the two second sampling points. The sampling step lengths in the X and Y directions may be the same.
S402, two adjacent areas with two Y-direction sampling points as centers are respectively obtained and are used as pixel blocks of two second sampling points.
S403, acquiring a second adjacent area with the pixel point to be processed as the center, as the second pixel block.
For example, two adjacent regions centered on two Y-direction sampling points are respectively two 3 × 3 and 5 × 5 pixel matrices centered on two Y-direction sampling points, and the like; the second adjacent area with the pixel point to be processed as the center is a pixel matrix of 3 × 3 and 5 × 5 with the pixel point to be processed as the center, and the like. The first pixel block and the second pixel block with the pixel point to be processed as the center can be the same.
S404, respectively calculating the squares of the pixel differences between each pair of corresponding pixel points of the two second sampling point pixel blocks and the second pixel block.
S405, two second weights respectively corresponding to the two Y-direction sample points are determined according to the pixel difference squared.
In one embodiment, the squares of the pixel differences between each pair of corresponding pixel points of the two second sampling point pixel blocks and the second pixel block are respectively calculated; the sum of all squared pixel differences corresponding to each second sampling point pixel block is respectively calculated as the two second cumulative distances corresponding to the two Y-direction sampling points; and the two second weights are determined based on the two second cumulative distances. The method of determining the second weights is the same as the method of determining the first weights.
In one embodiment, any sampling interval can be adopted for the Y-direction (longitudinal) sampling points; the greater the number of sampling points, the greater the amount of computation, which grows roughly linearly with the number of sampling points, so an appropriate number of sampling points should be adopted according to the actual situation. As shown in fig. 5, 5 is the center point and 6 are the sampling points. Y-direction aggregation (vertical aggregation) means addition row by row, with the result stored in the center shaded column 7.
After the first pixel value has been taken as the current pixel value of the pixel point to be processed, a pair of second sampling points is sampled in the Y direction (the number of second sampling points, their step length, and whether their positions are symmetrical are all arbitrary). If the sampling step length in the Y direction is the same as in the X direction, i.e. also i, then one point is sampled above and one below the center point (the pixel point to be processed); when the positions of the upper and lower points are symmetrical, only one offset actually needs to be processed in the Y direction. The point-by-point pixel difference in the Y direction is calculated as

diff_y(x, y) = p_s(x, y) - p_0(x, y),

where p_s(x, y) is the pixel value of the Y-direction sampling point at vertical offset i and p_0(x, y) is the current pixel value of the pixel point to be processed.

The weight of a Y-direction sampling point is calculated in the same way: the squares of the point-by-point pixel differences, diff_y(x, y)^2, are summed over a 3x3 neighborhood, which yields the cumulative sum-of-squares distance (a second cumulative distance) between the 3x3 second pixel block centered on the pixel point to be processed and the 3x3 second sampling point pixel block centered on the Y-direction sampling point. Denoting the resulting distance matrix by D and shifting its elements up by i positions, the cumulative sum-of-squares distance to the 3x3 second sampling point pixel block centered on the sampling point on the opposite side is obtained without recomputation.
Cost aggregation filtering is then performed in the Y direction to obtain the second pixel value of the pixel point to be processed. The second pixel value is a weighted average (given in the original filing as an equation image) of the first pixel value p_x of the pixel point to be processed and the pixel values p_y1 and p_y2 of the first and second Y-direction sampling points, using the two second weights w_y1 and w_y2; w_y1 and w_y2 are determined from D_y1 and D_y2, the second cumulative distances corresponding to the first and second Y-direction sampling points, together with a parameter composed of the image noise level variance parameter and the sampling step length i.
Sampling points are defined adaptively and the above steps are repeated N times; in each iteration a first pixel value and a second pixel value of the pixel point to be processed are obtained, and the second pixel value is set as the current pixel value of the pixel point to be processed. That is, from the second iteration onwards, the current pixel value p_0 used in the X-direction filtering is replaced by the second pixel value generated in the previous iteration.
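Putting the pieces together, an end-to-end sketch of the iterative two-direction filtering could look as follows. As in the earlier sketches, the exponential weights, the normalization, the wrap-around border handling and the per-iteration schedule of sampling step lengths are assumptions rather than the exact formulas of the disclosure; the Y-direction pass is implemented here by transposing the image and reusing the X-direction pass.

    import numpy as np

    def box_sum_3x3(a):
        """Sum each value with its 3x3 neighborhood (wrap-around borders)."""
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(a, shift=(dy, dx), axis=(0, 1))
        return out

    def aggregate_x(img, i, sigma):
        """One X-direction cost aggregation pass (assumed exponential weights)."""
        img = img.astype(np.float64)
        diff = np.roll(img, shift=-i, axis=1) - img        # to the sample at x + i
        d_plus = box_sum_3x3(diff ** 2)                    # first cumulative distances
        d_minus = np.roll(d_plus, shift=i, axis=1)         # reused for the sample at x - i
        w_plus = np.exp(-d_plus / sigma ** 2)
        w_minus = np.exp(-d_minus / sigma ** 2)
        p_plus = np.roll(img, shift=-i, axis=1)
        p_minus = np.roll(img, shift=i, axis=1)
        return (img + w_plus * p_plus + w_minus * p_minus) / (1.0 + w_plus + w_minus)

    def denoise(img, steps=(1, 2, 4), sigma=10.0):
        """Iterative two-direction denoising sketch.

        Each iteration uses one sampling step length: an X pass produces the
        first pixel values, a Y pass (the X pass applied to the transposed
        image) produces the second pixel values, which become the current
        pixel values for the next iteration.  The schedule of step lengths
        per iteration is an assumed choice.
        """
        cur = img.astype(np.float64)
        for i in steps:
            first = aggregate_x(cur, i, sigma)             # first filtering processing
            second = aggregate_x(first.T, i, sigma).T      # second filtering processing
            cur = second
        return cur

    noisy = np.random.default_rng(3).normal(128.0, 20.0, size=(64, 64))
    clean = denoise(noisy)
    print(clean.shape, float(noisy.std()), float(clean.std()))

Implementing the Y pass as the X pass on the transposed image is only a convenience of the sketch; a direct implementation along the vertical axis is equivalent.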
In one embodiment, as shown in fig. 6, the present disclosure provides an image denoising apparatus, including: a first sampling processing module 601, a first weight obtaining module 602, a first pixel value obtaining module 603, a second sampling processing module 604, a second weight obtaining module 605, and a second pixel value obtaining module 606.
The first sampling processing module 601 obtains a first sampling point corresponding to a pixel point to be processed in an image in a first direction, and determines a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed. The first weight obtaining module 602 calculates first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block, and determines a first weight based on the first pixel difference. The first pixel value obtaining module 603 performs a first filtering process on the pixel value of the pixel point to be processed in the first direction according to the current pixel value of the pixel point to be processed, the pixel value of the first sampling point, and the first weight, so as to obtain a first pixel value of the pixel point to be processed.
The second sampling processing module 604 obtains a second sampling point corresponding to the pixel point to be processed in the image in a second direction, and determines a second sampling point pixel block corresponding to the second sampling point and a second pixel block corresponding to the pixel point to be processed. The second weight obtaining module 605 calculates second pixel difference information of each corresponding pixel point between the second sampling point pixel block and the second pixel block, and determines a second weight based on the second pixel difference information. The second pixel value obtaining module 606 performs second filtering processing on the pixel value of the pixel point to be processed in the second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point, and the second weight, and obtains a second pixel value of the pixel point to be processed, which is used as the current pixel value of the pixel point to be processed.
In one embodiment, the first sampling processing module 601 determines a sampling step length in the X direction, and selects two X-direction sampling points as the two first sampling points based on the sampling step length and with the pixel point to be processed as the symmetric point. The first sampling processing module 601 respectively obtains two adjacent regions centered on the two X-direction sampling points as the two first sampling point pixel blocks, and obtains a first adjacent region centered on the pixel point to be processed as the first pixel block.
The first weight obtaining module 602 calculates the squares of the pixel differences between each pair of corresponding pixel points of the two first sampling point pixel blocks and the first pixel block. The first weight obtaining module 602 calculates the sum of all squared pixel differences corresponding to each first sampling point pixel block as the two first cumulative distances corresponding to the two X-direction sampling points, and determines the two first weights based on the two first cumulative distances.
In one embodiment, the second sampling processing module 604 selects two Y-direction sampling points as two second sampling points in the Y direction based on the sampling step length and taking the pixel point to be processed as a symmetric point. The second sampling processing module 604 obtains two adjacent regions centered on two Y-direction sampling points as two second sampling point pixel blocks, and obtains a second adjacent region centered on a pixel point to be processed as a second pixel block.
The second weight obtaining module 605 calculates the squares of the pixel differences between each pair of corresponding pixel points of the two second sampling point pixel blocks and the second pixel block. The second weight obtaining module 605 calculates the sum of all squared pixel differences corresponding to each second sampling point pixel block as the two second cumulative distances corresponding to the two Y-direction sampling points, and determines the two second weights based on the two second cumulative distances.
As shown in fig. 7, the iteration processing module 607 performs iteration processing based on a preset number of iterations, so as to obtain a first pixel value and a second pixel value of the to-be-processed pixel point in each iteration, and set the second pixel value as the current pixel value of the to-be-processed pixel point.
Fig. 8 is a block diagram of one embodiment of an electronic device of the present disclosure, as shown in fig. 8, the electronic device 81 includes one or more processors 811 and memory 812.
The processor 811 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 81 to perform desired functions.
Memory 812 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory, for example, may include: random Access Memory (RAM) and/or cache memory (cache), etc. The nonvolatile memory, for example, may include: read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on a computer-readable storage medium and executed by processor 811 to implement the image denoising methods of the various embodiments of the present disclosure above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 81 may further include: an input device 813, an output device 814, etc., which are interconnected by a bus system and/or other form of connection mechanism (not shown). The input device 813 may further include, for example, a keyboard, a mouse, and the like. The output device 814 may output various information to the outside. The output devices 814 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 81 relevant to the present disclosure are shown in fig. 8, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 81 may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the image denoising method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the image denoising method according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium may include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the image denoising method and apparatus, the electronic device, and the storage medium of the above embodiments, a first sampling point and a first weight are determined in a first direction, and cost aggregation filtering is performed in the first direction according to the current pixel value of the pixel point to be processed, the pixel value of the first sampling point, and the first weight, to obtain a first pixel value of the pixel point to be processed; a second sampling point and a second weight are then determined in a second direction, and cost aggregation filtering is performed in the second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point, and the second weight, to obtain a second pixel value that serves as the current pixel value of the pixel point to be processed. This improves the adaptability and efficiency of the denoising processing, reduces the computational complexity and power consumption, strengthens the image denoising capability, and improves the quality of images processed by the ISP, effectively improving the user experience.
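For readers who want to see the two-pass scheme end to end, the following NumPy sketch illustrates one way such a separable denoising filter could be implemented; the 3x3 neighborhood, the exponential weight of the form exp(-D / (2*sigma^2)), and all function names are assumptions made for illustration rather than the granted implementation.

```python
import numpy as np

def directional_pass(img, sigma, step, axis, block=1):
    """One pass of the separable denoising filter along a single axis.

    For each pixel, two sampling points are taken at +/- step along `axis`;
    the (2*block+1) x (2*block+1) neighborhood of each sampling point is
    compared with the neighborhood of the pixel itself, and the pixel is
    blended with the two sampling points using weights derived from the
    accumulated squared differences (the weight form below is assumed).
    """
    src = np.asarray(img, dtype=np.float64)
    out = src.copy()
    h, w = src.shape
    margin = block + step
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            centre = src[y - block:y + block + 1, x - block:x + block + 1]
            num = src[y, x]          # numerator starts with the centre pixel
            den = 1.0                # whose weight is taken as 1
            for s in (-step, step):
                sy, sx = (y + s, x) if axis == 0 else (y, x + s)
                nb = src[sy - block:sy + block + 1, sx - block:sx + block + 1]
                dist = np.sum((nb - centre) ** 2)          # cumulative distance
                wgt = np.exp(-dist / (2.0 * sigma ** 2))   # assumed weight form
                num += wgt * src[sy, sx]
                den += wgt
            out[y, x] = num / den
    return out

def denoise(img, sigma=10.0, step=2):
    """Two separable passes: X direction first, then Y on the X-pass result."""
    first = directional_pass(img, sigma, step, axis=1)   # first (X-direction) pass
    return directional_pass(first, sigma, step, axis=0)  # second (Y-direction) pass
```

A call such as denoise(noisy, sigma=12.0, step=2) assumes a single-channel (for example, luminance) image; the double loop is written for clarity rather than speed.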
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts, reference may be made between the embodiments. Since the system embodiment basically corresponds to the method embodiment, its description is relatively brief; for the relevant points, reference may be made to the corresponding description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As those skilled in the art will appreciate, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended terms that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as, but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (8)

1. An image denoising method, comprising:
in a first direction, acquiring a first sampling point corresponding to a pixel point to be processed in an image, and determining a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed; wherein the first direction is an X direction of an image coordinate system; determining a sampling step length in the X direction, and selecting two X-direction sampling points as two first sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point; respectively acquiring two adjacent areas with two X-direction sampling points as centers, and taking the two adjacent areas as two first sampling point pixel blocks; acquiring a first adjacent region with the pixel point to be processed as a center, and taking the first adjacent region as the first pixel block;
calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block, and determining a first weight based on the first pixel difference;
according to the current pixel value of the pixel point to be processed, the pixel value of the first sampling point and the first weight, performing first filtering processing on the pixel value of the pixel point to be processed in a first direction to obtain a first pixel value of the pixel point to be processed;
in a second direction, acquiring a second sampling point corresponding to a pixel point to be processed in the image, and determining a second sampling point pixel block corresponding to the second sampling point and a second pixel block corresponding to the pixel point to be processed; wherein the second direction is the Y direction of the image coordinate system; in the Y direction, two Y-direction sampling points are selected as two second sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point; respectively acquiring two adjacent areas with two Y-direction sampling points as centers, and taking the two adjacent areas as two second sampling point pixel blocks; acquiring a second adjacent area taking the pixel point to be processed as a center, and taking the second adjacent area as the second pixel block;
calculating second pixel difference information between the second sampling point pixel block and each pixel point corresponding to the second pixel block, and determining a second weight based on the second pixel difference;
and according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight, carrying out second filtering processing on the pixel value of the pixel point to be processed in a second direction to obtain a second pixel value of the pixel point to be processed, wherein the second pixel value is used as the current pixel value of the pixel point to be processed.
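As an illustration of the sampling-point and pixel-block selection recited in claim 1, the sketch below gathers, for one pixel, the first pixel block and the two first sampling point pixel blocks along the X direction; the 3x3 neighborhood size and the function name are illustrative assumptions.

```python
import numpy as np

def gather_direction_blocks(img, y, x, step, block=1, axis=1):
    """Return the centre pixel block plus (sample value, sample block) pairs.

    The two sampling points lie symmetrically at +/- step from (y, x) along
    `axis` (axis=1 is the X direction); each block is the
    (2*block+1) x (2*block+1) neighborhood around its centre.
    """
    src = np.asarray(img, dtype=np.float64)

    def patch(cy, cx):
        return src[cy - block:cy + block + 1, cx - block:cx + block + 1]

    centre_block = patch(y, x)                        # the "first pixel block"
    samples = []
    for s in (-step, step):
        sy, sx = (y + s, x) if axis == 0 else (y, x + s)
        samples.append((src[sy, sx], patch(sy, sx)))  # sample value and its block
    return centre_block, samples
```

For example, gather_direction_blocks(img, 10, 10, step=2, axis=1) would return the block around (10, 10) together with the sample values and blocks around (10, 8) and (10, 12), assuming the pixel lies far enough from the image border.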
2. The method of claim 1, wherein said calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block, and determining a first weight based on the first pixel difference, comprises:
respectively calculating, for each of the two first sampling point pixel blocks, the square of the pixel difference between each pair of corresponding pixel points of that sampling point pixel block and the first pixel block;
respectively calculating, for each first sampling point pixel block, the sum of the squares of all of the corresponding pixel differences, as two first cumulative distances corresponding to the two X-direction sampling points;
two of the first weights are determined based on two first cumulative distances.
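A minimal sketch of the cumulative distance of claim 2, assuming the blocks are NumPy arrays of equal shape; the function name is illustrative.

```python
import numpy as np

def cumulative_distance(sample_block, centre_block):
    """Sum of squared differences between corresponding pixels of the two blocks."""
    diff = np.asarray(sample_block, np.float64) - np.asarray(centre_block, np.float64)
    return float(np.sum(diff ** 2))
```

Applying it once per X-direction sampling point yields the two first cumulative distances from which the two first weights are then derived.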
3. The method of claim 2,
wherein the first pixel value of the pixel point to be processed is obtained according to a formula (presented as an image in the published claim) that combines:
the pixel value of the first X-direction sampling point, the pixel value of the second X-direction sampling point, and the current pixel value of the pixel point to be processed;
the two first weights, which correspond to the first cumulative distances of the first and second X-direction sampling points; and
the image noise level variance parameter and the sampling step length i.
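The formula of claim 3 is published only as an image, so the exact mapping from cumulative distances to weights is not reproduced here. The sketch below assumes a normalized weighted average with exponential weights controlled by the noise variance parameter; that form, and the omission of the sampling step length i from the weight, are assumptions consistent with the symbols the claim defines, not the granted formula.

```python
import math

def first_pass_value(p0, p1, p2, d1, d2, sigma):
    """Blend the centre pixel p0 with the two X-direction samples p1 and p2.

    d1 and d2 are the first cumulative distances; the exponential weight and
    its normalization by 2*sigma**2 are assumptions, not the published formula.
    """
    w1 = math.exp(-d1 / (2.0 * sigma ** 2))
    w2 = math.exp(-d2 / (2.0 * sigma ** 2))
    return (p0 + w1 * p1 + w2 * p2) / (1.0 + w1 + w2)
```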
4. The method of claim 1, wherein said calculating second pixel difference information between the second sampling point pixel block and each pixel point corresponding to the second pixel block, and determining a second weight based on the second pixel difference, comprises:
respectively calculating, for each of the two second sampling point pixel blocks, the square of the pixel difference between each pair of corresponding pixel points of that sampling point pixel block and the second pixel block;
respectively calculating, for each second sampling point pixel block, the sum of the squares of all of the corresponding pixel differences, as two second cumulative distances corresponding to the two Y-direction sampling points;
two of the second weights are determined based on two second cumulative distances.
5. The method of claim 4,
wherein the second pixel value of the pixel point to be processed is obtained according to a formula (presented as an image in the published claim) that combines:
the pixel value of the first Y-direction sampling point, the pixel value of the second Y-direction sampling point, and the first pixel value of the pixel point to be processed;
the two second weights, which correspond to the second cumulative distances of the first and second Y-direction sampling points; and
the image noise level variance parameter and the sampling step length i.
6. An image denoising apparatus, comprising:
the first sampling processing module is used for acquiring a first sampling point corresponding to a pixel point to be processed in an image in a first direction, and determining a first sampling point pixel block corresponding to the first sampling point and a first pixel block corresponding to the pixel point to be processed; wherein the first direction is an X direction of an image coordinate system; the first sampling processing module is specifically used for determining a sampling step length in the X direction, and selecting two X-direction sampling points as two first sampling points based on the sampling step length and by taking the pixel point to be processed as a symmetrical point; respectively acquiring two adjacent areas with two X-direction sampling points as centers, and taking the two adjacent areas as two first sampling point pixel blocks; acquiring a first adjacent region with the pixel point to be processed as a center, and taking the first adjacent region as the first pixel block;
the first weight acquisition module is used for calculating first pixel difference information between the first sampling point pixel block and each pixel point corresponding to the first pixel block and determining a first weight based on the first pixel difference;
a first pixel value obtaining module, configured to perform a first filtering process on a pixel value of the to-be-processed pixel point in a first direction according to the current pixel value of the to-be-processed pixel point, the pixel value of the first sampling point, and the first weight, so as to obtain a first pixel value of the to-be-processed pixel point;
the second sampling processing module is used for acquiring a second sampling point corresponding to a pixel point to be processed in the image in a second direction, and determining a second sampling point pixel block corresponding to the second sampling point and a second pixel block corresponding to the pixel point to be processed; wherein the second direction is the Y direction of the image coordinate system; the second sampling processing module is specifically used for selecting two Y-direction sampling points as two second sampling points in the Y direction based on the sampling step length and by taking the pixel point to be processed as a symmetrical point; respectively acquiring two adjacent areas with two Y-direction sampling points as centers, and taking the two adjacent areas as two second sampling point pixel blocks; acquiring a second adjacent area taking the pixel point to be processed as a center, and taking the second adjacent area as the second pixel block;
the second weight obtaining module is used for calculating second pixel difference information between the second sampling point pixel block and each pixel point corresponding to the second pixel block and determining a second weight based on the second pixel difference;
and the second pixel value obtaining module is used for performing second filtering processing on the pixel value of the pixel point to be processed in a second direction according to the first pixel value of the pixel point to be processed, the pixel value of the second sampling point and the second weight to obtain a second pixel value of the pixel point to be processed as the current pixel value of the pixel point to be processed.
7. A computer-readable storage medium, characterized in that the storage medium stores a computer program for performing the method of any of the preceding claims 1-5.
8. An electronic device, characterized in that the electronic device comprises:
a processor; a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method of any one of the claims 1 to 5.
CN202110810575.XA 2021-07-19 2021-07-19 Image denoising method and device, electronic device and storage medium Active CN113284075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110810575.XA CN113284075B (en) 2021-07-19 2021-07-19 Image denoising method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110810575.XA CN113284075B (en) 2021-07-19 2021-07-19 Image denoising method and device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113284075A CN113284075A (en) 2021-08-20
CN113284075B true CN113284075B (en) 2021-09-21

Family

ID=77286695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110810575.XA Active CN113284075B (en) 2021-07-19 2021-07-19 Image denoising method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113284075B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596348B (en) * 2021-12-08 2023-09-01 北京蓝亚盒子科技有限公司 Screen space-based ambient occlusion calculating method, device, operator and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023114A (en) * 2016-05-27 2016-10-12 北京小米移动软件有限公司 Image processing method and apparatus
CN111861938A (en) * 2020-07-30 2020-10-30 展讯通信(上海)有限公司 Image denoising method and device, electronic equipment and readable storage medium
CN113034387A (en) * 2021-03-05 2021-06-25 成都国科微电子有限公司 Image denoising method, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tuan Q. Pham et al., "Separable Bilateral Filtering for Fast Video Preprocessing," 2005 IEEE International Conference on Multimedia and Expo, 2005-07-06, pp. 1-4 *
Xu Min et al., "Grain Noise Restoration Algorithm Based on Region-Partitioned Bilateral Filtering," Journal of Shanghai University (Natural Science Edition), 2020-10-31, Vol. 26, No. 5, pp. 693-701 *

Also Published As

Publication number Publication date
CN113284075A (en) 2021-08-20

Similar Documents

Publication Publication Date Title
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
JP4689667B2 (en) Encoding method, encoding apparatus, filter generation method, and filter generation apparatus
JP6507846B2 (en) Image noise removing method and image noise removing apparatus
JP7078139B2 (en) Video stabilization methods and equipment, as well as non-temporary computer-readable media
JP7030493B2 (en) Image processing equipment, image processing methods and programs
CN105069424B (en) Quick face recognition system and method
CN114399597B (en) Method and device for constructing scene space model and storage medium
EP2898473A1 (en) Systems and methods for reducing noise in video streams
WO2014070273A1 (en) Recursive conditional means image denoising
CN113284075B (en) Image denoising method and device, electronic device and storage medium
CN113947768A (en) Monocular 3D target detection-based data enhancement method and device
CN113112561B (en) Image reconstruction method and device and electronic equipment
KR101262164B1 (en) Method for generating high resolution depth image from low resolution depth image, and medium recording the same
CN113436075A (en) Image demosaicing method and device, electronic device and medium
CN110689565B (en) Depth map determination method and device and electronic equipment
CN117408886A (en) Gas image enhancement method, gas image enhancement device, electronic device and storage medium
JP2005339535A (en) Calculation of dissimilarity measure
US20130202199A1 (en) Using higher order statistics to estimate pixel values in digital image processing to improve accuracy and computation efficiency
CN111932466B (en) Image defogging method, electronic equipment and storage medium
CN113269696B (en) Method for denoising image, electronic device, and medium
CN112995633B (en) Image white balance processing method and device, electronic equipment and storage medium
CN112150532A (en) Image processing method and device, electronic equipment and computer readable medium
CN111429568A (en) Point cloud processing method and device, electronic equipment and storage medium
CN111784733A (en) Image processing method, device, terminal and computer readable storage medium
CN104932868B (en) A kind of data processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant