CN112598600A - Image moire correction method, electronic device and medium therefor - Google Patents


Info

Publication number
CN112598600A
Authority
CN
China
Prior art keywords
pixel point
moire
pixel
brightness
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110006964.7A
Other languages
Chinese (zh)
Other versions
CN112598600B (en)
Inventor
凌晨 (Ling Chen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ARM Technology China Co Ltd
Original Assignee
ARM Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ARM Technology China Co Ltd
Publication of CN112598600A
Application granted
Publication of CN112598600B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20216 Image averaging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of image processing, and discloses an image moire correction method, an electronic device and a medium therefor. The method of the present application comprises: acquiring an image to be corrected; determining whether a pixel point is located in a moire area based on the brightness component values of the pixel points in the image to be corrected; and performing moire correction on the pixel point when it is located in the moire area. The moire correction method detects the brightness changes among a plurality of consecutive pixel points in a certain direction in the image to be corrected to determine whether a brightness step occurs among them, and determines that the pixel point to be detected is in a moire area when the number of pixel points at which a step occurs is greater than a step threshold, so that the electronic device can more accurately determine and correct moire in the image to be corrected.

Description

Image moire correction method, electronic device and medium therefor
The present application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 30, 2020, with application No. 202011064866.0 and entitled "Image moire correction method and electronic device and medium therefor", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method for correcting moire fringes of an image, an electronic device and a medium thereof.
Background
When the sampling frequency of the image sensor of an electronic device is lower than the spatial frequency of the captured scene, moire may be generated in the captured image because high-frequency image information is folded into a low-frequency pixel space, which degrades image quality. An image containing moire areas is also accompanied by many false colors. In the pixel space of an image containing a moire area, the spatial frequency at the photosensitive elements corresponding to pixel points in the moire area is significantly higher than at those of the surrounding pixel points; for example, in an image in YUV format, the Y component (luminance) of a pixel point in the moire area is significantly higher than the luminance of the surrounding pixel points.
Currently, most manufacturers choose to place an optical low-pass filter in the camera to suppress moire, at the expense of reduced image sharpness. Another common method removes moire from the raw Bayer image taken at capture time; however, since the Bayer image is the raw output of the electronic device's CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) image sensor, this method cannot be used when the raw Bayer image is unavailable. Therefore, there is a need for a moire correction method that supports common image formats.
Disclosure of Invention
An object of the present application is to provide an image moire correction method, an electronic device and a medium therefor. With this method, the electronic device detects the brightness changes among a plurality of consecutive pixel points in a certain direction around a pixel point to be detected in the image to be corrected, determines whether a brightness step occurs among them, and determines that the pixel point to be detected is located in a moire area when the number of pixel points at which a step occurs is greater than a step threshold, so that the electronic device can more accurately determine the moire area in the image to be corrected.
A first aspect of the present application provides an image moire correction method, including: acquiring an image to be corrected; determining whether a pixel point is located in a moire area based on the brightness component values of the pixel points in the image to be corrected; and performing moire correction on the pixel point when it is located in the moire area.
That is, in the embodiment of the present application, the format of the image to be corrected may be, for example, the YUV color space, in which the luminance component value of a pixel point is the Y component (luminance).
In a possible implementation of the first aspect, determining whether a pixel point is located in a moire area based on the luminance component of each pixel point in the image to be corrected includes:
selecting, from the image to be corrected, the pixel points included in a detection window of the pixel point to be detected, wherein the detection window contains the pixel point to be detected and includes M × M pixel points, M being an odd number greater than 1;
calculating the number of pixel points at which a brightness step occurs in the detection window based on the brightness value of each pixel point in the detection window;
and determining that the pixel point to be detected is located in the moire area when the number of pixel points at which a brightness step occurs in the detection window exceeds a preset number.
In one possible implementation of the first aspect described above, the detection window is a sliding window.
That is, in the embodiment of the present application, the detection window may be, for example, the sliding window of size 5 × 5 in fig. 6, i.e., the detection window may include 25 pixel points. The preset number may be a step threshold: if the number of pixel points at which a brightness step occurs in the detection window, step_hv, is determined to be 10 and the preset step threshold is 5, the pixel point to be detected is determined to be located in the moire area.
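As a rough illustration only (not part of the claims), this window-level decision might look like the following Python sketch; has_brightness_step is the hedged helper sketched later, after the step-sign formulas, and all names, signatures and the example threshold are assumptions:

```python
import numpy as np

def count_steps(window: np.ndarray, tn: float) -> int:
    """Count the pixel points at which a brightness step occurs, scanning
    triples of adjacent pixels along every row and column of the window."""
    m = window.shape[0]
    steps = 0
    for r in range(m):                      # horizontal triples
        for c in range(1, m - 1):
            if has_brightness_step(window[r, c - 1], window[r, c],
                                   window[r, c + 1], tn):
                steps += 1
    for c in range(m):                      # vertical triples
        for r in range(1, m - 1):
            if has_brightness_step(window[r - 1, c], window[r, c],
                                   window[r + 1, c], tn):
                steps += 1
    return steps

# The pixel under test is treated as a moire pixel when the count
# exceeds the preset step threshold, e.g.:
#   in_moire = count_steps(window, tn) > 5
```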
In a possible implementation of the first aspect, the pixel point to be detected is located at the center of the detection window.
For example, the pixel point to be detected may be the pixel point Y22 in fig. 6.
In a possible implementation of the first aspect, whether a pixel has a brightness step is determined by:
determining that a brightness step occurs at the second pixel point when, in at least one detection direction in the detection window, the brightness change direction from the first pixel point to the second pixel point differs from the brightness change direction from the second pixel point to the third pixel point, wherein the first pixel point, the second pixel point and the third pixel point are three adjacent pixel points in the detection window, and the brightness change direction is one of brightness increase and brightness decrease.
For example, whether a step occurs can be determined by checking whether the brightness change direction from the first pixel point to the second pixel point is opposite to the brightness change direction from the second pixel point to the third pixel point; if the two change directions are opposite, a step occurs.
In a possible implementation of the first aspect, whether the luminance change direction from the first pixel point to the second pixel point is the same as the luminance change direction from the second pixel point to the third pixel point is determined as follows (for example, the first, second and third pixel points may be Y21, Y22 and Y23 in fig. 6, or Y12, Y22 and Y32 in fig. 6):
Calculating the brightness step sign diffH_r1 of the first pixel point and the second pixel point by the following formula:
diffH_r1 = sign(diffH1), i.e. diffH_r1 is 1 when diffH1 > 0, 0 when diffH1 = 0, and -1 when diffH1 < 0;
wherein diffH1 can be calculated by the following formula:
diffH1 = sign(diff1) * diffHa1
wherein sign(diff1) is the sign bit of the difference between the brightness values of the first pixel point and the second pixel point, and diffHa1 can be calculated by the following formula:
diffHa1 = abs(diff1) - tn
wherein abs(diff1) is the absolute value of the difference between the brightness values of the first pixel point and the second pixel point, and tn is the noise model calibration parameter;
calculating the brightness step sign diffH_r2 of the second pixel point and the third pixel point by the following formula:
diffH_r2 = sign(diffH2), defined in the same way as diffH_r1;
wherein diffH2 can be calculated by the following formula:
diffH2 = sign(diff2) * diffHa2
wherein sign(diff2) is the sign bit of the difference between the brightness values of the second pixel point and the third pixel point, and diffHa2 can be calculated by the following formula:
diffHa2 = abs(diff2) - tn
wherein abs(diff2) is the absolute value of the difference between the brightness values of the second pixel point and the third pixel point;
and determining that the brightness change direction from the first pixel point to the second pixel point differs from that from the second pixel point to the third pixel point when the product of diffH_r1 and diffH_r2 is negative.
In a possible implementation of the first aspect, the sign bit is -1 when the difference between the brightness values of the first and second pixel points is negative, and 1 when that difference is non-negative; likewise, the sign bit is -1 when the difference between the brightness values of the second and third pixel points is negative, and 1 when that difference is non-negative.
In a possible implementation of the first aspect, the noise model calibration parameter is used to remove a noise signal in an absolute value of a difference between luminance values of the first pixel point and the second pixel point and an absolute value of a difference between luminance values of the second pixel point and the third pixel point.
For example, taking Y23 and Y22 in fig. 6 as examples, the noise model calibration parameter represents the noise magnitude of the Y components of Y23 and Y22. It adjusts the values of the Y components of the pixel points in the sliding window, and its value can change as the Y-component values of the pixel points in the sliding window change.
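A minimal Python sketch of the step-sign computation described by the formulas above; treating differences whose magnitude does not exceed the noise parameter tn as "no change" is an assumption on our part, since the text does not spell out how negative diffHa values are handled:

```python
def step_sign(y_a: float, y_b: float, tn: float) -> int:
    """Ternary direction of the brightness change from y_a to y_b:
    1 (increase), -1 (decrease) or 0 (no change after noise removal)."""
    diff = y_b - y_a
    diffHa = max(abs(diff) - tn, 0.0)   # assumed clamp: sub-noise changes -> 0
    diffH = (1 if diff >= 0 else -1) * diffHa
    return (diffH > 0) - (diffH < 0)

def has_brightness_step(y1: float, y2: float, y3: float, tn: float) -> bool:
    """A step occurs at the middle pixel when the product of the two
    change directions (y1 -> y2 and y2 -> y3) is negative."""
    return step_sign(y1, y2, tn) * step_sign(y2, y3, tn) < 0
```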
In a possible implementation of the first aspect, it is determined that a brightness step is generated at the second pixel point when the brightness value of the first pixel point is smaller than the brightness value of the second pixel point, the absolute value of the difference between the brightness values of the first pixel point and the second pixel point is greater than a first threshold, the brightness value of the third pixel point is smaller than the brightness value of the second pixel point, and the absolute value of the difference between the brightness values of the third pixel point and the second pixel point is greater than a second threshold.
For example, taking Y21, Y22, and Y23 in fig. 6 as an example, in the case where Y22 to Y23 make an upward change in the Y component, and Y21 to Y22 make a downward change in the Y component, it is determined that a brightness step is generated at Y22.
In a possible implementation of the first aspect, performing moire correction on a pixel point when the pixel point is located in a moire area includes:
calculating a texture evaluation index for the detection window based on the brightness values of all the pixel points in the detection window, wherein the texture evaluation index describes the distribution of the brightness values of the pixel points in the detection window.
For example, in fig. 6, the texture evaluation index of a sliding window centered on Y22 is calculated.
In a possible implementation of the first aspect, the texture evaluation index in the detection window is calculated as follows:
calculating the brightness step sign texH_r3 between each pixel point in the detection window and its fourth pixel point by the following formula:
texH_r3 = sign(texH3), i.e. texH_r3 is 1 when texH3 > 0, 0 when texH3 = 0, and -1 when texH3 < 0;
wherein texH3 can be calculated by the following formula:
texH3 = sign(tex3) * texHa3
wherein sign(tex3) is the sign bit of the difference between the brightness values of the pixel point and its fourth pixel point, and texHa3 can be calculated by the following formula:
texHa3 = abs(tex3) - tn
wherein abs(tex3) is the absolute value of the difference between the brightness values of the pixel point and its fourth pixel point, and tn is the noise model calibration parameter;
and summing the brightness step signs texH_r3 over the detection window; the sum is recorded as the texture evaluation index of the detection window.
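A hedged sketch of the texture evaluation index, reusing the step_sign helper above; summing over every stride-2 pair in all rows and columns is our reading of "each pixel point and the fourth pixel point", and may differ from the patent's exact traversal:

```python
def texture_index(window: np.ndarray, tn: float) -> int:
    """Sum of the step signs between each pixel point and the pixel point
    two positions away, along the horizontal and vertical directions."""
    m = window.shape[0]
    step_tex = 0
    for r in range(m):                      # horizontal stride-2 pairs
        for c in range(m - 2):
            step_tex += step_sign(window[r, c], window[r, c + 2], tn)
    for c in range(m):                      # vertical stride-2 pairs
        for r in range(m - 2):
            step_tex += step_sign(window[r, c], window[r + 2, c], tn)
    return step_tex
```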
In a possible implementation of the first aspect, a correction parameter for moire correction of the pixel point to be detected is calculated from the number of pixel points at which a brightness step occurs in the detection window and the texture evaluation index of the detection window, using the following formulas:
moire = step_hv / step_tex
alpha = k1 / (1 + e^(-a * (moire - b))) - k2
wherein step_hv is the number of pixel points at which a brightness step occurs in the detection window, step_tex is the texture evaluation index of the detection window, alpha is the correction parameter, and a, b, k1 and k2 are tuning parameters.
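A sketch of the correction-parameter formula; the tuning values a, b, k1 and k2 below are arbitrary placeholders (the patent does not give values), and the guard against step_tex = 0 is our addition:

```python
import math

def correction_strength(step_hv: int, step_tex: int,
                        a: float = 8.0, b: float = 0.5,
                        k1: float = 1.2, k2: float = 0.1) -> float:
    """Map the step-count / texture-index ratio through a sigmoid to get
    the blend weight alpha used during correction."""
    moire = step_hv / step_tex if step_tex != 0 else float(step_hv)
    return k1 / (1.0 + math.exp(-a * (moire - b))) - k2
```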
In a possible implementation of the first aspect, the luminance value of the pixel point to be detected is corrected by the following formula:
CorY=beta*med+(1-beta)*mean
Yout=alpha*CorY+(1-alpha)*Y
wherein med and mean are the median and mean of the brightness values of the pixel points, including the pixel point to be detected, in at least one direction centered on the pixel point to be detected; beta is an adjustment parameter; CorY is the correction component of the brightness value of the pixel point to be detected; Y is the brightness value of the pixel point to be detected; and Yout is the corrected brightness value of the pixel point to be detected.
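The correction blend itself, sketched with NumPy; `neighbors` is assumed to hold the brightness values along the chosen direction(s) through the pixel, including the pixel itself:

```python
def correct_luminance(y: float, neighbors: np.ndarray,
                      alpha: float, beta: float) -> float:
    """CorY = beta*med + (1-beta)*mean; Yout = alpha*CorY + (1-alpha)*Y."""
    med = float(np.median(neighbors))
    mean = float(np.mean(neighbors))
    cor_y = beta * med + (1.0 - beta) * mean
    return alpha * cor_y + (1.0 - alpha) * y
```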
In a possible implementation of the first aspect, when the distance between the pixel point to be detected and at least one boundary of the image to be corrected is less than M, a partial region of the detection window lies outside the boundary of the image to be corrected.
In a possible implementation of the first aspect, the brightness values of the pixel points in the detection window are used to fill in the missing partial region by mirroring, with the pixel point to be detected as the center and a direction parallel to the boundary as the axis of symmetry.
For example, as shown in fig. 13a and 13b, Y22 is located at the left boundary of the image to be corrected. In the sliding window centered on Y22, the pixel points Y02, Y12, Y22, Y32 and Y42 have no adjacent pixel points on their left; in this case, the pixel points on the right side can be mirror-copied, taking Y02, Y12, Y22, Y32 and Y42 as the centers, to fill in the missing pixel points.
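In NumPy terms, this mirror completion corresponds to reflect padding of the luminance plane before windows are extracted (a sketch, not the patent's implementation); mode="reflect" mirrors about the boundary pixel without duplicating it, matching the copy described above:

```python
# Pad by half the window size (2 for a 5x5 window) on every side.
padded = np.pad(y_plane, pad_width=2, mode="reflect")
```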
In one possible implementation of the first aspect described above, the moire correction method is implemented by an image signal processor of the electronic device, and the image to be corrected is an image taken by a camera of the electronic device.
In one possible implementation of the first aspect, the format of the image to be corrected is a format of a luminance width chrominance space.
In one possible implementation of the first aspect, the detection direction is a horizontal direction or a vertical direction or a diagonal direction in the region to be corrected.
In one possible implementation of the first aspect, the detection window is determined in the image to be corrected by a preset step size.
In a possible implementation of the first aspect, when a next detection window is determined in the image to be corrected and the next detection window includes the second pixel point, it is determined that a step occurs at the second pixel point in the next detection window.
For example, as shown in figs. 6-8, the detection window moves from being centered on Y22 to being centered on Y23; for Y22, the detection window can reuse the previous detection result to determine that a step occurs at Y22.
A second aspect of the present application provides a readable medium, in which instructions are stored, and when the instructions are executed by an electronic device, the electronic device executes the image moire correction method provided in the foregoing first aspect.
A third aspect of the present application provides an electronic device comprising:
a memory having stored therein instructions, an
A processor configured to read and execute the instructions in the memory, so as to enable the electronic device to execute the image moire correction method provided by the foregoing first aspect.
In one possible implementation of the above third aspect, the processor comprises an image signal processor.
Drawings
FIG. 1 illustrates a scene schematic of moire correction, according to some embodiments of the present application;
fig. 2a illustrates a schematic diagram of a to-be-detected pixel point in an image to be corrected showing a step according to some embodiments of the present application;
fig. 2b is a schematic diagram illustrating another example of a step occurring in a pixel point to be detected in an image to be corrected according to some embodiments of the present application;
fig. 2c shows a schematic diagram of an image to be corrected in which no step occurs in a pixel point to be detected, according to some embodiments of the present application;
FIG. 3 is a block diagram of a mobile phone capable of implementing the Moire correction method according to an embodiment of the present application;
fig. 4 is a block diagram illustrating an image signal processor ISP capable of implementing the moire correction method according to an embodiment of the present application;
FIG. 5 illustrates a flow chart for Moire correction of an image, according to an embodiment of the present application;
FIG. 6 shows a schematic view of a sliding window according to an embodiment of the present application;
fig. 7a is a schematic diagram illustrating a pixel point to be detected as a boundary in a sliding window according to an embodiment of the present application;
FIG. 7b is a schematic diagram illustrating padding of neighboring pixels of a border pixel according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a sliding window sliding to a next pixel point to be detected according to a step length according to an embodiment of the present application;
fig. 9 is a diagram illustrating luminance components of neighboring pixels of the pixel Y22 to be tested according to an embodiment of the present application;
FIG. 10 shows a schematic of an image of a correction parameter for moire correction, in accordance with an embodiment of the present application;
FIG. 11 shows a schematic view of another sliding window, according to an embodiment of the present application;
FIG. 12 illustrates a flow chart for Moire correction of an image, according to an embodiment of the present application;
fig. 13a is a schematic diagram illustrating a pixel point to be detected as a boundary in a sliding window according to an embodiment of the present application;
FIG. 13b is a schematic diagram illustrating padding of neighboring pixels of a border pixel according to an embodiment of the present application;
FIG. 14 illustrates a structural block diagram of an electronic device, according to some embodiments of the present application;
FIG. 15 illustrates a schematic diagram of a structure of an SOC based electronic device, according to some embodiments of the present application.
DETAILED DESCRIPTION OF EMBODIMENTS
Illustrative embodiments of the present application include, but are not limited to, an image moire correction method, and an electronic device and medium therefor.
Fig. 1 is a schematic view of a moire correction scene according to an embodiment of the present application.
As shown in fig. 1, when the electronic device 100 captures the target object 101, moire may occur in the captured image 102 for the reasons described above. The moire correction technology disclosed in the present application detects the brightness changes among a plurality of consecutive pixel points in a certain direction around a pixel point to be detected in the image 102 to determine whether a brightness step occurs among them, i.e. to determine whether the pixel point to be detected is in a moire area.
For example, fig. 2a to 2c show cases in which a step does or does not occur between the pixel point to be detected and the pixel points on its left and right sides. For the horizontal direction, whether a step occurs can be determined by checking whether the brightness change direction from the left-side pixel point to the pixel point to be detected is opposite to the brightness change direction from the pixel point to be detected to the right-side pixel point; if the two change directions are opposite, a step occurs. The same applies to the vertical direction: whether a step occurs can be determined by checking whether the brightness change direction from the pixel point above to the pixel point to be detected is opposite to the brightness change direction from the pixel point to be detected to the pixel point below; if the two directions are opposite, i.e. the brightness increases and then decreases, or decreases and then increases, a step occurs. For example, a brightness increase from one pixel point to the next is denoted by +1, a decrease by -1, and no change by 0. As shown in fig. 2a, when the brightness change direction between the left-side pixel point and the pixel point to be detected is +1 and the brightness change direction between the pixel point to be detected and the right-side pixel point is -1, i.e. the brightness increases and then decreases, a step occurs at the pixel point to be detected, and the pixel point to be detected is therefore located in the moire area.
Fig. 2b shows the case where the brightness first decreases and then increases: the brightness change direction between the left-side pixel point and the pixel point to be detected is -1, and the brightness change direction between the pixel point to be detected and the right-side pixel point is +1. A step occurs at the pixel point to be detected, so the pixel point to be detected is in the moire area.
Fig. 2c shows the case where the brightness stays unchanged and then rises: the brightness change direction between the left-side pixel point and the pixel point to be detected is 0, and the brightness change direction between the pixel point to be detected and the right-side pixel point is +1. No step occurs at the pixel point to be detected, so the pixel point to be detected is not in the moire area.
Then, when it is determined that the pixel point is in the moire area, the moire correction is performed, for example, the moire correction is performed on the image 102 in fig. 1, so as to obtain an image 103.
Specifically, the moire correction method of the present application includes: counting the number of steps at the pixel point to be detected; calculating the sum of the brightness change directions between each pixel point in the sliding window and the pixel point two positions away from it, and taking the sum as the texture evaluation index of the sliding window; determining a correction parameter from the number of steps at the pixel point to be detected and the texture evaluation index; and correcting the pixel point to be detected according to the correction parameter so as to eliminate moire. The method determines whether a step occurs at the pixel point to be detected by checking whether the brightness change directions between it and its adjacent pixel points in the sliding window are opposite, performs moire detection based on the number of such steps, and then corrects the detected moire. In this way, moire in the image to be corrected can be detected and corrected more accurately; besides images in YUV format, the method can also be applied to images in other formats. Moreover, it uses cross median filtering, which suppresses moire better than traditional low-pass filtering while preserving detail in the image.
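Putting the pieces together, an end-to-end sketch assembled from the illustrative helpers in this description (count_steps, texture_index, correction_strength, correct_luminance); the window size, step threshold, tn, beta and the cross-shaped neighborhood are all assumptions for illustration, not values from the patent:

```python
def moire_correct(y_plane: np.ndarray, m: int = 5, step_threshold: int = 5,
                  tn: float = 0.4, beta: float = 0.5) -> np.ndarray:
    """Slide an m x m window over the Y plane, detect moire pixels by their
    step count, and blend-correct only those pixels."""
    half = m // 2
    padded = np.pad(y_plane.astype(np.float64), half, mode="reflect")
    out = y_plane.astype(np.float64).copy()
    for r in range(y_plane.shape[0]):
        for c in range(y_plane.shape[1]):
            window = padded[r:r + m, c:c + m]
            step_hv = count_steps(window, tn)
            if step_hv <= step_threshold:
                continue                      # not in a moire area
            step_tex = texture_index(window, tn)
            alpha = correction_strength(step_hv, step_tex)
            # Cross-shaped neighborhood: the window's center row and column.
            cross = np.concatenate([window[half, :], window[:, half]])
            out[r, c] = correct_luminance(window[half, half], cross,
                                          alpha, beta)
    return out
```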
In the embodiment of the present application, the image to be corrected is in YUV format. YUV is a common color space: the Y component represents luminance (luma), i.e. the brightness component of the image to be corrected, while the U and V components represent chrominance (chroma), describing the color and saturation of the image; these are the two chrominance components of the image to be corrected.
It is to be understood that, although the present application is described using YUV format as an example, the technical solution of the present application is also applicable to any color space having a luminance component, for example, the image to be corrected may also be in YCbCr format, where Y refers to the luminance component, Cb refers to the blue chrominance component, and Cr refers to the red chrominance component.
The image to be corrected may be an image captured by the electronic device 100 through a camera thereof, or an image captured by another device, and then sent to the electronic device 100 for moire correction. After the electronic device 100 acquires the image to be corrected, if the format of the image to be corrected does not belong to the YUV format, it needs to be converted into the YUV format.
It is understood that in embodiments of the present application, the electronic device 100 may be any of various devices capable of capturing and processing images, for example devices with cameras such as mobile phones, computers, laptop computers, tablet computers, televisions, game machines, display devices, outdoor display screens, vehicle-mounted terminals, digital cameras, and the like. For convenience of explanation, the following description takes the mobile phone 100 as an example.
Fig. 3 shows a block diagram of a mobile phone 100 capable of implementing the moire correction method according to an embodiment of the present application. Specifically, as shown in fig. 3, the mobile phone 100 may include a processor 110, a mobile communication module 120, a wireless communication module 125, a display 130, a camera 140, an Image Signal Processor (ISP) 141, an external storage interface 150, an internal memory 151, an audio module 160, a sensor module 170, an input unit 180, and a power supply 190.
It is to be understood that the illustrated structure of the embodiments of the present application does not specifically limit the handset 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a DPU (data processing unit), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The mobile communication module 120 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and delivers it to the one or more processors 110 for processing, and transmits uplink data to the base station. Generally, the mobile communication module 120 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the mobile communication module 120 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and so on.
The wireless communication module 125 can provide solutions for wireless communication applied to the mobile phone 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 125 may be one or more devices integrating at least one communication processing module. The wireless communication module 125 receives electromagnetic waves via an antenna, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 125 can also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves via the antenna to radiate it out. In some embodiments, the wireless communication module 125 is capable of implementing the multicarrier techniques of the Wi-Fi network-based communication protocols described above, thereby enabling the handset 100 to implement ultra-wideband transmission over existing Wi-Fi protocols.
The mobile phone 100 can implement a shooting function through the camera 140, the ISP141, the video codec, the GPU, the display 130, the application processor, and the like.
The ISP141 is used for processing data fed back by the camera 140. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the camera photosensitive element, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP141 to be processed and converted into an image visible to the naked eye. The ISP141 may also perform algorithm optimization for noise, brightness, and skin color of the image, and may optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments of the present application, moire generated by the camera 140 on the captured image may be processed by the ISP141. The camera photosensitive element here may be the image sensor 142. An analog-to-digital converter 143 (not shown) may receive the plurality of electrical signals of the image area from the image sensor 142 and convert them into a plurality of digital pixel data.
It should be noted that in some embodiments of the present application, the ISP141 may be disposed inside the camera 140, and in other embodiments of the present application, the ISP141 may also be disposed outside the camera 140, for example, in the processor 110, which is not limited herein.
The camera 140 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP141 outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the handset 100 may include 1 or N cameras 140, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the handset 100 is in frequency bin selection, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. Handset 100 may support one or more video codecs. Thus, the handset 100 can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The external memory interface 150 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 100.
The internal memory 151 may be used to store computer-executable program code, which includes instructions. The internal memory 151 may include a program storage area and a data storage area. In some embodiments of the present application, the processor 110 runs applications and processes data by executing instructions stored in the internal memory 151, and/or instructions stored in a memory provided in the processor. For example, the internal memory may store the image to be corrected for moire correction.
The handset 100 further includes an audio module 160, and the audio module 160 may include a speaker, a receiver, a microphone, an earphone interface, and an application processor, etc. to implement audio functions. Such as music playing, recording, etc.
The handset 100 also includes a sensor module 170, wherein the sensor module 170 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like.
The input unit 180 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The handset 100 also includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 110 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption. The power supply 190 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
It is understood that the structure shown in fig. 3 is only one specific structure for implementing the functions of the mobile phone 100 in the technical solution of the present application, and other structures are possible. A mobile phone 100 implementing similar functions is also applicable to the technical solution of the present application, and is not limited herein.
It is to be understood that the moire correction of the image in the present application may be implemented in the ISP141 of the camera of the mobile phone 100, or may be executed by the processor 110 of the mobile phone 100, or the mobile phone 100 transmits the image to the server and is executed by the server, which is not limited herein.
For convenience of explanation, the ISP141 in the mobile phone 100 will be used as an example. Fig. 4 shows a schematic diagram of the structure of the ISP141. As shown in fig. 4, the ISP141 includes a moire detection unit 141A, a moire correction unit 141B, and a color correction unit 141C. The details are as follows:
the analog-to-digital converter 143 receives a plurality of pixel voltages of the image area from the image sensor 142 and converts them into a plurality of digital pixel data. The moire detection unit 141A calculates, according to the difference between the luminance components of the pixel to be detected and the surrounding pixels, whether a step of luminance is generated between the pixel and the pixel in at least one direction of the pixel to be detected, so as to determine whether the pixel to be detected is in a moire region. The moire correction unit 141B determines a correction parameter by the step of the brightness of the pixel point in at least one direction of the pixel point to be detected and the texture evaluation index, and performs moire correction on the pixel point to be detected in the moire region. The color correction unit 141C performs pseudo color suppression on the image data output by the moire correction unit 141B, and transmits the image data to the CPU through the I/O interface for processing, thereby obtaining an image to be finally displayed.
Based on the structures shown in fig. 3 and 4, the moire correction scheme of the image of the embodiment of the present application will be described in detail below, wherein the moire correction scheme of the image of the present application can be implemented by the moire detection unit 141A and the moire correction unit 141B in fig. 4.
FIG. 5 shows a flow chart of Moire correction of an image, as shown in FIG. 5, comprising:
s501: the moire detection unit 141A acquires an image to be corrected.
It is understood that the image to be corrected may be an image captured by the camera 140 in real time, or may be an image stored in the internal memory 151 or the external memory of the mobile phone 100.
S502: the moire detection unit 141A determines whether the pixel point to be detected is in a moire area. If yes, S504 is executed, and Moire pattern correction is carried out on the pixel point to be detected. Otherwise, returning to S503, the moire detection unit 141A continues to detect the next pixel point to be detected.
It can be understood that the moire detection unit 141A may use a sliding window to traverse the image to be corrected and obtain the pixel point to be detected. Fig. 6 shows a schematic view of the sliding window in the embodiment of the present application. The size of the sliding window may be m × m, where m is an odd number greater than or equal to 3; fig. 6 takes m = 5 as an example. It can be understood that when the length and width of the sliding window are both odd, the center of the sliding window can conveniently serve as the pixel point to be detected.
For example, a sliding window of 5 × 5 size is used to traverse the image to be corrected, and each time the sliding window slides on the image to be corrected, Y components of 25 pixels can be extracted from the image to be corrected to form a Y component block of 5 × 5 size, and meanwhile, a midpoint in the sliding window is used as a pixel to be detected. As shown in fig. 6, Y22 represents the current pixel to be detected in the sliding window, i.e. the Y component of the pixel in row 2 and column 2 in the sliding window with the size of 5 × 5. It is understood that the Y component here is the luminance in YUV format.
In some embodiments, as described above, the moire detection unit 141A may determine whether the pixel point to be detected is in the moire area by determining whether a brightness step occurs between adjacent pixel points in a certain direction (for example, the horizontal direction, the vertical direction, a direction at 45 degrees to the horizontal, or any other direction) within the sliding window centered on the pixel point to be detected. Specifically, for the horizontal direction, taking the pixel point to be detected as an example, it may be determined whether the brightness change direction from the left-side pixel point to the pixel point to be detected is opposite to the brightness change direction from the pixel point to be detected to the right-side pixel point; if the two directions are opposite, a step occurs. The same applies to the vertical direction: if the brightness change direction from the pixel point above to the pixel point to be detected is opposite to that from the pixel point to be detected to the pixel point below, i.e. the brightness increases and then decreases or decreases and then increases, a step occurs.
The process of determining the moire area is described below, taking the horizontal and vertical directions as examples. For example, whether a step occurs between adjacent pixel points in the horizontal and vertical directions of the pixel point to be detected can be judged through the following steps and formulas, so as to determine whether the pixel point to be detected is in the moire area.
The method for determining whether the pixel point to be detected is in the moire area by the moire detection unit 141A includes:
a) the moire detection unit 141A calculates a difference value of Y components of adjacent pixel points of the pixel point to be detected in the horizontal and vertical directions.
Taking Y22 as the current pixel point to be detected, as shown in fig. 6, there are five pixel points Y20, Y21, Y22, Y23 and Y24 in the horizontal direction of Y22, and the differences of the Y components between Y21 and Y20, between Y22 and Y21, between Y23 and Y22, and between Y24 and Y23 need to be calculated. Similarly, in the vertical direction, the differences of the Y components between Y12 and Y02, between Y22 and Y12, between Y32 and Y22, and between Y42 and Y32 need to be calculated.
Continuing with Y22 and taking Y21, Y22 and Y23 as examples below, the difference between the Y component of Y22 and that of the pixel point Y23 on its right in the horizontal direction can be calculated by the following formula (one).
diff23-22 = Y23 - Y22 (one)
wherein diff23-22 represents the difference in Y component between Y23 and Y22 in the horizontal direction.
Similarly, diff22-21 = Y22 - Y21 represents the difference in Y component between Y22 and Y21 in the horizontal direction.
b) The moire detection unit 141A removes noise from the difference of the Y components of the adjacent pixel points.
By the following formula (two), the moire detection unit 141A calculates the absolute value of the difference of the Y components between Y22 and Y23, minus the preset noise model calibration parameter.
diffHa23-22 = abs(diff23-22) - tn (two)
wherein abs represents taking the absolute value of the difference between Y23 and Y22, and tn is a preset noise model calibration parameter representing the noise level of the brightness values of the pixel points, i.e. of the Y component. tn is used to adjust the Y-component values of the pixel points in the sliding window, and its value can change as the Y-component values of the pixel points in the sliding window change. When the pixel points contained in the sliding window carry high-frequency image information, i.e. their Y-component values are larger, the value of tn can be increased accordingly. For example, when the average Y component of the pixel points in the sliding window is 100, tn may be 80; when the average Y component is 80, tn may be 60.
It can be understood that tn may take different values depending on the average brightness of the block, and may also take different values depending on the frequency content of the pixel points in the image (e.g. edges, flat regions, texture regions).
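For illustration only, a linear mapping that happens to pass through the two sample points quoted above (mean 100 gives tn 80, mean 80 gives tn 60); the real calibration curve is tuning-dependent and is not given by the patent:

```python
def calibrate_tn(window: np.ndarray) -> float:
    """Assumed noise calibration: tn = mean - 20, a straight-line fit
    through the two examples in the text, clamped at zero."""
    return max(float(np.mean(window)) - 20.0, 0.0)
```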
Similarly, for Y22 and Y21, the noise signal in the difference of their Y components can be removed by diffHa22-21 = abs(diff22-21) - tn.
c) The moire detection unit 141A converts the difference between the Y components of the pixel point to be detected and its adjacent pixel point into the direction of change of the Y component between them.
The moire detection unit 141A performs a sign operation on the difference between the Y component of the pixel point Y22 to be detected and that of the adjacent pixel point Y23, and multiplies the result by the value obtained in step b).
diffH23-22 = sign(diff23-22) * diffHa23-22 (three)
wherein the function sign takes the sign bit of diff23-22, i.e. when diff23-22 >= 0, sign(diff23-22) = 1; when diff23-22 < 0, sign(diff23-22) = -1.
Similarly, for Y22 and Y21, diffH22-21 = sign(diff22-21) * diffHa22-21.
d) The moire detection unit 141A obtains the change direction of the pixel point to be detected and the adjacent pixel point thereof on the Y component.
The changing directions of Y23 and Y22 in the Y component can be determined by performing the sign operation again on the diffH23-22 obtained in step c), using the following formula (four).
diffHr23-22 = sign(diffH23-22), i.e. diffHr23-22 is 1 when diffH23-22 > 0, 0 when diffH23-22 = 0, and -1 when diffH23-22 < 0 (four)
When diffHr23-22 is greater than 0, the Y component increases from Y22 to Y23; when diffHr23-22 is less than 0, the Y component decreases from Y22 to Y23; when diffHr23-22 equals 0, the Y component does not change from Y22 to Y23.
Similarly, the moire detecting unit 141A acquires the changing directions of Y22 and Y21 in the Y component.
Next, using the Y-component data of Y21, Y22 and Y23 shown in fig. 6, whether a step occurs at Y22 is determined through the above formulas (one) to (four). When Y23 = 1.5, Y22 = 1, Y21 = 2.5 and tn = 0.4, for Y23 and Y22:
diff23-22 = Y23 - Y22 = 0.5; diffHa23-22 = abs(0.5) - 0.4 = 0.1;
diffH23-22 = sign(0.5) * 0.1 = 0.1; diffHr23-22 = 1.
Similarly, between Y22 and Y21, using the above formulas (one) to (four):
diff22-21 = Y22 - Y21 = -1.5; diffHa22-21 = abs(-1.5) - 0.4 = 1.1;
diffH22-21 = sign(-1.5) * 1.1 = -1.1; diffHr22-21 = -1.
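These worked values can be checked directly against the illustrative step_sign helper sketched earlier (which folds formulas (one) to (four) into a single function):

```python
tn = 0.4
assert step_sign(1.0, 1.5, tn) == 1    # Y22 -> Y23: diffHr23-22 = 1
assert step_sign(2.5, 1.0, tn) == -1   # Y21 -> Y22: diffHr22-21 = -1
# Opposite directions, so a step occurs at Y22:
assert step_sign(2.5, 1.0, tn) * step_sign(1.0, 1.5, tn) < 0
```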
e) The moire detection unit 141A determines whether a step occurs at the pixel point to be detected by determining whether the change direction in the Y component from the left-side pixel point to the pixel point to be detected is opposite to the change direction from the pixel point to be detected to the right-side pixel point. If a step occurs, it is counted in step_hv, where step_hv may be a variable stored in the moire detection unit 141A.
Taking Y22 as an example, diffHr22-21 = -1 indicates that the Y component decreases from Y21 to Y22, and diffHr23-22 = 1 indicates that the Y component increases from Y22 to Y23. Whether a step occurs at Y22 is determined from the product of diffHr22-21 and diffHr23-22: when the product of the change directions of the pixel point to be detected and its two consecutive adjacent pixel points is negative, a step occurs at that pixel point. Here the step is the jump of the change direction from -1 to 1 between Y22 and its two consecutive neighbors Y21 and Y23 in the horizontal direction, i.e. a "falling edge + rising edge" in the Y component from Y21 through Y22 to Y23. It can be understood that a step at Y22 may equally correspond to a jump from 1 to -1, i.e. a "rising edge + falling edge" from Y21 through Y22 to Y23.
It will be appreciated that, taking Y22 as an example, a step at Y22 indicates a large change in the Y component from Y21 to Y22 and from Y22 to Y23. When this happens, the step count is incremented by 1.
Besides checking Y22 itself, in the horizontal direction of Y22 it is also necessary to check whether steps occur at Y21 (from the differences between Y21 and Y20 and between Y22 and Y21) and at Y23 (from the differences between Y23 and Y22 and between Y24 and Y23); each step found increments the count by 1.
Similarly, taking Y32 and Y22 and Y22 and Y12 as an example, i.e. Y12 and Y32 in the vertical direction of Y22, the change directions of the Y component, diffHr32-22 and diffHr22-12, are also calculated through the above steps a) to e). If a "falling edge + rising edge" or "rising edge + falling edge" also appears in the Y component across Y12, Y22 and Y32, then a step also occurs at Y22 in the vertical direction, and the step count is incremented by 1.
Besides Y22 itself, in the vertical direction of Y22 it is also necessary to check whether steps occur at Y12 (from the differences between Y12 and Y02 and between Y22 and Y12) and at Y32 (from the differences between Y32 and Y22 and between Y42 and Y32); each step found increments the count by 1.
f) The moire detection unit 141A determines whether the number of steps is greater than 0, and if so, it indicates that the pixel point to be detected is in a moire area.
For example, according to the above results for diffHr22-21 and diffHr23-22, at least one step occurs in the horizontal direction of Y22, so the moire detection unit 141A determines that Y22 is in the moire area and sends the number of steps to the moire correction unit 141B.
In some embodiments of the present application, the pixel point to be detected may lie on a boundary of the image to be corrected. As shown in fig. 7a, taking Y00 as an example, assume Y00 is the first pixel point of the image to be corrected; then, centered on Y00, there are no pixel points on one side in each of the horizontal and vertical directions, and hence no corresponding Y components. In this case, as shown in fig. 7b, the pixel points on one side are mirrored about Y00, after which formulas (one) to (four) above are used to determine whether a step occurs in the Y component between adjacent pixel points in the horizontal and vertical directions of Y00. For example, in the horizontal direction Y00 only has adjacent pixel points Y01 and Y02 on its right, so Y01 and Y02 are mirror-copied, centered on Y00, to serve as the pixel points on the left of Y00. Similarly, in the vertical direction, Y10 and Y20 are mirror-copied, centered on Y00, to serve as the pixel points above Y00.
S503: the moire detection unit 141A continues to detect the next pixel point to be detected.
After the pixel point to be detected has been corrected, or when it is not located in the moire area, the sliding window slides horizontally to the next pixel point to be detected with a step size of 1, and moire detection is performed on the next pixel point to be detected in the sliding window. Taking Y22 as an example, as shown in fig. 8, after Y22 has been corrected, the sliding window slides to the next pixel point to be detected, Y23, with a step size of 1.
It will be appreciated that in some embodiments, the sliding window may also be slid in the vertical direction by a step size of 1.
It can be understood that, in addition to the detection method of S502, a step can also be determined at a pixel point to be detected as follows. Taking the horizontal direction as an example, when the brightness values of the left and right adjacent pixel points are both smaller than, or both larger than, the brightness value of the pixel point to be detected, and the absolute value of the difference between the brightness values of the left adjacent pixel point and the pixel point to be detected is larger than a first threshold, and the absolute value of the difference between the brightness values of the right adjacent pixel point and the pixel point to be detected is larger than a second threshold, a step is determined to occur at the pixel point to be detected.
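This alternative test can be sketched as follows; t1 and t2 stand for the first and second thresholds, whose values the text leaves open.

```python
def step_by_thresholds(y_left, y_mid, y_right, t1, t2):
    """Step at y_mid if both neighbours lie on the same side of it and
    both brightness differences exceed their respective thresholds."""
    same_side = (y_left < y_mid and y_right < y_mid) or \
                (y_left > y_mid and y_right > y_mid)
    return same_side and abs(y_left - y_mid) > t1 and abs(y_right - y_mid) > t2
```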
S504: the moire correction unit 141B performs moire correction on the pixel point to be detected.
In some embodiments, the moire correction unit 141B performs a moire correction process on the pixel point to be detected, including:
a) The moire correction unit 141B calculates the differences of the Y components of pixel points spaced one column or one row apart (i.e., every other pixel point) in the horizontal and vertical directions of the pixel point to be detected.

For the pixel points in the sliding window, the sum of the differences of the Y components of the pixel points spaced one apart in the horizontal and vertical directions is calculated and used as the texture evaluation index step_tex of the sliding window.
For example, for the sliding window shown in fig. 6, take Y22 as the current pixel point to be detected. As shown in fig. 6, in the horizontal direction of Y22 there are five pixel points Y20, Y21, Y22, Y23 and Y24, and the differences of the Y components between Y20 and Y22, between Y21 and Y23, and between Y22 and Y24 need to be calculated. Similarly, in the vertical direction, the differences of the Y components between Y02 and Y22, between Y12 and Y32, and between Y22 and Y42 need to be calculated, and the calculated differences are summed.
Taking the calculation of the differences of the Y components of the pixel points spaced one apart from Y22 in the horizontal and vertical directions as an example, with reference to fig. 9, the moire correction unit 141B needs to calculate the difference between Y24 and Y22 in the horizontal direction and the difference between Y42 and Y22 in the vertical direction. The difference between Y24 and Y22 can be calculated by the following formula (six):

tex24-22 = Y24 - Y22    (six)
Next, the moire correction unit 141B removes a noise signal from the difference of the Y components of Y24 and Y22. This step is the same as step b) in S502 described above.
The noise signal can be removed from the difference of the Y components of Y24 and Y22 by the following formula (seven):

texHa24-22 = abs(tex24-22) - tn    (seven)
Then, the moire correction unit 141B converts the difference of the Y components of Y24 and Y22 into the change direction of Y22 and the pixel point spaced one apart from it in the Y component. This step is the same as step c) in S502 described above:

texH24-22 = sign(tex24-22) * texHa24-22
The moire correction unit 141B then acquires the change direction of Y24 and Y22 in the Y component. This step is similar to step d) in S502 described above: the sign operation is performed again on texH24-22 using the following formula (eight) to obtain the change direction of Y24 and Y22 in the Y component:

texHr24-22 = 1, if texH24-22 > 0; texHr24-22 = 0, if texH24-22 = 0; texHr24-22 = -1, if texH24-22 < 0    (eight)
Finally, the value texHr24-22 of the change direction of Y24 and Y22 in the Y component is accumulated into the texture evaluation index step_tex of the sliding window.
For example, when Y24 is 2.0, Y22 is 1, Y20 is 1.5 and tn is 0.4, then for Y24 and Y22:

tex24-22 = Y24 - Y22 = 1; texHa24-22 = abs(1) - 0.4 = 0.6;
texH24-22 = sign(1) * 0.6 = 0.6; texHr24-22 = 1.

Thus, for Y24 and Y22, texHr24-22 = 1, and the accumulated texture evaluation index step_tex becomes 1.
It will be appreciated that, in addition to the pair Y24 and Y22, the values texHr22-20 between Y22 and Y20 and texHr23-21 between Y23 and Y21 also need to be calculated.
Similarly, for Y42 and Y22, the value texHr42-22 of the change direction of the Y component is also calculated, and texHr42-22 is then accumulated into the step_tex of the sliding window.
It will be appreciated that, in addition to the pair Y42 and Y22, the values texHr22-02 between Y22 and Y02 and texHr32-12 between Y32 and Y12 also need to be calculated.
It can be understood that the calculation of the differences of the Y components of pixel points spaced one apart in the horizontal and vertical directions is not limited to the current pixel point to be detected, Y22. For the other pixel points in the sliding window, the differences of the Y components of the pixel points spaced one apart in the horizontal and vertical directions also need to be calculated and accumulated into the texture evaluation index step_tex of the sliding window. For example, for Y00, the differences between Y02 and Y00 and between Y20 and Y00 need to be calculated.
In some embodiments of the present application, if for some pixel points in the sliding window the pixel point one column or one row apart lies outside the sliding window, the corresponding difference of the Y component is not calculated. For example, taking Y04 in fig. 6 as an example, in the horizontal direction there is no pixel point two columns to the right of Y04 within the sliding window, and therefore the change direction of the Y component in that direction is not calculated.
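The accumulation of step_tex over a window can be sketched as follows (the change_direction helper from the earlier sketch is repeated so the snippet stands alone). Pairs whose partner two rows or two columns away falls outside the window are skipped, as for Y04 above. Summing the signed texHr values follows the text literally; an implementation may prefer absolute values so that opposite directions do not cancel, which is an assumption on our part.

```python
import numpy as np

def change_direction(a, b, tn):
    d = b - a
    return int(np.sign(np.sign(d) * (abs(d) - tn)))

def texture_index(window, tn):
    """Accumulate texHr over pixel pairs one column/row apart."""
    rows, cols = window.shape            # 5x5 in the embodiments above
    step_tex = 0
    for r in range(rows):
        for c in range(cols):
            if c + 2 < cols:             # horizontal, one column apart
                step_tex += change_direction(window[r, c], window[r, c + 2], tn)
            if r + 2 < rows:             # vertical, one row apart
                step_tex += change_direction(window[r, c], window[r + 2, c], tn)
    return step_tex
```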
b) After the texture evaluation index has been accumulated over all pixel points in the sliding window, the moire correction unit 141B calculates the correction parameter of the pixel point to be detected from the step count of the pixel point to be detected and the texture evaluation index.
The moire correction unit 141B may calculate the correction parameter alpha of the pixel point to be detected using formula (nine), from the step count step_hv of the pixel point to be detected obtained from the moire detection unit 141A and the texture evaluation index step_tex of the sliding window calculated by the moire correction unit 141B itself.
moire=step_hv/step_tex
alpha = k1 / (1 + e^(-a*(moire-b))) - k2    (nine)
For the correction parameter alpha, a, b, k1 and k2 are tuning parameters, and 1 + e^(-a*(moire-b)) is a sigmoid (logistic) denominator. The curve of the correction parameter alpha may be as shown in fig. 10: by adjusting a, b, k1 and k2, the calculated alpha can be kept in the range (0, 1) and the function curve of the correction parameter alpha made smooth. The process of adjusting the parameters a, b, k1 and k2 may include the following. First, a set of moire training data and the corresponding alpha training results are prepared, for example 100 moire values in the range [1, 20] and the 100 corresponding alpha values in the range (0, 1). Then, the moire training data are input into the formula and the outputs are compared with the alpha training results to find 100 candidate sets of a, b, k1 and k2. Finally, the median of each of the 100 candidate values of a, b, k1 and k2 is taken as the corresponding parameter of the formula for the correction parameter alpha. It is understood that, for the parameters a, b, k1 and k2, the above steps can be repeated multiple times using different moire training data and alpha training results to obtain more accurate parameters.
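Formula (nine) can be sketched as follows; the default values of a, b, k1 and k2 shown here are placeholders for the tuned parameters, not values from the text.

```python
import math

def correction_parameter(step_hv, step_tex, a=1.0, b=10.0, k1=1.0, k2=0.0):
    """Formula (nine): map the step/texture ratio to alpha in (0, 1)."""
    moire = step_hv / step_tex   # guard against step_tex == 0 in practice
    return k1 / (1.0 + math.exp(-a * (moire - b))) - k2
```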
c) The moire correction unit 141B performs moire correction on the pixel point to be detected in the sliding window by using the correction parameters.
In the embodiment of the present application, the moire correction unit 141B may calculate the corrected median of the pixel to be detected by using a cross median filtering method.
Continuing with fig. 9, taking Y22 as the pixel point to be detected, the corrected median med22 of Y22 is calculated by cross median filtering.

For example, as shown in fig. 9, for Y22 in the sliding window, the Y components of Y02, Y12, Y32 and Y42 in the vertical direction and Y20, Y21, Y23 and Y24 in the horizontal direction are obtained with Y22 as the center, Y22 itself is added, and the median of these 9 Y components is calculated as the corrected median med22 of Y22, which is 2.
Meanwhile, the moire correction unit 141B calculates the mean value of the pixel point to be detected in the same cross filtering manner.

For example, for Y22 in the sliding window, calculating the mean value mean22 by cross filtering comprises obtaining the Y components of Y02, Y12, Y32, Y42 and Y20, Y21, Y23, Y24 around Y22, adding Y22 itself, and calculating the mean of these 9 Y components as mean22 of Y22, which is 1.89.
Then, the moire correction unit 141B calculates the corrected Y component using the corrected median and mean of the pixel point to be detected.
For example, the corrected Y component CorY22 of Y22 is calculated using the following formula (ten):

CorY22 = beta * med22 + (1 - beta) * mean22    (ten)
Here beta is a tuning parameter. With the corrected median med22 of Y22 being 2, the mean value mean22 being 1.89 and beta being 0.5, the corrected Y component CorY22 of Y22 is 1.94.
Finally, the moire correction unit 141B corrects the pixel point to be detected based on the correction parameter and the corrected Y component.
For example, the moire correction unit 141B calculates the output Y component Yout22 of Y22 using formula (eleven):

Yout22 = alpha * CorY22 + (1 - alpha) * Y22    (eleven)
For example, with the corrected Y component CorY22 of Y22 being 1.94 and alpha being 0.4, the calculated output Y component Yout22 of Y22 is 1.37.
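The cross (plus-shaped) median/mean correction of formulas (ten) and (eleven) can be sketched as follows, assuming a 5 × 5 window with the pixel point to be detected at its centre. With the example values above (med22 = 2, mean22 = 1.89, beta = 0.5, alpha = 0.4, Y22 = 1) the last two lines reproduce, up to rounding, the 1.94 and 1.37 of the worked example.

```python
import numpy as np

def correct_pixel(window, alpha, beta):
    """Cross median/mean filtering, formulas (ten) and (eleven)."""
    c = window.shape[0] // 2                      # centre index, 2 for 5x5
    row = window[c, :]                            # Y20, Y21, Y22, Y23, Y24
    col = np.delete(window[:, c], c)              # Y02, Y12, Y32, Y42
    cross = np.concatenate([row, col])            # 9 samples, centre once
    med = float(np.median(cross))                 # cross median filter
    mean = float(np.mean(cross))                  # cross mean filter
    cor_y = beta * med + (1.0 - beta) * mean      # formula (ten)
    return alpha * cor_y + (1.0 - alpha) * window[c, c]  # formula (eleven)
```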
In another embodiment of the present application, a window of another size may be used to calculate the median and the mean of the pixel point to be detected.
For example, as shown in fig. 11, the corrected median med22 and mean value mean22 of Y22 are calculated through a window of 3 × 3 size. In fig. 11, the Y components of Y12 and Y32 in the vertical direction and Y21 and Y23 in the horizontal direction are obtained with Y22 as the center, Y22 itself is added, and the corrected median med22 of these 5 Y components is calculated as 1.5. Similarly, the mean value mean22 of Y22 is calculated as 1.6.
S505: the color correction unit 141C performs pseudo color suppression on the corrected pixel point to be detected.
Continuing to take Y22 as an example, after the moire correction unit 141B performs moire correction on the Y component of the to-be-detected pixel point Y22 in the sliding window, the color correction unit 141C performs pseudo color suppression on the U component and the V component corresponding to the to-be-detected pixel point Y22 by using the correction parameter alpha.
For example, the saturation represented by the U component and the V component is suppressed based on the correction parameter alpha of 0.4 calculated in S504 above.
For U22 corresponding to Y22 in the sliding window, the saturation of the U component can be suppressed using the following formula (twelve):

Uout22 = (U22 - 128) * (1 - alpha) * C    (twelve)

where 128 represents the median value of the U component, i.e., no saturation, and C is a tuning parameter. For example, when the correction parameter alpha calculated from the Y component is 0.4, U22 is 156 and C is 4, Uout22 is 112. Similarly, when the V22 corresponding to Y22 is 180, Vout22 = (V22 - 128) * (1 - alpha) * C gives Vout22 = 124.
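A sketch of this suppression follows. Because the extraction of formula (twelve) above is imperfect, the exact placement of the neutral value 128 and the role of the tuning parameter C are assumptions here; verify against the original publication before relying on the exact form.

```python
def suppress_chroma(value, alpha, C):
    """Scale the deviation of a U or V sample from the neutral value 128.
    The form of formula (twelve) is a reconstruction, not verified."""
    return (value - 128.0) * (1.0 - alpha) * C + 128.0
```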
After the pixel point to be detected has been corrected, step S503 is executed, and the sliding window slides horizontally to the next pixel point to be detected with a step size of 1.
It is to be understood that the values used in S501 to S505 above are exemplary and that other values may be included when using the method of the present application.
It can be understood that, in another embodiment of the present application, whether the brightness of the pixel point to be detected has a step can also be determined by detecting the brightness change between the pixel point to be detected and its adjacent pixel points in the diagonal direction. For example, whether a step occurs can be determined by judging whether the brightness change direction from the pixel point on the upper left of the pixel point to be detected to the pixel point to be detected is opposite to the brightness change direction from the pixel point to be detected to the pixel point on its lower right; if the two change directions are opposite, i.e., the brightness first increases and then decreases or first decreases and then increases, a step occurs at the pixel point to be detected.
In another embodiment of the present application, after the mobile phone 100 detects the moire pixel through the moire detection unit 141A, the moire pixel may be corrected by using other methods.
For example, the Y component of the moire pixel is decomposed to obtain the moire layer content and the image layer content of the moire pixel; the moire layer is then filtered to separate out and remove the moire in the moire pixel, after which the image layer content is recombined into the Y component of the moire pixel.
It is to be understood that the filtering within the moire layer may be performed using the above-mentioned median filtering, or using Gaussian filtering.
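A hedged sketch of this layer-decomposition variant: the image layer is approximated here with a Gaussian blur, the residual is treated as the moire layer and median filtered, and the two are recombined. The choice of a Gaussian blur for the split is an assumption; the text only prescribes decomposing the Y component and filtering within the moire layer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def correct_by_layers(Y, sigma=2.0, size=3):
    image_layer = gaussian_filter(Y, sigma=sigma)    # low-frequency content
    moire_layer = Y - image_layer                    # residual carrying moire
    cleaned = median_filter(moire_layer, size=size)  # filter within the layer
    return image_layer + cleaned                     # recombine into Y
```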
In the image moire correction method shown in fig. 5, whether the pixel points in the sliding window are in a moire area is determined by whether the brightness of the adjacent pixel points in the horizontal and vertical directions of the pixel point to be detected Y22 has steps. Next, determining whether the brightness between the adjacent pixel points in the two diagonal directions of the pixel point to be detected Y22 has steps is described.
As shown in fig. 6, when the moire detection unit 141A determines whether the pixel point to be detected Y22 is in a moire area, five pixel points Y00, Y11, Y22, Y33 and Y44 exist in the diagonal direction from top left to bottom right of Y22, and the moire detection unit 141A needs to calculate the differences of the Y components between Y11 and Y00, between Y22 and Y11, between Y33 and Y22, and between Y44 and Y33.
Taking Y33 and Y22 and Y22 and Y11 as examples, the change directions diffHr33-22 of Y33 and Y22 and diffHr22-11 of Y22 and Y11 in the Y component can be calculated by the above formulas (one) to (four). If the product of diffHr33-22 and diffHr22-11 is a negative number, this indicates that a "rising edge + falling edge" (or "falling edge + rising edge") occurs in the Y component from Y11 to Y22 to Y33, i.e., a step occurs at Y22. In the same way, in the diagonal direction from top left to bottom right of Y22, it is also necessary to calculate whether steps occur at Y11 (between Y11 and Y00 and between Y22 and Y11) and at Y33 (between Y33 and Y22 and between Y44 and Y33), and, if so, to accumulate them into the step count step_hv.
Thereafter, similarly, in the diagonal direction from bottom left to top right, using the above formulas (one) to (four), the moire detection unit 141A needs to calculate the differences of the Y components between Y31 and Y40, between Y22 and Y31, between Y13 and Y22, and between Y04 and Y13, determine whether steps occur at Y31, Y22 and Y13, and, if so, accumulate them into the step count step_hv.
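The horizontal, vertical and both diagonal scans differ only in the pixel offset, so the step counting can be sketched once for an arbitrary direction. The change_direction helper from the earlier sketch is repeated for self-containedness, and the function assumes (r, c) is the centre of a 5 × 5 window so that all indices stay inside it.

```python
import numpy as np

def change_direction(a, b, tn):
    d = b - a
    return int(np.sign(np.sign(d) * (abs(d) - tn)))

def steps_along(window, r, c, dr, dc, tn):
    """Count steps on the line through (r, c) with offset (dr, dc):
    (0, 1) horizontal, (1, 0) vertical, (1, 1) top-left-to-bottom-right,
    (-1, 1) bottom-left-to-top-right."""
    steps = 0
    for k in (-1, 0, 1):                         # triples centred at k
        a = window[r + (k - 1) * dr, c + (k - 1) * dc]
        b = window[r + k * dr, c + k * dc]
        e = window[r + (k + 1) * dr, c + (k + 1) * dc]
        if change_direction(a, b, tn) * change_direction(b, e, tn) < 0:
            steps += 1
    return steps
```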
Here, the moire correction unit 141B may calculate the texture evaluation index step_tex of the sliding window in the same manner as in S504, i.e., by accumulating the values of the change directions of the Y components of the pixel points one row or one column apart for each pixel point in the sliding window centered on the pixel point to be detected Y22.
In the method described in step S502 above, a sliding window of 5 × 5 size is determined with the pixel point to be detected Y22 as its midpoint, whether the region of the sliding window is a moire region is determined by calculating whether the brightness between the pixel points in each direction centered on Y22 has steps, and, if so, moire correction is performed on the pixel point Y22.

The embodiment of the present application also provides another method for determining whether a pixel point to be detected in an image is in a moire area. In this method, the moire detection unit 141A detects each pixel point in the image to be corrected in order to determine whether the image to be corrected contains pixel points that need moire correction. When determining whether a pixel point needs moire correction, it is necessary to determine whether the pixel point is located in a moire area. In some embodiments, this may be determined by the following method.
First, a sliding window containing the pixel point to be detected is determined with the pixel point to be detected as its center. Then the number of pixel points with brightness steps in the sliding window is calculated. If this number exceeds a step threshold, there are pixel points located in a moire area in the sliding window, i.e., the pixel point to be detected at the center is located in a moire area and moire correction is performed; if the number does not exceed the step threshold, the pixel points in the sliding window are not located in a moire area, i.e., the pixel point to be detected at the center is not located in a moire area, and no moire correction is needed. This scheme is explained below with reference to fig. 12 and comprises:
s1201: the moire detection unit 141A determines a sliding window containing a pixel point to be detected in the image to be corrected.
For example, the moire detection unit 141A may acquire an image to be corrected using the same method as in step S501. After the image to be corrected is obtained, the moire detection unit 141A determines a sliding window of 5 × 5 size with the pixel point to be detected as the midpoint in the image to be corrected.
S1202: the moire detection unit 141A determines whether there is a missing pixel in the sliding window.
In the embodiment of the present application, if the moire detection unit 141A determines that the pixel point to be detected is located at or close to the boundary of the image to be corrected, the sliding window may have missing pixel points. In this case, S1203 needs to be executed to complete the missing pixel points. Whether the pixel point to be detected is located at or close to the boundary of the image to be corrected may be determined from the difference between the coordinates of the pixel point to be detected and the coordinates of the boundary of the image to be corrected.
S1203: the moire detection unit 141A complements missing pixels in the sliding window in a mirror image copy manner.
In the embodiment of the present application, the moire detection unit 141A may mirror-copy the pixel point on the other side opposite to the missing pixel point with the pixel point to be detected as a center to complete the missing pixel point.
For example, as shown in fig. 13a, Y22 in the sliding window is located at the left boundary of the image to be corrected. In the sliding window centered on Y22, the pixel points Y02, Y12, Y32 and Y42 (as well as Y22 itself) have no adjacent pixel points on their left sides. In this case, as shown in fig. 13b, the moire detection unit 141A may mirror-copy the pixel points on the right sides of Y02, Y12, Y22, Y32 and Y42, centered on these pixel points, to complete the missing pixel points. After the missing pixel points in the sliding window have been completed, S1204 is executed.
S1204: the moire detection unit 141A determines whether the pixel point to be detected is located in a moire area.
In the embodiment of the present application, when there are no missing pixel points in the sliding window, or after the moire detection unit 141A has completed the pixel points in the sliding window by executing S1203, the moire detection unit 141A detects whether the pixel point to be detected is located in a moire area. If yes, S1205 is executed and moire correction is performed on the pixel point to be detected. If not, S1207 is executed: the moire detection unit 141A selects an undetected pixel point in the image to be corrected as the pixel point to be detected and determines a sliding window containing it. The method by which the moire detection unit 141A determines in S1204 whether the pixel point to be detected is located in a moire area is described in detail below.
S1205: the moire correction unit 141B performs moire correction on the pixel point to be detected.
The moire correction unit 141B may perform moire correction on the pixel point to be detected by using the same method as that in step S504. For example, the moire correction unit 141B may calculate a texture evaluation index of the sliding window by using the methods of a) to c) in the above steps S504, calculate a correction parameter by using the texture evaluation index and the number of pixel points having a brightness step, and perform moire correction on the pixel points to be detected in the moire region.
S1206: the color correction unit 141C performs pseudo color suppression on the corrected pixel point to be detected.
The color correction unit 141C may perform pseudo color suppression on the corrected pixel point to be detected using the same method as in step S505. After the moire correction of the pixel point to be detected is finished, step S1207 is executed and the next pixel point to be detected is processed.
S1207: the moire detection unit 141A selects an undetected pixel point as a pixel point to be detected when the undetected pixel point exists in the image to be corrected.
After the pixel point to be detected is corrected or when the pixel point to be detected is not located in the moire area, the moire detection unit 141A may use the same method as that in step S503 to continue to select an undetected pixel point as the pixel point to be detected when there is an undetected pixel point in the image to be corrected, and perform moire detection on the pixel point to be detected in the sliding window.
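The overall flow of S1201 to S1207 can be sketched end to end as follows. This is an illustrative sketch: detection counts steps over horizontal triples only and the correction is a simple median placeholder, whereas the embodiment described here also counts steps in other directions and applies the full correction of S1205. The values tn = 0.4 and a step threshold of 5 reuse example numbers appearing in this description.

```python
import numpy as np

def change_direction(a, b, tn):
    d = b - a
    return int(np.sign(np.sign(d) * (abs(d) - tn)))

def count_steps(win, tn):
    """Brightness steps over all horizontal triples in the window."""
    steps = 0
    for r in range(win.shape[0]):
        for c in range(1, win.shape[1] - 1):
            if change_direction(win[r, c - 1], win[r, c], tn) * \
               change_direction(win[r, c], win[r, c + 1], tn) < 0:
                steps += 1
    return steps

def demoire(Y, tn=0.4, step_threshold=5):
    out = Y.astype(float).copy()
    Yp = np.pad(Y.astype(float), 2, mode="reflect")    # S1202/S1203
    for r in range(Y.shape[0]):                        # S1207: next pixel
        for c in range(Y.shape[1]):
            win = Yp[r:r + 5, c:c + 5]                 # S1201: 5x5 window
            if count_steps(win, tn) > step_threshold:  # S1204: detection
                out[r, c] = np.median(win)             # S1205 placeholder
    return out
```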
The method by which the moire detection unit 141A determines in step S1204 whether the pixel point to be detected is in a moire area is described in detail through the following steps a) to f), which may be the same as steps a) to f) described in S502.
Continuing with fig. 13b, take the pixel points Y24, Y23, Y22, Y23 and Y24 in fig. 13b as an example, where the Y24 and Y23 on the left are pixel points obtained by mirror-copying the Y23 and Y24 on the right side of Y22. The number of pixel points with brightness steps is calculated by the following method.
a) The moire detection unit 141A calculates a difference value of Y components of three adjacent pixel points in the sliding window.
With continuing reference to fig. 13b, taking the three adjacent pixel points Y24, Y23 and Y22 in the sliding window as an example, the differences of the Y components between Y24 and Y23 and between Y23 and Y22 need to be calculated. Here, Y24, Y23 and Y22 are three adjacent pixel points in the horizontal direction of the pixel point to be detected Y22.

After the moire detection unit 141A determines and acquires the Y components of Y24, Y23 and Y22, the differences of the Y components between Y24 and Y23 and between Y23 and Y22 are calculated. The method by which the moire detection unit 141A calculates the differences may be the same as in a) of step S502. The moire detection unit 141A calculates the difference diff23-24 of the Y component between Y24 and Y23 and the difference diff22-23 of the Y component between Y23 and Y22.
b) The moire detection unit 141A removes noise from the difference of the Y components of three adjacent pixel points.
Here, the moire detection unit 141A may, using the same method as in b) of step S502, calculate the absolute values of the obtained difference of the Y component between Y24 and Y23 and of the difference of the Y component between Y23 and Y22, subtract the preset noise model calibration parameter, and obtain diffHa23-24 and diffHa22-23.
c) The moire detection unit 141A converts the difference between the Y components of the three adjacent pixel points into the direction of change of the three adjacent pixel points in the Y component.
Here, the moire detection unit 141A may, using the same method as in c) of step S502, perform the sign operation on the denoised diffHa23-24 between Y24 and Y23 and diffHa22-23 between Y23 and Y22, obtaining the change directions diffH23-24 and diffH22-23 of Y24 and Y23 and of Y23 and Y22 in the Y component.
d) The moire detection unit 141A obtains the change direction of three adjacent pixel points in the Y component.
Here, the moire detection unit 141A may, using the same method as in d) of step S502, convert the change directions diffH23-24 and diffH22-23 of Y24 and Y23 and of Y23 and Y22 in the Y component to the values -1, 0 and 1, which indicate that the Y component falls, does not change, or rises, respectively.
e) The moire detection unit 141A determines whether a step occurs by determining whether the change directions of three adjacent pixel points on the Y component are opposite, and counts the number of steps _ hv if a step occurs.
Here, the moire detection unit 141A may, using the same method as in e) of step S502, determine whether a step occurs at Y23 by calculating the product of the change directions (converted to the values -1, 0 and 1) of Y24 and Y23 and of Y23 and Y22 in the Y component. When the product of the change directions is a negative number, a step occurs at that pixel point, and the moire detection unit 141A increments the step count by 1, accumulating it into step_hv.
f) The moire detection unit 141A determines whether the number of steps is greater than a step threshold, and if so, it indicates that the pixel point in the sliding window is in a moire area.
Here, after determining whether a step occurs at Y23 among the three pixel points Y24, Y23 and Y22, the moire detection unit 141A continues to determine whether steps occur at the other groups of three adjacent pixel points in the sliding window, for example the three pixel points Y23, Y22 and Y23. After the moire detection unit 141A has processed all adjacent pixel points in the sliding window, it determines whether the accumulated step count step_hv exceeds a preset step threshold; if so, the moire detection unit 141A determines that the pixel points in the sliding window are in a moire area, and sends the step count step_hv to the moire correction unit 141B.
In S1204 above, the moire detection unit 141A calculates over the adjacent pixel points in the horizontal direction of the pixel point to be detected Y22 to determine the number of pixel points with brightness steps in the sliding window. In another embodiment of the present application, when the sliding window is as shown in fig. 6, the method by which the moire detection unit 141A calculates the number of pixel points with brightness steps in the sliding window in S1204 can also be implemented through the following steps.
The moire detection unit 141A may determine whether the pixel points in the sliding window are in a moire area by determining whether the brightness between three adjacent pixel points in the sliding window has a step. The three pixel points may be three adjacent pixel points in a certain direction within the sliding window (for example, the horizontal direction, the vertical direction, the direction at 45 degrees to the horizontal direction, or any other direction). Denote the three adjacent pixel points as a first pixel point, a second pixel point and a third pixel point. Taking the horizontal direction as an example, the first, second and third pixel points are located in the sliding window in order from left to right, and the moire detection unit 141A may determine whether a step occurs at the second pixel point by determining whether the brightness change direction from the first pixel point to the second pixel point is opposite to the brightness change direction from the second pixel point to the third pixel point; if the two change directions are opposite, a step occurs at the second pixel point. The same holds for the vertical direction, with the first, second and third pixel points located in the sliding window in order from top to bottom: if the brightness change direction from the first pixel point to the second pixel point is opposite to that from the second pixel point to the third pixel point, i.e., the brightness first increases and then decreases or first decreases and then increases, a step occurs at the second pixel point.
The following process of determining whether a pixel point in a sliding window is in a moire area by the moire detection unit 141A is described by taking three adjacent pixel points in the horizontal direction in the sliding window as an example, and the process includes:
a) the moire detection unit 141A calculates a difference value of Y components of three adjacent pixel points in the sliding window.
With continued reference to fig. 6, taking the three adjacent pixel points Y00, Y01 and Y02 in the sliding window as an example, the differences of the Y components between Y00 and Y01 and between Y01 and Y02 need to be calculated. Here, Y00, Y01 and Y02 are three adjacent pixel points in the horizontal direction of the sliding window shown in fig. 6. It is understood that three adjacent pixel points in the sliding window may also be, for example, Y01, Y02 and Y03; Y00, Y10 and Y20; or Y00, Y11 and Y22, i.e., three adjacent pixel points in the horizontal direction, the vertical direction, the direction at 45 degrees to the horizontal direction, or any other direction in the sliding window. The embodiment of the present application is described with the three adjacent pixel points Y00, Y01 and Y02.
After the moire detection unit 141A determines and acquires the Y components of Y00, Y01 and Y02, the differences of the Y components between Y00 and Y01 and between Y01 and Y02 are calculated. The method by which the moire detection unit 141A calculates the differences may be the same as in a) of step S502. The moire detection unit 141A obtains by calculation diff01-00 and diff02-01, which respectively represent the difference of the Y component between Y00 and Y01 and the difference of the Y component between Y01 and Y02.
b) The moire detection unit 141A removes noise from the difference of the Y components of three adjacent pixel points.
Here, the moire detection unit 141A may, using the same method as in b) of step S502, calculate the absolute values of the obtained difference of the Y component between Y00 and Y01 and of the difference of the Y component between Y01 and Y02, subtract the preset noise model calibration parameter, and obtain diffHa01-00 and diffHa02-01.
c) The moire detection unit 141A converts the difference between the Y components of the three adjacent pixel points into the direction of change of the three adjacent pixel points in the Y component.
Here, the moire detection unit 141A may, using the same method as in c) of step S502, perform the sign operation on the denoised difference diffHa01-00 of the Y component between Y00 and Y01 and the denoised difference diffHa02-01 of the Y component between Y01 and Y02, obtaining the change directions diffH01-00 and diffH02-01 of Y00 and Y01 and of Y01 and Y02 in the Y component.
d) The moire detection unit 141A obtains the change direction of three adjacent pixel points in the Y component.
Here, the moire detection unit 141A may, using the same method as in d) of step S502, convert the change directions diffH01-00 and diffH02-01 of Y00 and Y01 and of Y01 and Y02 in the Y component to the values -1, 0 and 1, which indicate that the Y component falls, does not change, or rises, respectively.
e) The moire detection unit 141A determines whether a step occurs by determining whether the change directions of three adjacent pixel points on the Y component are opposite, and counts the number of steps _ hv if a step occurs.
Here, the moire detection unit 141A may, using the same method as in e) of step S502, determine whether a step occurs at Y01 by calculating the product of the change directions (converted to the values -1, 0 and 1) of Y00 and Y01 and of Y01 and Y02 in the Y component. When the product of the change directions is a negative number, a step occurs at that pixel point, and the moire detection unit 141A increments the step count by 1, accumulating it into step_hv.
f) The moire detection unit 141A determines whether the number of steps is greater than a step threshold, and if so, it indicates that the pixel point in the sliding window is in a moire area.
Here, after determining whether a step occurs at Y01 among the three pixel points Y00, Y01 and Y02, the moire detection unit 141A continues to determine whether steps occur at the other groups of three adjacent pixel points in the sliding window. After all groups have been processed, it determines whether the accumulated step count step_hv exceeds the preset step threshold; if so, the moire detection unit 141A determines that the pixel points in the sliding window are in a moire area, and sends the step count step_hv to the moire correction unit 141B. For example, if the moire detection unit 141A determines that the step count step_hv of the sliding window is 10 and the preset step threshold is 5, the moire detection unit 141A determines that the pixel points in the sliding window are in a moire area.
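Putting steps a) to f) together, the per-window decision can be sketched as follows; triples are scanned here in the horizontal and vertical directions only, which is a simplification of the embodiment. With the worked numbers above, a count of 10 against a threshold of 5 yields a moire decision.

```python
import numpy as np

def change_direction(a, b, tn):
    d = b - a
    return int(np.sign(np.sign(d) * (abs(d) - tn)))

def window_in_moire_area(window, tn, step_threshold):
    """f) Count steps over horizontal and vertical triples and compare
    the accumulated step_hv with the preset step threshold."""
    rows, cols = window.shape
    step_hv = 0
    for r in range(rows):
        for c in range(cols):
            if 0 < c < cols - 1:          # horizontal triple centred here
                if change_direction(window[r, c - 1], window[r, c], tn) * \
                   change_direction(window[r, c], window[r, c + 1], tn) < 0:
                    step_hv += 1
            if 0 < r < rows - 1:          # vertical triple centred here
                if change_direction(window[r - 1, c], window[r, c], tn) * \
                   change_direction(window[r, c], window[r + 1, c], tn) < 0:
                    step_hv += 1
    return step_hv > step_threshold, step_hv
```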
After the moire detection unit 141A determines that the pixel points in the 5 × 5 sliding window centered on the pixel point to be detected Y22 are in a moire area, S1205 is entered, and the moire correction unit 141B performs moire correction on the pixel point to be detected Y22.
It is understood that, in addition to the three pixel points Y00, Y01 and Y02, the moire detection unit 141A detects the other pixel points in the sliding window through steps a) to f). After it is determined that the pixel points in the sliding window are not in a moire area, or after the moire correction unit 141B finishes the moire correction of the pixel points in the sliding window, the sliding window slides over the image to be corrected with the preset step size, and the moire detection unit 141A continues to detect the pixel points in the slid window through steps a) to f) and judges whether they are in a moire area.
In one embodiment of the present application, the moire detection unit 141A is a hardware module in the ISP 141 in the mobile phone 100. In this case, after the moire detection unit 141A completes one detection of whether the pixel points in the sliding window are in a moire area using steps a) to f), the moire detection unit 141A does not store the result of this detection. For example, referring to fig. 6 and 8 and taking the three adjacent pixel points Y01, Y02 and Y03 as an example, in fig. 6 the moire detection unit 141A determines that a step occurs at Y02. When the sliding window slides to the position of fig. 8, the moire detection unit 141A re-detects whether a step occurs at Y02 instead of reading the previous result.
In one embodiment of the present application, the moire detection unit 141A may also be a software module running in the ISP141 in the mobile phone 100. Different from the hardware module, after the moire detection unit 141A completes one-time detection on whether the pixel point in the sliding window is in the moire area by using the steps a to f, the moire detection unit 141A may store the detection result in the storage area of the mobile phone 100. For example, referring to fig. 6 and 8, taking three adjacent pixel points of Y01, Y02, and Y03 as an example, in fig. 6, the moire detection unit 141A determines that a step occurs at Y02. When the sliding window is slid to the position of fig. 8, the moire detecting unit 141A can directly read the result of the step change at Y02 without recalculating whether the step change occurs at Y02.
Referring now to fig. 14, shown is a block diagram of an electronic device 1400 for moire correction according to an embodiment of the present application. Fig. 14 schematically illustrates an example electronic device 1400 in accordance with various embodiments. In one embodiment, electronic device 1400 may include one or more processors 1404, system control logic 1408 coupled to at least one of processors 1404, system memory 1412 coupled to system control logic 1408, non-volatile memory (NVM) 1416 coupled to system control logic 1408, and a network interface 1420 coupled to system control logic 1408.
In some embodiments, processor 1404 may include one or more single-core or multi-core processors. In some embodiments, processor 1404 may include any combination of general-purpose processors and dedicated processors (e.g., graphics processors, application processors, baseband processors, etc.). The processor 1404 may be configured to perform, for example, the correction of moire in an image to be corrected, consistent with one or more of the various embodiments shown in fig. 1-13.
In some embodiments, system control logic 1408 may include any suitable interface controllers to provide any suitable interface to at least one of processors 1404 and/or to any suitable device or component in communication with system control logic 1408.
In some embodiments, system control logic 1408 may include one or more memory controllers to provide an interface to system memory 1412. System memory 1412 may be used to load and store data and/or instructions. Memory 1412 of electronic device 1400 may include any suitable volatile memory, such as suitable Dynamic Random Access Memory (DRAM), in some embodiments.
NVM/memory 1416 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some embodiments, the NVM/memory 1416 may include any suitable non-volatile memory such as flash memory and/or any suitable non-volatile storage device, such as at least one of a HDD (Hard disk drive), CD (Compact Disc) drive, DVD (Digital Versatile Disc) drive.
The NVM/memory 1416 may include a portion of the storage resource installed on the electronic device 1400 or it may be accessible by, but not necessarily a part of, the device. For example, the NVM/storage 1416 may be accessible over a network via the network interface 1420. The NVM/memory 1416 may be configured to store an image to be corrected.
In particular, system memory 1412 and NVM/storage 1416 may each include: a temporary copy and a permanent copy of instructions 1424. Instructions 1424 may include: instructions that, when executed by at least one of the processors 1404, cause the electronic device 1400 to implement the methods shown in fig. 1-13. In some embodiments, instructions 1424, hardware, firmware, and/or software components thereof may additionally/alternatively be located in system control logic 1408, network interface 1420, and/or processor 1404.
The network interface 1420 may include a transceiver to provide a radio interface for the electronic device 1400 to communicate with any other suitable devices (e.g., front end modules, antennas, etc.) over one or more networks. In some embodiments, the network interface 1420 may be integrated with other components of the electronic device 1400. For example, network interface 1420 may be integrated with at least one of processor 1404, system memory 1412, NVM/storage 1416, and a firmware device (not shown) having instructions that, when executed by at least one of processors 1404, cause the electronic device 1400 to implement the method shown in fig. 4.
Network interface 1420 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface. For example, network interface 1420 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one embodiment, at least one of the processors 1404 may be packaged together with logic for one or more controllers of system control logic 1408 to form a System In Package (SiP). In one embodiment, at least one of processors 1404 may be integrated on the same die with logic for one or more controllers of system control logic 1408 to form a system on a chip (SoC).
The electronic device 1400 may further include input/output (I/O) devices 1432. The I/O devices 1432 may include a user interface to enable a user to interact with the electronic device 1400, and a peripheral component interface designed so that peripheral components can also interact with the electronic device 1400. In some embodiments, the electronic device 1400 further includes sensors for determining at least one of environmental conditions and location information related to the electronic device 1400.
In some embodiments, the user interface may include, but is not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more cameras (e.g., still image cameras and/or video cameras), a flashlight (e.g., a light emitting diode flash), and a keyboard. The camera here is used for taking an image to be corrected.
In some embodiments, the peripheral component interfaces may include, but are not limited to, a non-volatile memory port, an audio jack, and a power interface.
In some embodiments, the sensors may include, but are not limited to, a gyroscope sensor, an accelerometer, a proximity sensor, an ambient light sensor, and a positioning unit. The positioning unit may also be part of the network interface 1420 or interact with the network interface 1420 to communicate with components of a positioning network, such as Global Positioning System (GPS) satellites.
Fig. 15 shows a block diagram of an SoC (System on Chip) based electronic device 1500 according to an embodiment of the present application. In fig. 15, like parts have the same reference numerals, and the dashed boxes are optional features of more advanced SoCs. In fig. 15, the electronic device 1500 includes: an interconnect unit 1550 coupled to the application processor 1515; a system agent unit 1570; a bus controller unit 1580; an integrated memory controller unit 1540; a set of one or more coprocessors 1520, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1530; and a direct memory access (DMA) unit 1560. In one embodiment, the coprocessor 1520 comprises a special-purpose processor, such as a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, or an embedded processor. The application processor 1515 and the coprocessor 1520 may be configured to correct moire in the image to be corrected.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in this application are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the apparatuses of the present application, each unit/module is a logical unit/module. Physically, one logical unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logical unit/module itself is not what matters most, and the combination of functions implemented by the logical units/modules is what is key to solving the technical problem addressed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above device embodiments do not introduce units/modules that are not closely related to solving the technical problem addressed by the present application; this does not indicate that no other units/modules exist in the above device embodiments.
It is noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (20)

1. An image moire correction method, comprising:
acquiring an image to be corrected;
determining whether the pixel points are located in a moire area or not based on the brightness component values of the pixel points in the image to be corrected;
and performing moire correction on the pixel points under the condition that the pixel points are located in the moire area.
2. The method according to claim 1, wherein the determining whether the pixel point is located in a moire area based on the luminance component of each pixel point in the image to be corrected comprises:
selecting pixel points included in a detection window of pixel points to be detected from an image to be corrected, wherein the detection window includes the pixel points to be detected, the detection window includes M × M pixel points, and M is an odd number larger than 1;
calculating the number of pixel points with brightness steps in the detection window based on the brightness value of each pixel point in the detection window;
and determining that the pixel point to be detected is located in the moire area under the condition that the number of the pixel points with the brightness step in the detection window exceeds a preset number.
3. The method of claim 2, wherein the pixel to be detected is located at the center of the detection window.
4. The method of claim 2, wherein determining whether the pixel has a brightness step is performed by:
and determining that a brightness step occurs at the second pixel point under the condition that the brightness change direction of the brightness value from the first pixel point to the second pixel point and the brightness change direction of the brightness value from the second pixel point to the third pixel point in at least one detection direction in the detection window are different, wherein the first pixel point, the second pixel point and the third pixel point are respectively three adjacent pixel points in the detection window, and the brightness change direction of the brightness value comprises two directions of brightness increase and brightness decrease.
5. The method of claim 4, wherein whether the brightness change direction from the first pixel point to the second pixel point is the same as the brightness change direction from the second pixel point to the third pixel point is determined by:
calculating the brightness value step sign diffH_r1 of the first pixel point and the second pixel point by the following formula:

diffH_r1 = 1, if diffH1 > 0; diffH_r1 = 0, if diffH1 = 0; diffH_r1 = -1, if diffH1 < 0

wherein diffH1 can be calculated by the following formula:

diffH1 = sign(diff1) * diffHa1

wherein sign(diff1) is the sign bit of the difference between the luminance values of the first pixel point and the second pixel point, and diffHa1 can be calculated by the following formula:

diffHa1 = abs(diff1) - tn

wherein abs(diff1) is the absolute value of the difference between the luminance values of the first pixel point and the second pixel point, and tn is a noise model calibration parameter;
calculating the brightness value step sign diffH_r2 of the second pixel point and the third pixel point by the following formula:

diffH_r2 = 1, if diffH2 > 0; diffH_r2 = 0, if diffH2 = 0; diffH_r2 = -1, if diffH2 < 0

wherein diffH2 can be calculated by the following formula:

diffH2 = sign(diff2) * diffHa2

wherein sign(diff2) is the sign bit of the difference between the luminance values of the second pixel point and the third pixel point, and diffHa2 can be calculated by the following formula:

diffHa2 = abs(diff2) - tn

wherein abs(diff2) is the absolute value of the difference between the luminance values of the second pixel point and the third pixel point;
and under the condition that the product of the diffH _ r1 and the diffH _ r2 is a negative number, determining that the brightness change direction from the first pixel point to the second pixel point is different from the brightness change direction from the second pixel point to the third pixel point.
6. The method according to claim 5, wherein the sign bit is -1 in a case that the difference between the luminance values of the first pixel point and the second pixel point is a negative number, and is 1 in a case that the difference is a non-negative number; and the sign bit is -1 in a case that the difference between the luminance values of the second pixel point and the third pixel point is a negative number, and is 1 in a case that the difference is a non-negative number.
7. The method of claim 5, wherein the noise model scaling parameter is used to remove noise signals in the absolute value of the difference between the luminance values of the first pixel point and the second pixel point and the absolute value of the difference between the luminance values of the second pixel point and the third pixel point.
8. The method according to claim 4, wherein it is determined that a brightness step is generated at the second pixel point in a case where a brightness value of the first pixel point is smaller than a brightness value of the second pixel point, and an absolute value of a difference between the brightness values of the first pixel point and the second pixel point is larger than a first threshold, and a brightness value of the third pixel point is smaller than a brightness value of the second pixel point, and an absolute value of a difference between the brightness values of the third pixel point and the second pixel point is larger than a second threshold.
9. The method of claim 2, wherein performing Moire correction on the pixel points in the Moire region comprises:
and calculating a texture evaluation index in the detection window based on the brightness value of each pixel point in the detection window, wherein the texture evaluation index is used for describing the distribution state of the brightness value of each pixel point in the detection window.
10. The method of claim 9, wherein the texture evaluation index in the detection window is calculated by the following formula:
calculating the brightness value step sign texH_r3 of each pixel point in the detection window against the fourth pixel point by the following formula:
[piecewise formula defining texH_r3 in terms of texH3, given only as an image in the original filing]
wherein texH3 can be calculated by the following formula:
texH3=sign(tex3)*texHa3
wherein sign(tex3) is the sign bit of the difference between the luminance values of each pixel point and the fourth pixel point, and texHa3 can be calculated by the following formula:
texHa3=abs(tex3)-tn
wherein abs(tex3) is the absolute value of the difference between the brightness values of each pixel point and the fourth pixel point, and tn is a noise model calibration parameter;
and calculating the sum, over the detection window, of the brightness value step signs texH_r3 between each pixel point and the fourth pixel point, and recording the sum as the texture evaluation index of the detection window.
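A sketch of the texture evaluation index of claim 10, reusing step_sign() from the claim-5 sketch above, since texH_r3 is built from tex3, texHa3 and texH3 the same way diffH_r is built from diff. This excerpt does not identify the "fourth pixel point"; the sketch assumes it is the centre pixel of the detection window.

```python
import numpy as np

def texture_index(window, tn):
    # window: 2-D numpy array holding the luminance values of the detection window
    rows, cols = window.shape
    y4 = window[rows // 2, cols // 2]  # assumed: fourth pixel = window centre
    # texture index = sum over the window of the step sign against the
    # fourth pixel point (the centre contributes 0 against itself)
    return sum(step_sign(y, y4, tn) for y in window.ravel())
```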
11. The method according to claim 10, wherein a correction parameter for moire correction of the pixel point to be detected is calculated from the number of pixel points having a brightness step in the detection window and the texture evaluation index of the detection window, the correction parameter being calculated by the following formulas:
moire=step_hv/step_tex
alpha=k1/(1+e^(-a*(moire-b)))-k2
wherein step_hv is the number of pixel points having a brightness step in the detection window, step_tex is the texture evaluation index of the detection window, alpha is the correction parameter, and a, b, k1 and k2 are tuning parameters.
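Claim 11's correction strength is thus a logistic (sigmoid) function of the step-to-texture ratio. A direct transcription as a sketch:

```python
import math

def correction_strength(step_hv, step_tex, a, b, k1, k2):
    # moire = step_hv / step_tex (claim 11); step_tex must be non-zero
    moire = step_hv / step_tex
    # alpha = k1 / (1 + e^(-a*(moire - b))) - k2
    return k1 / (1.0 + math.exp(-a * (moire - b))) - k2
```

The logistic shape maps a small ratio (genuine texture) to weak correction and a large ratio (likely moire) to strong correction, with b setting the transition point and a its steepness.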
12. The method according to claim 11, wherein the luminance value of the pixel point to be detected is corrected by the following formulas:
CorY=beta*med+(1-beta)*mean
Yout=alpha*CorY+(1-alpha)*Y
wherein med and mean are, respectively, the median and the mean of the brightness values of the pixel points, including the pixel point to be detected, lying in at least one direction centered on the pixel point to be detected; beta is an adjustment parameter; CorY is the correction component of the brightness value of the pixel point to be detected; Y is the brightness value of the pixel point to be detected; and Yout is the corrected brightness value of the pixel point to be detected.
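And a sketch of claim 12's blend, where neighbours holds the luminance values of the pixel points along the chosen direction (including the pixel to be detected):

```python
import numpy as np

def correct_luminance(y, neighbours, alpha, beta):
    med = float(np.median(neighbours))   # median of the directional samples
    mean = float(np.mean(neighbours))    # mean of the directional samples
    cor_y = beta * med + (1.0 - beta) * mean         # CorY
    return alpha * cor_y + (1.0 - alpha) * float(y)  # Yout
```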
13. The method according to claim 3, wherein, in a case where the distance between the pixel point to be detected and at least one boundary of the image to be corrected is less than M, a partial region of the detection window lies outside that boundary of the image to be corrected.
14. The method according to claim 13, wherein the partial region is filled with the luminance values of pixel points of the detection window mirrored about a symmetry axis that passes through the pixel point to be detected and runs parallel to the boundary.
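Claims 13-14 amount to mirror-padding the detection window at image boundaries. numpy's "reflect" padding mirrors about the boundary pixel itself and is the closest stock primitive; the claim mirrors about an axis through the pixel to be detected, which coincides with "reflect" only when that pixel lies on the boundary, so the sketch below is an approximation. M (the pad width) is assumed to be the half-size of the detection window.

```python
import numpy as np

M = 2                                         # assumed half-size of the window
img = np.arange(25, dtype=np.float32).reshape(5, 5)
# 'reflect' mirrors about the edge pixel without repeating it
padded = np.pad(img, M, mode="reflect")
# detection window centred on the top-left pixel of the original image
window = padded[0:2 * M + 1, 0:2 * M + 1]
```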
15. The method according to claim 1, wherein the moire correction method is implemented by an image signal processor of an electronic device, and the image to be corrected is an image captured by a camera of the electronic device.
16. The method according to claim 1, wherein the format of the image to be corrected is a format of a luminance-chrominance color space.
17. The method according to claim 4, wherein the detection direction is a horizontal direction, a vertical direction, or a diagonal direction in the area to be corrected.
18. A readable medium having stored therein instructions that, when executed by an electronic device, cause the electronic device to perform the image moire correction method as claimed in any one of claims 1 to 17.
19. An electronic device, comprising:
a memory having instructions stored therein; and
a processor configured to read and execute the instructions in the memory to cause the electronic device to perform the image moire correction method as claimed in any one of claims 1-17.
20. The electronic device of claim 19, wherein the processor comprises an image signal processor.
CN202110006964.7A 2020-09-30 2021-01-05 Image moire correction method, electronic device and medium therefor Active CN112598600B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020110648660 2020-09-30
CN202011064866 2020-09-30

Publications (2)

Publication Number Publication Date
CN112598600A 2021-04-02
CN112598600B 2023-03-10

Family

ID=75206945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110006964.7A Active CN112598600B (en) 2020-09-30 2021-01-05 Image moire correction method, electronic device and medium therefor

Country Status (1)

Country Link
CN (1) CN112598600B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6421087B1 (en) * 1997-03-05 2002-07-16 Canon Kabushiki Kaisha Image pickup apparatus having a signal processor for generating luminance and chrominance signals in accordance with image pickup signals
US20070122030A1 (en) * 2004-05-24 2007-05-31 Burkhard Hahn Method for reducing color moire in digital images
CN104813648A (en) * 2012-12-11 2015-07-29 富士胶片株式会社 Image processing device, image capture device, image processing method, and image processing program
CN108615227A (en) * 2018-05-08 2018-10-02 浙江大华技术股份有限公司 A kind of suppressing method and equipment of image moire fringes
CN108846818A (en) * 2018-06-25 2018-11-20 Oppo(重庆)智能科技有限公司 Remove method, apparatus, terminal and the computer readable storage medium of moire fringes


Also Published As

Publication number Publication date
CN112598600B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
US10497097B2 (en) Image processing method and device, computer readable storage medium and electronic device
US8363123B2 (en) Image pickup apparatus, color noise reduction method, and color noise reduction program
US8890963B2 (en) Image quality evaluation device, terminal device, image quality evaluation system, image quality evaluation method and computer-readable recording medium for storing programs
US9558543B2 (en) Image fusion method and image processing apparatus
RU2537038C2 (en) Automatic white balance processing with flexible colour space selection
US9111365B2 (en) Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal
US8922680B2 (en) Image processing apparatus and control method for image processing apparatus
US10516860B2 (en) Image processing method, storage medium, and terminal
WO2014091854A1 (en) Image processing device, image capture device, image processing method, and image processing program
JP6172935B2 (en) Image processing apparatus, image processing method, and image processing program
US8379977B2 (en) Method for removing color fringe in digital image
CN113096022B (en) Image blurring processing method and device, storage medium and electronic device
JP5749409B2 (en) Imaging apparatus, image processing method, and program
US20090304276A1 (en) Color deviation compensating apparatus and method, image processor using it, recorded medium
CN112598600B (en) Image moire correction method, electronic device and medium therefor
JP5877931B2 (en) Pixel interpolation device and operation control method thereof
CN109672829B (en) Image brightness adjusting method and device, storage medium and terminal
CN111885371A (en) Image occlusion detection method and device, electronic equipment and computer readable medium
US8675106B2 (en) Image processing apparatus and control method for the same
US8654220B2 (en) Image processing apparatus and control method for the same
KR102015585B1 (en) Image processing apparatus and image processing method
WO2014007041A1 (en) Image processing device and method, and imaging device
JP2013225772A (en) Image processing device and method
JP4155559B2 (en) Signal processing apparatus and method
KR100799834B1 (en) Method for acquiring image frame with improved noise characteristic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant