WO2017088564A1 - Image processing method and apparatus, terminal, and storage medium - Google Patents

Image processing method and apparatus, terminal, and storage medium

Info

Publication number
WO2017088564A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
color component
frame
pixels
color
Prior art date
Application number
PCT/CN2016/099255
Other languages
English (en)
French (fr)
Inventor
朱德志 (Zhu Dezhi)
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2017088564A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • the present invention relates to image processing technologies, and in particular, to an image processing method and apparatus, a terminal, and a storage medium.
  • the embodiments of the present invention provide an image processing method and device, a terminal, and a storage medium that solve at least one problem of the prior art: they improve the denoising effect and avoid the loss of edge details, so that the final photograph has a better visual effect.
  • an embodiment of the present invention provides an image processing method, where the method includes:
  • acquiring consecutive N frames of images, N being an integer greater than or equal to 3;
  • data of the N frames of images being represented by a color model, the color model comprising M color components, the M being an integer greater than or equal to 1;
  • determining one frame of the N frames as the reference frame, and determining the (N-1) frames other than the reference frame as comparison frames;
  • comparing the pixel value Q1(i,j) of a first pixel of the reference frame on a first color component with the pixel value Q0(i,j) of the first pixel on the first color component in each comparison frame, to obtain a comparison result; the first pixel is the pixel with coordinates (i,j) in the N frames, and the first color component is any one of the M color components;
  • determining, according to the comparison result, the pixel value of the first pixel on the first color component, thereby obtaining the pixel values of all remaining pixels on the first color component and, by analogy, the pixel values of all pixels on the other color components;
  • combining the pixel values of all pixels on the M color components into one frame of image.
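The steps above can be sketched as a short NumPy routine. This is a minimal illustration under stated assumptions, not the claimed implementation: the planar (H, W, M) array layout, the choice of the first frame as the reference frame, the symmetric threshold, and the name `denoise_frames` are all assumptions of the sketch.

```python
import numpy as np

def denoise_frames(frames, threshold=10):
    """Multi-frame denoising sketch.

    frames: list of N arrays of shape (H, W, M) -- N >= 3 frames of the
    same scene, with M color components (e.g. M = 3 for YUV).
    Returns one (H, W, M) frame combined from all M components.
    """
    frames = [np.asarray(f, dtype=np.int32) for f in frames]
    ref = frames[0]                       # reference frame (assumed choice)
    comps = frames[1:]                    # the (N-1) comparison frames

    out = np.zeros_like(ref)
    for m in range(ref.shape[2]):         # each color component in turn
        q1 = ref[:, :, m]
        acc = q1.astype(np.float64)       # running sum, starts with Q1(i,j)
        count = np.ones(q1.shape)         # how many values enter the mean
        for frame in comps:
            q0 = frame[:, :, m]
            z = q1 - q0                   # difference z(i,j)
            keep = np.abs(z) <= threshold # within the first threshold range
            acc += np.where(keep, q0, 0)  # collect the first set of values
            count += keep
        out[:, :, m] = np.round(acc / count)  # arithmetic mean -> Q(i,j)
    return out.astype(np.uint8)
```

Where a pixel differs from Q1(i,j) by more than the threshold in every comparison frame (for example, because of motion), only Q1(i,j) enters the average, which is how edges survive without explicit motion estimation.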
  • an embodiment of the present invention provides an image processing apparatus, where the apparatus includes an acquiring unit, a first determining unit, a comparing unit, a second determining unit, a first processing unit, a second processing unit, and a component unit, where:
  • the acquiring unit is configured to acquire consecutive N frames of images, where N is an integer greater than or equal to 3; the data of the N frames is represented by a color model, and the color model includes M color components, where M is an integer greater than or equal to 1;
  • the first determining unit is configured to determine one frame from the N frame image as a reference frame, and determine (N-1) frame images other than the reference frame as a comparison frame;
  • the comparing unit is configured to compare the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component with the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame, to obtain a comparison result;
  • the first pixel is the pixel with coordinates (i,j) in the N frames, and the first color component is any one of the M color components;
  • the second determining unit is configured to determine, according to the comparison result, the pixel value of the first pixel on the first color component;
  • the first processing unit is configured to obtain, according to the processing of the comparing unit and the second determining unit, the pixel values of all remaining pixels on the first color component;
  • the second processing unit is configured to obtain, according to the processing of the comparing unit, the second determining unit, and the first processing unit, the pixel values of all pixels on the color components other than the first color component;
  • the component unit is configured to combine the pixel values of all pixels on the first color component and the pixel values of all pixels on the other color components into one frame of image.
  • an embodiment of the present invention provides a terminal, where the terminal includes a memory and a processor, where:
  • the memory is configured to store consecutive N frames of images
  • the processor is configured to: acquire consecutive N frames of images, where N is an integer greater than or equal to 3, the data of the N frames is represented by a color model, and the color model includes M color components, where M is an integer greater than or equal to 1; determine one frame of the N frames as the reference frame, and determine the (N-1) frames other than the reference frame as comparison frames; compare the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component with the pixel value Q0(i,j) of the first pixel on the first color component in each comparison frame, to obtain a comparison result, where the first pixel is the pixel with coordinates (i,j) in the N frames and the first color component is any one of the M color components; determine, according to the comparison result, the pixel value of the first pixel on the first color component, thereby obtaining the pixel values of all remaining pixels on the first color component; by analogy with the pixel values on the first color component, obtain the pixel values of all pixels on the other color components; and combine the pixel values of all pixels on the M color components into one frame of image.
  • an embodiment of the present invention provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute an image processing method provided by the first aspect of the present invention.
  • An embodiment of the present invention provides an image processing method and device, a terminal, and a storage medium. Consecutive N frames of images are acquired, where N is an integer greater than or equal to 3; the data of the N frames is represented by a color model, and the color model includes M color components, where M is an integer greater than or equal to 1. One frame of the N frames is determined as the reference frame, and the (N-1) frames other than the reference frame are determined as comparison frames. The pixel value Q1(i,j) of a first pixel of the reference frame on a first color component is compared with the pixel value Q0(i,j) of the first pixel on the first color component in each comparison frame, to obtain a comparison result; the first pixel is the pixel with coordinates (i,j) in the N frames, and the first color component is any one of the M color components. According to the comparison result, the pixel value of the first pixel on the first color component is determined, thereby obtaining the pixel values of all remaining pixels on the first color component; by analogy, the pixel values on the other color components are obtained, and the pixel values of all pixels on the M color components are combined into one frame of image.
  • FIG. 1-1 is a schematic structural diagram of the hardware of an optional mobile terminal implementing various embodiments of the present invention;
  • FIG. 1-2 is a schematic structural diagram of a photographic lens in the mobile terminal shown in FIG. 1-1;
  • FIG. 1-3 is a schematic diagram of an implementation process of an image processing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of an implementation of an image processing method according to Embodiment 2 of the present invention;
  • FIG. 3 is a schematic diagram of three pixel blocks according to an embodiment of the present invention;
  • FIG. 4 shows the pixel values of the pixel blocks shown in FIG. 3;
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer.
  • In the following description, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will appreciate that, except for components used specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
  • FIG. 1-1 is a schematic structural diagram of hardware of an optional mobile terminal in implementing various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1-1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit can include at least one of the mobile communication module 112, the wireless internet module 113, and the short-range communication module 114.
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies supported by the module may include WLAN (wireless LAN, Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still images or video obtained by an image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a rocker, and the like.
  • In particular, when the touch pad is superposed on the display unit 151 in the form of layers, a touch screen can be formed.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external devices may include wired or wireless headset ports, external power supply (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices having identification modules, audio input/output (I/O) ports, video I/O ports, earphone ports, and the like.
  • the identification module may store various information for verifying the use of the mobile terminal 100 by a user, and may include a user identification module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • the various command signals or the power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), and an organic light emitting diode (OLED).
  • Some of these displays may be configured to be transparent so that the user can see the outside through them; these may be called transparent displays, and a typical transparent display is, for example, a TOLED (transparent organic light-emitting diode) display.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing or playing back multimedia data; the multimedia module 181 may be constructed within the controller 180 or configured separately from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or an image drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • a slide-type mobile terminal among the various types of mobile terminals, such as folding, bar, swing, and slide types, will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type.
  • the mobile terminal in the embodiment of the present invention further includes a photographic lens.
  • the photographic lens 1211 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens.
  • the photographic lens 1211 is movable in the optical-axis direction under the control of the lens driver 1221; the lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with control signals from the lens drive control circuit 1222 and, in the case of a zoom lens, can also control the focal distance.
  • the lens drive control circuit 1222 performs drive control of the lens driver 1221 in accordance with control commands from the microcomputer 1217.
  • An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211.
  • the imaging element 1212 is for capturing an image of a subject and acquiring captured image data.
  • Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
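The Bayer arrangement mentioned here can be made concrete with a small lookup. The RGGB phase used below is only an illustrative assumption: actual sensors use one of four possible phases (RGGB, BGGR, GRBG, GBRG), and the text does not specify which.

```python
def bayer_color(row, col):
    """Color of the filter over pixel (row, col) in an RGGB Bayer array.

    Even rows alternate R, G, R, G, ...; odd rows alternate G, B, G, B, ...
    so half of all pixels are green, matching the classic Bayer mosaic.
    """
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"
```

Each photodiode thus records only one color component; the full RGB (or YUV) value per pixel is reconstructed later by interpolation in the image pipeline.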
  • the imaging element 1212 is connected to the imaging circuit 1213.
  • the imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, reduces the reset noise of the read-out image signal (an analog image signal), performs waveform shaping, and further performs gain adjustment or the like to obtain an appropriate signal level.
  • the imaging circuit 1213 is connected to an A/D converter 1214 that performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
  • the bus 1227 is a transmission path for transmitting various data read or generated inside the camera.
  • the A/D converter 1214 is connected to the bus 1227; also connected to the bus 1227 are an image processor 1215, a JPEG processor 1216, a microcomputer 1217, an SDRAM (synchronous dynamic random access memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (liquid crystal display) driver 1220.
  • the image processor 1215 performs various kinds of image processing, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color-difference signal processing, noise removal, synchronization (demosaicing) processing, and edge processing, on the image data based on the output of the imaging element 1212.
  • the JPEG processor 1216 compresses the image data read out from the SDRAM 1218 in accordance with the JPEG compression method when the image data is recorded on the recording medium 1225. Further, for image reproduction and display, the JPEG processor 1216 decompresses JPEG image data: the file recorded on the recording medium 1225 is read out, decompression is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226.
  • the JPEG method is adopted as the image compression/decompression method.
  • the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be used.
  • the microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 1217 is connected to the operation unit 1223 and the flash memory 1224.
  • the operation unit 1223 includes, but is not limited to, physical or virtual buttons, such as a power button, a camera button, an edit button, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button, and an enlarge button, and detects the operational states of these operation controls.
  • the detection result is output to the microcomputer 1217. Further, a touch panel is provided on the front surface of the LCD 1226 serving as a display; it detects the position touched by the user and outputs that touch position to the microcomputer 1217.
  • the microcomputer 1217 executes various processing sequences corresponding to the user's operation in accordance with the detection results from the operation unit 1223 and the touch panel.
  • the flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217.
  • the microcomputer 1217 performs overall control of the camera in accordance with the program. Further, the flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data or the like.
  • the SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
  • the memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225.
  • the recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body; however, it is not limited thereto, and may be a hard disk or the like built into the camera body.
  • the LCD driver 1220 is connected to the LCD 1226. When displaying, the image data processed by the image processor 1215 and stored in the SDRAM 1218 is read out and displayed on the LCD 1226; alternatively, the compressed image data stored in the SDRAM 1218 by the JPEG processor 1216 is read out, decompressed by the JPEG processor 1216, and the decompressed image data is displayed on the LCD 1226.
  • the LCD 1226 is disposed on the back of the camera body and displays images.
  • the display panel is not limited to an LCD; various display panels, such as an organic EL panel, may be used instead.
  • an embodiment of the present invention provides an image processing method that adopts a new three-dimensional (3D) filtering method. Used in the photographing system of a current electronic device such as a mobile phone, it not only removes noise well but also preserves image edges well, greatly improving photo quality.
  • the 3D filtering method is essentially an inter-frame image processing method: it decides whether to perform 3D noise reduction by comparing the differences between corresponding pixels of the frames.
  • compared with existing mobile-phone denoising methods, the embodiment of the invention does not need to perform motion estimation on the images, has a small amount of calculation and low cost, its algorithm is easy to understand, and it can perform real-time photo processing on a mobile phone.
  • an embodiment of the present invention provides an image processing method applied to a terminal. The functions implemented by the method can be realized by a processor of the terminal calling program code; the program code can, of course, be stored in a computer storage medium.
  • the terminal includes at least a processor and a storage medium.
  • the image processing method includes:
  • Step S101: acquiring consecutive N frames of images, where N is an integer greater than or equal to 3; the data of the N frames is represented by a color model, and the color model includes M color components, where M is an integer greater than or equal to 1;
  • the terminal includes an electronic device such as a mobile phone or a tablet computer;
  • the N frames are N frames captured of the same object; the implicit condition is that the environmental conditions are required to be the same, for example, the same location and the same object;
  • the N frames may be captured by another electronic device and stored on the terminal; of course, they may also be captured by the terminal's own image acquisition unit, such as a camera. Therefore, before step S101, the method further includes: continuously capturing N frames of images of the same object.
  • Step S102: determining one frame of the N frames as the reference frame, and determining the (N-1) frames other than the reference frame as comparison frames;
  • Step S103: comparing the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component with the pixel value Q0(i,j) of the first pixel on the first color component in each comparison frame, to obtain a comparison result;
  • the first pixel is the pixel with coordinates (i,j) and may be any pixel in the image;
  • the first color component is any one of the M color components; for example, when a YUV color model is used, the first color component may be the Y, U, or V component.
  • Step S104: determining, according to the comparison result, the pixel value of the first pixel on the first color component, thereby obtaining the pixel values of all remaining pixels on the first color component;
  • Step S105: by analogy with the manner of obtaining the pixel values on the first color component, obtaining the pixel values of all pixels on the color components other than the first color component;
  • Step S106: combining the pixel values of all pixels on the M color components into one frame of image.
  • In step S103, comparing the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component with the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame to obtain a comparison result includes:
  • Step S131 the step of setting the pixel value Q1(i,j) of the first pixel point of the reference frame on the first color component and the pixel value Q0 of the first pixel point in the comparison frame on the first color component (i,j) Make a difference and get the difference z(i,j);
  • Step S132 determining whether the difference z(i, j) is within a preset first threshold range, and obtaining a determination result.
  • Step S133 when the determination result indicates that at least one of the difference values z(i, j) is within a preset first threshold range, determining a first set of pixel values;
  • the first set of pixels corresponds to Q0(i,j) when the difference z(i,j) is within a preset first threshold range; the first set of pixel values includes at least the The pixel value of the pixel of one frame of the image in the N frame image.
• Step S134: arithmetically average Q1(i,j) and the first set of pixel values to obtain the pixel value Q(i,j) of the first pixel on the first color component.
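Steps S131–S134 can be sketched for a single pixel as follows. This is a minimal illustration, not the patented implementation: the concrete threshold value and the example pixel values are assumptions, since the text leaves the first threshold range unspecified.

```python
T = 10  # preset first threshold (assumed illustrative value)

def denoise_pixel(q_ref, q_cmp):
    """q_ref: reference-frame value Q1(i,j); q_cmp: values Q0(i,j) from the comparison frames."""
    # Steps S131/S132: keep comparison values whose difference from the reference is within T
    first_set = [q for q in q_cmp if abs(q_ref - q) <= T]
    # Steps S133/S134: arithmetic average of Q1(i,j) and the first set of pixel values
    return (q_ref + sum(first_set)) / (1 + len(first_set))

# Example: reference 100; comparison values 103 and 98 are accepted,
# 140 is rejected as motion or an outlier.
print(denoise_pixel(100, [103, 98, 140]))  # (100 + 103 + 98) / 3
```

When no comparison value falls within the threshold range, the set is empty and the reference value is kept unchanged.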
• An embodiment of the present invention provides an image processing method applied to a terminal. The functions implemented by the method can be realized by a processor in the terminal calling program code, and the program code can of course be stored in a computer storage medium. The terminal thus includes at least a processor and a storage medium.
  • the image processing method includes:
• Step S201: acquire N consecutive frames of images, where N is an integer greater than or equal to 3; the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1;
• the terminal may be an electronic device such as a mobile phone or a tablet computer;
• the N frames of images are N frames captured of the same object; the implicit condition is that the environmental conditions are essentially the same, for example the same location and the same object; the N frames of images may be captured by another electronic device and stored on the terminal, or, of course, captured by the terminal's own image acquisition unit, such as a camera. Accordingly, before step S201, the method may further include: continuously capturing N frames of images of the same object.
• Step S202: determine one frame from the N frames of images as a reference frame, and determine the (N-1) frames of images other than the reference frame as comparison frames;
• Step S203: subtract the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame from the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component to obtain the difference z(i,j);
• the first pixel is the pixel with coordinates (i, j), and the first color component is any one of the M color components; for example, the first color component may be the Y component, the U component, or the V component.
• Step S204: determine whether the difference z(i,j) is within a preset first threshold range, and obtain a determination result;
• Step S205: when the determination result indicates that at least one of the differences z(i,j) is within the preset first threshold range, select an image block with Q1(i,j) as its center point;
• the image block is, for example, one of the following: a 3×3, 5×5, or 7×7 block of pixels;
• Step S206: determine a second set of pixel values from the image block, where the second set of pixel values consists of the pixels in the image block whose pixel values are within a preset second threshold range;
• Step S207: arithmetically average Q1(i,j) and the second set of pixel values to obtain the pixel value Q(i,j) of the first pixel on the first color component, and in the same way obtain the pixel values of all remaining pixels on the first color component;
• Step S208: by analogy with the manner of obtaining pixel values on the first color component, obtain the pixel values of all pixels on the color components other than the first color component;
• Step S209: compose the pixel values of all pixels on the M color components into one frame of image.
  • FIG. 2 is a schematic flowchart of an image processing method according to Embodiment 3 of the present invention. As shown in FIG. 2, the method includes the following steps:
• Step S301: obtain YUV data of four consecutive frames of images from the memory of the mobile phone; image storage in the mobile phone is based on the YUV model. YUV data is taken as an example here; other color models are handled similarly to the YUV model and are therefore not described again.
• YUV is a color encoding method. A three-tube color camera or a color CCD camera is usually used for image acquisition; the obtained color image signals are color-separated and separately amplified to obtain red, green, and blue (RGB) signals, which are then passed through a matrix conversion circuit to obtain the luminance signal Y and the two color-difference signals B−Y (i.e., U) and R−Y (i.e., V). Finally, the transmitting end encodes the luminance and color-difference signals separately and sends them out over the same channel. This representation of color is the so-called YUV color space representation. The importance of the YUV color space is that its luminance signal Y and chrominance signals U, V are separated.
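As an illustration of the conversion just described, the sketch below computes Y and the color-difference signals U = B − Y and V = R − Y for one RGB triple. The BT.601 luminance weights are an assumption for the example; the text does not fix the coefficients of the matrix conversion circuit.

```python
def rgb_to_yuv(r, g, b):
    """Illustrative RGB -> (Y, U, V) conversion using assumed BT.601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance signal Y
    u = b - y                              # color-difference signal B - Y
    v = r - y                              # color-difference signal R - Y
    return y, u, v

# Pure red: low luminance share, large positive V (R - Y) component
print(rgb_to_yuv(255, 0, 0))
```

For a neutral gray (equal R, G, B) the color-difference signals vanish, which is exactly the separation of luminance and chrominance the text emphasizes.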
• Step S302: perform YUV denoising using the four frames of images. Traditional denoising algorithms include mean, median, Gaussian, and bilateral filtering, as well as 3D noise reduction with motion estimation. Bilateral filtering can preserve edges while removing noise, but when the noise is relatively large the filtering effect is not obvious, and the amount of computation is large; mean, median, and Gaussian filtering can filter out noise well but easily blur edges, and as the filter window grows, real-time processing becomes difficult to achieve. Existing 3D noise reduction with motion estimation first requires motion estimation and then 3D denoising, while motion estimation requires a large amount of computation and cannot guarantee 100% accuracy. The 3D denoising algorithm used in this step effectively avoids motion estimation, keeping the computation small and the algorithm easy to understand.
• Step S303: output the YUV data obtained by denoising as the result image.
  • Step S302 includes:
• Step S321: denote the obtained four consecutive YUV frames as img1, img2, img3, and img4, respectively.
• Step S322: use any one of the four frames as the reference image. Here img1 is used as the reference image, and img1 is then denoised using img2, img3, and img4.
• The essence of 3D noise reduction is that the same pixel is polluted by different noise at different times, and the noise generally obeys a Gaussian distribution with zero mean. Therefore, if the values of a pixel at different times are averaged, the result tends toward the true value of the pixel.
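This zero-mean property can be checked numerically. The sketch below averages many noisy observations of one pixel; the true value, noise level, and sample count are arbitrary assumptions for illustration.

```python
import random

random.seed(0)
true_value = 120.0   # assumed true pixel intensity
sigma = 10.0         # assumed noise standard deviation

# Observations of the same pixel at different times, each corrupted
# by zero-mean Gaussian noise
samples = [true_value + random.gauss(0.0, sigma) for _ in range(1000)]
estimate = sum(samples) / len(samples)

# The estimation error shrinks roughly like sigma / sqrt(n)
print(round(estimate, 1))  # close to 120
```

With only a handful of frames the averaging is coarser, which is why the method first discards values that differ too much from the reference (likely motion, not noise).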
• When a mobile phone takes photographs, there will be more or less movement between the four frames; for example, the first pixel p1 at point (3, 3) of img1 may be located at a slightly different position in img2.
• The noise reduction method proposed in this embodiment bypasses traditional motion estimation, adopting a new judgment method in its place and denoising according to the judgment result.
• Step S323: traverse the Y-component, U-component, and V-component information of the four frames respectively, evaluate each corresponding pixel of the four frames, and then determine the filtering mode. The Y component is taken as an example below; for the U component and the V component, refer to the Y component.
• The pixel (i, j) is taken as an example below. After the pixel (i, j) has been processed, the entire image data can be traversed and every pixel can be denoised in the same manner as pixel (i, j); the final denoising result can then be used as the output. The processing of pixel (i, j) is as follows:
• The absolute values of the differences between the reference-frame pixel Y1(i,j) and Y2(i,j), Y3(i,j), and Y4(i,j) give three values. If only two of them are within a certain range, the value of the current pixel in the result image equals the average of the corresponding pixels of three frames: the two frames within the range and the reference frame.
• For example, if the absolute values of the differences between Y1(i,j) and both Y2(i,j) and Y3(i,j) are within the range, while the absolute value of the difference between Y1(i,j) and Y4(i,j) is not, then the value of the current pixel in the result image equals the mean of Y1(i,j), Y2(i,j), and Y3(i,j). If instead it is a different pair, such as Y2(i,j) and Y4(i,j), that falls within the range, the processing is analogous.
• If only one of them is within the range, the value of the current pixel in the result image equals the average of the corresponding pixels of two frames: the frame within the range and the reference frame.
• For example, if the absolute value of the difference between Y1(i,j) and Y2(i,j) is within the range, while the absolute values of the differences between Y1(i,j) and both Y3(i,j) and Y4(i,j) are not, then the value of the current pixel in the result image equals the mean of Y1(i,j) and Y2(i,j). The other cases are similar.
• If none of the differences is within the range, Y1(i,j) is filtered by the following method: a region of a certain size around Y1(i,j), such as 3×3, 5×5, or 7×7, is selected.
• Taking a 3×3 region as an example, P22 is the pixel Y1(i,j) to be filtered, and the other values surround it. Among the nine values, those whose absolute difference from the value of P22 is within a certain range participate in the filtering, and the average of these values is taken as the final filtered value. As shown in FIG. 4, the point to be processed is 5; if the absolute-difference range is set to 2, i.e., values whose absolute difference from 5 is less than or equal to 2 participate, then the five values 3, 4, 5, 6, and 7 take part in the calculation, and the mean of these five numbers is the final result.
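The FIG. 4 example can be reproduced directly. In the sketch below, the full set of neighborhood values is assumed for illustration; only the center value 5, the threshold 2, and the participating values 3–7 come from the text.

```python
import numpy as np

block = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])  # assumed 3x3 neighborhood; center P22 = 5
center = block[1, 1]
threshold = 2                  # absolute-difference range from the text

# Keep the values whose absolute difference from the center is within the range
selected = block[np.abs(block - center) <= threshold]

print(sorted(selected.tolist()))  # the five participating values: [3, 4, 5, 6, 7]
print(selected.mean())            # final filtered value: 5.0
```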
• The technical solution of the embodiments of the present invention can eliminate the shortcomings of existing methods, such as the limited denoising effect and the loss of edge details, so that the finally captured picture has a better visual effect.
• YUV denoising is performed using four frames of images, and the YUV data obtained by denoising is output as the result image.
• The focus is on averaging the values of a pixel at different times, bypassing traditional motion estimation and thereby avoiding its problems: not only is its computational load large, but motion estimation also deviates in scenes with many details.
• 3D noise reduction is performed by comparing the corresponding pixel differences of each frame. Compared with existing mobile-phone denoising methods and 3D noise reduction methods, this method requires no motion estimation, involves a small amount of computation, has low cost, uses an easy-to-understand algorithm, and supports real-time photo processing on the mobile phone.
• An embodiment of the present invention provides an image processing apparatus. Each unit included in the apparatus and each module included in each unit may be implemented by a processor in a terminal, and may also be implemented by a logic circuit. In the course of implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
• FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
• The apparatus 400 includes an acquiring unit 401, a first determining unit 402, a comparing unit 403, a second determining unit 404, a first processing unit 405, a second processing unit 406, and a component unit 407, where:
• the acquiring unit 401 is configured to acquire N consecutive frames of images, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1;
  • the first determining unit 402 is configured to determine one frame from the N frames of images as a reference frame, and determine (N-1) frame images other than the reference frame as comparison frames;
• the comparing unit 403 is configured to compare the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component with the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame to obtain a comparison result; the first pixel is the pixel with coordinates (i, j) in the N frames of images, and the first color component is any one of the M color components;
• the second determining unit 404 is configured to determine, according to the comparison result, the pixel value of the first pixel on the first color component;
  • the first processing unit 405 is configured to obtain, according to a processing procedure of the comparing unit and the second determining unit, pixel values of all pixels remaining on the first color component;
• the second processing unit 406 is configured to obtain, according to the processing procedures of the comparing unit, the second determining unit, and the first processing unit, the pixel values of all pixels on the color components other than the first color component;
• the component unit 407 is configured to compose the pixel values of all pixels on the first color component and the pixel values of all pixels on the other color components into one frame of image.
  • the apparatus further includes a photographing unit configured to continuously capture N frames of images for the same object.
• An embodiment of the present invention provides an image processing apparatus. Each unit included in the apparatus and each module included in each unit may be implemented by a processor in a terminal, and may also be implemented by a logic circuit. In the course of implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
• The apparatus 400 includes an acquiring unit 401, a first determining unit 402, a comparing unit 403, a second determining unit 404, a first processing unit 405, a second processing unit 406, and a component unit 407. The comparing unit 403 includes a difference module 431 and a determining module 432, and the second determining unit 404 includes a first determining module 441 and a first averaging module 442, where:
• the acquiring unit 401 is configured to acquire N consecutive frames of images, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1;
  • the first determining unit 402 is configured to determine one frame from the N frames of images as a reference frame, and determine (N-1) frame images other than the reference frame as comparison frames;
• the difference module 431 is configured to subtract the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame from the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component to obtain the difference z(i,j);
• the first pixel is the pixel with coordinates (i, j) in the N frames of images, and the first color component is any one of the M color components, where i and j are integers greater than or equal to 0;
  • the determining module 432 is configured to determine whether the difference z(i, j) is within a preset first threshold range, and obtain a determination result.
• the first determining module 441 is configured to determine a first set of pixel values when the determination result indicates that at least one of the differences z(i,j) is within the preset first threshold range, where the first set of pixel values consists of the values Q0(i,j) whose corresponding differences z(i,j) are within the preset first threshold range;
• the first averaging module 442 is configured to arithmetically average Q1(i,j) and the first set of pixel values to obtain the pixel value Q(i,j) of the first pixel on the first color component;
• the first processing unit 405 is configured to obtain, according to the processing procedures of each module in the comparing unit and the second determining unit, the pixel values of all pixels remaining on the first color component;
• the second processing unit 406 is configured to obtain, according to the processing procedures of each module in the comparing unit, the second determining unit, and the first processing unit, the pixel values of all pixels on the color components other than the first color component;
• the component unit 407 is configured to compose the pixel values of all pixels on the first color component and the pixel values of all pixels on the other color components into one frame of image.
  • the apparatus further includes a photographing unit configured to continuously capture N frames of images for the same object.
• An embodiment of the present invention provides an image processing apparatus. Each unit included in the apparatus and each module included in each unit may be implemented by a processor in a terminal, and may also be implemented by a logic circuit. In the course of implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
• The apparatus 400 includes an acquiring unit 401, a first determining unit 402, a comparing unit 403, a second determining unit 404, a first processing unit 405, a second processing unit 406, and a component unit 407. The comparing unit 403 includes a difference module 431 and a determining module 432, and the second determining unit 404 includes a selecting module 443, a second determining module 444, and a second averaging module 445, where:
• the acquiring unit 401 is configured to acquire N consecutive frames of images, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1;
  • the first determining unit 402 is configured to determine one frame from the N frames of images as a reference frame, and determine (N-1) frame images other than the reference frame as comparison frames;
• the difference module 431 is configured to subtract the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame from the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component to obtain the difference z(i,j);
  • the first pixel point is a pixel point having a coordinate of (i, j) in the N frame image, and the first color component is any one of the M color components;
  • the determining module 432 is configured to determine whether the difference z(i, j) is within a preset first threshold range, and obtain a determination result.
• the selecting module 443 is configured to, when the determination result indicates that at least one of the differences z(i,j) is within the preset first threshold range, select an image block with Q1(i,j) as its center point;
• the second determining module 444 is configured to determine a second set of pixel values from the image block, where the second set of pixel values consists of the pixels in the image block whose pixel values are within a preset second threshold range;
• the second averaging module 445 is configured to arithmetically average Q1(i,j) and the second set of pixel values to obtain the pixel value Q(i,j) of the first pixel on the first color component;
• the first processing unit 405 is configured to obtain, according to the processing procedures of each module in the comparing unit and the second determining unit, the pixel values of all pixels remaining on the first color component;
• the second processing unit 406 is configured to obtain, according to the processing procedures of each module in the comparing unit, the second determining unit, and the first processing unit, the pixel values of all pixels on the color components other than the first color component;
• the component unit 407 is configured to compose the pixel values of all pixels on the first color component and the pixel values of all pixels on the other color components into one frame of image.
  • the apparatus further includes a photographing unit configured to continuously capture N frames of images for the same object.
• the image block is, for example, one of the following: a 3×3, 5×5, or 7×7 block of pixels;
  • the N is 4.
• An embodiment of the present invention further provides a terminal, where the terminal includes an image acquisition unit, a memory, and a processor, where:
• the image acquisition unit (such as a camera) is configured to continuously capture N frames of images of the same object to obtain N consecutive frames of images;
• the memory is configured to store the N consecutive frames of images;
• the processor is configured to: invoke the image acquisition unit to continuously capture N frames of images of the same object; obtain the N consecutive frames of images from the memory, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1; determine one frame from the N frames of images as a reference frame, and determine the (N-1) frames of images other than the reference frame as comparison frames; compare the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component with the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i, j) in the N frames of images and the first color component is any one of the M color components; determine, according to the comparison result, the pixel value of the first pixel on the first color component, and thereby obtain the pixel values of all remaining pixels on the first color component; by analogy with the manner of obtaining pixel values on the first color component, obtain the pixel values of all pixels on the color components other than the first color component; and compose the pixel values of all pixels on the M color components into one frame of image.
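Putting the processor's steps together, the following compact sketch covers the whole-frame temporal path for one color component. The threshold and frame data are illustrative assumptions, and the spatial fallback of the second embodiment is omitted for brevity.

```python
import numpy as np

def denoise_component(frames, t1=10.0):
    """Denoise one color component (e.g. Y) from N frames.
    frames[0] is the reference frame; the rest are comparison frames.
    t1 is the preset first threshold (assumed value)."""
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    count = np.ones_like(ref)
    for cmp_frame in frames[1:]:
        cmp_frame = cmp_frame.astype(np.float64)
        # Per-pixel mask: difference within the first threshold range
        within = np.abs(ref - cmp_frame) <= t1
        acc[within] += cmp_frame[within]
        count[within] += 1
    # Per-pixel arithmetic mean of the reference value and the accepted values
    return acc / count

# The same procedure is applied to each of the M components (e.g. Y, U, V),
# and the per-component results together form one output frame.
```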
• When the image processing method described above is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
• Based on such an understanding, the technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present invention.
• The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • embodiments of the invention are not limited to any specific combination of hardware and software.
• An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing the image processing method in the embodiments of the present invention.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
• The division of the units is only a division by logical function, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
• The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
• The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
• In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve as a single unit separately, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
• The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
• Alternatively, if the above-described integrated unit of the present invention is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
• Based on such an understanding, the technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a magnetic disk, or an optical disk.
• In the embodiments of the present invention, N consecutive frames of images are acquired, where N is an integer greater than or equal to 3; the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1; one frame is determined from the N frames of images as a reference frame, and the (N-1) frames of images other than the reference frame are determined as comparison frames; the pixel value Q1(i,j) of the first pixel of the reference frame on the first color component is compared with the pixel value Q0(i,j) of the first pixel on the first color component in the comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i, j) in the N frames of images and the first color component is any one of the M color components; the pixel value of the first pixel on the first color component is determined according to the comparison result, and thereby the pixel values of all remaining pixels on the first color component are obtained; the pixel values of all pixels on the color components other than the first color component are obtained in the same manner; and the pixel values of all pixels on the M color components are composed into one frame of image. In this way, the denoising effect can be improved and drawbacks such as the tendency to lose edge details can be overcome, so that the finally captured picture has a better visual effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an image processing method and apparatus, a terminal, and a storage medium. The method includes: acquiring N consecutive frames of images, where N is an integer greater than or equal to 3; determining one frame from the N frames of images as a reference frame, and determining the (N-1) frames of images other than the reference frame as comparison frames; comparing the pixel value Q1(i,j) of a first pixel of the reference frame on a first color component with the pixel value Q0(i,j) of the first pixel on the first color component in a comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i,j) in the N frames of images, and the first color component is any one of the M color components; determining, according to the comparison result, the pixel value of the first pixel on the first color component, and thereby obtaining the pixel values of all remaining pixels on the first color component; obtaining the pixel values of all pixels on the color components other than the first color component in the same manner as the pixel values on the first color component are obtained; and composing the pixel values of all pixels on the M color components into one frame of image.

Description

Image Processing Method and Apparatus, Terminal, and Storage Medium

Technical Field

The present invention relates to image processing technologies, and in particular to an image processing method and apparatus, a terminal, and a storage medium.

Background

Nowadays, science and technology advance rapidly, and the photographing functions of electronic devices such as mobile phones and tablet computers are increasingly sophisticated. Because such devices are small and easy to carry, more and more people prefer to use the camera function of an electronic device such as a mobile phone to record the beautiful moments in life, and people's requirements on the quality of images captured by mobile phones are also increasingly high. Annoyingly, however, in scenes with insufficient brightness, images often carry some noise (luminance noise plus chrominance noise), and the lower the brightness, the greater the noise. The presence of such noise seriously degrades the visual quality of the image.

Summary

In view of this, to solve at least one problem in the prior art, embodiments of the present invention provide an image processing method and apparatus, a terminal, and a storage medium, which can improve the denoising effect and overcome drawbacks such as the tendency to lose edge details, so that the finally captured picture has a better visual effect.
The technical solutions of the embodiments of the present invention are implemented as follows:

In a first aspect, an embodiment of the present invention provides an image processing method, the method including:

acquiring N consecutive frames of images, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1;

determining one frame from the N frames of images as a reference frame, and determining the (N-1) frames of images other than the reference frame as comparison frames;

comparing the pixel value Q1(i,j) of a first pixel of the reference frame on a first color component with the pixel value Q0(i,j) of the first pixel on the first color component in a comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i, j) in the N frames of images, and the first color component is any one of the M color components;

determining, according to the comparison result, the pixel value of the first pixel on the first color component, and thereby obtaining the pixel values of all remaining pixels on the first color component;

obtaining, in the same manner as the pixel values on the first color component are obtained, the pixel values of all pixels on the color components other than the first color component;

composing the pixel values of all pixels on the M color components into one frame of image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, the apparatus including an acquiring unit, a first determining unit, a comparing unit, a second determining unit, a first processing unit, a second processing unit, and a component unit, where:

the acquiring unit is configured to acquire N consecutive frames of images, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1;

the first determining unit is configured to determine one frame from the N frames of images as a reference frame, and determine the (N-1) frames of images other than the reference frame as comparison frames;

the comparing unit is configured to compare the pixel value Q1(i,j) of a first pixel of the reference frame on a first color component with the pixel value Q0(i,j) of the first pixel on the first color component in a comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i, j) in the N frames of images, and the first color component is any one of the M color components;

the second determining unit is configured to determine, according to the comparison result, the pixel value of the first pixel on the first color component;

the first processing unit is configured to obtain, according to the processing procedures of the comparing unit and the second determining unit, the pixel values of all remaining pixels on the first color component;

the second processing unit is configured to obtain, according to the processing procedures of the comparing unit, the second determining unit, and the first processing unit, the pixel values of all pixels on the color components other than the first color component;

the component unit is configured to compose the pixel values of all pixels on the first color component and the pixel values of all pixels on the other color components into one frame of image.
In a third aspect, an embodiment of the present invention provides a terminal, the terminal including a memory and a processor, where:

the memory is configured to store N consecutive frames of images;

the processor is configured to acquire N consecutive frames of images, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1; determine one frame from the N frames of images as a reference frame, and determine the (N-1) frames of images other than the reference frame as comparison frames; compare the pixel value Q1(i,j) of a first pixel of the reference frame on a first color component with the pixel value Q0(i,j) of the first pixel on the first color component in a comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i, j) in the N frames of images, and the first color component is any one of the M color components; determine, according to the comparison result, the pixel value of the first pixel on the first color component, and thereby obtain the pixel values of all remaining pixels on the first color component; by analogy with the manner of obtaining pixel values on the first color component, obtain the pixel values of all pixels on the color components other than the first color component; and compose the pixel values of all pixels on the M color components into one frame of image.

In a fourth aspect, an embodiment of the present invention provides a computer storage medium storing computer-executable instructions for performing the image processing method provided by the embodiment of the first aspect of the present invention.

Embodiments of the present invention provide an image processing method and apparatus, a terminal, and a storage medium, in which N consecutive frames of images are acquired, where N is an integer greater than or equal to 3, the data of the N frames of images is represented by a color model, the color model includes M color components, and M is an integer greater than or equal to 1; one frame is determined from the N frames of images as a reference frame, and the (N-1) frames of images other than the reference frame are determined as comparison frames; the pixel value Q1(i,j) of a first pixel of the reference frame on a first color component is compared with the pixel value Q0(i,j) of the first pixel on the first color component in a comparison frame to obtain a comparison result, where the first pixel is the pixel with coordinates (i, j) in the N frames of images, and the first color component is any one of the M color components; the pixel value of the first pixel on the first color component is determined according to the comparison result, and thereby the pixel values of all remaining pixels on the first color component are obtained; the pixel values of all pixels on the color components other than the first color component are obtained in the same manner; and the pixel values of all pixels on the M color components are composed into one frame of image. In this way, the denoising effect can be improved and drawbacks such as the tendency to lose edge details can be overcome, so that the finally captured picture has a better visual effect.
Brief Description of the Drawings

FIG. 1-1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing embodiments of the present invention;

FIG. 1-2 is a schematic diagram of the composition of the photographing lens in the mobile terminal shown in FIG. 1-1;

FIG. 1-3 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention;

FIG. 2 is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention;

FIG. 3 is a schematic diagram of a pixel block according to Embodiment 3 of the present invention;

FIG. 4 shows the pixel values of the pixel block shown in FIG. 3;

FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
Detailed Description of the Embodiments

It should be understood that the specific embodiments described herein are only intended to explain the technical solutions of the present invention and are not intended to limit the protection scope of the present invention. Mobile terminals implementing the embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used merely to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, except for elements specifically used for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
FIG. 1-1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing embodiments of the present invention. As shown in FIG. 1-1, the mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. FIG. 1-1 shows a mobile terminal having various components, but it should be understood that implementing all of the illustrated components is not required; more or fewer components may be implemented instead. The elements of the mobile terminal will be described in detail below.

The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication module 112, a wireless Internet module 113, and a short-range communication module 114.

The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.

The wireless Internet module 113 supports wireless Internet access of the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include WLAN (wireless LAN) (Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.

The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and so on.

The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still images or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal. The microphone 122 may receive sound (audio data) in operation modes such as a phone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the case of the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或 电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为“识别装置”)可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152等等。
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换为音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括扬声器、蜂鸣器等等。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括用于再现或回放多媒体数据的多媒体模块181,多媒体模块181可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图像绘制输入识别为字符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,已经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。然而,本发明能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
本发明实施例中所述移动终端还包括摄影镜头,参见图1-2所示,摄影镜头1211由用于形成被摄体像的多个光学镜头构成,为单焦点镜头或变焦镜头。摄影镜头1211在镜头驱动器1221的控制下能够在光轴方向上移动,镜头驱动器1221根据来自镜头驱动控制电路1222的控制信号,控制摄影镜头1211的焦点位置,在变焦镜头的情况下,也可控制焦点距离。镜头驱动控制电路1222按照来自微型计算机1217的控制命令进行镜头驱动器1221的驱动控制。
在摄影镜头1211的光轴上、由摄影镜头1211形成的被摄体像的位置附近配置有摄像元件1212。摄像元件1212用于对被摄体像摄像并取得摄像图像数据。在摄像元件1212上二维且呈矩阵状配置有构成各像素的光电二极管。各光电二极管产生与受光量对应的光电转换电流,该光电转换电流由与各光电二极管连接的电容器进行电荷蓄积。各像素的前表面配置有拜耳排列的RGB滤色器。
摄像元件1212与摄像电路1213连接,该摄像电路1213在摄像元件1212中进行电荷蓄积控制和图像信号读出控制,对该读出的图像信号(模拟图像信号)降低重置噪声后进行波形整形,进而进行增益提高等以成为适当的信号电平。摄像电路1213与A/D转换器1214连接,该A/D转换器1214对模拟图像信号进行模数转换,向总线1227输出数字图像信号(以下称之为图像数据)。
总线1227是用于传送在相机的内部读出或生成的各种数据的传送路径。在总线1227连接着上述A/D转换器1214,此外还连接着图像处理器1215、JPEG处理器1216、微型计算机1217、SDRAM(Synchronous Dynamic random access memory,同步动态随机存取内存)1218、存储器接口(以下称之为存储器I/F)1219、LCD(Liquid Crystal Display,液晶显示器)驱动器1220。
图像处理器1215对基于摄像元件1212的输出的图像数据进行OB相减处理、白平衡调整、颜色矩阵运算、伽马转换、色差信号处理、噪声去除处理、同时化处理、边缘处理等各种图像处理。JPEG处理器1216在将图像数据记录于记录介质1225时,按照JPEG压缩方式压缩从SDRAM1218读出的图像数据。此外,JPEG处理器1216为了进行图像再现显示而进行JPEG图像数据的解压缩。进行解压缩时,读出记录在记录介质1225中的文件,在JPEG处理器1216中实施了解压缩处理后,将解压缩的图像数据暂时存储于SDRAM1218中并在LCD1226上进行显示。另外,在本实施方式中,作为图像压缩解压缩方式采用的是JPEG方式,然而压缩解压缩方式不限于此,当然可以采用MPEG、TIFF、H.264等其他的压缩解压缩方式。
微型计算机1217发挥作为该相机整体的控制部的功能,统一控制相机的各种处理序列。微型计算机1217连接着操作单元1223和闪存1224。
操作单元1223包括但不限于实体按键或者虚拟按键,该实体或虚拟按键可以为电源按钮、拍照键、编辑按键、动态图像按钮、再现按钮、菜单按钮、十字键、OK按钮、删除按钮、放大按钮等各种输入按钮和各种输入键等操作控件,检测这些操作控件的操作状态。
将检测结果向微型计算机1217输出。此外,在作为显示器的LCD1226的前表面设有触摸面板,检测用户的触摸位置,将该触摸位置向微型计算机1217输出。微型计算机1217根据来自操作单元1223的操作位置的检测结果,执行与用户的操作对应的各种处理序列。
闪存1224存储用于执行微型计算机1217的各种处理序列的程序。微型计算机1217根据该程序进行相机整体的控制。此外,闪存1224存储相机的各种调整值,微型计算机1217读出调整值,按照该调整值进行相机的控制。
SDRAM1218是用于对图像数据等进行暂时存储的可电改写的易失性存储器。该SDRAM1218暂时存储从A/D转换器1214输出的图像数据和在图像处理器1215、JPEG处理器1216等中进行了处理后的图像数据。
存储器接口1219与记录介质1225连接,进行将图像数据和附加在图像数据中的文件头等数据写入记录介质1225和从记录介质1225中读出的控制。记录介质1225例如为能够在相机主体上自由拆装的存储器卡等记录介质,然而不限于此,也可以是内置在相机主体中的硬盘等。
LCD驱动器1220与LCD1226连接,将由图像处理器1215处理后的图像数据存储于SDRAM1218,需要显示时,读取SDRAM1218存储的图像数据并在LCD1226上显示;或者,将JPEG处理器1216压缩过的图像数据存储于SDRAM1218,在需要显示时,JPEG处理器1216读取SDRAM1218中压缩过的图像数据,再进行解压缩,将解压缩后的图像数据通过LCD1226进行显示。
LCD1226配置在相机主体的背面进行图像显示。显示面板不限于LCD,也可以采用有机EL等各种显示面板。
基于上述移动终端硬件结构以及摄影镜头,提出本发明方法各个实施例。为了解决背景技术中存在的技术问题,本发明实施例提供一种图像处理方法,在该图像处理方法中采用了新的三维(3D)滤波的方法,用于目前的电子设备如手机拍照系统,不仅能很好地去除噪声,而且也能很好地保持图像边缘,极大地提高了照片的质量。三维(3D)滤波方法实际上是基于3D(帧间)的图像处理方法,通过对比各帧图像相对应位置的像素差值,来判断是否进行3D降噪;相较于现有的手机去噪方法和3D降噪方法,本发明实施例不需要对图像进行运动估计,具有计算量小、成本低和算法易于理解的优点,并且能在手机上进行实时拍照处理。
下面结合附图和具体实施例对本发明的技术方案进一步详细阐述。
实施例一
为了解决前述的技术问题,本发明实施例提供一种图像处理方法,该图像处理方法应用于终端,该方法所实现的功能可以通过终端中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,该终端至少包括处理器和存储介质。
图1-3为本发明实施例一图像处理方法的实现流程示意图,如图1-3所示,该图像处理方法包括:
步骤S101,获取连续的N帧图像;
这里,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
这里,所述终端包括手机、平板电脑等电子设备;
这里,所述N帧图像是针对同一对象所拍摄的N帧,在此隐含的条件是环境条件相差不大,例如是在相同的地点针对相同的对象拍摄。所述N帧图像可以是其他电子设备拍摄后存储在该终端上的,当然还可以是该终端自己的图像采集单元如摄像头所拍摄的。因此,在步骤S101之前,所述方法还包括:针对同一对象,连续拍摄N帧图像。
步骤S102,从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
步骤S103,将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;
这里,所述第一像素点(i,j)为任意一个像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量,以YUV色彩模型为例,第一色彩分量可以为Y分量或U分量或V分量。
步骤S104,根据所述对比结果确定在第一色彩分量上第一像素点的像素值,依此得到第一色彩分量上剩余的全部像素点的像素值;
步骤S105,以得到第一色彩分量上像素值的方式类推,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
步骤S106,将所述M个色彩分量上全部像素点的像素值组成一帧图像。
本发明实施例中,步骤S103,所述将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果,包括:
步骤S131,将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)进行做差,得到差值z(i,j);
步骤S132,判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
步骤S133,当所述判断结果表明至少有一个所述差值z(i,j)在预设的第一阈值范围内时,确定第一像素值集合;
这里,所述第一像素值集合为所述差值z(i,j)在预设的第一阈值范围内时对应的Q0(i,j);所述第一像素值集合中至少包括所述N帧图像中一帧图像的像素的像素值。
步骤S134,将所述Q1(i,j)与第一像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
本发明实施例中,作为一种优选的实施方式,所述N为4。当N等于4时,由于4正好是2的平方,做算术平均时只需将求和结果右移两位即可,计算比较容易。当N=8时也可以带来这种计算上的简便,但是8帧图像的比较是比较麻烦的,同时计算量也会成倍增加。
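上述步骤S131~S134的帧间对比与平均逻辑,可以用下面的Python代码示意。其中函数名、阈值与各像素取值均为说明而设的假设,并非本发明限定的实现;对三个差值均不在阈值内的情形,实施例一未作规定,代码中示意性地保留基准值。

```python
def temporal_denoise_pixel(q1, neighbors, threshold):
    # 基准帧像素值 q1 与各对比帧对应像素值做差,差值绝对值在阈值内的才参与平均
    matched = [q for q in neighbors if abs(q1 - q) <= threshold]
    if not matched:        # 实施例一未规定的情形,这里示意性地保留基准值
        return q1
    return (q1 + sum(matched)) / (1 + len(matched))

print(temporal_denoise_pixel(100, [102, 98, 150], 5))  # (100+102+98)/3 = 100.0

# N=4 且三帧都在阈值内时,整数求均值可用右移两位(即除以4)实现:
print((100 + 102 + 98 + 104) >> 2)  # 101
```

示例中150与基准值100相差超过阈值5,因而被排除在平均之外,体现了“仅对匹配像素取均值”的思路。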
实施例二
为了解决前述的技术问题,本发明实施例提供一种图像处理方法,该图像处理方法应用于终端,该方法所实现的功能可以通过终端中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,该终端至少包括处理器和存储介质。
该图像处理方法包括:
步骤S201,获取连续的N帧图像;
这里,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
这里,所述终端包括手机、平板电脑等电子设备;
这里,所述N帧图像是针对同一对象所拍摄的N帧,在此隐含的条件是环境条件相差不大,例如是在相同的地点针对相同的对象拍摄。所述N帧图像可以是其他电子设备拍摄后存储在该终端上的,当然还可以是该终端自己的图像采集单元如摄像头所拍摄的。因此,在步骤S201之前,所述方法还包括:针对同一对象,连续拍摄N帧图像。
步骤S202,从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
步骤S203,将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)进行做差,得到差值z(i,j);
这里,所述第一像素点(i,j)为任意一个像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量,以YUV色彩模型为例,第一色彩分量可以为Y分量或U分量或V分量。
步骤S204,判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
步骤S205,当所述判断结果表明没有一个所述差值z(i,j)在预设的第一阈值范围内时,将Q1(i,j)作为中心点选取一图像块;
这里,所述图像块为以下之一:
3个像素×3个像素、5个像素×5个像素、7个像素×7个像素、9个像素×9个像素。
步骤S206,从所述图像块内确定第二像素值集合,所述第二像素值集合为图像块内与Q1(i,j)的差值的绝对值在预设的第二阈值范围内的像素点的像素值;
步骤S207,将所述Q1(i,j)与第二像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j),依此得到第一色彩分量上剩余的全部像素点的像素值;
步骤S208,以得到第一色彩分量上像素值的方式类推,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
步骤S209,将所述M个色彩分量上全部像素点的像素值组成一帧图像。
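实施例二中步骤S205~S207的块内筛选与平均,可用下面的Python代码示意。block为以待滤波像素为中心的图像块展平后的像素值列表(含中心本身),各数值与阈值均为说明用的假设:

```python
def block_denoise_pixel(q1, block, t2):
    # block 为以 Q1(i,j) 为中心的图像块展平后的像素值列表(含中心本身);
    # 取与 q1 差值绝对值在第二阈值 t2 内的像素值求算术平均
    second_set = [v for v in block if abs(v - q1) <= t2]
    return sum(second_set) / len(second_set)

print(block_denoise_pixel(100, [96, 97, 120, 99, 100, 101, 130, 102, 103], 2))  # 100.5
```

由于中心值与自身的差值为零,第二像素值集合必然包含Q1(i,j)本身,与图像块内离群的大差值像素(如示例中的120、130)被排除在外。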
实施例三
基于前述的实施例,本发明实施例提供一种图像处理方法,图2为本发明实施例三图像处理方法的实现流程示意图,如图2所示,该方法包括以下步骤:
步骤S301,从手机内存中获取连续的四帧图像YUV数据;
这里,一般来说,手机中图像的存储是采用YUV模型的,本发明实施例中以YUV数据为例,对于其他的色彩模型,与YUV模型类似,因此不再赘述。YUV数据是一种颜色编码方法,在现代彩色电视系统中,通常采用三管彩色摄影机或彩色CCD摄影机进行取像,然后把取得的彩色图像信号经分色、分别放大校正后得到红绿蓝(RGB),再经过矩阵变换电路得到亮度信号Y和两个色差信号B-Y(即U)、R-Y(即V),最后发送端将亮度和色差三个信号分别进行编码,用同一信道发送出去。这种色彩的表示方法就是所谓的YUV色彩空间表示。采用YUV色彩空间的重要性是它的亮度信号Y和色度信号U、V是分离的。
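作为参考,上述由RGB求亮度Y与色差U、V的矩阵变换,可以用下面的Python代码示意。亮度系数近似采用BT.601,U、V以128为中心;具体系数因标准与实现而异,此处仅为假设性的示意,并非本发明限定的换算方式:

```python
def rgb_to_yuv(r, g, b):
    # 亮度 Y 采用 BT.601 系数;U、V 为以 128 为中心的色差分量(示意用近似系数)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y) + 128   # 正比于 B-Y
    v = 0.877 * (r - y) + 128   # 正比于 R-Y
    return y, u, v

print(rgb_to_yuv(255, 255, 255))  # 纯白:亮度最大,两个色差分量居中
```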
这里,对手机图像YUV数据流,连续获取相邻四帧并锁定。
步骤S302,利用四帧图像进行YUV去噪;
这里,传统去噪算法包括均值、中值、高斯、双边滤波、带运动估计的3D降噪等等,其中双边滤波能在去除噪声的同时较好地保留边缘,但是在噪声比较大的情况下滤波效果不明显,并且计算量很大;在滤波窗口增大的情况下,很难做到实时处理;均值、中值和高斯滤波能较好地滤除噪声,但是很容易模糊掉边缘,并且随着滤波窗口的增大,也很难做到实时处理。现有的带运动估计的3D降噪,首先要进行运动估计,然后再进行3D去噪,而运动估计需要很大的计算量,并且也不能保证百分之百的准确性。在本步骤中所使用的3D去噪算法有效规避了运动估计,从而使得计算量小而且易于理解。
步骤S303,将去噪得到的YUV数据输出,作为结果图像;
下面对上述的步骤S302进行说明,步骤S302包括:
步骤S321,对得到的连续的四帧YUV图像,分别编码为img1、img2、img3和img4。
步骤S322,将四帧图像中任意一张作为基准图像,举例来说,以img1作为基准图像,然后利用img2、img3和img4对img1进行降噪处理。
这里,3D降噪的实质是:同一个像素点在不同时刻受噪声污染的程度不一样,而噪声一般服从均值为零的高斯分布,故对一个像素点不同时刻的值进行取平均操作,结果将趋向于该像素点的真实值。但由于手机进行拍照时,四帧图像之间会有或多或少的移动,例如第一像素p1在img1中位于点(3,3)处,而其在img2中可能就会是(3,4)或者(2,3)抑或其他位置,其在img3和img4中也一样,均有可能不在位置(3,3);这样,如果不在同一个位置就直接进行取平均操作,势必会造成图像边缘错位;而如果直接使用运动估计,不仅计算量大,而且在细节很多的场景中运动估计也会有偏差。本实施例下面提出的降噪方法绕过传统的运动估计,采用一种新的判断方法来替代运动估计,并根据判断结果进行去噪。
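帧间取平均能够抑制零均值高斯噪声的原理,可以用下面的Python代码做一个数值验证(各参数均为假设值):对同一真实值做N次独立观测后取均值,噪声标准差约降为单帧的1/√N,N=4时约为一半。

```python
import random

random.seed(0)
true_val, sigma, n_frames, trials = 100.0, 8.0, 4, 20000

def noisy():
    # 模拟受零均值高斯噪声污染的像素观测值
    return true_val + random.gauss(0.0, sigma)

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# 单帧观测误差与四帧均值误差的经验标准差
single = [noisy() - true_val for _ in range(trials)]
averaged = [sum(noisy() for _ in range(n_frames)) / n_frames - true_val
            for _ in range(trials)]

print(std(single), std(averaged))  # 后者约为前者的一半(即 1/sqrt(4))
```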
步骤S323,分别遍历四帧图像的Y分量、U分量和V分量信息,对四帧图像的每一个对应像素进行判断,然后决定滤波方式。
这里,由于这三个分量的计算过程都是类似的,因此在下面的实施例中,以Y分量为例进行说明,对于U分量和V分量,请参照Y分量即可。下面以像素点(i,j)为例进行说明,当像素点(i,j)处理完成之后,可以循环遍历整个图像数据,每一像素点都可以按照像素点(i,j)的方式进行去噪,将最终去噪结果作为输出即可。下面介绍像素点(i,j)的处理过程:
1)如果基准帧像素Y1(i,j)与Y2(i,j)、Y3(i,j)、Y4(i,j)的三个差值的绝对值均在某一范围内,则结果图像中当前像素点的值就等于四帧图像对应像素点的均值;其中,Y1(i,j)表示img1的像素点(i,j)的Y分量,Y2(i,j)表示img2的像素点(i,j)的Y分量,Y3(i,j)表示img3的像素点(i,j)的Y分量,Y4(i,j)表示img4的像素点(i,j)的Y分量。
2)如果基准帧像素Y1(i,j)与Y2(i,j)、Y3(i,j)、Y4(i,j)的三个差值的绝对值中,只有某两个在该范围内,则结果图像中当前像素点的值就等于在此范围内的两帧和基准帧这三帧对应像素点的均值。例如Y1(i,j)与Y2(i,j)、Y3(i,j)的差值绝对值在某个范围内,而Y1(i,j)与Y4(i,j)的差值绝对值不在该范围内,则结果图像中当前像素点的值就等于Y1(i,j)与Y2(i,j)、Y3(i,j)的均值。如果不在该范围内的是Y2(i,j)和Y3(i,j),处理方法一样。
3)如果基准帧像素Y1(i,j)与Y2(i,j)、Y3(i,j)、Y4(i,j)的三个差值的绝对值中,只有一个在该范围内,则结果图像中当前像素点的值就等于在此范围内的帧和基准帧这两帧对应像素点的均值。例如Y1(i,j)与Y2(i,j)的差值绝对值在某个范围内,而Y1(i,j)与Y3(i,j)、Y4(i,j)的差值绝对值不在该范围内,则结果图像中当前像素点的值就等于Y1(i,j)与Y2(i,j)的均值。其他情形类似。
4)如果基准帧像素Y1(i,j)与Y2(i,j)、Y3(i,j)、Y4(i,j)的三个差值的绝对值没有一个在预设范围内,则对Y1(i,j)用下面的方法进行滤波:选取Y1(i,j)周围一定大小的区域,如3×3或5×5或7×7等。
以3×3大小区域为例进行讲解:如图3所示,P22为待滤波像素点Y1(i,j),其他为其周围的值。选取这九个值中与P22差值的绝对值在某一范围内的值参与滤波,将这些值取平均作为最终的滤波值。如图4所示,图4中待处理点为5,如果将差值的绝对值范围设定为2,即差值绝对值小于等于2对应的数值参与计算,即3、4、5、6和7五个数值参与计算,然后求这5个数的均值作为最终结果。
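像素点(i,j)的上述四种情形可以整合为下面的Python示意代码,并复现图4的例子;代码中图像块的九个取值与各阈值均为说明用的假设,并非本发明限定的实现:

```python
def fuse_pixel(y1, others, t_frame, block, t_block):
    """y1 为基准帧像素值,others 为其余三帧对应像素值,
    block 为基准帧中以该点为中心的图像块(展平、含中心)。"""
    within = [y for y in others if abs(y1 - y) <= t_frame]
    if within:                       # 情形1)~3):与基准帧一起取均值
        return (y1 + sum(within)) / (1 + len(within))
    # 情形4):三帧均不在阈值内,退化为帧内图像块滤波
    selected = [v for v in block if abs(v - y1) <= t_block]
    return sum(selected) / len(selected)

# 复现图4的例子:待处理点为5、阈值为2时,3、4、5、6、7 参与计算
print(fuse_pixel(5, [50, 60, 70], 3, [1, 2, 3, 4, 5, 6, 7, 8, 9], 2))  # 5.0
```

对整帧去噪时,对Y、U、V三个分量分别按此函数逐像素遍历即可。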
从以上的描述可以看出,本发明实施例的技术方案能够消除现有的去噪效果差和容易造成边缘细节的损失等缺点,使最终拍照得到图片有更好的视觉效果。
本发明实施例中,先从手机内存中连续获取四帧图像YUV数据;利用四帧图像进行YUV去噪;将去噪得到的YUV数据输出,作为结果图像。其中,重点在于对一个像素点不同时刻的值进行取平均操作,绕过了传统的运动估计,避免了传统的运动估计方法“不仅计算量大,而且在细节很多的场景中运动估计也会有偏差”的问题。在去噪的过程中,通过对比各帧图像相对应位置像素差值,来判断是否进行3D降噪;相较于现有的手机去噪方法和3D降噪方法,本方法不需要对图像进行运动估计,计算量小、成本低、算法易于理解,并且能在手机上进行实时拍照处理。
实施例四
基于前述的实施例,本发明实施例提供一种图像处理装置,该装置所包括的各单元,以及各单元所包括的各模块,都可以通过终端中的处理器来实现,当然也可通过逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
图5为本发明实施例四图像处理装置的组成结构示意图,如图5所示,该装置400包括获取单元401、第一确定单元402、对比单元403、第二确定单元404、第一处理单元405、第二处理单元406和组成单元407,其中:
所述获取单元401,配置为获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
所述第一确定单元402,配置为从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
所述对比单元403,配置为将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;
所述第二确定单元404,配置为根据所述对比结果确定在第一色彩分量上第一像素点的像素值;
所述第一处理单元405,配置为根据所述对比单元和所述第二确定单元的处理过程,得到第一色彩分量上剩余的全部像素点的像素值;
所述第二处理单元406,配置为根据所述对比单元、所述第二确定单元和第一处理单元的处理过程,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
所述组成单元407,配置为将第一色彩分量上全部像素点的像素值和其他色彩分量上全部像素点的像素值组成一帧图像。
本发明实施例中,所述装置还包括拍摄单元,配置为针对同一对象,连续拍摄N帧图像。
实施例五
基于前述的实施例,本发明实施例提供一种图像处理装置,该装置所包括的各单元,以及各单元所包括的各模块,都可以通过终端中的处理器来实现,当然也可通过逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
该装置400包括获取单元401、第一确定单元402、对比单元403、第二确定单元404、第一处理单元405、第二处理单元406和组成单元407,其中所述对比单元403包括做差模块431和判断模块432,所述第二确定单元404包括第一确定模块441和第一平均模块442,其中:
所述获取单元401,配置为获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
所述第一确定单元402,配置为从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
所述做差模块431,配置为将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)进行做差,得到差值z(i,j);
这里,所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;其中i和j为大于等于0的整数;
所述判断模块432,配置为判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
所述第一确定模块441,配置为当所述判断结果表明至少有一个所述差值z(i,j)在预设的第一阈值范围内时,确定第一像素值集合,所述第一像素值集合为所述差值z(i,j)在预设的第一阈值范围内时对应的Q0(i,j);
所述第一平均模块442,配置为将所述Q1(i,j)与第一像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
所述第一处理单元405,配置为根据所述对比单元和所述第二确定单元中各模块的处理过程,得到第一色彩分量上剩余的全部像素点的像素值;
所述第二处理单元406,配置为根据所述对比单元、所述第二确定单元和第一处理单元中各模块的处理过程,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
所述组成单元407,配置为将第一色彩分量上全部像素点的像素值和其他色彩分量上全部像素点的像素值组成一帧图像。
本发明实施例中,所述装置还包括拍摄单元,配置为针对同一对象,连续拍摄N帧图像。
实施例六
基于前述的实施例,本发明实施例提供一种图像处理装置,该装置所包括的各单元,以及各单元所包括的各模块,都可以通过终端中的处理器来实现,当然也可通过逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
该装置400包括获取单元401、第一确定单元402、对比单元403、第二确定单元404、第一处理单元405、第二处理单元406和组成单元407,其中所述对比单元403包括做差模块431和判断模块432,所述第二确定单元404包括选取模块443、第二确定模块444和第二平均模块445,其中:
所述获取单元401,配置为获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
所述第一确定单元402,配置为从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
所述做差模块431,配置为将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j) 进行做差,得到差值z(i,j);
这里,所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;
所述判断模块432,配置为判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
所述选取模块443,配置为当所述判断结果表明没有一个所述差值z(i,j)在预设的第一阈值范围内时,将Q1(i,j)作为中心点选取一图像块;
所述第二确定模块444,配置为从所述图像块内确定第二像素值集合,所述第二像素值集合为图像块内与Q1(i,j)的差值的绝对值在预设的第二阈值范围内的像素点的像素值;
所述第二平均模块445,配置为将所述Q1(i,j)与第二像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
所述第一处理单元405,配置为根据所述对比单元和所述第二确定单元中各模块的处理过程,得到第一色彩分量上剩余的全部像素点的像素值;
所述第二处理单元406,配置为根据所述对比单元、所述第二确定单元和第一处理单元中各模块的处理过程,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
所述组成单元407,配置为将第一色彩分量上全部像素点的像素值和其他色彩分量上全部像素点的像素值组成一帧图像。
本发明实施例中,所述装置还包括拍摄单元,配置为针对同一对象,连续拍摄N帧图像。
本发明实施例中,所述图像块为以下之一:
3个像素×3个像素、5个像素×5个像素、7个像素×7个像素、9个像素×9个像素。
本发明实施例中,所述N为4。
这里需要指出的是:以上装置实施例的描述,与上述方法实施例的描述是类似的,具有与方法实施例相似的有益效果,因此不做赘述。对于本发明装置实施例中未披露的技术细节,请参照本发明方法实施例的描述而理解,为节约篇幅,不再赘述。
实施例七
基于前述的实施例,本发明实施例再提供一种终端,该终端包括图像采集单元、存储器和处理器,其中:
图像采集单元(如摄像头),配置为针对同一对象,连续拍摄N帧图像,以便获取连续的N帧图像;
所述存储器,配置为存储连续的N帧图像;
所述处理器,配置为针对同一对象,调用图像采集单元连续拍摄N帧图像;从存储器获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;根据所述对比结果确定在第一色彩分量上第一像素点的像素值,依此得到第一色彩分量上剩余的全部像素点的像素值;以得到第一色彩分量上像素值的方式类推,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;将所述M个色彩分量上全部像素点的像素值组成一帧图像。
这里需要指出的是:以上电子设备实施例的描述,与上述方法实施例的描述是类似的,具有与方法实施例相同的有益效果,因此不做赘述。对于本发明电子设备实施例中未披露的技术细节,本领域的技术人员请参照本发明方法实施例的描述而理解,为节约篇幅,这里不再赘述。
需要说明的是,本发明实施例中,如果以软件功能模块的形式实现上述的图像处理方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read Only Memory)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本发明实施例不限制于任何特定的硬件和软件结合。相应地,本发明实施例再提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令用于执行本发明实施例中图像处理方法。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或 者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个***,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。 基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
本发明实施例中,获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;根据所述对比结果确定在第一色彩分量上第一像素点的像素值,依此得到第一色彩分量上剩余的全部像素点的像素值;以得到第一色彩分量上像素值的方式,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;将所述M个色彩分量上全部像素点的像素值组成一帧图像;如此,能够提高去噪效果,并能克服容易造成边缘细节的损失等缺点,使最终拍照得到图片有更好的视觉效果。

Claims (20)

  1. 一种图像处理方法,所述方法包括:
    获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
    从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
    将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;
    根据所述对比结果确定在第一色彩分量上第一像素点的像素值,依此得到第一色彩分量上剩余的全部像素点的像素值;
    以得到第一色彩分量上像素值的方式得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
    将所述M个色彩分量上全部像素点的像素值组成一帧图像。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    针对同一对象,连续拍摄N帧图像。
  3. 根据权利要求1或2所述的方法,其中,所述将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果,包括:
    将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)进行做差,得到差值z(i,j);
    判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
  4. 根据权利要求3所述的方法,其中,根据所述对比结果确定在第一色彩分量上第一像素点的像素值,包括:
    当所述判断结果表明至少有一个所述差值z(i,j)在预设的第一阈值范围内时,确定第一像素值集合,所述第一像素值集合为所述差值z(i,j)在预设的第一阈值范围内时对应的Q0(i,j);
    将所述Q1(i,j)与第一像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
  5. 根据权利要求4所述的方法,其中,根据所述对比结果确定在第一色彩分量上第一像素点的像素值,包括:
    当所述判断结果表明没有一个所述差值z(i,j)在预设的第一阈值范围内时,将Q1(i,j)作为中心点选取一图像块;
    从所述图像块内确定第二像素值集合,所述第二像素值集合为图像块内与Q1(i,j)的差值的绝对值在预设的第二阈值范围内的像素点的像素值;
    将所述Q1(i,j)与第二像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
  6. 根据权利要求5所述的方法,其中,所述图像块为以下之一:
    3个像素×3个像素、5个像素×5个像素、7个像素×7个像素、9个像素×9个像素。
  7. 根据权利要求1至6任一项所述的方法,其中,所述N为4。
  8. 一种图像处理装置,所述装置包括获取单元、第一确定单元、对比单元、第二确定单元、第一处理单元、第二处理单元和组成单元,其中:
    所述获取单元,配置为获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
    所述第一确定单元,配置为从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
    所述对比单元,配置为将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;
    所述第二确定单元,配置为根据所述对比结果确定在第一色彩分量上第一像素点的像素值;
    所述第一处理单元,配置为根据所述对比单元和所述第二确定单元的处理过程,得到第一色彩分量上剩余的全部像素点的像素值;
    所述第二处理单元,配置为根据所述对比单元、所述第二确定单元和第一处理单元的处理过程,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
    所述组成单元,配置为将第一色彩分量上全部像素点的像素值和其他色彩分量上全部像素点的像素值组成一帧图像。
  9. 根据权利要求8所述的装置,其中,所述对比单元包括做差模块和判断模块,其中:
    所述做差模块,配置为将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)进行做差,得到差值z(i,j);
    所述判断模块,配置为判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
  10. 根据权利要求8所述的装置,其中,所述第二确定单元包括第一确定模块和第一平均模块,其中:
    所述第一确定模块,配置为当所述判断结果表明至少有一个所述差值z(i,j)在预设的第一阈值范围内时,确定第一像素值集合,所述第一像素值集合为所述差值z(i,j)在预设的第一阈值范围内时对应的Q0(i,j);
    所述第一平均模块,配置为将所述Q1(i,j)与第一像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
  11. 根据权利要求10所述的装置,其中,所述第二确定单元包括选取模块、第二确定模块和第二平均模块,其中:
    所述选取模块,配置为当所述判断结果表明没有一个所述差值z(i,j)在预设的第一阈值范围内时,将Q1(i,j)作为中心点选取一图像块;
    所述第二确定模块,配置为从所述图像块内确定第二像素值集合,所述第二像素值集合为图像块内与Q1(i,j)的差值的绝对值在预设的第二阈值范围内的像素点的像素值;
    所述第二平均模块,配置为将所述Q1(i,j)与第二像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
  12. 根据权利要求11所述的装置,其中,所述图像块为以下之一:
    3个像素×3个像素、5个像素×5个像素、7个像素×7个像素、9个像素×9个像素。
  13. 根据权利要求8至12任一项所述的装置,其中,所述N为4。
  14. 根据权利要求8至12任一项所述的装置,其中,所述装置还包括拍摄单元,配置为针对同一对象,连续拍摄N帧图像。
  15. 一种终端,所述终端包括存储器和处理器,其中:
    所述存储器,配置为存储连续的N帧图像;
    所述处理器,配置为获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;根据所述对比结果确定在第一色彩分量上第一像素点的像素值,依此得到第一色彩分量上剩余的全部像素点的像素值;以得到第一色彩分量上像素值的方式类推,得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;将所述M个色彩分量上全部像素点的像素值组成一帧图像。
  16. 一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令用于执行以下步骤:
    获取连续的N帧图像,所述N为大于等于3的整数,所述N帧图像的数据采用颜色模型来表示,所述颜色模型包括M个色彩分量,所述M为大于等于1的整数;
    从所述N帧图像中确定一帧作为基准帧,将除所述基准帧之外的(N-1)帧图像确定为对比帧;
    将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果;所述第一像素点为所述N帧图像中坐标为(i,j)的像素点,所述第一色彩分量为所述M个色彩分量的任意一个色彩分量;
    根据所述对比结果确定在第一色彩分量上第一像素点的像素值,依此得到第一色彩分量上剩余的全部像素点的像素值;
    以得到第一色彩分量上像素值的方式得到除所述第一色彩分量外的其他色彩分量上的全部像素点的像素值;
    将所述M个色彩分量上全部像素点的像素值组成一帧图像。
  17. 根据权利要求16所述的存储介质,其中,所述将所述基准帧在第一色彩分量上第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)对比,得到对比结果,包括:
    将基准帧在第一色彩分量上的第一像素点的像素值Q1(i,j)与对比帧中在第一色彩分量上的第一像素点的像素值Q0(i,j)进行做差,得到差值z(i,j);
    判断所述差值z(i,j)是否在预设的第一阈值范围内,得到判断结果。
  18. 根据权利要求17所述的存储介质,其中,根据所述对比结果确定在第一色彩分量上第一像素点的像素值,包括:
    当所述判断结果表明至少有一个所述差值z(i,j)在预设的第一阈值范围内时,确定第一像素值集合,所述第一像素值集合为所述差值z(i,j)在预设的第一阈值范围内时对应的Q0(i,j);
    将所述Q1(i,j)与第一像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
  19. 根据权利要求18所述的存储介质,其中,根据所述对比结果确定在第一色彩分量上第一像素点的像素值,包括:
    当所述判断结果表明没有一个所述差值z(i,j)在预设的第一阈值范围内时,将Q1(i,j)作为中心点选取一图像块;
    从所述图像块内确定第二像素值集合,所述第二像素值集合为图像块内与Q1(i,j)的差值的绝对值在预设的第二阈值范围内的像素点的像素值;
    将所述Q1(i,j)与第二像素值集合进行算术平均,得到第一色彩分量上第一像素点的像素值Q(i,j)。
  20. 根据权利要求19所述的存储介质,其中,所述图像块为以下之一:3个像素×3个像素、5个像素×5个像素、7个像素×7个像素、9个像素×9个像素。
PCT/CN2016/099255 2015-11-26 2016-09-18 一种图像处理方法及装置、终端、存储介质 WO2017088564A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510843465.8 2015-11-26
CN201510843465.8A CN105491358B (zh) 2015-11-26 2015-11-26 一种图像处理方法及装置、终端

Publications (1)

Publication Number Publication Date
WO2017088564A1 true WO2017088564A1 (zh) 2017-06-01

Family

ID=55678035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099255 WO2017088564A1 (zh) 2015-11-26 2016-09-18 一种图像处理方法及装置、终端、存储介质

Country Status (2)

Country Link
CN (1) CN105491358B (zh)
WO (1) WO2017088564A1 (zh)


Also Published As

Publication number Publication date
CN105491358B (zh) 2018-11-16
CN105491358A (zh) 2016-04-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867791

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867791

Country of ref document: EP

Kind code of ref document: A1