WO2018072267A1 - Method for photographing with a terminal, and terminal - Google Patents

Method for photographing with a terminal, and terminal

Info

Publication number
WO2018072267A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
black
camera
frequency information
color
Application number
PCT/CN2016/108293
Other languages
English (en)
French (fr)
Inventor
朱聪超
罗巍
杜成
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to CN201680080689.0A (patent CN108605099B)
Priority to US16/342,299 (patent US10827140B2)
Publication of WO2018072267A1


Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 3/00 Geometric image transformations in the plane of the image
            • G06T 3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/45 for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
            • H04N 23/60 Control of cameras or camera modules
              • H04N 23/62 Control of parameters via user interfaces
              • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
            • H04N 23/70 Circuitry for compensating brightness variation in the scene
              • H04N 23/71 Circuitry for evaluating the brightness variation
              • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
              • H04N 23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
          • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
            • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
              • H04N 25/46 Extracting pixel data from image sensors by combining or binning pixels
            • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • Embodiments of the present invention relate to the field of image processing, and more particularly, to a photographing method for a terminal and a terminal.
  • Limited by the size and cost of a terminal, the lens and sensor of a terminal camera are relatively small, so the quality of captured images is poor. In night scenes or low-illumination environments, image quality is even worse because the light is weak.
  • To improve the shooting effect of a terminal, terminals currently use the binning technique to increase the photographing brightness in night scenes or low-illumination environments, so as to improve image quality.
  • the adjacent pixels are combined into one pixel to improve the light sensitivity in a dark environment.
  • However, this binning-only technique loses the high-frequency detail of the image and seriously degrades image quality.
  • Embodiments of the present invention provide a photographing method for a terminal, and a terminal, which can improve image quality.
  • In a first aspect, a terminal is provided, including:
  • a black and white camera and a color camera, configured to simultaneously photograph the same scene to be photographed to respectively obtain K frames of images, where the black and white camera adopts a full-size working mode, the color camera adopts a binning working mode, and K ≥ 1;
  • a processor connected to the black and white camera and the color camera, for acquiring a first image corresponding to the black and white camera and a second image corresponding to the color camera;
  • In this embodiment of the present invention, a plurality of cameras, including a black and white camera and a color camera, may be installed on the terminal.
  • Each time a shot is taken, the black and white camera and the color camera capture images simultaneously, so the possibility of handheld shake between the cameras or of object motion within the scene is reduced; that is, the black and white camera and the color camera remain relatively stationary.
  • the black and white camera adopts a full size working mode
  • the color camera adopts a binning mode of operation.
  • The processor of the terminal may acquire the first image corresponding to the black and white camera and the second image corresponding to the color camera, and then extract the high-frequency information (for example, full-resolution detail information) and the low-frequency information (for example, brightness information and color information) of the scene to be photographed. The first image and the second image are then fused according to this information to generate a composite image of the scene to be photographed, so that an image with higher fidelity is obtained; the quality of the composite image is better than that of any single frame captured.
  • The photographing method for a terminal in this embodiment of the present invention is particularly suitable for low-illumination or night scenes, and can improve image quality.
  • the first image corresponding to the monochrome camera is a black and white image
  • the second image corresponding to the color camera is a color image.
  • The black and white camera, also called a monochrome camera, has high light transmittance and uses a full-pixel sensor, which gives it the advantages of high resolving power and low noise, so it can be used to obtain the full-resolution detail information (that is, the high-frequency information) of the scene to be photographed.
  • The color camera can obtain the brightness information and color information of the scene to be photographed; for example, the color camera can output unprocessed raw data in the Bayer image format, and the processor can parse out the color information of the image with a demosaicing algorithm or another image-processing algorithm.
  • Here, the resolution of the first image is higher than that of the second image.
  • the black and white camera and the color camera are independent of each other, are in the same plane, and their corresponding optical axes are parallel.
  • black and white cameras and color cameras can be placed side by side in hardware design.
  • K is not specifically limited in the embodiment of the present invention.
  • the K frame image may correspond to K shots, for example, K may be 1, 2, 3, 4, or the like.
  • the selection of K can take into account the relationship between the number of continuous shooting images and the time interval required to take these images.
  • In other words, the terminal may shoot once, with the black and white camera and the color camera each capturing one frame of image (for example, the first image and the second image); or the terminal may shoot multiple times, with the black and white camera and the color camera each collecting multiple frames of images, and then perform multi-frame temporal noise reduction (using the multi-frame temporal noise reduction method described later) on each set of frames, to obtain one frame corresponding to the black and white camera (for example, the first image) and one frame corresponding to the color camera (for example, the second image).
  • In some possible implementations, when the first image and the second image are fused, the specific processing procedure of the processor may include:
  • performing histogram matching on the first image by using a histogram of the second image as a reference, to obtain a third image; performing upsampling on the second image to obtain a fourth image whose resolution is the same as or equivalent to that of the first image; registering the fourth image by using the third image as a reference image, to obtain a fifth image; and fusing the fifth image and the third image to generate a composite image of the scene to be photographed.
  • the terminal can combine the high frequency information and the low frequency information, and process the first image acquired by the black and white camera and the second image obtained by the color camera to obtain a composite image of the scene to be photographed, thereby improving the quality of the captured image.
  • the processor may perform time domain noise reduction processing on the multi-frame image to obtain a first image corresponding to the black and white camera and a second image corresponding to the color camera. Specifically, it may include:
  • performing time domain noise reduction on the K frame image corresponding to the black and white camera including:
  • In other words, the processor can directly use the global motion relationship computed for the color camera to perform the global image registration operation on the K frames from the black and white camera, thereby avoiding recalculating the global motion relationship for the black and white camera, which saves computation and increases processing speed.
  • the processor may further perform spatial domain noise reduction on the composite image of the scene to be shot.
  • the processor can further reduce noise, thereby obtaining a clearer image.
  • In a second aspect, a photographing method is provided for a terminal comprising a black and white camera and a color camera, the method comprising:
  • obtaining K frames of images by simultaneously photographing the same scene to be photographed with the black and white camera and the color camera, where the black and white camera adopts a full-size working mode, the color camera adopts a binning working mode, and K ≥ 1;
  • Each time a shot is taken, the black and white camera and the color camera capture images at the same time, so the likelihood of handheld shake between the cameras or of object motion within the scene is reduced; that is, the black and white camera and the color camera remain relatively stationary.
  • the black and white camera adopts a full size working mode
  • the color camera adopts a binning mode of operation.
  • In this way, the terminal acquires the first image corresponding to the black and white camera and the second image corresponding to the color camera from the K shots, and then extracts the high-frequency information (such as full-resolution detail information) and the low-frequency information (such as brightness information and color information) of the scene to be photographed, so that the first image and the second image can be fused according to this information to generate a composite image of the scene to be photographed.
  • The photographing method for a terminal in this embodiment of the present invention is particularly suitable for low-illumination or night scenes, and can improve image quality.
  • Optionally, the step of "obtaining K frames of images by simultaneously photographing the same scene to be photographed with the black and white camera and the color camera" may also be understood as "performing K shots of the same scene to be photographed with the black and white camera and the color camera, where in each shot the black and white camera and the color camera each simultaneously acquire one frame of image".
  • the first image and the second image are fused according to the high frequency information and the low frequency information, and the composite image of the scene to be shot is generated, including:
  • the fifth image and the third image are fused to generate a composite image of the scene to be shot.
  • Optionally, acquiring the first image corresponding to the black and white camera and the second image corresponding to the color camera includes:
  • performing time domain noise reduction on the K frame image corresponding to the black and white camera including:
  • Optionally, the method may further include: performing spatial-domain noise reduction on the composite image of the scene to be photographed.
  • In a third aspect, a photographing method is provided for a terminal comprising an infrared camera and a color camera. The method is similar to the photographing method of the second aspect, except that the infrared camera performs the operations of the black and white camera; the scheme related to the color camera is the same as in the second aspect.
  • In other words, the black and white camera of the second aspect can be replaced with an infrared camera.
  • In a fourth aspect, a terminal is provided, comprising modules for performing the method in the second aspect or any possible implementation of the second aspect.
  • Specifically, the terminal comprises modules configured to perform the method in the second aspect or any possible implementation of the second aspect.
  • In a fifth aspect, a terminal is provided, comprising modules for performing the method in the third aspect or any possible implementation of the third aspect.
  • Specifically, the terminal comprises modules configured to perform the method in the third aspect or any possible implementation of the third aspect.
  • In a sixth aspect, a computer-readable storage medium is provided, storing a program that causes a terminal to perform the photographing method in the second aspect or any of its implementations.
  • In a seventh aspect, a computer-readable storage medium is provided, storing a program that causes a terminal to perform the photographing method in the third aspect or any of its implementations.
  • FIG. 1A is a schematic diagram of an example of a binning mode.
  • FIG. 1B is a block diagram of a Bayer color filter array.
  • FIG. 1C is a block diagram showing a part of the structure of a mobile phone related to an embodiment of the present invention.
  • FIG. 1D is a schematic block diagram of a terminal according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for photographing a terminal according to an embodiment of the present invention.
  • Fig. 3 is a view showing an effect comparison of an example to which an embodiment of the present invention is applied.
  • Fig. 4 is a partial effect comparison diagram of an example to which an embodiment of the present invention is applied.
  • FIG. 5 is a schematic flowchart of a method for photographing a terminal according to an embodiment of the present invention.
  • Figure 6 is a schematic block diagram of a terminal in accordance with one embodiment of the present invention.
  • the technical solution of the embodiment of the present invention can be applied to a terminal.
  • The terminal may be, but is not limited to, a mobile station (MS), a mobile terminal, a mobile phone, a handset, a portable device, or the like.
  • The terminal can communicate with one or more core networks through a radio access network (RAN). The terminal may be a mobile terminal, such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal, for example, a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile apparatus that exchanges voice and/or data with the radio access network.
  • the terminal may also be various types of products with a touch screen, such as a tablet computer, a touch screen mobile phone, a touch screen device, a mobile phone terminal, etc., and no limitation is imposed thereon.
  • the terminal may be a device having a photographing function, such as a mobile phone with a camera, a tablet computer or other devices having a photographing function.
  • The binning mode is an image readout method in which the charges induced in adjacent pixels of an image sensor are added together and read out as a single pixel.
  • Binning can be performed in the vertical direction and/or the horizontal direction, where vertical binning adds together the charges of adjacent columns and horizontal binning adds together the pixel-induced charges of adjacent rows.
  • In other words, the binning mode superimposes the values of N adjacent pixels of the image and outputs them as one pixel value.
  • FIG. 1A shows a schematic diagram of an example of a binning mode.
  • Each small square in the left diagram of FIG. 1A represents one pixel; in the binning mode, the 4 adjacent pixels in the left diagram of FIG. 1A are combined and used as one pixel. It should be understood that four pixels are used here only as an example, and the present invention is not limited thereto.
  • The binning mode is especially suitable for low-illumination shooting environments, because it does not reduce the number of photosites involved in imaging and therefore does not degrade sensitivity.
  • However, binning reduces the resolution of the image; in other words, the binning mode increases light sensitivity and output rate at the expense of image resolution.
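  • As an illustration only (not part of the patent text), 2×2 binning can be emulated in software by summing each 2×2 block of a single-channel raw frame; the array sizes and the optional averaging below are assumptions made for this example.

```python
import numpy as np

def bin_2x2(raw: np.ndarray, average: bool = False) -> np.ndarray:
    """Combine each 2x2 block of a single-channel raw frame into one pixel.

    Summing mimics charge binning (brighter output, quarter resolution);
    average=True keeps the original value range instead.
    """
    h, w = raw.shape                      # assumes even height and width
    blocks = raw.astype(np.uint32).reshape(h // 2, 2, w // 2, 2)
    binned = blocks.sum(axis=(1, 3))      # 4 adjacent pixels -> 1 pixel
    return binned // 4 if average else binned

# Example: a full-size-style frame becomes a quarter-resolution, brighter frame.
frame = np.random.randint(0, 1024, size=(4000, 3000), dtype=np.uint16)
print(bin_2x2(frame).shape)               # (2000, 1500)
```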
  • FIG. 1B shows a block diagram of a Bayer color filter array. At each pixel position, light of only one of red (R), green (G), and blue (B) is allowed to pass, so only one of the three color channels is sampled at each pixel position.
  • That is, the color camera obtains the color information of the image through a color filter array (CFA): at each pixel position the color filter array allows only the light component of one of R, G, and B to pass into the camera (this is implemented, for example, with a filter), so each image pixel captured by the color camera has only one color component. Therefore, to display a complete color image with all three RGB components, the components of the other two colors at each pixel must be estimated from neighboring regions.
  • Such a process is called CFA interpolation.
  • Common interpolation algorithms currently include bilinear interpolation, edge-detection-based interpolation, and other demosaicing algorithms, through which the color of the image can be restored.
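  • For illustration (the BG Bayer layout and 8-bit depth below are assumptions, not taken from the patent), OpenCV's built-in CFA interpolation can restore a full-color image from a raw Bayer frame:

```python
import cv2
import numpy as np

# Single-channel raw Bayer frame; the BG layout is only an assumed example.
raw = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Demosaicing: estimate the two missing color components at every pixel
# from neighbouring pixels of the color filter array.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
print(bgr.shape)  # (480, 640, 3)
```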
  • As for the black and white camera, since it does not need to filter light through a color filter, the resolution of the black and white image it obtains is higher than that of the image obtained by the color camera.
  • FIG. 1C is a block diagram showing a part of the structure of the mobile phone 100 related to the embodiment of the present invention.
  • The mobile phone 100 may include: a radio frequency (RF) circuit 110, a power source 120, a processor 130, a memory 140, an input unit 150, a display unit 160, a sensor 170, an audio circuit 180, a wireless fidelity (WiFi) module 190, and other components.
  • RF radio frequency
  • WiFi Wireless Fidelity
  • It should be understood that the handset structure illustrated in FIG. 1C does not constitute a limitation on the handset, which may include more or fewer components than those illustrated, combine some components, or use a different component arrangement.
  • The RF circuit 110 can be used to receive and send signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, the RF circuit 110 delivers it to the processor 130 for processing; in addition, it sends uplink data to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • LNA Low Noise Amplifier
  • RF circuitry 110 can also communicate with the network and other devices via wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
  • the memory 140 can be used to store software programs and modules, and the processor 130 executes various functional applications and data processing of the mobile phone 100 by running software programs and modules stored in the memory 140.
  • The memory 140 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone 100 (such as audio data or a phone book).
  • In addition, the memory 140 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 150 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset 100.
  • the input unit 150 may include a touch panel 151 and other input devices 152.
  • The touch panel 151, also referred to as a touch screen, can collect a touch operation performed on or near it by a user (for example, an operation performed by the user on or near the touch panel 151 with a finger, a stylus, or any other suitable object), and drive a corresponding connection apparatus according to a preset program.
  • the touch panel 151 may include two parts of a touch detection device and a touch controller.
  • The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into touch coordinates, and sends them to the processor 130, and can also receive commands sent by the processor 130 and execute them.
  • the touch panel 151 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 150 may also include other input devices 152.
  • other input devices 152 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 160 can be used to display information input by the user or information provided to the user and various menus of the mobile phone 100.
  • The display unit 160 may include a display panel 161, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel 151 may cover the display panel 161.
  • After detecting a touch operation on or near it, the touch panel 151 transmits the operation to the processor 130 to determine the type of the touch event, and the processor 130 then provides a corresponding visual output on the display panel 161 according to the type of the touch event.
  • Although in FIG. 1C the touch panel 151 and the display panel 161 are two independent components implementing the input and output functions of the mobile phone 100, in some embodiments the touch panel 151 may be integrated with the display panel 161 to implement the input and output functions of the mobile phone 100.
  • the handset 100 can also include at least one type of sensor 170, such as a light sensor, motion sensor, and other sensors.
  • Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 161 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 161 and/or the backlight when the mobile phone 100 is moved to the ear.
  • As a motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used for applications that identify the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection).
  • The mobile phone 100 may also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, and details are not described herein again.
  • the audio circuit 180, the speaker 181, and the microphone 182 can provide an audio interface between the user and the handset 100.
  • On one hand, the audio circuit 180 can convert received audio data into an electrical signal and transmit it to the speaker 181, which converts it into a sound signal for output; on the other hand, the microphone 182 converts a collected sound signal into an electrical signal, which the audio circuit 180 receives and converts into audio data. The audio data is then output to the RF circuit 110 to be sent to, for example, another mobile phone, or output to the memory 140 for further processing.
  • WiFi is a short-range wireless transmission technology
  • the mobile phone 100 can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 190, which provides wireless broadband Internet access for users.
  • Although FIG. 1C shows the WiFi module 190, it can be understood that the module is not an essential part of the mobile phone 100 and may be omitted as needed without changing the essence of the invention.
  • The processor 130 is the control center of the mobile phone 100. It connects the various parts of the entire handset through various interfaces and lines, and performs the various functions of the mobile phone 100 and processes data by running or executing the software programs and/or modules stored in the memory 140 and invoking the data stored in the memory 140, thereby implementing the various phone-based services.
  • The processor 130 may include one or more processing units.
  • the processor 130 can integrate an application processor and a modem processor, wherein the application processor primarily processes an operating system, a user interface, an application, etc., and the modem processor primarily processes wireless communications. It can be understood that the above modem processor may not be integrated into the processor 130.
  • the mobile phone 100 also includes a power source 120 (such as a battery) that supplies power to various components.
  • the power source can be logically coupled to the processor 130 through a power management system to manage functions such as charging, discharging, and power consumption through the power management system.
  • the handset 100 may also include a camera, a Bluetooth module, and the like.
  • FIG. 1D is a schematic block diagram of a terminal according to an embodiment of the present invention.
  • the terminal 10 includes a camera 11 (such as the camera 11 being a black and white camera) and a camera 12 (such as the camera 12 being a color camera), and a processor 13 (such as the processor 130 in FIG. 1C).
  • the terminal 10 may further include some or all of the structures shown in the mobile phone 100 in FIG. 1C, and the functions of the structure are not described herein again.
  • the camera 11 is a monochrome camera and the camera 12 is a color camera.
  • the black and white camera 11 and the color camera 12 are used to simultaneously capture the same scene to be photographed to obtain a K frame image, the black and white camera adopts a full-size working mode, and the color camera adopts a combined working mode, K ⁇ 1;
  • the processor 13 is connected to the black and white camera 11 and the color camera 12, and is configured to acquire a first image corresponding to the black and white camera and a second image corresponding to the color camera;
  • a plurality of cameras including a black and white camera and a color camera, can be installed on the terminal.
  • Each time a shot is taken, the black and white camera and the color camera capture images simultaneously, so the likelihood of handheld shake between the cameras or of object motion within the scene is reduced; that is, the black and white camera and the color camera remain relatively stationary.
  • During image capture, the black and white camera adopts a full-size working mode to obtain the high-frequency information of the scene to be photographed, and the color camera adopts a binning working mode, merging N adjacent pixels of the image into one pixel for shooting, which gives higher sensitivity and obtains the low-frequency information of the scene to be photographed.
  • The processor 13 of the terminal can obtain K frames of images from the K shots, acquire the first image corresponding to the black and white camera from the K black and white frames, and acquire the second image corresponding to the color camera from the K color frames; it then extracts the high-frequency information (such as full-resolution detail information, that is, image detail) and the low-frequency information (such as brightness information and color information) of the scene to be photographed, and fuses the first image and the second image according to this information to generate a composite image of the scene to be photographed, so that an image with higher fidelity is obtained; the quality of the composite image is better than that of any single frame from the K shots.
  • The photographing method for a terminal in this embodiment of the present invention is particularly suitable for low-illumination or night scenes, and can improve image quality.
  • In one existing solution, brightness-enhancement processing is performed on the image to improve photographing quality in low-illumination or night scenes, for example, a brightness-enhancement method based on the image histogram distribution or a brightness-enhancement method based on deep learning. Because the signal-to-noise ratio of the image is very low under low illumination, such a brightness-enhancement-only method brings disadvantages such as heavy noise and color cast. The method of the present application, however, does not introduce noise and/or color cast, and in the image obtained by the method of the present application the detail information, brightness information, and color information are all improved.
  • In another existing solution, the photographing quality of low-illumination or night scenes is improved by, for example, increasing the International Organization for Standardization (ISO) sensitivity or the exposure value.
  • However, the backlight compensation (BLC) of the camera is not very accurate and tends to drift, and the accuracy of the camera's automatic white balance (AWB) is not high, usually producing a purplish-red cast, especially at the four corners of the image, so the color information of the image is lost; that is, the image quality obtained by increasing the digital gain value is not good.
  • the photographing method of the present application does not bring about these problems, and the image details, brightness information, and color information of the image obtained by the method of the present application are improved.
  • In yet another existing solution, the quality of photographs of low-illumination or night scenes is improved by increasing the exposure time. However, increasing the exposure time introduces motion blur, which is very inconvenient for a user who takes photos anytime and anywhere. With the solution of the present application, the user can conveniently take photos at any time and in any place, and the detail information, brightness information, and color information of the obtained image are all improved.
  • a single binning technique will lose high-frequency information of the image, that is, image detail information.
  • In contrast, the photographing method of the present application can acquire both the high-frequency information and the low-frequency information of the image; that is, in the resulting photograph the detail information, brightness information, and color information are all improved.
  • In summary, the photographing method for a terminal of the present application can obtain an image of better quality.
  • the first image corresponding to the monochrome camera is a black and white image
  • the second image corresponding to the color camera is a color image.
  • The black and white camera, also called a monochrome camera, has high light transmittance and uses a full-pixel sensor, which gives it the advantages of high resolving power and low noise, so it can be used to obtain the full-resolution detail information (that is, the high-frequency information) of the scene to be photographed.
  • The color camera can obtain the brightness information and color information of the scene to be photographed; for example, the color camera can output unprocessed raw data in the Bayer format, and the processor 13 can parse out the color information of the image with a demosaicing algorithm or another image-processing algorithm.
  • Here, the resolution of the first image is higher than that of the second image.
  • Optionally, the black and white camera may also be replaced with another monochrome camera, such as an infrared camera, to facilitate shooting in dark environments or night scenes.
  • the black and white camera of the present application can be replaced with an infrared camera to perform corresponding operations. To avoid repetition, no further details are provided herein.
  • the black and white camera and the color camera are independent of each other, are in the same plane, and their corresponding optical axes are parallel.
  • black and white cameras and color cameras can be placed side by side in hardware design.
  • K is not specifically limited in the embodiment of the present invention.
  • the K frame image may be obtained by K shots, for example, K may be 1, 2, 3, 4, or the like.
  • the selection of K can take into account the relationship between the number of continuous shooting images and the time interval required to take these images.
  • In other words, the terminal may shoot once, with the black and white camera and the color camera each capturing one frame of image (for example, the first image and the second image); or the terminal may shoot multiple times, with the black and white camera and the color camera each collecting multiple frames of images, and then perform multi-frame temporal noise reduction (using the multi-frame temporal noise reduction method described later) on each set of frames, to obtain one frame corresponding to the black and white camera (for example, the first image) and one frame corresponding to the color camera (for example, the second image).
  • For example, K can be set to 4: for the first image, the corresponding 4 frames of black and white full-size (mono full-size) images are taken (for example, at a resolution of 12M); for the second image, the corresponding 4 frames of color binning images are taken (for example, at a resolution of 3M).
  • The number of cameras is also not specifically limited in this embodiment of the present invention; for example, there may be further cameras in addition to the black and white camera and the color camera. Specifically, the choice of the number of cameras may consider the relationship between the number of cameras and the terminal cost: the more cameras there are, the shorter the time interval needed to capture multiple frames of images, but the higher the corresponding terminal cost.
  • the processor 13 may be an Image Signal Processing (ISP) device, or may be a Central Processing Unit (CPU), or may include both an ISP device and a CPU. That is, the functions of the above processor are jointly performed by the ISP device and the CPU.
  • the processor 13 can control the camera for image acquisition.
  • In this embodiment of the present invention, the terminal obtains K frames of images by simultaneously photographing the same scene to be photographed with the black and white camera and the color camera, where the black and white camera adopts a full-size working mode and the color camera adopts a binning working mode; acquires the first image corresponding to the black and white camera and the second image corresponding to the color camera; then acquires the high-frequency information according to the first image and the low-frequency information according to the second image; and finally fuses the first image and the second image according to the high-frequency information and the low-frequency information to generate a composite image of the scene to be photographed, which can improve the quality of the image.
  • the specific processing of the processor 13 may include:
  • performing histogram matching on the first image by using a histogram of the second image as a reference, to obtain a third image; performing upsampling on the second image to obtain a fourth image whose resolution is the same as or equivalent to that of the first image; registering the fourth image by using the third image as a reference image, to obtain a fifth image; and fusing the fifth image and the third image to generate a composite image of the scene to be photographed.
  • the first image is a black and white image
  • the second image is a color image
  • First, based on the low-frequency information (mainly the brightness information), the processor 13 performs histogram matching on the first image (the mono full-size image) using the histogram of the second image (the color binning image) as a reference, and obtains a third image whose global brightness is the same as or equivalent to that of the second image.
  • The second image (that is, the color image) is obtained by the color camera in the binning mode; since the binning mode does not reduce the number of photosites involved in imaging, it does not degrade sensitivity. Therefore, brightness processing is performed on the first image (that is, the black and white image), specifically increasing the brightness of the black and white image, so as to obtain an image whose brightness level is the same as or equivalent to that of the color image.
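  • A minimal sketch of this brightness (histogram) matching step, assuming 8-bit single-channel inputs; the CDF-lookup approach and the function name are illustrative and not the patent's exact implementation:

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap `source` (e.g. the mono full-size image) so that its brightness
    histogram approximates that of `reference` (e.g. the color binning image)."""
    src_cdf = np.cumsum(np.bincount(source.ravel(), minlength=256)) / source.size
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size

    # For every source grey level, pick the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]
```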
  • Second, based on the high-frequency information, the processor 13 performs upsampling on the second image to obtain a fourth image whose resolution is the same as or equivalent to that of the first image; the upsampling method is not specifically limited here.
  • the upsampling method may use an existing sampling algorithm such as bilinear interpolation or cubic interpolation.
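  • For illustration only (the 3M and 12M sizes are the example values used elsewhere in this description), the upsampling can be done with a standard interpolation routine:

```python
import cv2
import numpy as np

# Placeholder frames standing in for the captured images (sizes are assumptions).
color_binning = np.zeros((1500, 2000, 3), dtype=np.uint8)   # ~3M color binning image
mono_full = np.zeros((3000, 4000), dtype=np.uint8)           # ~12M mono full-size image

# Bicubic interpolation brings the color image up to the mono image's resolution.
h, w = mono_full.shape[:2]
upsampled_color = cv2.resize(color_binning, (w, h), interpolation=cv2.INTER_CUBIC)
print(upsampled_color.shape)  # (3000, 4000, 3)
```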
  • the processor 13 registers the fourth image with the third image as a reference image to obtain a fifth image.
  • Specifically, a global registration algorithm based on SURF (Speeded-Up Robust Features) feature-point matching may be used (for example, point pairs are matched between the feature points of the third image and the feature points of the fourth image, and least squares or another method is used to find the affine transformation between the third image and the fourth image, with which the fourth image is transformed to obtain a registered image), in combination with a local registration algorithm based on block matching (Block Match, BM) (for example, block matching is performed between sub-blocks of the third image and sub-blocks of the fourth image, and the fourth image is transformed to obtain a registered image). That is, using the third image as the reference image, the fourth image is transformed to obtain an image registered to the third image, namely the fifth image.
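  • The patent names SURF feature matching plus block matching; because SURF is not included in default OpenCV builds, the sketch below uses ORB features and a RANSAC homography as a freely available stand-in for the global registration step (all parameter values are assumptions):

```python
import cv2
import numpy as np

def _gray(img: np.ndarray) -> np.ndarray:
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def register_to_reference(moving: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Warp `moving` (e.g. the upsampled color image) onto `reference`
    (e.g. the brightness-matched mono image) using features + RANSAC."""
    orb = cv2.ORB_create(4000)
    kp_ref, des_ref = orb.detectAndCompute(_gray(reference), None)
    kp_mov, des_mov = orb.detectAndCompute(_gray(moving), None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_mov, des_ref), key=lambda m: m.distance)[:500]

    src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```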
  • Finally, the processor 13 fuses the fifth image and the third image; specifically, it combines the color information of the fifth image with the brightness information of the third image and outputs a new frame of color image, that is, the composite image of the scene to be photographed.
  • In this way, the terminal can combine the high-frequency information and the low-frequency information of the scene to be photographed, and process the first image acquired by the black and white camera and the second image acquired by the color camera to obtain a composite image of the scene to be photographed, thereby improving the quality of the captured image.
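  • One simple way to illustrate this fusion, assuming 8-bit BGR inputs of identical size, is to take the luminance channel from the mono-derived image and the chrominance channels from the registered color image; the YCrCb color space used here is an illustrative choice, not necessarily the patent's:

```python
import cv2
import numpy as np

def fuse_luma_chroma(mono_luma: np.ndarray, registered_color: np.ndarray) -> np.ndarray:
    """Combine the brightness of `mono_luma` (the third image) with the color
    of `registered_color` (the fifth image) into one composite color image."""
    ycrcb = cv2.cvtColor(registered_color, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = mono_luma            # replace Y with the mono brightness/detail
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```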
  • Optionally, the processor 13 may perform temporal noise reduction on the multiple frames of images to obtain the first image corresponding to the black and white camera and the second image corresponding to the color camera; specifically, this may include the following.
  • The terminal may collect multiple frames of images, such as multiple frames corresponding to the black and white camera and multiple frames corresponding to the color camera, and the processor 13 may separately perform temporal noise reduction on each set of collected frames, so as to obtain the first image and the second image for subsequent operations.
  • the time domain noise reduction algorithm can refer to the operations of the prior art, including: global image registration, local ghost detection, and time domain fusion.
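  • As an illustrative sketch of the local ghost detection idea (the threshold and blur kernel are arbitrary assumptions): pixels that differ too much from the reference frame are treated as moving content and taken from the reference instead of being averaged later:

```python
import cv2
import numpy as np

def remove_ghosts(reference: np.ndarray, frame: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Replace pixels of `frame` that appear to have moved with the pixels of
    `reference`, so that the later temporal fusion does not blur moving objects."""
    diff = cv2.absdiff(reference, frame)
    if diff.ndim == 3:
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    motion = cv2.GaussianBlur(diff, (5, 5), 0) > thresh   # boolean ghost mask
    out = frame.copy()
    out[motion] = reference[motion]
    return out
```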
  • performing time domain noise reduction on the K frame image corresponding to the black and white camera including:
  • In other words, the processor 13 can directly perform the global image registration operation on the K frames from the black and white camera by reusing the global motion relationship computed for the color camera, thereby avoiding recalculating the global motion relationship for the black and white camera, which saves computation and increases processing speed.
  • This is possible because the black and white camera and the color camera acquire images synchronously, so the relative motion relationship between frames is the same for both; therefore, when performing image registration for the black and white camera, the processor 13 directly uses the global motion relationship of the color camera to increase processing speed.
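  • A sketch of this reuse idea, assuming the per-frame global homographies have already been estimated from the color sequence (for example with a feature-based registration such as the one sketched above) and that the two sequences differ only by a known resolution scale factor:

```python
import cv2
import numpy as np

def scale_homography(H: np.ndarray, s: float) -> np.ndarray:
    """Adapt a homography estimated at binned resolution to full resolution
    (s = full_width / binned_width)."""
    S = np.diag([s, s, 1.0])
    return S @ H @ np.linalg.inv(S)

def register_mono_with_color_motion(mono_frames, color_homographies, s):
    """Reuse the global motion estimated from the color frames to register the
    mono frames, instead of re-estimating it for the black and white camera."""
    h, w = mono_frames[0].shape[:2]
    return [cv2.warpPerspective(m, scale_homography(H, s), (w, h))
            for m, H in zip(mono_frames, color_homographies)]
```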
  • the processor 13 may perform spatial domain noise reduction on the composite image of the scene to be captured.
  • the processor 13 can further reduce noise by performing spatial domain noise reduction on the composite image of the scene to be shot, thereby obtaining a clearer image.
  • Spatial-domain noise reduction can also be understood as "spatial filtering"; spatial noise reduction can be performed on the composite image using prior-art methods, such as the non-local means algorithm, the industry-classic bilateral filter, or other denoising algorithms.
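  • For illustration (the filter parameters are arbitrary example values, not tuned values from the patent), OpenCV provides both of the filters mentioned above:

```python
import cv2
import numpy as np

composite = np.zeros((3000, 4000, 3), dtype=np.uint8)   # stand-in for the fused image F

# Edge-preserving bilateral filter: pixel neighbourhood diameter 9, sigmas of 75.
denoised_bilateral = cv2.bilateralFilter(composite, 9, 75, 75)

# Non-local means denoising for a color image.
denoised_nlm = cv2.fastNlMeansDenoisingColored(composite, None, 10, 10, 7, 21)
```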
  • A photographing method for a terminal according to an embodiment of the present invention is described in more detail below with reference to the specific example of FIG. 2. It should be noted that the example of FIG. 2 is only intended to help persons skilled in the art understand the embodiments of the present invention, and is not intended to limit the embodiments of the present invention to the specific numerical values or the specific example illustrated.
  • FIG. 2 is a schematic flowchart of a photographing method for a terminal according to an embodiment of the present invention.
  • the terminal 10 mounts a black and white camera (corresponding to the camera 11) and a color camera (corresponding to the camera 12) on the same plane, and performs a total of four consecutive shots to obtain four frames of black and white images and four frames of color images.
  • the black and white camera adopts the full-size working mode; the color camera adopts the combined working mode.
  • the method may include: an image registration operation, a local ghost detection operation, and a multi-frame temporal fusion operation. Among them, in the image registration operation on the black and white image, the global motion relationship of the color image can be adopted.
  • image registration, ghost elimination, time domain fusion, and the like are performed on 4 frames of color images to obtain 1 frame color image (resolution is 3M).
  • the camera shake model is used to compensate and register the motion of 4 frames of color images.
  • motion compensation and registration can also be performed based on SURF feature point matching and Homography matrix.
  • ghost image cancellation can be performed on 4 frames of color images.
  • Specifically, moving-object detection and removal can be performed between the images, for which reference can be made to existing ghost-elimination algorithms; when a still scene is photographed, ghost elimination may be skipped.
  • Then, temporal noise reduction (time-domain filtering) is performed on the 4 ghost-removed frames of color images, for example by averaging the pixel values of the 4 frames at each pixel position, or by filtering with an infinite impulse response (IIR) digital filter.
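  • A toy illustration of this temporal fusion step (the frame-stack shape and the IIR weight are assumptions chosen for the example):

```python
import numpy as np

# Four registered, ghost-free color frames stacked along a new first axis.
frames = np.stack([np.zeros((1500, 2000, 3), dtype=np.uint8) for _ in range(4)])

# Simple temporal fusion: per-pixel mean of the 4 frames.
fused_mean = frames.astype(np.float32).mean(axis=0).astype(np.uint8)

# Alternative: first-order IIR filtering across the frame sequence.
alpha = 0.5                                   # assumed smoothing factor
acc = frames[0].astype(np.float32)
for f in frames[1:]:
    acc = alpha * f.astype(np.float32) + (1.0 - alpha) * acc
fused_iir = acc.astype(np.uint8)
```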
  • Next, the image fusion (color-mono fusion) of the one frame of black and white image and the one frame of color image may include image preprocessing (pre-processing), image registration (image alignment), image fusion, and the like.
  • Here, image M denotes the one frame of black and white image, and image C denotes the one frame of color image.
  • Image preprocessing includes: (1) brightness registration: since the brightness of image M and image C differ (the brightness of image C is better than that of image M), in order for the final result to reach the brightness level of image C, histogram matching is performed on image M using, for example, the histogram of image C as a reference, to obtain an image M1 whose global brightness corresponds to that of image C; (2) image size conversion: since image M and image C have different resolutions (the resolution of image M is better than that of image C), in order for the final result to reach the resolution level of image M, image C (resolution 3M) is upsampled (for example with a bilinear interpolation algorithm or a cubic interpolation algorithm) to obtain an image C1 whose resolution (12M) is the same as or equivalent to that of image M.
  • The image registration may be performed using a combination of a feature-matching algorithm (SURF-based global registration) and a block-matching local registration algorithm (block-match local registration); using image M1 as the reference image, image C1 is registered to obtain image C2.
  • The image fusion then merges image M1 and image C2: the color information of image C2 and the brightness information of image M1 are fused, and one frame of color image F is output, that is, the final composite image of the scene to be photographed.
  • Finally, single-frame spatial-domain noise reduction (spatial denoise) is performed on the composite image.
  • That is, spatial noise reduction or spatial-domain filtering may be performed on the composite image (that is, image F) to obtain the final output result.
  • the spatial domain filtering can adopt the filtering method such as the industry's classic bilateral filter or non-local mean filter.
  • The algorithm corresponding to the above example may be implemented in software, or the functional module corresponding to each step may be integrated into an ISP chip; this is not limited in the present invention.
  • the above steps correspond to corresponding modules, and may be, for example, an image acquisition module, a multi-frame time domain noise reduction module, a black and white color image fusion module, a spatial domain noise reduction module, and the like.
  • the modules listed herein are merely illustrative, and the invention is not limited to the specific modules illustrated, and various equivalent modifications or changes can be made.
  • Optionally, the terminal may directly acquire one frame of color image and one frame of black and white image; in that case the "multi-frame temporal noise reduction" step may be omitted, and the black and white image and the color image are fused directly.
  • It should be further noted that the photographing method for a terminal in this embodiment of the present invention may be implemented and simulated on a simulation platform, for example, programmed in a VC6.0 (Microsoft Visual C++ 6.0) environment, a Matlab environment, or with the open-source computer vision library (OpenCV); this is not limited in the present invention.
  • FIG. 3 is an effect comparison diagram between the original images captured in a low-light environment (located in the upper and middle portions of FIG. 3) and the composite image obtained with the present application (located in the lower portion of FIG. 3), where the original images include one frame of color image (upper portion of FIG. 3) and one frame of black and white image (middle portion of FIG. 3).
  • It can be seen that the quality of the composite image obtained by using this embodiment of the present invention is obviously improved; the composite image combines the high-frequency information of the black and white image (full-resolution detail information) and the low-frequency information of the color image (brightness information and color information).
  • FIG. 4 is a partial effect comparison diagram to which an example of an embodiment of the present invention is applied.
  • FIG. 4 shows a partially enlarged view of FIG. 3, from which the comparison between the original images (located in the upper and middle portions of FIG. 4) and the composite image obtained after applying the present application (located in the lower portion of FIG. 4) can be seen more clearly.
  • the original image includes a corresponding one-frame color image (located in the upper portion of FIG. 4) and a 1-frame black-and-white image (located in the middle of FIG. 4).
  • Therefore, the photographing method for a terminal in this embodiment of the present invention is of great significance for improving the photographing effect in low-illumination scenes.
  • A photographing method for a terminal according to an embodiment of the present invention is described in detail below with reference to FIG. 5. It should be noted that the photographing method in FIG. 5 may be performed by the terminal described above.
  • the specific process of the method is the same as or corresponding to the processing flow of the processor of the terminal, and the repeated description is omitted as appropriate to avoid repetition.
  • FIG. 5 is a schematic flowchart of a photographing method 500 for a terminal according to an embodiment of the present invention.
  • the terminal includes a black and white camera and a color camera, and the method 500 includes:
  • S510: Obtain K frames of images by simultaneously photographing the same scene to be photographed with the black and white camera and the color camera, where the black and white camera adopts a full-size working mode, the color camera adopts a binning working mode, and K ≥ 1.
  • S520: Acquire a first image corresponding to the black and white camera and a second image corresponding to the color camera.
  • S530: Acquire high-frequency information according to the first image.
  • S540: Acquire low-frequency information according to the second image.
  • S550: Fuse the first image and the second image according to the high-frequency information and the low-frequency information to generate a composite image of the scene to be photographed.
  • step S510 "the K-frame image is obtained by simultaneously capturing the same scene to be photographed by the black-and-white camera and the color camera", which can also be understood as: “through the black-and-white camera and The color camera performs K times of shooting on the same scene to be photographed, and the black and white camera and the color camera simultaneously acquire one frame of image each time.
  • a black and white camera and a color camera can be installed on the terminal, and each time the black and white camera and the color camera simultaneously capture images, the possibility of hand-shake between the cameras or the movement of objects in the scene is reduced, that is, black and white.
  • the camera and the color camera head remain relatively stationary.
  • the black and white camera adopts a full size working mode
  • the color camera adopts a binning mode of operation.
  • The two cameras photograph the scene to be photographed simultaneously, so K frames of images are obtained from each camera; the first image corresponding to the black and white camera is then acquired from the K black and white frames, and the second image corresponding to the color camera is acquired from the K color frames.
  • The high-frequency information (such as full-resolution detail information) and the low-frequency information (such as brightness information and color information) of the scene to be photographed are then extracted, and the first image and the second image are fused according to this information to generate a composite image of the scene to be photographed; the quality of the composite image is better than that of any one of the K frames acquired in the K shots.
  • The photographing method for a terminal in this embodiment of the present invention is particularly suitable for low-illumination or night scenes, and can improve image quality.
  • Optionally, S550 includes: performing histogram matching on the first image by using a histogram of the second image as a reference, to obtain a third image; performing upsampling on the second image to obtain a fourth image whose resolution is the same as or equivalent to that of the first image; registering the fourth image by using the third image as a reference image, to obtain a fifth image; and fusing the fifth image and the third image to generate a composite image of the scene to be photographed.
  • S520 includes:
  • performing time domain noise reduction on the K frame image corresponding to the black and white camera including:
  • Optionally, the method 500 may further include: performing spatial-domain noise reduction on the composite image of the scene to be photographed.
  • a terminal 600 according to an embodiment of the present invention is described below with reference to FIG. 6, which can implement various steps of the foregoing method for photographing a terminal, and is not described herein for brevity.
  • FIG. 6 is a schematic block diagram of a terminal in accordance with one embodiment of the present invention. As shown in FIG. 6, the terminal 600 can include:
  • the black and white camera 610 and the color camera 620 are configured to simultaneously capture the same scene to be captured to obtain a K frame image, wherein the black and white camera adopts a full-size working mode, and the color camera adopts a combined working mode, K ⁇ 1;
  • the acquiring module 630 is configured to acquire a first image corresponding to the black and white camera and a second image corresponding to the color camera;
  • the merging module 640 is configured to fuse the first image and the second image according to the high frequency information and the low frequency information acquired by the acquiring module 630 to generate a composite image of the to-be-shot scene.
  • the fusion module 640 is specifically configured to:
  • the fifth image and the third image are fused to generate a composite image of the scene to be shot.
  • the acquiring module 630 is specifically configured to:
  • the time domain noise reduction is performed on the K frame image corresponding to the black and white camera, including:
  • the terminal 600 may further include:
  • noise reduction module configured to perform spatial domain noise reduction on the composite image of the scene to be shot.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • For example, the division into units is merely a logical function division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
  • When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solutions of the present application that essentially contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide a method for photographing with a terminal, and a terminal. The terminal includes a black-and-white camera and a color camera. The method includes: simultaneously photographing the same to-be-photographed scene by using the black-and-white camera and the color camera to respectively obtain K frames of images, where the black-and-white camera uses a full-size working mode, the color camera uses a binning working mode, and K ≥ 1; acquiring a first image corresponding to the black-and-white camera and a second image corresponding to the color camera; acquiring high frequency information according to the first image; acquiring low frequency information according to the second image; and fusing the first image and the second image according to the high frequency information and the low frequency information to generate a composite image of the to-be-photographed scene. The method and terminal for photographing of the embodiments of the present invention can improve image quality.

Description

用于终端拍照的方法及终端 技术领域
本发明实施例涉及图像处理领域,并且更具体地,涉及一种用于终端拍照的方法及终端。
背景技术
受到终端体积和成本的限制,终端摄像头的镜头和传感器的面积均比较小,拍摄图像的质量较差。在夜景或低照度环境下,由于光线较弱,拍摄图像的质量会更差。
为了提升终端的拍摄效果,目前终端使用合并binning技术以提升终端在夜景或低照度环境下的拍照亮度,以改善图像质量。即将相邻的像素合并成一个像素使用,提高暗光环境下的光敏感度。但是,这种单一的合并技术会损失图像的高频细节,严重降低了图像的质量。
发明内容
本发明实施例提供一种用于终端拍照的方法及终端,能够提高图像的质量。
第一方面,提供了一种终端,包括:
黑白摄像头和彩色摄像头,用于对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
处理器,与所述黑白摄像头和所述彩色摄像头相连,用于获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
还用于根据所述第一图像获取高频信息;
还用于根据所述第二图像获取低频信息;
还用于根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
本发明实施例中,在终端上可以安装多个摄像头,包括黑白摄像头和彩色摄像头。每次拍摄时,黑白摄像头和彩色摄像头同时采集图像,使得摄像头之间发生手持抖动或者场景内物体运动的可能性降低,即黑白摄像头和彩 色摄像头头之间保持相对静止。在图像采集过程中,黑白摄像头采用全尺寸(full size)工作模式,彩色摄像头采用合并(binning)工作模式。终端的处理器可以获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像,然后提取出待拍摄场景的高频信息(比如全分辨率的细节信息)和低频信息(比如亮度信息和颜色信息),从而根据这些信息对第一图像和第二图像进行融合,生成待拍摄场景的合成图像,从而得到还原度较高的图像,该合成图像的质量优于之前拍摄的任一帧图像。本发明实施例的用于终端拍照的方法尤其适用于低照度或夜景,能够提高图像的质量。
在本发明实施例中,黑白摄像头(mono camera)对应的第一图像为黑白图像;彩色摄像头(color camera)对应的第二图像为彩色图像。其中,黑白摄像头(也可称作单色摄像头)由于透光率高,采用全像素传感器(sensor),具有解析力高、噪声低等优势,因此可以采用黑白摄像头获取待拍摄场景的全分辨率的细节信息(即高频信息);彩色摄像头可以获取待拍摄场景的亮度信息和颜色信息,比如彩色摄像头可以输出贝尔Bayer图像格式的未经处理raw数据,处理器可以通过反马赛克demosaic算法或其他图像处理算法解析出图像的颜色信息。这里,第一图像的分辨率优于第二图像。
在本发明实施例中,黑白摄像头和彩色摄像头相互独立,处于同一平面,其对应的光轴是平行的。比如,黑白摄像头和彩色摄像头在硬件设计时可以是并排设置的。
本发明实施例对K的数值不作具体限定。K帧图像可以对应K次拍摄,例如,K可以是1、2、3、4等。K的选取可以考虑连拍图像的数目和拍摄这些图像所需时间间隔的关系。换言之,终端可以拍摄一次,通过黑白摄像头和彩色摄像头各采集一帧图像(比如第一图像和第二图像);或者终端可以拍摄多次,通过黑白摄像头和彩色摄像头各采集多帧图像,然后分别对多帧图像进行多帧时域降噪(可采用多帧时域降噪方法,下文将进行描述)处理,得到黑白摄像头对应的一帧图像(比如第一图像)和彩色摄像头对应的一帧图像(比如第二图像)。
在一些可能的实现方式中,在对第一图像和第二图像进行融合时,处理器的具体处理过程可以包括:
基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
终端可以综合高频信息和低频信息,对黑白摄像头获取的第一图像和彩色摄像头获取的第二图像进行处理,以得到所述待拍摄场景的合成图像,提高了拍摄图像的质量。
在一些可能的实现方式中,当K≥2时,处理器可以对多帧图像进行时域降噪处理,以得到所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像,具体可以包括:
获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像;
对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
可选地,对所述黑白摄像头对应的K帧图像进行时域降噪,包括:
采用所述彩色摄像头对应的全局运动关系对所述黑白摄像头对应的K帧图像进行全局图像配准操作。
这里,在对黑白摄像头的K帧图像进行时域降噪时,处理器可以直接采用彩色摄像头对应的全局运动关系,对黑白摄像头的K帧图像进行全局图像配准操作,避免了再次计算黑白摄像头的全局运动关系,节省了计算量,能够提升处理速度。
在一些可能的实现方式中,处理器还可以对所述待拍摄场景的合成图像进行空域降噪。
在本发明实施例中,处理器通过对所述待拍摄场景的合成图像进行空域降噪,能够进一步地降低噪声,从而得到更为清晰的图像。
第二方面,提供了一种用于终端拍照的方法,所述终端包括黑白摄像头和彩色摄像头,该方法包括:
通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
根据所述第一图像获取高频信息;
根据所述第二图像获取低频信息;
根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
在本发明实施例中,终端上安装黑白摄像头(黑白摄像头)和彩色摄像头(彩色摄像头),每次拍摄时,黑白摄像头和彩色摄像头同时采集图像,使得摄像头之间发生手持抖动或者场景内物体运动的可能性降低,即黑白摄像头和彩色摄像头头之间保持相对静止。在图像采集过程中,黑白摄像头采用全尺寸(full size)工作模式,彩色摄像头采用合并(binning)工作模式。终端根据所述K次拍摄获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像,然后提取出待拍摄场景的高频信息(比如全分辨率的细节信息)和低频信息(比如亮度信息和颜色信息),从而根据这些信息对第一图像和第二图像进行融合,生成待拍摄场景的合成图像,该合成图像的质量优于所述K次拍摄中的任一帧图像。本发明实施例的用于终端拍照的方法尤其适用于低照度或夜景,能够提高图像的质量。
或者,在本发明实施例中,步骤“通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景同时拍摄分别得到K帧图像”,也可以理解为:“通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景进行K次拍摄,且所述黑白摄像头和所述彩色摄像头在每次拍摄时各同时采集一帧图像”。
在一些可能的实现方式中,根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像,包括:
基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
在一些可能的实现方式中,当K≥2时,获取所述黑白摄像头对应的第 一图像和所述彩色摄像头对应的第二图像,包括:
获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像;
对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
可选地,对所述黑白摄像头对应的K帧图像进行时域降噪,包括:
采用所述彩色摄像头对应的全局运动关系对所述黑白摄像头对应的K帧图像进行全局图像配准操作。
在一些可能的实现方式中,该方法还可以包括:
对所述待拍摄场景的合成图像进行空域降噪。
第三方面,提供了一种用于终端拍照的方法,所述终端包括红外摄像头和彩色摄像头,该方法与前面第二方面的用于终端拍照的方法类似,其中,红外摄像头可以执行黑白摄像头的相应方案,彩色摄像头的相关方案与第二方面中的相同。为了简洁,这里不作重复介绍。换言之,第二方面的黑白摄像头可以替换为红外摄像头。
第四方面,提供了一种终端,包括:用于执行上述第二方面或第二方面的任意可能的实现方式中的方法。具体地,该装置包括用于执行上述第二方面或第二方面的任意可能的实现方式中的方法的单元。
第五方面,提供了一种终端,包括:用于执行上述第三方面或第三方面的任意可能的实现方式中的方法。具体地,该装置包括用于执行上述第三方面或第三方面的任意可能的实现方式中的方法的单元。
第六方面,提供了一种计算机可读存储介质,该计算机可读存储介质存储有程序,该程序使得终端执行上述第二方面,及其各种实现方式中的任一种用于终端拍照的方法。
第七方面,提供了一种计算机可读存储介质,该计算机可读存储介质存储有程序,该程序使得终端执行上述第三方面,及其各种实现方式中的任一种用于终端拍照的方法。
附图说明
图1A是合并模式的一个例子的示意图。
图1B是贝尔彩色滤波阵列的结构图。
图1C是与本发明实施例相关的手机的部分结构的框图。
图1D是本发明一个实施例的终端的示意性框图。
图2是本发明一个实施例的用于终端拍照的方法的示意性流程图。
图3是应用本发明实施例的一个例子的效果对比图。
图4是应用本发明实施例的一个例子的局部效果对比图。
图5是本发明一个实施例的用于终端拍照的方法的示意性流程图。
图6是本发明一个实施例的终端的示意性框图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行描述。
本发明实施例的技术方案可以应用于终端(Terminal),终端可以是但不限于移动台(Mobile Station,MS)、移动终端(Mobile Terminal)、移动电话(Mobile Telephone)、手机(handset)及便携设备(portable equipment)等,可以经无线接入网(例如,Radio Access Network,RAN)与一个或多个核心网进行通信,终端可以是移动终端,如移动电话(或称为“蜂窝”电话)和具有移动终端的计算机,例如,可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动终端,它们与无线接入网交换语言和/或数据。或者,该终端也可以是各类带有触摸屏的产品,例如平板电脑,触屏手机,触屏设备,手机终端等,对此不作限制。进一步地,该终端可以是具有拍照功能的设备,比如带有摄像头的手机、平板电脑或其他具有拍照功能的设备。
这里,对本发明实施例涉及的一些相关概念或术语进行描述。
合并binning模式是一种图像读出方式,即将图像传感器相邻像元中感应的电荷加在一起,以一个像元的模式读出。其中,binning模式可以对垂直方向和/或水平方向的电荷处理,垂直方向是将相邻列的电荷加在一起读出,水平方向是将相邻行的像元感应电荷加在一起读出。换言之,binning模式是将图像的相邻的N个像素的值叠加,作为一个像素值输出。例如,图1A示出了binning模式的一个例子的示意图:图1A中左图的每个像素格代表一个像素,binning模式即将图1A中左图的4个像素联合起来作为一个像素使用。应理解,这里只是以4个像素为例进行说明,并不对本发明构成限定。
采用binning模式拍摄能够增大感光面积,提高了暗光环境下光敏感度。因此binning模式特别适用于低照度拍摄环境。换言之,binning模式不会降低参与成像的像元数量,因此不会降低感光度。
另外,binning模式会降低图像的分辨率,也就是说binning模式是以牺牲图像分辨率为代价来提高其光灵敏度和输出速率的。
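For illustration only, a minimal NumPy sketch of 2×2 binning on a single-channel sensor readout (the function name and the fixed 2×2 block size are assumptions; the description above speaks more generally of merging N adjacent pixels):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of sensor values into one output pixel.

    raw: single-channel sensor readout with even height and width.
    The result has half the width and height and roughly 4x the signal
    per pixel, which is why binning helps in low light but costs detail.
    """
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))

# Toy usage: a 4x4 readout becomes a 2x2 binned image.
readout = np.arange(16, dtype=np.uint16).reshape(4, 4)
print(bin_2x2(readout))
```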
目前,大多数相机采用贝尔Bayer模板图像传感器,对应地,摄像头获取的图像为Bayer格式图像。图1B示出了Bayer彩色滤波阵列的结构图。每个像素位置处允许红色(R)、绿色(G)、蓝色(B)中一种颜色的光透过。
为了真实再现拍摄场景,在每个像素位置需采集三个颜色通道的颜色值。彩色摄像头通过彩色滤波阵列(Color Filter Array,CFA)获得图像的彩色信息,即彩色滤波阵在每个像素的位置处只允许R、G、B一种颜色的光分量透过相机(比如通过滤光片实现),这样彩色摄像头采集的每个图像像素只有一个颜色分量。因此要显示RGB三颜色分量完全的彩色图像,需通过区域估算来获取另外两种颜色的分量,这样的过程称为CFA插值。另外,目前常见的插值算法包括双线性插值算法或边缘检测算法或反马赛克demosaic算法,通过这些插值算法可以对图像进行彩色复原。
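As a hedged sketch of the CFA-interpolation step, the snippet below hands a raw Bayer mosaic to OpenCV's built-in demosaicing; the RGGB layout and the random test data are assumptions, and the edge-aware variant stands in for the edge-detection interpolation mentioned above:

```python
import cv2
import numpy as np

# Assumed 8-bit RGGB Bayer mosaic; each pixel holds only one color sample.
raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Bilinear demosaicing: interpolate the two missing components per pixel.
bgr_bilinear = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)

# Edge-aware demosaicing, closer to the edge-detection interpolation idea.
bgr_edge_aware = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR_EA)
```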
另外,由于黑白摄像头不需要通过滤光片对颜色进行过滤,因此,黑白摄像头得到的黑白图像的分辨率优于彩色摄像头得到的彩色图像。
下面以终端为手机为例,图1C示出了与本发明实施例相关的手机100的部分结构的框图。如图1C所示,手机100可以包括:射频(Radio Frequency,RF)电路110、电源120、处理器130、存储器140、输入单元150、显示单元160、传感器170、音频电路180、以及无线保真(Wireless Fidelity,WiFi)模块190等部件。应理解,图1中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图1C对手机100的各个构成部件进行具体的介绍:
RF电路110可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器130处理;另外,将设计上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此 外,RF电路110还可以通过无线通信与网络和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯***(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器140可用于存储软件程序以及模块,处理器130通过运行存储在存储器140的软件程序以及模块,从而执行手机100的各种功能应用以及数据处理。存储器140可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能、图象播放功能等)等;存储数据区可存储根据手机100的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器140可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元150可用于接收输入的数字或字符信息,以及产生与手机100的用户设置以及功能控制有关的键信号输入。具体地,输入单元150可包括触控面板151以及其他输入设备152。触控面板151,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板151上或在触控面板151附近的操作),并根据预先设定的程式驱动相应的连接装置。可选地,触控面板151可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器130,并能接收处理器130发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板151。除了触控面板151,输入单元150还可以包括其他输入设备152。具体地,其他输入设备152可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元160可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单。显示单元160可包括显示面板161,可选地,可以采用 LCD、OLED等形式来配置显示面板161。进一步地,触控面板151可覆盖显示面板161,当触控面板151检测到在其上或附近的触摸操作后,传送给处理器130以确定触摸事件的类型,随后处理器130根据触摸事件的类型在显示面板161上提供相应的视觉输出。虽然在图1中,触控面板151与显示面板151是作为两个独立的部件来实现手机100的输入和输入功能,但是在某些实施例中,可以将触控面板151与显示面板161集成而实现手机100的输入和输出功能。
手机100还可包括至少一种传感器170,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板161的亮度,接近传感器可在手机100移动到耳边时,关闭显示面板161和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机100还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路180、扬声器181,麦克风182可提供用户与手机100之间的音频接口。音频电路180可将接收到的音频数据转换后的电信号,传输到扬声器181,由扬声器181转换为声音信号输出;另一方面,麦克风182将收集的声音信号转换为电信号,由音频电路180接收后转换为音频数据,再将音频数据输出至RF电路110以发送给比如另一手机,或者将音频数据输出至存储器140以便进一步处理。
WiFi属于短距离无线传输技术,手机100通过WiFi模块190可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图1C示出了WiFi模块190,但是可以理解的是,其并不属于手机100的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器130是手机100的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器140内的软件程序和/或模块,以及调用存储在存储器140内的数据,执行手机100的各种功能和处理数据,从而实现基于手机的多种业务。可选地,处理器130可包括一个或多个处理 单元;优选的,处理器130可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作***、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器130中。
手机100还包括给各个部件供电的电源120(比如电池),优选的,电源可以通过电源管理***与处理器130逻辑相连,从而通过电源管理***实现管理充电、放电、以及功耗等功能。
尽管未示出,手机100还可以包括摄像头、蓝牙模块等。
图1D是本发明一个实施例的终端的示意性框图。该终端10包括摄像头11(比如该摄像头11是黑白摄像头)和摄像头12(比如该摄像头12是彩色摄像头),以及处理器13(比如图1C中的处理器130)。可选地,该终端10还可以包括图1C中手机100示出的部分或全部结构,这里不再赘述结构的功能。下面将以摄像头11是黑白摄像头、摄像头12是彩色摄像头为例进行说明。
其中,黑白摄像头11和彩色摄像头12,用于对相同的待拍摄场景同时拍摄分别得到K帧图像,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
处理器13,与所述黑白摄像头11和所述彩色摄像头12相连,用于获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
还用于根据所述第一图像获取高频信息;
还用于根据所述第二图像获取低频信息;
还用于根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
本发明实施例中,在终端上可以安装多个摄像头,包括黑白摄像头和彩色摄像头。每次拍摄时,黑白摄像头和彩色摄像头同时采集图像,使得摄像头之间发生手持抖动或者场景内物体运动的可能性降低,即黑白摄像头和彩色摄像头头之间保持相对静止。
在图像采集过程中,黑白摄像头采用全尺寸(full size)工作模式,以得到待拍摄场景的高频信息;彩色摄像头采用合并(binning)工作模式,将图像相邻的N个像素(pixel)合并为一个像素进行拍摄,灵敏度较高,以得到待拍摄场景的低频信息。
终端的处理器13可以根据K次拍摄分别得到K帧图像,然后根据黑白摄像头对应的K帧黑白图像获取所述黑白摄像头对应的第一图像,和根据彩色摄像头对应的K帧彩色图像获取所述彩色摄像头对应的第二图像,并提取出待拍摄场景的高频信息(比如全分辨率的细节信息,即图像的细节)和低频信息(比如亮度信息和颜色信息),继而根据这些信息对第一图像和第二图像进行融合,生成待拍摄场景的合成图像,从而得到还原度较高的图像,该合成图像的质量优于所述K次拍摄中的任一帧图像。本发明实施例的用于终端拍照的方法尤其适用于低照度或夜景,能够提高图像的质量。
为了更加凸显本申请的技术效果,下面将本申请的方案与现有技术中的一些技术方案进行比较。目前,为了提升低照度或夜景的拍照质量,主要存在以下方案:
其一,通过对图像进行亮度增强处理,以提升低照度或夜景的拍照质量,比如基于图像直方图分布的亮度增强方法,基于深度学习的亮度增强方法。由于低照度下图像的信噪比非常低,这类单一的亮度增强方法会带来很大的噪声和偏色等缺点。而本申请的方法却不会带来噪声和/或色偏,采用本申请方法得到的图像,其图片细节信息、亮度信息、颜色信息均得到改善。
其二,通过增大拍摄时的数字增益值,以提升低照度或夜景的拍照质量,比如增大国际标准化组织(International Organization for Standardization,ISO)感光度或曝光值。由于低照度下拍摄信号很弱,电路的暗电流(Dark Current)占主要成分,摄像机的背光补偿(Back Light Compesation,BLC)不是很准确,容易发生漂移,并且摄像头的自动白平衡(Automatic White Balance,AWB)的精度也不高,通常会出现偏***,在图像的四角处尤其严重,损失了图像的颜色信息,即通过增大数字增益值的方法获得的图像质量并不好。而本申请的拍照方法不会带来这些问题,采用本申请方法得到的图像,其图片细节信息、亮度信息、颜色信息均得到改善。
其三,通过增大曝光时间,以提升低照度或夜景的拍照质量。由于增加曝光时间会引入运动模糊问题,对于随时随地拍照的用户来说非常不方便。而本申请的方案中可以便于用户随时随地进行拍照,且得到的拍摄图像,其图片细节信息、亮度信息、颜色信息均得到改善。
其四,单一的binning技术会损失图像的高频信息,即图像细节信息。而本申请的拍照方法能够获取图像的高频信息和低频信息,即最终得到的拍 摄图像,其图片细节信息、亮度信息、颜色信息均得到改善。
综上所述,相对于现有的方案,本申请的用于终端拍照的方法,能够得到质量更佳的图像。
在本发明实施例中,黑白摄像头(mono camera)对应的第一图像为黑白图像;彩色摄像头(color camera)对应的第二图像为彩色图像。其中,黑白摄像头(也可称作单色摄像头)由于透光率高,采用全像素传感器(sensor),具有解析力高、噪声低等优势,因此可以采用黑白摄像头获取待拍摄场景的全分辨率的细节信息(即高频信息);彩色摄像头可以获取待拍摄场景的亮度信息和颜色信息,比如彩色摄像头可以输出Bayer格式的未经处理raw数据,处理器13可以通过反马赛克demosaic算法或其他图像处理算法解析出图像的颜色信息。这里,第一图像的分辨率优于第二图像。
可选地,在一些场景下,所述黑白摄像头也可以替换为单色摄像头,比如,红外线摄像头,以便于对黑暗环境或夜间场景进行拍摄。换言之,本申请的黑白摄像头可以替换为红外摄像头,执行相应的操作,为了避免重复,这里不作赘述。
在本发明实施例中,黑白摄像头和彩色摄像头之间相互独立,处于同一平面,其对应的光轴是平行的。比如,黑白摄像头和彩色摄像头在硬件设计时可以是并排设置的。
应理解,本发明实施例对K的数值不作具体限定。K帧图像可以是通过K次拍摄得到的,例如,K可以是1、2、3、4等。K的选取可以考虑连拍图像的数目和拍摄这些图像所需时间间隔的关系。换言之,终端可以拍摄一次,通过黑白摄像头和彩色摄像头各采集一帧图像(比如第一图像和第二图像);或者终端可以拍摄多次,通过黑白摄像头和彩色摄像头各采集多帧图像,然后分别对多帧图像进行多帧时域降噪(可采用多帧时域降噪方法,下文将进行描述)处理,得到黑白摄像头对应的一帧图像(比如第一图像)和彩色摄像头对应的一帧图像(比如第二图像)。
实际中,考虑到拍照速度和拍摄效果的综合性能,比如,K可以设置为4,对于第一图像取对应的4帧黑白全尺寸mono full size图像(比如,分辨率为12M),对于第二图像取对应的4帧彩色合并color binning图像(比如分辨率为3M)。
还应理解,本发明实施例对摄像头的数值不作具体限定。例如,除了黑 白摄像头和彩色摄像头外,还可以有更多的摄像头。具体地,摄像头数目的选取可考虑摄像头数目与终端成本的关系,摄像头数目越多,拍摄多帧图像的时间间隔越短,相应终端成本越高。
在本发明实施例中,处理器13可以是图像信号处理(Image Signal Processing,ISP)装置,也可以是中央处理单元(Central Processing Unit,CPU),还可以既包括ISP的装置,也包括CPU,即上述处理器的功能由ISP装置和CPU共同完成。处理器13可以控制摄像头进行图像采集。
因此,终端通过黑白摄像头和彩色摄像头对相同的待拍摄场景同时拍摄分别得到K帧图像,黑白摄像头采用全尺寸工作模式,彩色摄像头采用合并工作模式,获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像,然后根据所述第一图像获取高频信息,根据所述第二图像获取低频信息,最后根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像,能够提高图像的质量。
可选地,作为一个实施例,在对第一图像和第二图像进行融合时,处理器13的具体处理过程可以包括:
基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
举例来说,以第一图像为黑白图像,第二图像为彩色图像进行说明:
首先,处理器13基于所述低频信息(主要是亮度信息),以第二图像(color binning图像)的直方图作为参考,对第一图像(mono full size图像)进行直方图匹配,得到在全局亮度上与第二图像相同或相当的第三图像。
这里,第二图像(即彩色图像)是彩色摄像头在binning模式下得到的。由于binning模式不会降低参与成像的像元数量,因此不会降低感光度。所以,这里对第一图像(即黑白图像)进行亮度处理(具体即增大黑白图像的亮度),以期得到与彩色图像的亮度水平相同或相当的图像。
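A minimal sketch of this brightness step, assuming 8-bit single-channel inputs: histogram matching remaps the mono image's gray levels so that its global brightness approaches that of the (brighter) binned color image used as the reference:

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source gray levels so its histogram resembles the reference's.

    Both inputs are uint8 single-channel images; the mono full-size image
    is the source and the luminance of the binned color image is the
    reference, so the result keeps mono detail at the color image's
    global brightness level.
    """
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source level, pick the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]
```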
然后,处理器13基于所述高频信息,对所述第二图像进行上采样处理, 以得到与第一图像的分辨率相同或相当的第四图像,其中,上采样方法不作具体限定。可选地,上采样方法可以采用现有的双线性插值法或三次立方插值法等采样算法。
随后,处理器13以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像。可选地,可以采用基于特征点匹配的全局配准算法(Speed Up Robust Feature,SURF)(比如,根据第三图像的特征点和第四图像的特征点进行匹配点对,结合最小二乘法或其他方法求出第三图像和第四图像的仿射变换关系,对第四图像进行变换,得到配准后的图像)和基于块匹配(Block Match,BM)的局部配准算法相结合的方式对所述第四图像进行配准(比如,根据第三图像的子块和第四图像的子块进行块匹配,对第四图像进行变换,得到配准后的图像)。比如,以第三图像为参考图像,对第四图像进行变换,得到与第三图像相匹配的图像,即第五图像。
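A sketch of the global-registration step. The description names a SURF-based global registration combined with block-matching local refinement; SURF sits in OpenCV's non-free contrib module, so the sketch below substitutes ORB features (an assumption) and omits the local block-matching refinement:

```python
import cv2
import numpy as np

def register_to_reference(moving_bgr, reference_gray):
    """Estimate a homography from feature matches and warp `moving_bgr`
    (the upsampled color image) onto the mono reference image."""
    moving_gray = cv2.cvtColor(moving_bgr, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_m, des_m = orb.detectAndCompute(moving_gray, None)
    kp_r, des_r = orb.detectAndCompute(reference_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_r), key=lambda m: m.distance)[:500]

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = reference_gray.shape[:2]
    return cv2.warpPerspective(moving_bgr, H, (w, h))
```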
最后,处理器13对第五图像和第三图像进行融合,具体即将第五图像的颜色信息和第三图像的亮度信息组合在一起,输出一帧新的彩色图像,即所述待拍摄场景的合成图像。
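A sketch of the final combination, assuming the registered color image and the brightness-processed mono image are the same size: the color image's chroma is kept and its luminance plane is replaced by the detailed mono plane (a YCrCb split is one simple way to realize the combination just described):

```python
import cv2

def fuse_luma_chroma(mono_luma, color_bgr):
    """Keep chroma from the color image, take luminance from the mono image."""
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = mono_luma  # detailed, brightness-matched mono plane
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```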
因此,终端可以综合待拍摄场景的高频信息和低频信息,对黑白摄像头获取的第一图像和彩色摄像头获取的第二图像进行处理,以得到所述待拍摄场景的合成图像,提高了拍摄图像的质量。
应理解,在本发明实施例中,引入编号“第一”、“第二”…仅是为了区分不同的对象,比如区分不同的图像,或区分不同的摄像头,并不对本发明构成限制。
可选地,作为一个实施例,当K≥2时,处理器13可以对多帧图像进行时域降噪处理,以得到所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像,具体可以包括:
根据所述K次拍摄获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像;
对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
具体而言,当拍摄次数为多次时,终端可以采集到多帧图像,比如黑白摄像头对应的多帧图像,彩色摄像头对应的多帧图像。这时,处理器13可以对采集到的多帧图像分别进行时域降噪处理,以便于得到第一图像和第二 图像,从而进行后续操作。这里,时域降噪的算法可以参照现有技术的操作,包括:全局图像配准、局部鬼影检测和时域融合。
可选地,对所述黑白摄像头对应的K帧图像进行时域降噪,包括:
采用所述彩色摄像头对应的全局运动关系对所述黑白摄像头对应的K帧图像进行全局图像配准操作。
具体而言,在对黑白摄像头的K帧图像进行时域降噪时,处理器13可以直接采用彩色摄像头对应的全局运动关系,对黑白摄像头的K帧图像进行全局图像配准操作,避免了再次计算黑白摄像头的全局运动关系,节省了计算量,能够提升处理速度。
这里,由于计算全局运动关系时涉及的运算量比较大,比如计算单应性矩阵(homography)或其他用于描述摄像头运动关系的矩阵,而黑白摄像头与彩色摄像头是同步进行图像采集的,其之间的相对运动关系是相同的。因此在对黑白摄像头进行图像配准时,处理器13直接使用彩色摄像头的全局运动关系提高了处理速度。
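A small sketch of this reuse; the `homographies` list is assumed to come from the registration already performed on the color stream, with None for the reference frame:

```python
import cv2

def align_mono_with_color_motion(mono_frames, homographies, ref_index=0):
    """Warp each mono frame with the motion estimated on the color stream.

    Because each mono/color frame pair is captured simultaneously, the
    inter-frame motion is shared and does not need to be recomputed.
    """
    h, w = mono_frames[ref_index].shape[:2]
    aligned = []
    for frame, H in zip(mono_frames, homographies):
        aligned.append(frame if H is None else cv2.warpPerspective(frame, H, (w, h)))
    return aligned
```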
可选地,作为一个实施例,处理器13还可以对所述待拍摄场景的合成图像进行空域降噪。
在本发明实施例中,处理器13通过对所述待拍摄场景的合成图像进行空域降噪,能够进一步地降低噪声,从而得到更为清晰的图像。这里,“空域降噪”也可以理解为“空域滤波”,可采用现有技术的方法对合成图像进行空域降噪,比如非局部均值(Non-local means)算法或业界经典的双边滤波器等其他去噪算法。
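For illustration, the spatial-domain pass with the two filters named above (parameter values are only placeholders, and the zero image stands in for the real composite):

```python
import cv2
import numpy as np

composite = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for the fused image

# Non-local means denoising of a color image.
denoised_nlm = cv2.fastNlMeansDenoisingColored(composite, None, 10, 10, 7, 21)

# Classic edge-preserving bilateral filter.
denoised_bilateral = cv2.bilateralFilter(composite, 9, 75, 75)
```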
下面结合具体例子,更加详细地描述本发明实施例的用于终端拍照的方法。应注意,图2的例子仅仅是为了帮助本领域技术人员理解本发明实施例,而非要将本发明实施例限于所例示的具体数值或具体场景。
图2是本发明一个实施例的用于终端拍照的方法的示意性流程图。终端10在同一平面安装一个黑白摄像头(对应摄像头11)及一个彩色摄像头(对应摄像头12),共进行连续的4次拍摄,得到4帧黑白图像及4帧彩色图像。其中,黑白摄像头采用全尺寸工作模式;彩色摄像头采用合并工作模式。
首先,获取4帧黑白图像(比如,分辨率为12M)及4帧彩色图像(比如,分辨率为3M,对应黑白图像分辨率的1/4)。
然后,对4帧黑白图像及4帧彩色图像分别进行多帧时域降噪 (Multi-frame temporal denoise)。具体可以包括:图像配准(Image registration)操作、局部鬼影检测(Ghost detection)操作和多帧时域融合(Multi-frame temporal fusion)操作。其中,在对黑白图像进行图像配准操作时,可以采用彩色图像的全局运动关系。
这里,以4帧彩色图像为例(也适用于4帧黑白图像),对4帧彩色图像进行图像配准、鬼影消除、时域融合等处理,得到1帧彩色图像(分辨率为3M)。例如,先采用相机抖动模型对4帧彩色图像运动补偿和配准。或者,也可以基于SURF特征点匹配与Homography矩阵进行运动补偿和配准。然后,当拍摄运动场景时,可以对4帧彩色图像进行鬼影消除,例如,可以在图像之间进行运动物体检测与移除,具体可以参考现有的鬼影消除算法;当拍摄静止场景时,可以不进行鬼影消除。可选地,对鬼影消除后的4帧彩色图像进行时域降噪或空域滤波,例如,可以将4帧彩色图像每个像素点的像素值取平均,或者采用无脉冲响应数字滤波器(Infinite Impulse Response,IIR)进行滤波。最后,将4帧彩色图像进行时域融合得到1帧彩色图像。
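A minimal sketch of the temporal-fusion step for the four registered, ghost-free frames; plain averaging and an IIR-style running average (the two options mentioned above) are shown, with an illustrative alpha value:

```python
import numpy as np

def temporal_average(frames):
    """Mean of K registered frames; temporal noise drops roughly as 1/K."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

def iir_average(frames, alpha=0.25):
    """Running (IIR-style) average: acc = (1 - alpha) * acc + alpha * frame."""
    acc = frames[0].astype(np.float32)
    for f in frames[1:]:
        acc = (1.0 - alpha) * acc + alpha * f.astype(np.float32)
    return np.clip(acc, 0, 255).astype(np.uint8)
```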
其次,对1帧黑白图像和1帧彩色图像进行图像融合(Color-Mono Fusion),可以包括:图像预处理(Pre-processing)、图像配准(Image Alignment)、图像融合(Image Fusion)等操作。
为了便于描述图像融合(下面步骤a-c)的过程,下面将1帧黑白图像记作“图像M”,将1帧彩色图像记作“图像C”。
a.图像预处理包括:(1)亮度配准,由于图像M与图像C的亮度不同(图像C的亮度优于图像M),为了使得最终的结果图达到图像C的亮度水平,比如,以图像C的直方图作为参考,对图像M进行直方图匹配,从而得到在全局亮度上与图像C相当的图像M1;(2)图像大小转换,由于图像M与图像C的分辨率不同(图像M的分辨率优于图像C),为了使得最终的结果图达到图像M的分辨率水平,比如,对图像C(分辨率为3M)进行采样处理(比如双线性插值算法或三次立方插值算法等),从而得到与图像M的分辨率相同或相当的图像C1(分辨率为12M)。
b.对图像C1和图像M1进行图像配准,包括:以图像M1作为参考图像,对图像C1进行变换,得到与图像M1相匹配的图像C2。可选地,可以采用基于特征点匹配的全局配准算法(SUBF global registration)和基于块匹配的局部匹配算法(Block Match local registration)相结合的形式进行图像配 准。
c.对图像M1和图像C2进行融合,包括:将图像C2的颜色信息和图像M1的亮度信息进行融合,输出一帧彩色图像F,即待拍摄场景最终的合成图像。
最后,对合成图像进行空域降噪(single frame spatial denoise)。
这里,为了进一步降低噪声,可以对合成图像(即图像F)进行空域降噪或空域滤波,以得到最终的输出结果。空域滤波可以采用业界经典的双边滤波器或者非局部均值滤波器等滤波方式。
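To tie the steps of this example together, a composed sketch is given below. It strings together the helper sketches introduced earlier in this description (temporal_average, align_mono_with_color_motion, match_histogram, register_to_reference, fuse_luma_chroma), which are assumed to be in scope; the frame lists and homographies are illustrative inputs rather than an actual interface of the terminal, and the color frames are assumed to be already registered to their reference frame:

```python
import cv2

def fuse_capture(mono_frames, color_frames, homographies):
    """End-to-end sketch: K mono + K color frames -> one fused image."""
    # 1. Multi-frame temporal denoising of each stream.
    color_ref = temporal_average(color_frames)                     # second image
    mono_ref = temporal_average(
        align_mono_with_color_motion(mono_frames, homographies))   # first image

    # 2. Brightness registration: lift mono brightness to the color level.
    color_luma = cv2.cvtColor(color_ref, cv2.COLOR_BGR2GRAY)
    mono_matched = match_histogram(mono_ref, color_luma)           # third image

    # 3. Upsample the binned color image to the mono resolution.
    h, w = mono_ref.shape[:2]
    color_up = cv2.resize(color_ref, (w, h), interpolation=cv2.INTER_CUBIC)  # fourth image

    # 4. Register the upsampled color image to the mono reference, then fuse.
    color_reg = register_to_reference(color_up, mono_matched)      # fifth image
    composite = fuse_luma_chroma(mono_matched, color_reg)

    # 5. Final spatial denoise.
    return cv2.fastNlMeansDenoisingColored(composite, None, 10, 10, 7, 21)
```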
应理解,上述例子对应的算法在具体实现时,可以采用软件方式实现(比如),也可以将各个步骤对应的功能模块集成在ISP芯片中,本发明对此不作限定。其中,上述各个步骤对应相应的模块,可以比如:图像采集模块,多帧时域降噪模块,黑白彩色图像融合模块,空域降噪模块等。当然,这里列举出的模块只是示例性进行说明,并非限定本申请只能限于所例示的具体模块,显然可以进行各种等价修改或变化。
可选地,为了提高计算速度,在上述例子中,终端也可以直接获取1帧彩色图像和1帧黑白图像,此时可以忽略上述“多帧时域降噪”的步骤,直接进行黑白彩色图像融合。
本发明实施例的用于终端拍照的方法,其对应的算法可以通过仿真平台实现,例如,可以在VC6.0(即Microsoft Visual C++6.0)环境、Matlab环境下或开源计算机视觉库(Open Source Computer Vision Library,open cv)进行编程仿真实现算法,本发明对此不作限制。
下面将结合图3和图4描述本例中用于终端拍照的方法的仿真结果。应注意,这只是为了帮助本领域技术人员更好地理解本发明实施例,而非限制本发明实施例的范围。
图3是采集到的低照环境下的原始图像(位于图3的上部和中部)与采用本申请方案得到的合成图像(位于图3的下部)的效果对比图,其中,原始图像包括1帧彩色图像(位于图3的上部)和1帧黑白图像(位于图3的中部)。从图3可以看出,采用本发明实施例得到的合成图像质量显然得到了很大的提高,该合成图像综合了黑白图像的高频信息(全分辨率的细节信息),以及彩色图像的低频信息(亮度信息和颜色信息)。
另外,图4是应用本发明实施例的一个例子的局部效果对比图。图4示 出了图3中的局部放大图,可以更清楚的看到原始图像(位于图4的上部和中部)与应用本申请方案后的合成图像(位于图4的下部)的效果对比图。其中,原始图像包括对应的1帧彩色图像(位于图4的上部)和1帧黑白图像(位于图4的中部)。
因此,本发明实施例的用于终端拍照的方法对提高低照度场景下的拍照效果具有重要的意义。
下面结合图5详细描述根据本发明实施例的用于终端拍照的方法。需要说明的是,图5中用于终端拍照的方法可以由前文描述的终端执行,该方法的具体流程与上述终端的处理器的处理流程相同或相应,为避免重复,适当省略重复的描述。
图5是本发明一个实施例的用于终端拍照的方法500的示意性流程图。所述终端包括黑白摄像头和彩色摄像头,该方法500包括:
S510,通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
S520,获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
S530,根据所述第一图像获取高频信息;
S540,根据所述第二图像获取低频信息;
S550,根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
或者,在本发明实施例中,步骤S510中“通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景同时拍摄分别得到K帧图像”,也可以理解为:“通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景进行K次拍摄,且所述黑白摄像头和所述彩色摄像头在每次拍摄时各同时采集一帧图像”。
在本发明实施例中,终端上可以安装黑白摄像头和彩色摄像头,每次拍摄时,黑白摄像头和彩色摄像头同时采集图像,使得摄像头之间发生手持抖动或者场景内物体运动的可能性降低,即黑白摄像头和彩色摄像头头之间保持相对静止。在图像采集过程中,黑白摄像头采用全尺寸(full size)工作模式,彩色摄像头采用合并(binning)工作模式。终端通过黑白摄像头和彩色 摄像头对待拍摄场景同时拍摄,能够分别得到K帧图像,然后根据黑白摄像头对应的K帧黑白图像获取所述黑白摄像头对应的第一图像,和根据彩色摄像头对应的K帧彩色图像获取所述彩色摄像头对应的第二图像,继而提取出待拍摄场景的高频信息(比如全分辨率的细节信息)和低频信息(比如亮度信息和颜色信息),从而根据这些信息对第一图像和第二图像进行融合,生成待拍摄场景的合成图像,该合成图像的质量优于K次拍摄获取的K帧图像中的任一帧图像。本发明实施例的用于终端拍照的方法尤其适用于低照度或夜景,能够提高图像的质量。
可选地,作为一个实施例,S550包括:
基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;所述第四图像与所述第一图像的分辨率相同或相当;
以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
可选地,作为一个实施例,当K≥2时,S520包括:
获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像;
对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
可选地,对所述黑白摄像头对应的K帧图像进行时域降噪,包括:
采用所述彩色摄像头对应的全局运动关系对所述黑白摄像头对应的K帧图像进行全局图像配准操作。
可选地,作为一个实施例,该方法500还可以包括:
对所述待拍摄场景的合成图像进行空域降噪。
下面结合图6描述根据本发明实施例的终端600,该终端600能够实现上述用于终端拍照的方法的各个步骤,为了简洁,这里不作赘述。
图6是本发明一个实施例的终端的示意性框图。如图6所示,该终端600可以包括:
黑白摄像头610和彩色摄像头620,用于对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
获取模块630,用于获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
还用于根据所述第一图像获取高频信息;
还用于根据所述第二图像获取低频信息;
融合模块640,用于根据所述获取模块630获取的所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
可选地,作为一个实施例,该融合模块640具体用于:
基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
可选地,作为一个实施例,所述获取模块630具体用于:
获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像,其中K≥2;
对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
可选地,作为一个实施例,对所述黑白摄像头对应的K帧图像进行时域降噪,包括:
采用所述彩色摄像头对应的全局运动关系对所述黑白摄像头对应的K帧图像进行全局图像配准操作。
可选地,作为一个实施例,该终端600还可以包括:
降噪模块,用于对所述待拍摄场景的合成图像进行空域降噪。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结 合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。本领域的普通技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限 于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应所述以权利要求的保护范围为准。

Claims (13)

  1. 一种用于终端拍照的方法,其特征在于,所述终端包括黑白摄像头和彩色摄像头,所述方法包括:
    通过所述黑白摄像头和所述彩色摄像头对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
    获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
    根据所述第一图像获取高频信息;
    根据所述第二图像获取低频信息;
    根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行处理,生成所述待拍摄场景的合成图像,包括:
    基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
    基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
    以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
    对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
  3. 根据权利要求1或2所述的方法,其特征在于,当K≥2时,所述获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像,包括:
    获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像;
    对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
    对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
  4. 根据权利要求3所述的方法,其特征在于,所述对所述黑白摄像头对应的K帧图像进行时域降噪,包括:
    采用所述彩色摄像头对应的全局运动关系对所述黑白摄像头对应的K帧图像进行全局图像配准操作。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    对所述待拍摄场景的合成图像进行空域降噪。
  6. 一种终端,其特征在于,包括:
    黑白摄像头和彩色摄像头,用于对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
    获取模块,用于获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;
    还用于根据所述第一图像获取高频信息;
    还用于根据所述第二图像获取低频信息;
    融合模块,用于根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
  7. 根据权利要求6所述的终端,其特征在于,所述融合模块具体用于:
    基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
    基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
    以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
    对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
  8. 根据权利要求6或7所述的终端,其特征在于,所述获取模块具体用于:
    获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像,其中,K≥2;
    对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
    对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
  9. 根据权利要求6至8中任一项所述的终端,其特征在于,所述终端还包括:
    降噪模块,用于对所述待拍摄场景的合成图像进行空域降噪。
  10. 一种终端,其特征在于,包括:
    存储器,用于存储指令;
    黑白摄像头和彩色摄像头,用于对相同的待拍摄场景同时拍摄分别得到K帧图像,其中,所述黑白摄像头采用全尺寸工作模式,所述彩色摄像头采用合并工作模式,K≥1;
    处理器,用于调用所述存储器中的指令,与所述黑白摄像头和所述彩色摄像头相连,用于获取所述黑白摄像头对应的第一图像和所述彩色摄像头对应的第二图像;还用于根据所述第一图像获取高频信息;还用于根据所述第二图像获取低频信息;还用于根据所述高频信息和所述低频信息,对所述第一图像和所述第二图像进行融合,生成所述待拍摄场景的合成图像。
  11. 根据权利要求10所述的终端,其特征在于,所述处理器具体用于:
    基于所述低频信息,以所述第二图像为参考图像,对所述第一图像的亮度进行处理以得到第三图像;
    基于所述高频信息,对所述第二图像进行上采样处理以得到第四图像;
    以所述第三图像为参考图像,对所述第四图像进行配准以得到第五图像;
    对所述第五图像和所述第三图像进行融合,生成所述待拍摄场景的合成图像。
  12. 根据权利要求10或11所述的终端,其特征在于,所述处理器具体用于:
    获取所述黑白摄像头对应的K帧图像和所述彩色摄像头对应的K帧图像,其中,K≥2;
    对所述黑白摄像头对应的K帧图像进行时域降噪,得到所述第一图像;
    对所述彩色摄像头对应的K帧图像进行时域降噪,得到所述第二图像。
  13. 根据权利要求10至12中任一项所述的终端,其特征在于,所述处理器还用于:
    对所述待拍摄场景的合成图像进行空域降噪。
PCT/CN2016/108293 2016-10-17 2016-12-01 用于终端拍照的方法及终端 WO2018072267A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680080689.0A CN108605099B (zh) 2016-10-17 2016-12-01 用于终端拍照的方法及终端
US16/342,299 US10827140B2 (en) 2016-10-17 2016-12-01 Photographing method for terminal and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610905284 2016-10-17
CN201610905284.8 2016-10-17

Publications (1)

Publication Number Publication Date
WO2018072267A1 true WO2018072267A1 (zh) 2018-04-26

Family

ID=62018089

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108293 WO2018072267A1 (zh) 2016-10-17 2016-12-01 用于终端拍照的方法及终端

Country Status (3)

Country Link
US (1) US10827140B2 (zh)
CN (1) CN108605099B (zh)
WO (1) WO2018072267A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005343A (zh) * 2018-08-06 2018-12-14 Oppo广东移动通信有限公司 控制方法、装置、成像设备、电子设备及可读存储介质
CN111242860A (zh) * 2020-01-07 2020-06-05 影石创新科技股份有限公司 超级夜景图像的生成方法、装置、电子设备及存储介质
US10810720B2 (en) 2016-11-03 2020-10-20 Huawei Technologies Co., Ltd. Optical imaging method and apparatus
CN112991188A (zh) * 2019-12-02 2021-06-18 RealMe重庆移动通信有限公司 图像处理方法及装置、存储介质、电子设备
CN112995490A (zh) * 2019-12-12 2021-06-18 华为技术有限公司 图像处理方法及终端拍照方法、介质和***
WO2022226701A1 (zh) * 2021-04-25 2022-11-03 Oppo广东移动通信有限公司 图像处理方法、处理装置、电子设备和存储介质

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090120159A (ko) * 2008-05-19 2009-11-24 삼성전자주식회사 영상합성장치 및 영상합성방법
US11218626B2 (en) * 2017-07-28 2022-01-04 Black Sesame International Holding Limited Fast focus using dual cameras
KR102138483B1 (ko) * 2017-10-31 2020-07-27 가부시키가이샤 모르포 화상 합성 장치, 화상 합성 방법, 화상 합성 프로그램 및 기억 매체
EP4325879A1 (en) 2018-10-15 2024-02-21 Huawei Technologies Co., Ltd. Method for displaying image in photographic scene and electronic device
US11057558B2 (en) * 2018-12-27 2021-07-06 Microsoft Technology Licensing, Llc Using change of scene to trigger automatic image capture
CN110599410B (zh) * 2019-08-07 2022-06-10 北京达佳互联信息技术有限公司 图像处理的方法、装置、终端及存储介质
CN111355892B (zh) * 2020-03-19 2021-11-16 Tcl移动通信科技(宁波)有限公司 图片拍摄方法、装置、存储介质及电子终端
CN115705614B (zh) * 2021-08-05 2024-06-28 北京小米移动软件有限公司 图像处理方法、装置、电子设备及存储介质
CN113810601B (zh) * 2021-08-12 2022-12-20 荣耀终端有限公司 终端的图像处理方法、装置和终端设备
CN113992868A (zh) * 2021-11-30 2022-01-28 维沃移动通信有限公司 图像传感器、摄像模组和电子设备
WO2023124201A1 (zh) * 2021-12-29 2023-07-06 荣耀终端有限公司 图像处理方法与电子设备
CN115550570B (zh) * 2022-01-10 2023-09-01 荣耀终端有限公司 图像处理方法与电子设备
CN115460386B (zh) * 2022-08-31 2024-05-17 武汉精立电子技术有限公司 一种利用黑白相机获取彩色图像的方法及***
CN116156334A (zh) * 2023-02-28 2023-05-23 维沃移动通信有限公司 拍摄方法、装置、电子设备和可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004147229A (ja) * 2002-10-25 2004-05-20 Sony Corp 画像表示制御装置、画像表示制御方法、カラー撮像装置及びビューファインダ装置
CN105049718A (zh) * 2015-07-06 2015-11-11 深圳市金立通信设备有限公司 一种图像处理方法及终端
WO2016061757A1 (zh) * 2014-10-22 2016-04-28 宇龙计算机通信科技(深圳)有限公司 基于双摄像头模组的图像生成方法及双摄像头模组
CN105701765A (zh) * 2015-09-23 2016-06-22 河南科技学院 一种图像处理的方法及移动终端
CN105827965A (zh) * 2016-03-25 2016-08-03 维沃移动通信有限公司 一种基于移动终端的图像处理方法及移动终端

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7088391B2 (en) 1999-09-01 2006-08-08 Florida Atlantic University Color video camera for film origination with color sensor and luminance sensor
CN1522051A (zh) 2003-02-12 2004-08-18 双图像传感器摄像技术及其装置
CN101754028B (zh) 2008-12-22 2012-01-04 华晶科技股份有限公司 提高影像分辨率的方法
WO2010089830A1 (ja) * 2009-02-03 2010-08-12 パナソニック株式会社 撮像装置
JP5816015B2 (ja) * 2011-07-15 2015-11-17 株式会社東芝 固体撮像装置及びカメラモジュール
JP2015197745A (ja) * 2014-03-31 2015-11-09 キヤノン株式会社 画像処理装置、撮像装置、画像処理方法及びプログラム
US9414037B1 (en) 2014-09-26 2016-08-09 Amazon Technologies, Inc. Low light image registration
CN105430361B (zh) 2015-12-18 2018-03-20 广东欧珀移动通信有限公司 成像方法、图像传感器、成像装置及电子装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004147229A (ja) * 2002-10-25 2004-05-20 Sony Corp 画像表示制御装置、画像表示制御方法、カラー撮像装置及びビューファインダ装置
WO2016061757A1 (zh) * 2014-10-22 2016-04-28 宇龙计算机通信科技(深圳)有限公司 基于双摄像头模组的图像生成方法及双摄像头模组
CN105049718A (zh) * 2015-07-06 2015-11-11 深圳市金立通信设备有限公司 一种图像处理方法及终端
CN105701765A (zh) * 2015-09-23 2016-06-22 河南科技学院 一种图像处理的方法及移动终端
CN105827965A (zh) * 2016-03-25 2016-08-03 维沃移动通信有限公司 一种基于移动终端的图像处理方法及移动终端

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10810720B2 (en) 2016-11-03 2020-10-20 Huawei Technologies Co., Ltd. Optical imaging method and apparatus
CN109005343A (zh) * 2018-08-06 2018-12-14 Oppo广东移动通信有限公司 控制方法、装置、成像设备、电子设备及可读存储介质
CN112991188A (zh) * 2019-12-02 2021-06-18 RealMe重庆移动通信有限公司 图像处理方法及装置、存储介质、电子设备
CN112995490A (zh) * 2019-12-12 2021-06-18 华为技术有限公司 图像处理方法及终端拍照方法、介质和***
CN111242860A (zh) * 2020-01-07 2020-06-05 影石创新科技股份有限公司 超级夜景图像的生成方法、装置、电子设备及存储介质
CN111242860B (zh) * 2020-01-07 2024-02-27 影石创新科技股份有限公司 超级夜景图像的生成方法、装置、电子设备及存储介质
WO2022226701A1 (zh) * 2021-04-25 2022-11-03 Oppo广东移动通信有限公司 图像处理方法、处理装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN108605099B (zh) 2020-10-09
US20190253644A1 (en) 2019-08-15
US10827140B2 (en) 2020-11-03
CN108605099A (zh) 2018-09-28

Similar Documents

Publication Publication Date Title
WO2018072267A1 (zh) 用于终端拍照的方法及终端
TWI696146B (zh) 影像處理方法、裝置、電腦可讀儲存媒體和行動終端
US10810720B2 (en) Optical imaging method and apparatus
CN108900790B (zh) 视频图像处理方法、移动终端及计算机可读存储介质
CN106937039B (zh) 一种基于双摄像头的成像方法、移动终端及存储介质
WO2019153920A1 (zh) 一种图像处理的方法以及相关设备
WO2018137267A1 (zh) 图像处理方法和终端设备
CN108322644A (zh) 一种图像处理方法、移动终端以及计算机可读存储介质
CN111145192B (zh) 图像处理方法及电子设备
CN110944160B (zh) 一种图像处理方法及电子设备
CN106993136B (zh) 移动终端及其基于多摄像头的图像降噪方法和装置
WO2017088564A1 (zh) 一种图像处理方法及装置、终端、存储介质
CN109120858B (zh) 一种图像拍摄方法、装置、设备及存储介质
CN112995467A (zh) 图像处理方法、移动终端及存储介质
CN112188082A (zh) 高动态范围图像拍摄方法、拍摄装置、终端及存储介质
CN111866388B (zh) 一种多重曝光拍摄方法、设备及计算机可读存储介质
CN113507558A (zh) 去除图像眩光的方法、装置、终端设备和存储介质
CN111885307A (zh) 一种景深拍摄方法、设备及计算机可读存储介质
CN112184722A (zh) 图像处理方法、终端及计算机存储介质
CN107743199B (zh) 图像处理方法、移动终端及计算机可读存储介质
CN107817963B (zh) 一种图像显示方法、移动终端及计算机可读存储介质
CN107734269B (zh) 一种图像处理方法及移动终端
CN112135045A (zh) 一种视频处理方法、移动终端以及计算机存储介质
CN110070569B (zh) 终端图像的配准方法、装置、移动终端及存储介质
CN109729264B (zh) 一种图像获取方法及移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16919249

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16919249

Country of ref document: EP

Kind code of ref document: A1