WO2019148997A1 - Image processing method and apparatus, storage medium, and electronic device - Google Patents


Info

Publication number
WO2019148997A1
WO2019148997A1 (PCT/CN2018/122872)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
frame
group
value
Application number
PCT/CN2018/122872
Other languages
English (en)
French (fr)
Inventor
杨涛
谭国辉
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Publication of WO2019148997A1

Classifications

    • G06T5/70 Image enhancement or restoration: Denoising; Smoothing
    • G06T3/04 Geometric image transformations in the plane of the image: Context-preserving transformations, e.g. by using an importance map
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/73 Image enhancement or restoration: Deblurring; Sharpening
    • H04N23/45 Cameras or camera modules comprising electronic image sensors, generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/611 Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N25/60 Circuitry of solid-state image sensors [SSIS]: Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N5/2226 Studio circuitry for virtual studio applications: Determination of depth image, e.g. for foreground/background separation
    • G06T2207/10024 Indexing scheme, image acquisition modality: Color image
    • G06T2207/30201 Indexing scheme, subject of image: Human being; Person; Face

Definitions

  • The present application relates to the field of image processing technologies, and in particular to an image processing method, apparatus, storage medium, and electronic device.
  • As terminal technology develops, the hardware installed on terminals is becoming increasingly capable.
  • For example, many terminals are equipped with dual camera modules.
  • A dual camera module can greatly improve a terminal's photographing capability.
  • For example, a dual camera module with a color camera and a black-and-white camera lets the terminal capture more detail when taking pictures.
  • A dual camera module with two color cameras roughly doubles the amount of light the terminal gathers when taking pictures.
  • Embodiments of the present application provide an image processing method, apparatus, storage medium, and electronic device, which can improve the imaging quality of images.
  • An embodiment of the present application provides an image processing method, which is applied to a terminal, where the terminal includes at least a first camera module and a second camera module, and the method includes:
  • Preset processing is performed on the target image according to the depth-of-field information.
  • An embodiment of the present application provides an image processing apparatus, which is applied to a terminal, where the terminal includes at least a first camera module and a second camera module, and the device includes:
  • a first acquiring module configured to acquire a first group of images, which are images collected by the first camera module, and a second group of images, which are images collected by the second camera module;
  • a determining module configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously collected images;
  • a second acquiring module configured to acquire depth-of-field information according to the first image and the second image;
  • a noise reduction module configured to perform noise reduction processing on the first image according to the first group of images to obtain a target image; and
  • a processing module configured to perform preset processing on the target image according to the depth of field information.
  • An embodiment of the present application provides a storage medium on which a computer program is stored.
  • When the computer program is executed on a computer, the computer is caused to execute the flow of the image processing method provided by the embodiments of the present application.
  • the embodiment of the present application further provides an electronic device, including a memory, a processor, and a first camera module and a second camera module, where the processor is configured to execute by calling a computer program stored in the memory:
  • Preset processing is performed on the target image according to the depth-of-field information.
  • FIG. 1 is a schematic flowchart diagram of a method for processing an image provided by an embodiment of the present application.
  • FIG. 2 is another schematic flowchart of a method for processing an image provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a scenario and a processing flow of an image processing method according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides an image processing method, which is applied to a terminal, where the terminal includes at least a first camera module and a second camera module, and the method may include:
  • Preset processing is performed on the target image according to the depth-of-field information.
  • The process of performing preset processing on the target image may include performing background blurring processing on the target image.
  • the first set of images includes at least two frames of images.
  • The process of determining the first image from the first group of images may include: acquiring the sharpness of each frame image in the first group of images; and determining the image with the highest sharpness among the frame images as the first image.
  • the first set of images includes at least two frames of images.
  • The process of determining the first image from the first group of images may include: if each frame image of the first group of images includes a human face, acquiring the value of a preset parameter for each frame image in the first group of images, where the value of the preset parameter represents the eye size of the face in the image; and determining the image with the largest preset-parameter value among the frame images as the first image.
  • the first set of images includes at least two frames of images.
  • The process of determining the first image from the first group of images may include: acquiring the sharpness of each frame image in the first group of images; if each frame image of the first group of images includes a human face, acquiring the value of the preset parameter of each frame image in the first group of images, where the value of the preset parameter represents the eye size of the face in the image; and determining the first image from the first group of images according to the sharpness of each frame image and the value of the preset parameter.
  • The process of determining the first image from the first group of images according to the sharpness of each frame image and the value of the preset parameter may include: acquiring a first weight corresponding to the sharpness and a second weight corresponding to the preset parameter; weighting the sharpness of each frame image by the first weight to obtain the weighted sharpness of each frame image, and weighting the value of the preset parameter of each frame image by the second weight to obtain the weighted preset-parameter value of each frame image; obtaining, for each frame image, the sum of its weighted sharpness and its weighted preset-parameter value; and determining the image with the largest sum in the first group of images as the first image.
  • The process of weighting the sharpness of each frame image by the first weight and weighting the value of the preset parameter of each frame image by the second weight may include: normalizing the sharpness and the preset-parameter value of each frame image to obtain the normalized sharpness and the normalized preset-parameter value of each frame image; weighting the normalized sharpness of each frame image by the first weight to obtain the weighted sharpness of each frame image; and weighting the normalized preset-parameter value of each frame image by the second weight to obtain the weighted preset-parameter value of each frame image.
  • The process of acquiring depth-of-field information according to the first image and the second image and the process of performing noise reduction on the first image according to the first group of images to obtain the target image are executed in parallel.
  • The process of performing noise reduction processing on the first image according to the first group of images to obtain a target image may include: performing noise reduction on the first image according to the first group of images to obtain a noise-reduced image; and performing tone mapping on the noise-reduced image to obtain the target image.
  • the first set of images includes at least two frames of images.
  • The process of performing noise reduction processing on the first image according to the first group of images to obtain a target image may include: aligning all the images in the first group of images; determining, in the aligned images, multiple sets of mutually aligned pixels and, in each set, the target pixel belonging to the first image; acquiring the pixel value of each pixel in each set of mutually aligned pixels; obtaining the mean pixel value of each set of mutually aligned pixels from those pixel values; and adjusting the pixel value of each target pixel in the first image to the corresponding mean pixel value, thereby obtaining the target image.
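  The averaging described above can be sketched as follows. This is a minimal illustration under the assumption that the frames have already been geometrically aligned (the alignment step itself is omitted), with frames represented as NumPy arrays:

```python
import numpy as np

def multiframe_denoise(frames):
    """Average mutually aligned pixels across a group of frames.

    Assumes the frames are already geometrically aligned, so pixels at
    the same coordinates form a set of 'mutually aligned' pixels.  The
    result is the base frame with every pixel replaced by the group
    mean, which suppresses zero-mean random noise.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)  # per-set mean over the aligned pixels
```

  For example, averaging the aligned frames `[[1, 3]]` and `[[3, 1]]` yields `[[2, 2]]`; random noise shrinks roughly with the square root of the number of frames averaged.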
  • The method of the embodiments of the present application may be executed by a terminal such as a smartphone or a tablet computer.
  • FIG. 1 is a schematic flowchart diagram of a method for processing an image according to an embodiment of the present application.
  • the processing method of the image can be applied to the terminal.
  • the terminal may be any terminal equipped with a dual camera module such as a smartphone or a tablet.
  • the flow of the image processing method may include:
  • As terminal technology develops, the hardware installed on terminals is becoming increasingly capable, and many terminals are equipped with dual camera modules.
  • A dual camera module can greatly improve a terminal's photographing capability: a module with a color camera and a black-and-white camera lets the terminal capture more detail when taking pictures, while a module with two color cameras roughly doubles the amount of light the terminal gathers.
  • Nevertheless, images produced by terminals equipped with dual camera modules can still suffer from poor imaging quality.
  • The terminal may first acquire the first group of images, collected by the first camera module of the dual camera module, and the second group of images, collected by the second camera module.
  • the first set of images includes a plurality of frames of images
  • the second set of images also includes a plurality of frames of images.
  • the dual camera module of the terminal can continuously acquire images synchronously, and the collected images can be stored in the buffer queue.
  • the terminal can acquire an image from the cache queue and display it on the terminal screen for the user to preview.
  • the cache queue can be a fixed length queue.
  • For example, the cache queue has a length of 4; that is, the cache queue stores the four frames most recently acquired by the camera module.
  • The first queue, corresponding to the first camera module, caches the four frames most recently collected by the first camera module.
  • The second queue, corresponding to the second camera module, caches the four frames most recently collected by the second camera module.
  • the images acquired later will overwrite the previously acquired images.
  • For example, the first queue caches the four frames A1, A2, A3, and A4.
  • When the first camera module collects a new frame A5, the terminal deletes the A1 image from the first queue and inserts the A5 image, so that the first queue becomes A2, A3, A4, A5.
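  The fixed-length buffer behaviour described above maps directly onto a bounded double-ended queue; a minimal sketch in Python (the frame labels are placeholders, not real image data):

```python
from collections import deque

# Fixed-length cache queue of 4 elements: appending a new frame
# automatically evicts the oldest one, as described above.
cache = deque(maxlen=4)
for frame in ["A1", "A2", "A3", "A4"]:
    cache.append(frame)
print(list(cache))   # ['A1', 'A2', 'A3', 'A4']

cache.append("A5")   # A1 is dropped automatically
print(list(cache))   # ['A2', 'A3', 'A4', 'A5']
```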
  • the first camera module and the second camera module can simultaneously acquire images.
  • the first camera module collects the A1 image
  • the second camera module can synchronously capture the B1 image.
  • A1 and B1 are images acquired synchronously
  • A2 and B2 are images acquired synchronously
  • A3 and B3 are images acquired synchronously
  • A4 and B4 are images acquired synchronously.
  • After the user presses the camera button, the camera modules of the terminal can continue to capture images.
  • The first group of images may include only images collected by the first camera module before the user presses the camera button, or only images collected by the first camera module after the user presses the camera button,
  • or both images collected by the first camera module before the user presses the camera button and images collected by the first camera module after the user presses the camera button.
  • For example, suppose that before the user presses the camera button, the first camera module collects the four frames A1, A2, A3, and A4, and that after the user presses the camera button, it collects the four frames A5, A6, A7, and A8. Then the first group of images may be A1, A2, A3, A4; or A2, A3, A4, A5; or A3, A4, A5, A6; or A5, A6, A7, A8; and so on. In some embodiments, the first group of images may be consecutive frames collected by the first camera module, or discontinuous frames such as A2, A3, A5, A6. The same applies to the second group of images.
  • a first image is determined from the first set of images, and a second image is determined from the second set of images, the first image and the second image being images acquired simultaneously.
  • For example, the terminal may determine the first image from A1, A2, A3, and A4, and then determine the second image from B1, B2, B3, and B4.
  • the second image may be an image acquired in synchronization with the first image.
  • For example, if the terminal determines A2 among A1, A2, A3, and A4 as the first image, the terminal can accordingly determine B2 as the second image.
  • depth information is acquired based on the first image and the second image.
  • the terminal may acquire depth information according to the first image and the second image. It can be understood that since the first image and the second image are images that are synchronously acquired by the dual camera module on the terminal from different shooting positions (angles), the depth information can be acquired according to the first image and the second image.
  • The depth-of-field information is relative to the in-focus object in the image; it is acquired after the in-focus object has been determined.
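  The patent does not spell out the geometry it uses to derive depth from the two views. As a hedged illustration, in a standard rectified pinhole stereo model (an assumption here, not the patent's stated method), the depth of a point follows from its disparity between the first and second images; `focal_px` and `baseline_m` are assumed calibration values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard rectified stereo relation: Z = f * B / d.

    disparity_px: horizontal pixel offset of the same point between
                  the first and second image
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera modules in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A point with 20 px disparity, f = 1000 px, baseline = 2 cm:
print(depth_from_disparity(20, 1000, 0.02))  # 1.0 (metre)
```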
  • the first image is subjected to noise reduction processing according to the first group of images to obtain a target image.
  • the terminal may perform noise reduction processing on the first image according to at least two frames of the first group of images, thereby obtaining a target image.
  • the terminal may perform noise reduction processing on the first image according to other images in the first group of images other than the first image.
  • For example, the terminal may use the first image A2 as the base frame of the noise reduction, and perform noise reduction on A2 according to the other three frames A1, A3, and A4 in the first group of images. That is, the terminal can identify and reduce the random noise in the base frame A2 according to A1, A3, and A4, thereby obtaining the denoised target image.
  • the image denoising algorithm may also be used to perform noise reduction processing on the first image.
  • the image denoising algorithm may include a wavelet denoising algorithm, a smoothing filtering algorithm, and the like.
  • a preset process is performed on the target image based on the depth of field information.
  • the terminal may perform preset processing on the target image according to the acquired depth information.
  • The preset processing may be, for example, background blurring or a 3D image application.
  • the terminal may perform noise reduction processing on the first image according to the first group of images, and the obtained target image has less noise.
  • Since the terminal acquires the depth-of-field information from the synchronously acquired first and second images, the acquired depth information is more accurate. Therefore, when the terminal performs preset processing on the target image according to the depth information, the processed image has a better imaging effect.
  • FIG. 2 is another schematic flowchart of a method for processing an image according to an embodiment of the present disclosure, where the process may include:
  • The terminal acquires a first group of images, which are images collected by the first camera module, and a second group of images, which are images collected by the second camera module.
  • a dual camera module is installed on the terminal, and the dual camera module includes a first camera module and a second camera module.
  • the first camera module and the second camera module can simultaneously acquire images.
  • the first set of images includes a plurality of frames of images
  • the second set of images also includes a plurality of frames of images.
  • the terminal uses the dual camera module to quickly acquire multiple frames of images of the same subject in the same shooting scene.
  • the first camera module collects the first group of images as four frames of images A1, A2, A3, and A4.
  • the second group of images acquired by the second camera module are four frames of images B1, B2, B3, and B4.
  • the terminal acquires the sharpness of each frame image in the first set of images.
  • the terminal can acquire the sharpness of the images A1, A2, A3, and A4.
  • the value of the sharpness of the image ranges from 0 to 100, and the larger the value of the sharpness, the clearer the image.
  • the sharpness of A1, A2, A3, and A4 in the first group of images is 80, 83, 81, and 79, respectively.
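  The patent does not specify how this sharpness value is computed. A common proxy (assumed here for illustration, and not scaled to the 0-100 range used in the example) is the variance of a Laplacian response, which grows with the amount of high-frequency detail in the frame:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a 4-neighbour Laplacian response."""
    img = np.asarray(img, dtype=np.float64)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def pick_sharpest(frames):
    """Return the frame with the highest sharpness score."""
    return max(frames, key=laplacian_variance)
```

  A perfectly flat (defocused) frame scores 0, so `pick_sharpest` selects any frame with visible texture over it.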
  • If each frame image of the first group of images includes a human face, the terminal acquires the value of a preset parameter for each frame image in the first group of images, where the value of the preset parameter is used to represent the eye size of the face in the image.
  • For example, after acquiring the sharpness of the images A1, A2, A3, and A4, the terminal detects that each frame image in the first group of images includes a human face; the terminal may then further acquire the value of the preset parameter of each frame image in the first group of images, which represents the eye size of the face in the image.
  • the terminal may acquire the eye size of the face in the image through some preset algorithms.
  • the algorithm may output a value for indicating the size of the eye, and the larger the value, the larger the eye.
  • For example, the terminal may first identify the eye region of the face and the number of first target pixels in that region, and then compute the ratio of the number of first target pixels to the total number of pixels in the image; the larger the ratio, the larger the eye.
  • Alternatively, the terminal may count only the number of second target pixels occupied by the eye in the image height direction and the total number of pixels in the image height direction, and then compute the ratio of the two; the larger the ratio, the larger the eye.
  • the value of the preset parameter for indicating the size of the eye ranges from 0 to 50.
  • a larger value indicates a larger human eye in the image.
  • Suppose the values of the preset parameters of A1, A2, A3, and A4 in the first group of images are 40, 41, 41, and 39, respectively.
  • the terminal acquires a first weight corresponding to the sharpness and a second weight corresponding to the preset parameter.
  • the terminal may acquire the first weight corresponding to the sharpness and the second weight corresponding to the preset parameter. It can be understood that the sum of the first weight and the second weight is 1.
  • the values of the first weight and the second weight may be set according to usage requirements. For example, in a scenario where the image sharpness is required to be high, the first weight corresponding to the sharpness may be set larger, and the second weight corresponding to the preset parameter may be set smaller.
  • the first weight is 0.7, the second weight is 0.3, or the first weight is 0.6, the second weight is 0.4, and so on.
  • Conversely, in scenarios where the eye-size parameter matters more, the first weight corresponding to the sharpness may be set smaller and the second weight corresponding to the preset parameter larger. For example, the first weight is 0.3 and the second weight is 0.7, or the first weight is 0.4 and the second weight is 0.6, and so on.
  • In some embodiments, the terminal may set the first and second weights according to the sharpness differences between the images. For example, if the terminal detects that the sharpness differences between the frames of the first group of images fall within a preset threshold range, that is, the frames do not differ significantly in sharpness, the terminal may set the first weight corresponding to the sharpness smaller and the second weight corresponding to the preset parameter larger; for example, the first weight is 0.4 and the second weight is 0.6.
  • If the terminal detects that the sharpness differences between the frames of the first group of images fall outside the preset threshold range, that is, the frames differ significantly in sharpness, the terminal may set the first weight larger and the second weight smaller; for example, the first weight is 0.6 and the second weight is 0.4.
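  This selection rule can be sketched as a simple threshold check; the threshold value and the weight pairs below are illustrative choices, not values fixed by the patent:

```python
def choose_weights(sharpness_values, spread_threshold=5.0):
    """Pick (first_weight, second_weight) from the sharpness spread.

    If the frames differ little in sharpness (spread within the
    threshold), the eye-size parameter gets the larger weight;
    otherwise sharpness does.
    """
    spread = max(sharpness_values) - min(sharpness_values)
    if spread <= spread_threshold:
        return 0.4, 0.6   # sharpness similar: emphasise eye size
    return 0.6, 0.4       # sharpness differs a lot: emphasise sharpness

# The example frames (80, 83, 81, 79) differ by at most 4:
print(choose_weights([80, 83, 81, 79]))  # (0.4, 0.6)
```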
  • the values of the first weight and the second weight may also be set by the user according to the shooting requirements.
  • The terminal normalizes the sharpness and the preset-parameter value of each frame image in the first group of images to obtain the normalized sharpness and the normalized preset-parameter value of each frame image.
  • The terminal weights the normalized sharpness of each frame image by the first weight to obtain the weighted sharpness of each frame image, and weights the normalized preset-parameter value of each frame image by the second weight to obtain the weighted preset-parameter value of each frame image.
  • The terminal obtains, for each frame image in the first group of images, the sum of its weighted sharpness and its weighted preset-parameter value.
  • For example, steps 205, 206, and 207 may proceed as follows:
  • the sharpness of A1, A2, A3, and A4 in the first group of images is 80, 83, 81, and 79, respectively.
  • the values of the preset parameters of A1, A2, A3, and A4 are 40, 41, 41, and 39, respectively.
  • the first weight is 0.4 and the second weight is 0.6.
  • Taking the A1 image: the terminal first normalizes its sharpness and preset-parameter value. The normalized sharpness is 0.8 (80/100), and the normalized preset-parameter value is 0.8 (40/50). The terminal then weights the normalized sharpness 0.8 by the first weight 0.4 to obtain the weighted sharpness 0.32 (0.4 × 0.8), and weights the normalized preset-parameter value 0.8 by the second weight 0.6 to obtain the weighted preset-parameter value 0.48 (0.6 × 0.8). Finally, the terminal computes the sum for A1: 0.32 + 0.48 = 0.8.
  • For the A2 image: the normalized sharpness is 0.83 (83/100), and the normalized preset-parameter value is 0.82 (41/50). The weighted sharpness is 0.332 (0.4 × 0.83), and the weighted preset-parameter value is 0.492 (0.6 × 0.82). The sum for A2 is 0.332 + 0.492 = 0.824.
  • Similarly, for the A3 image: the normalized sharpness is 0.81 and the normalized preset-parameter value is 0.82. The weighted sharpness is 0.324 (0.4 × 0.81) and the weighted preset-parameter value is 0.492 (0.6 × 0.82); the sum for A3 is 0.324 + 0.492 = 0.816.
  • For the A4 image: the normalized sharpness is 0.79 and the normalized preset-parameter value is 0.78 (39/50). The weighted sharpness is 0.316 (0.4 × 0.79) and the weighted preset-parameter value is 0.468 (0.6 × 0.78); the sum for A4 is 0.316 + 0.468 = 0.784.
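  The whole worked example above can be reproduced in a few lines, with sharpness normalized by 100, the eye-size parameter by 50, and weights 0.4 and 0.6:

```python
# Frame scores from the worked example.
sharpness = {"A1": 80, "A2": 83, "A3": 81, "A4": 79}   # out of 100
eye_size  = {"A1": 40, "A2": 41, "A3": 41, "A4": 39}   # out of 50

scores = {
    name: 0.4 * (sharpness[name] / 100) + 0.6 * (eye_size[name] / 50)
    for name in sharpness
}
best = max(scores, key=scores.get)
print({n: round(s, 3) for n, s in scores.items()})
# {'A1': 0.8, 'A2': 0.824, 'A3': 0.816, 'A4': 0.784}
print(best)  # A2 is chosen as the first image
```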
  • The terminal determines the image with the largest sum in the first group of images as the first image, and determines the second image from the second group of images, the first image and the second image being synchronously captured images.
  • the terminal may determine the image with the largest sum value as the first image.
  • the terminal can determine A2 in the first group of images as the first image.
  • the terminal can then determine the second image from the second set of images.
  • the second image may be an image acquired in synchronization with the first image.
  • the terminal can determine B2 in the second group of images as the second image.
  • The terminal acquires the depth-of-field information according to the first image and the second image, and in parallel performs noise reduction on the first image according to the first group of images to obtain the target image.
  • the terminal may acquire depth information according to the first image and the second image.
  • The first image and the second image are images of the same subject collected by the dual camera module on the terminal from different positions (angles), so the depth information can be acquired from the first image and the second image.
  • the depth of field information is relative to the in-focus object in the image, and is the depth of field information acquired after the in-focus object is determined.
  • the terminal may perform noise reduction processing on the first image according to the first group of images, thereby obtaining a target image.
  • the terminal may perform noise reduction processing on the first image according to other images in the first group of images other than the first image.
  • the terminal may use the first image A2 as a base frame of the noise reduction process, and perform noise reduction processing on the A2 according to the three frame images A1, A3, and A4 in the first group image. That is, the terminal can recognize and reduce the random noise in the base frame A2 image according to the three frame images A1, A3, and A4, thereby obtaining the target image after the noise reduction.
  • the process of acquiring the depth information according to the first image and the second image, and performing the noise reduction processing on the first image according to the first group image and obtaining the target image may be performed in parallel. It should be noted that performing the noise reduction processing on the first image does not affect acquiring the depth information according to the first image and the second image, and thus the flow of the noise reduction processing and the acquisition of the depth information may be performed in parallel.
  • the terminal may perform the process of acquiring depth of field information according to the first image and the second image using a central processing unit (CPU), and may perform the process of noise-reducing the first image according to the first group of images to obtain the target image using a graphics processing unit (GPU).
  • the above two processes are executed in parallel, which can save the processing time of the terminal and improve the efficiency of processing the image.
  • the duration required for the terminal to acquire the depth information is 800 ms
  • the duration required for the noise reduction is 400 ms.
  • by acquiring the depth information and performing the noise reduction in parallel (for example, by multi-thread parallel processing), the terminal can save 400 ms of processing time and improve its imaging speed.
  • moreover, during the 800 ms in which one thread acquires the depth information, the other thread can, in addition to the noise reduction (about 400 ms), also perform beauty processing (about 200 ms), other processing (about 100 ms), and the like, so that by the time the depth information acquisition completes, more processing has already been applied to the target image, saving further processing time and further improving the imaging speed of the terminal.
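The parallel flow described above can be sketched with two worker threads; the sleep calls stand in for the real depth and denoise work (durations scaled down from the 800 ms / 400 ms figures), and all names here are illustrative:

```python
# Sketch of the parallel flow: one worker simulates the (scaled-down) depth
# computation while another performs the (scaled-down) multi-frame noise
# reduction, so total latency is bounded by the slower branch rather than
# by the sum of the two.
import time
from concurrent.futures import ThreadPoolExecutor

def acquire_depth_info():
    time.sleep(0.08)          # stands in for the 800 ms depth computation
    return "depth-map"

def denoise_and_extras():
    time.sleep(0.04)          # stands in for the 400 ms multi-frame denoise
    return "target-image"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:
    depth_future = pool.submit(acquire_depth_info)
    image_future = pool.submit(denoise_and_extras)
    depth, target = depth_future.result(), image_future.result()
elapsed = time.monotonic() - start
# elapsed is close to max(0.08, 0.04), not 0.08 + 0.04
```

Because the two branches share no data until both complete, running them serially would only add their durations, which is exactly the waste the parallel flow avoids.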
  • in addition to acquiring the depth information according to the first image and the second image, when the acquisition interval between frames is sufficiently short or the difference between collected frames is sufficiently small, the terminal may instead arbitrarily select one frame from the first group of images, select the image acquired synchronously with that frame from the second group of images, and obtain the depth information from those two frames.
  • the first image is A2
  • the second image is B2.
  • the terminal may also select one frame from A1, A3, and A4, for example the A4 image, then select the B4 image acquired synchronously with the A4 image from the second group of images, and obtain the depth of field information according to A4 and B4.
  • the terminal may also arbitrarily select one frame from the first group of images and one frame from the second group of images, and obtain the depth information from those two frames; for example, the terminal may select A2 and B3 and obtain the depth information according to those two frames.
  • the first group of images includes at least two frames of images
  • the process in which the terminal performs noise reduction processing on the first image according to the first group of images to obtain the target image may include the following steps:
  • the terminal aligns all the images in the first set of images
  • the terminal determines a plurality of sets of mutually aligned pixels, and target pixels belonging to the first image among the groups of mutually aligned pixels;
  • the terminal acquires the pixel values of each pixel in each set of mutually aligned pixels, and obtains the pixel value mean of each set according to those pixel values;
  • the terminal adjusts the pixel value of the target pixel in the first image to the corresponding pixel value mean to obtain the target image.
  • the first set of images includes A1, A2, A3, A4, where the first image is A2.
  • the terminal can determine A2 as the base frame of the noise reduction process.
  • the terminal can align the four frames of images A1, A2, A3, and A4 by using an image alignment algorithm.
  • after aligning the four frames A1, A2, A3, and A4, the terminal can treat mutually aligned pixels as a group of associated pixels, thereby obtaining multiple groups of mutually aligned pixels. Then, the terminal can determine the pixel belonging to the first image in each group of mutually aligned pixels as the target pixel. Next, the terminal can acquire the pixel values of the pixels in each group and obtain the pixel value mean of each group. Finally, the terminal can adjust the pixel value of each target pixel in the first image to the pixel value mean of the group in which that target pixel lies; the adjusted first image is the target image.
  • the A1 image has a pixel X1
  • the A2 image has a pixel X2
  • the A3 image has a pixel X3
  • the A4 image has a pixel X4.
  • the pixels X1, X2, X3, and X4 occupy the same alignment position in the four frames A1, A2, A3, and A4; that is, the pixels X1, X2, X3, and X4 are mutually aligned.
  • the pixel value of the pixel X1 is 101
  • the pixel value of the pixel X2 is 102
  • the pixel value of the pixel X3 is 103
  • the pixel value of the pixel X4 is 104
  • the average of the four pixel values is 102.5.
  • the terminal can adjust the pixel value of the pixel X2 in the A2 image from 102 to 102.5, thereby performing noise reduction processing on the pixel X2.
  • the obtained image is the target image after the noise reduction processing.
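The averaging step above can be sketched with NumPy on toy 2x2 frames; the top-left position carries the X1..X4 values (101, 102, 103, 104) from the example, while the other pixel values are made up for illustration:

```python
# Minimal sketch of the averaging step: four aligned frames, and each pixel
# of the base frame A2 is replaced by the mean of the pixel values at the
# same aligned position across A1..A4.
import numpy as np

# Four aligned 2x2 frames; the top-left position carries the X1..X4 values
# from the example (101, 102, 103, 104).
a1 = np.array([[101.0, 50.0], [60.0, 70.0]])
a2 = np.array([[102.0, 52.0], [62.0, 72.0]])  # base frame
a3 = np.array([[103.0, 54.0], [64.0, 74.0]])
a4 = np.array([[104.0, 56.0], [66.0, 76.0]])

frames = np.stack([a1, a2, a3, a4])
target = frames.mean(axis=0)   # per-position mean over the four frames

# The X2 pixel of the base frame moves from 102 to 102.5, as in the example.
```

Random noise is uncorrelated across frames, so the per-position mean suppresses it while the (correlated) scene content survives.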
  • in other embodiments, the terminal may first determine which of the four frames A1, A2, A3, and A4 is the sharpest, assign different weights to the pixel values from the different frames (the sharpest frame receiving the largest weight), compute the weighted average of the aligned pixel values, and adjust the pixel value on the base frame A2 according to that weighted average.
  • for example, the pixel Z2 on the A2 image is aligned with the pixel Z1 on the A1 image, the pixel Z3 on the A3 image, and the pixel Z4 on the A4 image.
  • the pixel value of Z1 is 101
  • the pixel value of Z2 is 102
  • the pixel value of Z3 is 103
  • the pixel value of Z4 is 104.
  • among these 4 frames, the frame containing Z2 (the A2 image) is the sharpest.
  • the terminal can adjust the pixel value of Z2 from 102 to 102.4, thereby reducing the noise of the pixel.
  • in some cases, the terminal may leave a pixel value on the A2 image unadjusted.
  • for example, the pixel Y2 on the A2 image is aligned with the pixel Y1 on the A1 image, the pixel Y3 on the A3 image, and the pixel Y4 on the A4 image, but the pixel value of Y2 is 100 while the pixel values of Y1, Y3, and Y4 are 20, 30, and 35, respectively; that is, the pixel value of Y2 is much larger than those of Y1, Y3, and Y4. In this case, the pixel value of Y2 may not be adjusted.
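A sketch of the two refinements above: a weighted mean that favors the base frame when it comes from the sharpest image, and an outlier guard that leaves the base pixel untouched when it differs too much from the other frames. The weights (0.4 for the base frame, 0.2 for each of the others) and the rejection threshold are assumptions, chosen so that the Z example reproduces the 102.4 figure:

```python
# Weighted fusion with an outlier guard (weights and threshold assumed).
def fused_value(base, others, base_weight=0.4, reject_ratio=2.0):
    # Outlier guard: if the base pixel is far above the other frames, it is
    # likely real detail rather than noise, so leave it untouched.
    mean_others = sum(others) / len(others)
    if base > reject_ratio * mean_others:
        return base
    # Otherwise blend: the base (sharpest) frame gets the largest weight.
    other_weight = (1.0 - base_weight) / len(others)
    return base_weight * base + other_weight * sum(others)

# Z pixels: base Z2=102 is blended with Z1=101, Z3=103, Z4=104.
z = fused_value(102.0, [101.0, 103.0, 104.0])  # 0.4*102 + 0.2*(101+103+104)
# Y pixels: base Y2=100 dwarfs Y1=20, Y3=30, Y4=35, so it is kept as-is.
y = fused_value(100.0, [20.0, 30.0, 35.0])
```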
  • in other embodiments, the terminal may also skip adjusting the pixel values of the base frame A2 altogether, directly adopt the base frame A2 as the target image, and perform the subsequent background blurring step.
  • the terminal performs background blurring processing on the target image.
  • the terminal may perform background blur processing on the target image according to the acquired depth information.
  • the terminal may perform multi-frame noise reduction processing on the first image by using the first group of images, and the obtained target image has less random noise.
  • the terminal can acquire the depth information according to the first image and the second image acquired synchronously, so that the depth information acquired by the terminal is more accurate. That is, in the case where the target image has less noise and the acquired depth information is more accurate, the effect of the background blur on the target image is better in this embodiment, that is, the target image after the background blur is better in imaging effect.
  • the embodiment can also improve the image processing speed and effectively avoid the slow processing caused by multi-frame noise reduction.
  • the terminal performs a noise reduction process on the first image according to the first group of images to obtain a target image, which may include the following processes:
  • the terminal performs noise reduction processing on the first image according to the first group of images to obtain a reduced noise image
  • the terminal performs tone mapping processing on the noise-reduced image to obtain a target image.
  • the terminal may use the first image A2 as a base frame of the noise reduction process, and identify and reduce random noise in the base frame A2 image according to the three frame images A1, A3, and A4, thereby obtaining The image after noise reduction.
  • the terminal can perform tone mapping processing on the noise-reduced image to obtain a target image.
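The patent does not name a specific tone-mapping operator; a simple global gamma curve is one common choice and serves here as an illustrative sketch (the gamma value is an assumption):

```python
# Illustrative global tone mapping on the denoised image: a gamma curve that
# lifts dark and mid tones.  Gamma 1/2.2 is a common choice, not specified
# by the patent.
import numpy as np

def tone_map(image, gamma=1.0 / 2.2):
    normalized = image.astype(np.float64) / 255.0
    return np.clip(255.0 * normalized ** gamma, 0, 255).astype(np.uint8)

denoised = np.array([[0, 64, 128, 255]], dtype=np.uint8)
target_image = tone_map(denoised)
# Dark and mid values are lifted; 0 and 255 map to themselves.
```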
  • the process of determining, by the terminal, the first image from the first group of images after acquiring the first group of images may also include the following process:
  • the terminal acquires the sharpness of each frame image in the first group of images, and determines the image with the highest sharpness in each frame image as the first image.
  • the sharpness of A1, A2, A3, and A4 in the first group of images is 80, 83, 81, and 79, respectively.
  • the terminal can directly determine the A2 image as the first image. That is, the terminal can determine the first image from the first set of images based only on the dimension of sharpness.
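Selecting the sharpest frame can be sketched as follows; the disclosure does not define a sharpness measure, so the variance of second differences (a Laplacian-style proxy) on toy 1-D "frames" is an assumption:

```python
# Sharpness-only selection on toy 1-D frames.  The measure below (variance
# of second differences) is one common sharpness proxy, assumed here.
def sharpness(frame):
    # Second differences: large local changes imply sharper content.
    lap = [frame[i - 1] - 2 * frame[i] + frame[i + 1]
           for i in range(1, len(frame) - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

frames = {
    "A1": [10, 10, 11, 10, 10],
    "A2": [10, 90, 5, 80, 10],   # strongest detail, so highest score
    "A3": [10, 30, 20, 30, 10],
    "A4": [10, 12, 11, 12, 10],
}
first_image = max(frames, key=lambda name: sharpness(frames[name]))
```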
  • in other embodiments, the process of determining the first image may also include the following:
  • each frame image of the first group of images includes a human face
  • the terminal acquires a value of a preset parameter of each frame image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image;
  • An image having the largest value of the preset parameter in the first group of images is determined as the first image.
  • after acquiring the first group of images, the terminal detects that each frame image in the first group of images includes a human face, and the terminal may then acquire the preset parameter value of each frame image in the first group of images.
  • the values of the preset parameters of A1, A2, A3, and A4 in the first group of images are 40, 41, 42, and 39, respectively.
  • the terminal can directly determine the image whose preset parameter value is largest in the first group of images as the first image.
  • that is, the first image is the frame in the first group of images in which the human eye is largest; the terminal can determine the first image from the first group of images based only on the dimension of human eye size in the image.
  • the terminal may also take the smile degree of the human face into account when determining the first image.
  • the terminal can combine the sharpness of the image with the degree of smile of the face to determine the first image.
  • the terminal may also combine the size of the human eye with the degree of smile of the face to determine the first image.
  • the terminal can combine the image sharpness, the size of the human eye, and the degree of smile of the face to determine the first image, and the like.
  • the degree of smile of a face may be detected by image recognition of the mouth-corner region of the face: the terminal can recognize the corner of the mouth in the image and the extent to which it is bent; the greater the degree of bending, the greater the degree of smile, and the like.
  • FIG. 3 to FIG. 5 are schematic diagrams of a scenario and a processing flow of an image processing method according to an embodiment of the present application.
  • a dual camera module 10 is mounted on the terminal, and the dual camera module 10 includes a first camera module 11 and a second camera module 12.
  • the first camera module 11 can be a primary camera
  • the second camera module can be a secondary camera.
  • the two cameras in the dual camera module may be arranged side by side in a lateral direction (as shown in FIG. 3).
  • the two cameras in the dual camera module may also be arranged side by side in the longitudinal direction.
  • the terminal uses the dual camera module 10 to capture images
  • the first camera module 11 and the second camera module 12 can simultaneously acquire images.
  • the user opens the camera application and is ready to take a photo, at which point the terminal interface enters the image preview interface.
  • An image for preview by the user will be displayed on the display screen of the terminal.
  • the first camera module and the second camera module can simultaneously acquire images.
  • when the user clicks the camera button, the terminal may obtain, from the cache queue, the 4 frames of images most recently acquired by the first camera module 11 before the click, and the 4 frames of images most recently acquired by the second camera module 12.
  • the four frames of photos (the first group of images) recently acquired by the first camera module are sequentially A1, A2, A3, and A4.
  • the 4 frames of images (the second group of images) recently acquired by the second camera module are sequentially B1, B2, B3, and B4. It can be understood that A1 and B1 are synchronously acquired images, A2 and B2 are synchronously acquired images, A3 and B3 are synchronously acquired images, and A4 and B4 are synchronously acquired images.
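The cache queue of recent preview frames can be sketched with a bounded deque per camera; the frame names and the tick loop are illustrative:

```python
# Sketch of the preview cache: each camera keeps only its four most recently
# captured frames, and frames captured at the same tick form a synchronized
# pair.
from collections import deque

CACHE_DEPTH = 4
first_cam_cache = deque(maxlen=CACHE_DEPTH)
second_cam_cache = deque(maxlen=CACHE_DEPTH)

# Simulate six preview ticks; only the last four frames survive per cache.
for tick in range(1, 7):
    first_cam_cache.append(f"A{tick}")
    second_cam_cache.append(f"B{tick}")

first_group = list(first_cam_cache)    # the first group of images
second_group = list(second_cam_cache)  # the second group of images
# Same index means the pair was captured synchronously.
pairs = list(zip(first_group, second_group))
```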
  • the terminal can acquire the sharpness of each frame image in the first group of images, and the value of the preset parameter.
  • the value of the preset parameter can be used to represent the eye size of the face in the image.
  • sharpness takes values from 0 to 100; the larger the value, the sharper the image.
  • the sharpness of A1, A2, A3, and A4 in the first group of images is 80, 83, 81, and 79, respectively.
  • the value of the preset parameter ranges from 0 to 50. The larger the value, the larger the human eye in the image.
  • the values of the preset parameters of A1, A2, A3, and A4 in the first group of images are 40, 41, 42, and 39, respectively.
  • the terminal may acquire a first weight corresponding to the sharpness and a second weight corresponding to the preset parameter.
  • the first weight is 0.4 and the second weight is 0.6.
  • the terminal can normalize the sharpness and the preset parameter value of each frame image, obtaining each frame's normalized sharpness and normalized preset parameter value. Then, the terminal may weight the normalized sharpness of each frame image by the first weight to obtain each frame's weighted sharpness, and weight the normalized preset parameter value of each frame image by the second weight to obtain each frame's weighted preset parameter value. Finally, the terminal can obtain, for each frame image, the sum of its weighted sharpness and its weighted preset parameter value.
  • taking the A1 image as an example, the terminal first normalizes its sharpness and preset parameter value: the normalized sharpness is 0.8 (80/100), and the normalized preset parameter value is 0.8 (40/50). Then the terminal weights the normalized sharpness 0.8 by the first weight 0.4 to obtain the weighted sharpness, 0.32 (0.4*0.8), and weights the normalized preset parameter value 0.8 by the second weight 0.6 to obtain the weighted preset parameter value, 0.48 (0.6*0.8). Finally, the terminal computes the sum of the A1 image's weighted sharpness 0.32 and weighted preset parameter value 0.48, which is 0.8.
  • similarly, for the A2 image, the weighted sharpness is 0.332 and the weighted preset parameter value is 0.492; the sum of the two is 0.824.
  • for the A3 image, the weighted sharpness is 0.324 and the weighted preset parameter value is 0.492; the sum of the two is 0.816.
  • for the A4 image, the weighted sharpness is 0.316 and the weighted preset parameter value is 0.468; the sum of the two is 0.784.
  • after obtaining the sum values of the four frames A1, A2, A3, and A4, the terminal can determine the image with the largest sum value as the first image, which serves as the base frame for noise reduction. It can be understood that the first image is an image in the first group with a relatively larger human eye and higher sharpness. For example, since the sum value of A2 is the largest, A2 is determined as the first image. Then, the terminal can determine the B2 image captured by the second camera module as the second image.
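The normalize-weight-sum selection walked through above can be sketched as follows. The eye-size value for A3 is taken here as 41 (the value implied by the 0.492 weighted figure in the walkthrough), an assumption made so that the resulting sums reproduce 0.8, 0.824, 0.816, and 0.784:

```python
# Normalize each criterion to [0, 1], weight, and sum; pick the largest.
# Sharpness ranges over [0, 100] (weight 0.4); the eye-size preset parameter
# ranges over [0, 50] (weight 0.6), per the text.
SHARPNESS_WEIGHT, EYE_WEIGHT = 0.4, 0.6

def score(sharpness, eye_size):
    return SHARPNESS_WEIGHT * (sharpness / 100.0) + EYE_WEIGHT * (eye_size / 50.0)

# (sharpness, eye-size) per frame; A3's eye size assumed to be 41 (see above).
frames = {
    "A1": (80, 40),
    "A2": (83, 41),
    "A3": (81, 41),
    "A4": (79, 39),
}
scores = {name: score(*values) for name, values in frames.items()}
first_image = max(scores, key=scores.get)   # A2, with a sum of 0.824
```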
  • the terminal can acquire the depth information according to the first image A2 and the second image B2 by using the CPU. Moreover, the terminal can use the GPU to perform noise reduction processing on the first image A2 according to A1, A3, and A4 in the first group of images, obtaining a noise-reduced A2 image, which is determined as the target image.
  • the process of calculating the depth of field information by the terminal and the process of noise reduction of the A2 image may be performed in parallel to improve the processing speed.
  • the terminal may perform background blur processing on the target image according to the acquired depth information, thereby obtaining an output image.
  • the terminal can then save the output image in an album.
  • the entire process can be as shown in Figure 5.
  • the embodiment provides an image processing device, which is applied to a terminal, where the terminal includes at least a first camera module and a second camera module, and the device includes:
  • a first acquiring module configured to acquire a first group of images that are images collected by the first camera module, and a second group of images that are images collected by the second camera module;
  • a determining module configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously collected images;
  • a second acquiring module configured to acquire depth information according to the first image and the second image
  • a noise reduction module configured to perform noise reduction processing on the first image according to the first group of images to obtain a target image
  • a processing module configured to perform preset processing on the target image according to the depth of field information.
  • the processing module may be configured to perform background blurring processing on the target image.
  • the first set of images includes at least two frames of images.
  • the determining module may be configured to: acquire a sharpness of each frame image in the first group of images; and determine an image with the highest sharpness in each frame image as the first image.
  • the first set of images includes at least two frames of images.
  • the determining module may be configured to: if each frame image of the first group of images includes a human face, acquire a value of a preset parameter of each frame image in the first group of images, the value of the preset parameter being used to indicate the eye size of the face in the image; and determine the image having the largest preset parameter value among the frame images as the first image.
  • the first set of images includes at least two frames of images.
  • the determining module may be configured to: acquire the sharpness of each frame image in the first group of images; if each frame image of the first group of images includes a human face, acquire a value of a preset parameter of each frame image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determine the first image from the first group of images according to the sharpness and the preset parameter value of each frame image.
  • the determining module may be configured to: acquire a first weight corresponding to the sharpness and a second weight corresponding to the preset parameter; weight the sharpness of each frame image by the first weight to obtain each frame's weighted sharpness, and weight the preset parameter value of each frame image by the second weight to obtain each frame's weighted preset parameter value; obtain, for each frame image, the sum of its weighted sharpness and weighted preset parameter value; and determine the image with the largest sum value in the first group of images as the first image.
  • the determining module may be configured to: normalize the sharpness and the preset parameter value of each frame image to obtain each frame's normalized sharpness and normalized preset parameter value; weight the normalized sharpness of each frame image by the first weight to obtain each frame's weighted sharpness; and weight the normalized preset parameter value of each frame image by the second weight to obtain each frame's weighted preset parameter value.
  • the process of acquiring depth of field information according to the first image and the second image and the process of performing noise reduction processing on the first image according to the first group of images to obtain a target image are executed in parallel.
  • the noise reduction module may be configured to: perform noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image; and perform tone mapping processing on the noise-reduced image to obtain the target image.
  • the first set of images includes at least two frames of images.
  • the noise reduction module can be configured to: align all the images in the first group of images; determine, in the aligned images, multiple groups of mutually aligned pixels and the target pixel belonging to the first image in each group; acquire the pixel values of the pixels in each group of mutually aligned pixels; obtain the pixel value mean of each group according to those pixel values; and adjust the pixel value of each target pixel in the first image to the corresponding pixel value mean to obtain the target image.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • the image processing apparatus 300 may include: a first acquisition module 301, a determination module 302, a second acquisition module 303, a noise reduction module 304, and a processing module 305.
  • a first acquiring module 301 configured to acquire a first group of images that are images collected by the first camera module, and a second group of images that are images collected by the second camera module.
  • the first obtaining module 301 may first acquire the first group of images collected by the first camera module of the dual camera module on the terminal, and the second group of images collected by the second camera module of the dual camera module.
  • the first set of images includes a plurality of frames of images, and the second set of images also includes a plurality of frames of images.
  • a determining module 302 configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously captured images.
  • the first acquiring module 301 acquires the first group of images A1, A2, A3, and A4 collected by the first camera module, and the second group of images B1, B2, B3, and B4 collected by the second camera module. Thereafter, the determination module 302 can determine the first image from A1, A2, A3, A4, and then determine the second image from B1, B2, B3, B4.
  • the second image may be an image acquired in synchronization with the first image.
  • the determination module 302 determines A2 of A1, A2, A3, A4 as the first image, and the terminal can determine B2 as the second image accordingly.
  • the second obtaining module 303 is configured to acquire depth information according to the first image and the second image.
  • the second obtaining module 303 can obtain the depth information according to the first image and the second image. It can be understood that since the first image and the second image are images that are synchronously acquired by the dual camera module on the terminal from different shooting positions (angles), the depth information can be acquired according to the first image and the second image.
  • the depth of field information is defined relative to the in-focus object in the image; it is acquired after the in-focus object has been determined.
  • the noise reduction module 304 is configured to perform noise reduction processing on the first image according to the first group of images to obtain a target image.
  • the noise reduction module 304 may perform noise reduction processing on the first image according to the first group of images, thereby obtaining a target image.
  • the noise reduction module 304 may perform noise reduction processing on the first image according to other images in the first group of images other than the first image.
  • the terminal may use the first image A2 as a base frame of the noise reduction process, and perform noise reduction processing on the A2 according to the three frame images A1, A3, and A4 in the first group image. That is, the noise reduction module 304 can recognize and reduce the random noise in the base frame A2 image according to the three frame images A1, A3, and A4, thereby obtaining the target image after noise reduction.
  • the processing module 305 is configured to perform preset processing on the target image according to the depth of field information.
  • the processing module 305 may perform preset processing on the target image according to the acquired depth information.
  • the preset processing may be, for example, background blurring, 3D application processing of the image, or the like.
  • the processing module 305 can be configured to perform background blurring processing on the target image.
  • the determining module 302 may be configured to: acquire a sharpness of each frame image in the first group of images; and determine an image with the highest sharpness in each frame image as the first image.
  • the determining module 302 may be configured to: if each frame image of the first group of images includes a human face, acquire a value of a preset parameter of each frame image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determine the image having the largest preset parameter value among the frame images as the first image.
  • the determining module 302 may be configured to: acquire the sharpness of each frame image in the first group of images; if each frame image of the first group of images includes a human face, acquire a value of a preset parameter of each frame image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determine the first image from the first group of images according to the sharpness and the preset parameter value of each frame image.
  • the determining module 302 may be configured to: acquire a first weight corresponding to the sharpness and a second weight corresponding to the preset parameter; weight the sharpness of each frame image by the first weight to obtain each frame's weighted sharpness, and weight the preset parameter value of each frame image by the second weight to obtain each frame's weighted preset parameter value; obtain, for each frame image, the sum of its weighted sharpness and weighted preset parameter value; and determine the image with the largest sum value in the first group of images as the first image.
  • the determining module 302 may be configured to: normalize the sharpness and the preset parameter value of each frame image to obtain each frame's normalized sharpness and normalized preset parameter value; weight the normalized sharpness of each frame image by the first weight to obtain each frame's weighted sharpness; and weight the normalized preset parameter value of each frame image by the second weight to obtain each frame's weighted preset parameter value.
  • the noise reduction module 304 may be configured to: perform noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image; and perform tone mapping processing on the noise-reduced image to obtain the target image.
  • the first group of images includes at least two frames of images.
  • the noise reduction module 304 can be configured to: align all the images in the first group of images; determine, in the aligned images, multiple groups of mutually aligned pixels and the target pixel belonging to the first image in each group; acquire the pixel values of the pixels in each group of mutually aligned pixels; obtain the pixel value mean of each group according to those pixel values; and adjust the pixel value of each target pixel in the first image to the corresponding pixel value mean to obtain the target image.
  • the embodiment of the present application provides a computer readable storage medium on which a computer program is stored; when the computer program is executed on a computer, it causes the computer to execute the processes in the image processing method provided by the embodiment.
  • the embodiment of the present application further provides an electronic device, including a memory and a processor; the processor executes the processes in the image processing method provided by the embodiment by invoking the computer program stored in the memory.
  • the above electronic device may be a mobile terminal such as a tablet or a smart phone.
  • FIG. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • the mobile terminal 400 can include components such as a camera module 401, a memory 402, a processor 403, and the like. It will be understood by those skilled in the art that the mobile terminal structure shown in FIG. 7 does not constitute a limitation of the mobile terminal, which may include more or fewer components than those illustrated, a combination of certain components, or a different arrangement of components.
  • the camera module 401 can be a dual camera module or the like installed on the mobile terminal.
  • the camera module 401 includes at least a first camera module and a second camera module.
  • the first camera module and the second camera module can simultaneously acquire images.
  • Memory 402 can be used to store applications and data.
  • the application stored in the memory 402 contains executable code.
  • Applications can form various functional modules.
  • the processor 403 executes various functional applications and data processing by running an application stored in the memory 402.
  • the processor 403 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the mobile terminal as a whole.
  • the processor 403 in the mobile terminal loads the executable code corresponding to the processes of one or more applications into the memory 402 according to the following instructions, and the processor 403 runs the applications stored in the memory 402, thereby implementing the following processes:
  • A first image is determined from the first set of images, and a second image is determined from the second set of images, the first image and the second image being images acquired synchronously;
  • the target image is subjected to preset processing according to the depth of field information.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit that can be implemented using hardware and/or software components, and can include various processing units that define an Image Signal Processing pipeline.
  • the image processing circuit may at least include: a camera, an image signal processor (ISP), a control logic, an image memory, a display, and the like.
  • the camera may include at least one or more lenses and an image sensor.
  • the image sensor can include a color filter array (such as a Bayer filter).
  • the image sensor can acquire light intensity and wavelength information captured with each imaging pixel of the image sensor and provide a set of raw image data that can be processed by the image signal processor.
  • the image signal processor can process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations can be performed with the same or different bit depth precision.
  • the raw image data can be stored in the image memory after being processed by the image signal processor.
  • the image signal processor can also receive image data from the image memory.
  • the image memory can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • the image signal processor can perform one or more image processing operations, such as time domain filtering.
  • the processed image data can be sent to the image memory for additional processing before being displayed.
  • the image signal processor can also receive processed data from the image memory and process the image data in the raw domain and in the RGB and YCbCr color spaces.
  • the processed image data can be output to a display for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). Additionally, the output of the image signal processor can also be sent to an image memory, and the display can read image data from the image memory.
  • the image memory can be configured to implement one or more frame buffers.
  • the statistics determined by the image signal processor can be sent to the control logic.
  • the statistics may include statistical information of image sensors such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens shading correction, and the like.
  • the control logic can include a processor and/or a microcontroller that executes one or more routines, such as firmware.
  • One or more routines may determine the camera's control parameters and ISP control parameters based on the received statistics.
  • the camera's control parameters may include camera flash control parameters, lens control parameters (eg, focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), and the like.
  • FIG. 8 is a schematic structural diagram of an image processing circuit in the embodiment. As shown in FIG. 8, for convenience of explanation, only various aspects of the image processing technique related to the embodiment of the present application are shown.
  • the image processing circuit may include a first camera 510, a second camera 520, a first image signal processor 530, a second image signal processor 540, a control logic 550, an image memory 560, and a display 570.
  • the first camera 510 may include one or more first lenses 511 and a first image sensor 512.
  • the second camera 520 can include one or more second lenses 521 and a second image sensor 522.
  • the first image acquired by the first camera 510 is transmitted to the first image signal processor 530 for processing.
  • the statistical data of the first image (such as the brightness of the image, the contrast of the image, the color of the image, etc.) may be sent to the control logic 550.
  • the control logic 550 can determine the control parameters of the first camera 510 according to the statistical data, so that the first camera 510 can perform operations such as auto focus, auto exposure, and the like according to the control parameters.
  • the first image is processed by the first image signal processor 530 and stored in the image memory 560.
  • the first image signal processor 530 can also read the image stored in the image memory 560 for processing.
  • the first image may also be processed by the first image signal processor 530 and sent directly to the display 570 for display. Display 570 can also read images in image memory 560 for display.
  • the second image acquired by the second camera 520 is transmitted to the second image signal processor 540 for processing.
  • the statistical data of the second image (such as the brightness of the image, the contrast of the image, the color of the image, etc.) may be sent to the control logic 550.
  • the control logic 550 can determine the control parameters of the second camera 520 based on the statistical data, so that the second camera 520 can perform operations such as auto focus, auto exposure, and the like according to the control parameters.
  • the second image is processed by the second image signal processor 540 and stored in the image memory 560.
  • the second image signal processor 540 can also read the image stored in the image memory 560 for processing.
  • the second image may also be processed by the second image signal processor 540 and sent directly to the display 570 for display. Display 570 can also read images in image memory 560 for display.
  • the first image signal processor and the second image signal processor may also be combined into a single image signal processor that processes the data of the first image sensor and the second image sensor, respectively.
  • the electronic device may further include a CPU and a power supply module.
  • the CPU is connected to the control logic, the first image signal processor, the second image signal processor, the image memory, and the display, and the CPU is used to implement global control.
  • the power supply module is used to power each module.
  • for example, when a mobile phone with a dual camera module works in dual-camera mode, the CPU controls the power supply module to supply power to the first camera and the second camera.
  • the image sensor in the first camera and the image sensor in the second camera are powered on, so that image acquisition and conversion can be realized.
  • alternatively, only one camera in the dual camera module may work.
  • in that case, the CPU controls the power supply module to supply power to the image sensor of the corresponding camera.
  • the installation distance of the dual camera module in the terminal can be determined according to the size of the terminal and the desired shooting effect.
  • the two camera modules may be mounted as close as possible, for example, within 10 mm.
  • A first image is determined from the first set of images, and a second image is determined from the second set of images, the first image and the second image being images acquired synchronously;
  • the target image is subjected to preset processing according to the depth of field information.
  • when performing the preset processing on the target image, the electronic device may perform: background blurring processing on the target image.
  • the first group of images includes at least two frames of images, and when determining the first image from the first group of images, the electronic device may perform: acquiring the sharpness of each frame of image in the first group of images; and determining the image with the highest sharpness among the frames as the first image.
  • the first group of images includes at least two frames of images
  • in that case, the electronic device may perform: if each frame of image in the first group of images contains a human face, acquiring the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determining the image with the largest value of the preset parameter among the frames as the first image.
  • the first group of images includes at least two frames of images, and when determining the first image from the first group of images, the electronic device may perform: acquiring the sharpness of each frame of image in the first group of images; if each frame of image in the first group of images contains a human face, acquiring the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determining the first image from the first group of images according to the sharpness of each frame of image and the value of the preset parameter.
  • when determining the first image from the first group of images according to the sharpness of each frame of image and the value of the preset parameter, the electronic device may perform: acquiring a first weight corresponding to sharpness and a second weight corresponding to the preset parameter; weighting the sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image, and weighting the value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image; acquiring, for each frame of image, the sum of its weighted sharpness and its weighted value of the preset parameter; and determining the image with the largest sum in the first group of images as the first image.
  • when weighting the sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image, and weighting the value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image, the electronic device may perform: normalizing the sharpness and the value of the preset parameter of each frame of image to obtain the normalized sharpness and the normalized value of the preset parameter of each frame of image; weighting the normalized sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image; and weighting the normalized value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image.
  • the process of acquiring the depth of field information according to the first image and the second image and the process of performing noise reduction processing on the first image according to the first group of images to obtain the target image are performed in parallel by the electronic device.
  • when performing the noise reduction processing on the first image according to the first group of images to obtain the target image, the electronic device may perform: performing noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image; and performing tone mapping processing on the noise-reduced image to obtain the target image.
  • the first group of images includes at least two frames of images
  • in that case, the electronic device may perform noise reduction processing on the first image according to the first group of images to obtain the target image.
  • the image processing apparatus provided by the embodiments of the present application belongs to the same concept as the image processing method in the above embodiments, and any method provided in the image processing method embodiments may be executed on the image processing apparatus.
  • for the specific implementation process, refer to the embodiments of the image processing method; details are not described herein again.
  • the computer program can be stored in a computer readable storage medium, such as in a memory, and executed by at least one processor; the execution may include the flow of the embodiments of the image processing method.
  • the storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), a random access memory (RAM), or the like.
  • each functional module may be integrated into one processing chip, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated module, if implemented in the form of a software functional module and sold or used as a standalone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image processing method, including: acquiring a first group of images and a second group of images; determining a first image from the first group of images and a second image from the second group of images; acquiring depth-of-field information according to the first image and the second image; performing noise reduction processing on the first image according to the first group of images to obtain a target image; and performing preset processing on the target image according to the depth-of-field information.

Description

Image processing method and apparatus, storage medium, and electronic device
This application claims priority to the Chinese patent application No. 201810097896.8, filed with the Chinese Patent Office on January 31, 2018 and entitled "Image processing method and apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the continuous development of hardware technology, the hardware configurations of terminals keep rising. At present, many terminals are equipped with dual camera modules, with whose help the photographing capability of a terminal can be improved considerably. For example, a dual camera module composed of a color camera and a monochrome camera enables the terminal to capture more detail when taking photos, while a dual camera module composed of two color cameras gives the terminal twice the amount of incoming light, and so on.
Summary
Embodiments of the present application provide an image processing method and apparatus, a storage medium, and an electronic device, which can improve the imaging quality of images.
An embodiment of the present application provides an image processing method, applied to a terminal, the terminal including at least a first camera module and a second camera module, the method including:
acquiring a first group of images and a second group of images, the first group of images being images captured by the first camera module, and the second group of images being images captured by the second camera module;
determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously captured images;
acquiring depth-of-field information according to the first image and the second image;
performing noise reduction processing on the first image according to the first group of images to obtain a target image;
performing preset processing on the target image according to the depth-of-field information.
An embodiment of the present application provides an image processing apparatus, applied to a terminal, the terminal including at least a first camera module and a second camera module, the apparatus including:
a first acquisition module configured to acquire a first group of images and a second group of images, the first group of images being images captured by the first camera module, and the second group of images being images captured by the second camera module;
a determination module configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously captured images;
a second acquisition module configured to acquire depth-of-field information according to the first image and the second image;
a noise reduction module configured to perform noise reduction processing on the first image according to the first group of images to obtain a target image;
a processing module configured to perform preset processing on the target image according to the depth-of-field information.
An embodiment of the present application provides a storage medium having a computer program stored thereon; when the computer program is executed on a computer, the computer is caused to perform the processes in the image processing method provided by the embodiments of the present application.
An embodiment of the present application further provides an electronic device, including a memory, a processor, a first camera module, and a second camera module, where the processor, by invoking a computer program stored in the memory, is configured to perform:
acquiring a first group of images and a second group of images, the first group of images being images captured by the first camera module, and the second group of images being images captured by the second camera module;
determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously captured images;
acquiring depth-of-field information according to the first image and the second image;
performing noise reduction processing on the first image according to the first group of images to obtain a target image;
performing preset processing on the target image according to the depth-of-field information.
Brief Description of the Drawings
The technical solutions of the present application and their beneficial effects will become apparent from the following detailed description of specific embodiments of the present application with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
FIG. 2 is another schematic flowchart of the image processing method according to an embodiment of the present application.
FIG. 3 to FIG. 5 are schematic diagrams of scenarios and processing flows of the image processing method according to an embodiment of the present application.
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference is made to the drawings, in which like reference numerals represent like components. The principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present application and should not be construed as limiting other specific embodiments not detailed herein.
An embodiment of the present application provides an image processing method, applied to a terminal, the terminal including at least a first camera module and a second camera module; the method may include:
acquiring a first group of images and a second group of images, the first group of images being images captured by the first camera module, and the second group of images being images captured by the second camera module;
determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously captured images;
acquiring depth-of-field information according to the first image and the second image;
performing noise reduction processing on the first image according to the first group of images to obtain a target image;
performing preset processing on the target image according to the depth-of-field information.
In one implementation, the process of performing the preset processing on the target image may include: performing background blurring processing on the target image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the process of determining the first image from the first group of images may include: acquiring the sharpness of each frame of image in the first group of images; and determining the image with the highest sharpness among the frames as the first image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the process of determining the first image from the first group of images may include: if each frame of image in the first group of images contains a human face, acquiring the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determining the image with the largest value of the preset parameter among the frames as the first image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the process of determining the first image from the first group of images may include: acquiring the sharpness of each frame of image in the first group of images; if each frame of image in the first group of images contains a human face, acquiring the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determining the first image from the first group of images according to the sharpness of each frame of image and the value of the preset parameter.
In one implementation, the process of determining the first image from the first group of images according to the sharpness of each frame of image and the value of the preset parameter may include: acquiring a first weight corresponding to sharpness and a second weight corresponding to the preset parameter; weighting the sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image, and weighting the value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image; acquiring, for each frame of image, the sum of its weighted sharpness and its weighted value of the preset parameter; and determining the image with the largest sum in the first group of images as the first image.
In one implementation, the process of weighting the sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image and weighting the value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image may include: normalizing the sharpness and the value of the preset parameter of each frame of image to obtain the normalized sharpness and the normalized value of the preset parameter of each frame of image; weighting the normalized sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image; and weighting the normalized value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image.
In one implementation, the process of acquiring the depth-of-field information according to the first image and the second image and the process of performing noise reduction processing on the first image according to the first group of images to obtain the target image are executed in parallel.
In one implementation, the process of performing noise reduction processing on the first image according to the first group of images to obtain the target image may include: performing noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image; and performing tone mapping processing on the noise-reduced image to obtain the target image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the process of performing noise reduction processing on the first image according to the first group of images to obtain the target image may include: aligning all the images in the first group of images; determining, in the aligned images, multiple sets of mutually aligned pixels, and the target pixel belonging to the first image in each set of mutually aligned pixels; acquiring the pixel value of each pixel in each set of mutually aligned pixels; obtaining, according to the pixel values, the mean pixel value of each set of mutually aligned pixels; and adjusting the pixel value of the target pixel in the first image to the mean pixel value to obtain the target image.
It can be understood that the execution subject of the embodiments of the present application may be a terminal such as a smartphone or a tablet computer.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method can be applied to a terminal. The terminal may be any terminal equipped with a dual camera module, such as a smartphone or a tablet computer. The flow of the image processing method may include:
In 101, a first group of images and a second group of images are acquired, the first group of images being images captured by a first camera module, and the second group of images being images captured by a second camera module.
With the continuous development of hardware technology, the hardware configurations of terminals keep rising. At present, many terminals are equipped with dual camera modules, with whose help the photographing capability of a terminal can be improved considerably. For example, a dual camera module composed of a color camera and a monochrome camera enables the terminal to capture more detail when taking photos, while a dual camera module composed of two color cameras gives the terminal twice the amount of incoming light, and so on. However, in the related art, the images obtained by the processing of a terminal equipped with a dual camera module have a poor imaging effect.
In 101 of this embodiment of the present application, for example, the terminal may first acquire the first group of images captured by the first camera module of its dual camera module and the second group of images captured by the second camera module of the dual camera module. The first group of images contains multiple frames of images, and so does the second group of images.
In some implementations, after the user opens the camera application and before the user presses the shutter button, the dual camera module of the terminal may continuously capture images synchronously, and the captured images may be stored in cache queues. The terminal may fetch images from the cache queues and display them on the terminal screen for the user to preview.
In one implementation, the cache queue may be a fixed-length queue. For example, the length of the cache queue is 4 elements; that is, the cache queue stores the 4 frames most recently captured by the camera module. For example, the first queue corresponding to the first camera module caches the 4 frames it captured most recently, and the second queue corresponding to the second camera module caches the 4 frames it most recently captured synchronously with the first camera module. Moreover, later captured images overwrite earlier captured images. For example, in order of capture time, the first queue caches the 4 frames A1, A2, A3, A4; when the first camera module captures image A5, the terminal may delete image A1 from the first queue and insert image A5, so that the first queue becomes A2, A3, A4, A5.
In one implementation, when the terminal captures images with the dual camera module, the first camera module and the second camera module may capture images synchronously. For example, when the first camera module captures image A1, the second camera module may synchronously capture image B1. For example, if the first queue caches the 4 frames A1, A2, A3, A4, and the second queue caches the 4 frames B1, B2, B3, B4, then A1 and B1 are synchronously captured images, A2 and B2 are synchronously captured images, A3 and B3 are synchronously captured images, and A4 and B4 are synchronously captured images.
Of course, the camera modules of the terminal may also capture images after the user presses the shutter button.
Therefore, in some implementations, the first group of images may contain only images captured by the first camera module before the user presses the shutter button, only images captured by the first camera module after the user presses the shutter button, or both images captured before and images captured after the user presses the shutter button.
For example, before the user presses the shutter button, the first camera module captures the 4 frames A1, A2, A3, A4, and after the user presses the shutter button, the first camera module captures the 4 frames A5, A6, A7, A8. Then, the first group of images may be A1, A2, A3, A4, or A2, A3, A4, A5, or A3, A4, A5, A6, or A5, A6, A7, A8, and so on. In some implementations, the first group of images may be consecutive frames captured by the first camera, or non-consecutive frames, such as A2, A3, A5, A6. The same applies to the second group of images.
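The fixed-length cache queue described above, where a newly captured frame evicts the oldest one, can be sketched with a bounded deque. This is a minimal illustration of the queue behavior only; the frame names A1...A5 follow the running example in the text, and the length of 4 mirrors the 4-element queue described here.

```python
from collections import deque

# A fixed-length cache queue of 4 elements: appending a 5th frame
# automatically evicts the oldest one, as described above.
first_queue = deque(maxlen=4)

for frame in ["A1", "A2", "A3", "A4"]:
    first_queue.append(frame)

print(list(first_queue))  # ['A1', 'A2', 'A3', 'A4']

# The first camera module captures A5: A1 is dropped.
first_queue.append("A5")
print(list(first_queue))  # ['A2', 'A3', 'A4', 'A5']
```

A second deque of the same length would model the second camera module's queue, with frames appended in lockstep so that entries at the same position are the synchronously captured pairs.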
In 102, a first image is determined from the first group of images, and a second image is determined from the second group of images, the first image and the second image being synchronously captured images.
For example, after acquiring the first group of images A1, A2, A3, A4 captured by the first camera module and the second group of images B1, B2, B3, B4 captured by the second camera module, the terminal may determine the first image from A1, A2, A3, A4 and then determine the second image from B1, B2, B3, B4, where the second image may be the image captured synchronously with the first image.
For example, if the terminal determines A2 among A1, A2, A3, A4 as the first image, the terminal may accordingly determine B2 as the second image.
In 103, depth-of-field information is acquired according to the first image and the second image.
For example, after determining the first image and the second image, the terminal may acquire the depth-of-field information according to the first image and the second image. It can be understood that since the first image and the second image are images synchronously captured by the dual camera module of the terminal from different shooting positions (angles), the depth-of-field information can be acquired from the first image and the second image.
It should be noted that the depth-of-field information is relative to the in-focus object in the image; it is the depth-of-field information acquired after the in-focus object is determined.
In 104, noise reduction processing is performed on the first image according to the first group of images to obtain a target image.
For example, when the first group of images includes at least two frames of images, the terminal may perform noise reduction processing on the first image according to at least two frames of images in the first group of images, thereby obtaining the target image. For example, the terminal may perform noise reduction on the first image according to the other images in the first group of images. For example, if the first image is image A2, the terminal may use A2 as the base frame of the noise reduction processing and perform noise reduction on A2 according to the 3 frames A1, A3, A4 in the first group of images; that is, the terminal may identify and reduce the random noise in the base frame A2 according to the 3 frames A1, A3, A4, thereby obtaining the noise-reduced target image.
In some embodiments, an image noise reduction algorithm may also be used to perform noise reduction on the first image; for example, the image noise reduction algorithm may include a wavelet denoising algorithm, a smoothing filter algorithm, and the like.
In 105, preset processing is performed on the target image according to the depth-of-field information.
For example, after obtaining the target image, the terminal may perform preset processing on the target image according to the acquired depth-of-field information.
In some implementations, the preset processing may be, for example, background blurring or 3D application processing of the image.
It can be understood that in this embodiment, the terminal can perform noise reduction processing on the first image according to the first group of images, so the obtained target image has little noise. Moreover, the terminal can acquire the depth-of-field information according to the synchronously captured first image and second image, so the depth-of-field information acquired by the terminal is more accurate. Therefore, when the terminal performs the preset processing on the target image according to the depth-of-field information, the image obtained after the processing has a better imaging effect.
Referring to FIG. 2, FIG. 2 is another schematic flowchart of the image processing method according to an embodiment of the present application. The flow may include:
In 201, the terminal acquires a first group of images and a second group of images, the first group of images being images captured by the first camera module, and the second group of images being images captured by the second camera module.
For example, a dual camera module including a first camera module and a second camera module is installed on the terminal, where the first camera module and the second camera module can capture images synchronously. The first group of images contains multiple frames of images, and so does the second group of images. With the dual camera module, the terminal can quickly capture multiple frames of the same subject in the same shooting scene.
As described earlier in this embodiment, for example, the first group of images captured by the first camera module consists of the 4 frames A1, A2, A3, A4, and the second group of images captured by the second camera module consists of the 4 frames B1, B2, B3, B4.
In 202, the terminal acquires the sharpness of each frame of image in the first group of images.
For example, after acquiring the first group of images A1, A2, A3, A4 captured by the first camera module, the terminal may acquire the sharpness of images A1, A2, A3, A4.
For example, the sharpness value ranges from 0 to 100, and a larger value indicates a sharper image. For example, the sharpness values of A1, A2, A3, A4 in the first group of images are 80, 83, 81, and 79, respectively.
In 203, if each frame of image in the first group of images contains a human face, the terminal acquires the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image.
For example, after acquiring the sharpness of images A1, A2, A3, A4, the terminal detects that each frame of image in the first group of images contains a human face; the terminal may then further acquire the value of the preset parameter of each frame of image in the first group of images, where the value of the preset parameter may be used to represent the eye size of the face in the image.
In one implementation, the terminal may use preset algorithms to obtain the eye size of the face in the image; these algorithms may output a value representing the eye size, where a larger value may indicate larger eyes.
Alternatively, the terminal may first identify the eye region of the face and the first target number of pixels in the region where the eyes are located, and then compute the ratio of the first target number of pixels to the total number of pixels in the image; a larger ratio indicates larger eyes. Or, the terminal may count only the second number of pixels occupied by the eyes in the image height direction and the total number of pixels in the image height direction, and then compute the ratio of the second target number of pixels to the total number of pixels in the height direction; a larger ratio indicates larger eyes.
For example, the value of the preset parameter used to represent eye size ranges from 0 to 50, and a larger value indicates larger eyes in the image. For example, the values of the preset parameter of A1, A2, A3, A4 in the first group of images are 40, 41, 41, and 39, respectively.
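The two pixel-ratio measures of eye size described above can be sketched as follows. The eye region is assumed to come from a face/eye detector, which is outside this sketch; the function names and the sample numbers are illustrative only.

```python
def eye_size_ratio(eye_pixel_count: int, image_width: int, image_height: int) -> float:
    """Ratio of eye-region pixels to total image pixels; larger means larger eyes."""
    return eye_pixel_count / (image_width * image_height)

def eye_height_ratio(eye_height_pixels: int, image_height: int) -> float:
    """Alternative measure: eye extent along the image height direction only."""
    return eye_height_pixels / image_height

# Hypothetical numbers: a 40x30-pixel eye region in a 1000x800 image.
print(eye_size_ratio(40 * 30, 1000, 800))  # 0.0015
print(eye_height_ratio(30, 800))           # 0.0375
```

Either ratio can then be rescaled to the 0-50 preset-parameter range used in the examples here.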
In 204, the terminal acquires a first weight corresponding to sharpness and a second weight corresponding to the preset parameter.
For example, after acquiring the sharpness and the value of the preset parameter of each frame of image in the first group of images, the terminal may acquire the first weight corresponding to sharpness and the second weight corresponding to the preset parameter. It can be understood that the sum of the first weight and the second weight is 1.
In some implementations, the values of the first weight and the second weight may be set according to usage requirements. For example, in scenarios with high requirements on image sharpness, the first weight corresponding to sharpness may be set larger and the second weight corresponding to the preset parameter smaller; for example, a first weight of 0.7 and a second weight of 0.3, or a first weight of 0.6 and a second weight of 0.4, and so on. When a large-eye image is desired, the first weight corresponding to sharpness may be set smaller and the second weight corresponding to the preset parameter larger; for example, a first weight of 0.3 and a second weight of 0.7, or a first weight of 0.4 and a second weight of 0.6, and so on.
In other implementations, the terminal may set the first weight and the second weight according to the sharpness differences between the images. For example, if the terminal detects that the sharpness differences between the frames in the first group of images are within a preset threshold range, i.e., the frames differ little in sharpness, the terminal may set the first weight corresponding to sharpness smaller and the second weight corresponding to the preset parameter larger; for example, a first weight of 0.4 and a second weight of 0.6, and so on. If the terminal detects that the sharpness differences between the frames in the first group of images are outside the preset threshold range, i.e., the frames differ considerably in sharpness, the terminal may set the first weight larger and the second weight smaller; for example, a first weight of 0.6 and a second weight of 0.4, and so on.
Of course, in other implementations, the values of the first weight and the second weight may also be set by the user according to shooting needs.
In 205, the terminal normalizes the sharpness and the value of the preset parameter of each frame of image in the first group of images to obtain the normalized sharpness and the normalized value of the preset parameter of each frame of image.
In 206, the terminal weights the normalized sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image, and weights the normalized value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image.
In 207, the terminal acquires, for each frame of image in the first group of images, the sum of its weighted sharpness and its weighted value of the preset parameter.
For example, 205, 206, and 207 may include the following:
For example, the sharpness values of A1, A2, A3, A4 in the first group of images are 80, 83, 81, and 79, and the values of their preset parameter are 40, 41, 41, and 39. The first weight is 0.4 and the second weight is 0.6.
Then, for image A1, the terminal may first normalize its sharpness and the value of its preset parameter. For example, the normalized sharpness is 0.8 (80/100), and the normalized value of the preset parameter is 0.8 (40/50). The terminal may then weight the normalized sharpness 0.8 by the first weight 0.4 to obtain the weighted sharpness 0.32 (0.4*0.8), and weight the normalized value 0.8 of the preset parameter by the second weight 0.6 to obtain the weighted value 0.48 (0.6*0.8) of the preset parameter. The terminal may then compute the sum of the weighted sharpness 0.32 and the weighted value 0.48 of the preset parameter of image A1, which is 0.8.
For image A2, the terminal may likewise first normalize its sharpness and the value of its preset parameter. For example, the normalized sharpness is 0.83 (83/100), and the normalized value of the preset parameter is 0.82 (41/50). The terminal may then weight the normalized sharpness 0.83 by the first weight 0.4 to obtain the weighted sharpness 0.332 (0.4*0.83), and weight the normalized value 0.82 of the preset parameter by the second weight 0.6 to obtain the weighted value 0.492 (0.6*0.82). The terminal may then compute the sum of the weighted sharpness 0.332 and the weighted value 0.492 of the preset parameter of image A2, which is 0.824.
Similarly, the terminal can compute for image A3 a normalized sharpness of 0.81 and a normalized value of the preset parameter of 0.82. The sharpness weighted by the first weight 0.4 is 0.324, and the value of the preset parameter weighted by the second weight 0.6 is 0.492, so the sum of the weighted sharpness 0.324 and the weighted value 0.492 of the preset parameter of image A3 is 0.816.
The terminal can compute for image A4 a normalized sharpness of 0.79 and a normalized value of the preset parameter of 0.78. The sharpness weighted by the first weight 0.4 is 0.316, and the value of the preset parameter weighted by the second weight 0.6 is 0.468, so the sum of the weighted sharpness 0.316 and the weighted value 0.468 of the preset parameter of image A4 is 0.784.
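Steps 205 to 207 can be sketched as follows, using the example values from the text (sharpness scale 0-100, eye-size parameter scale 0-50, weights 0.4/0.6). The scores are rounded to three decimals purely to keep the floating-point results readable; the frame names are from the running example.

```python
def score_frames(frames, w_sharp=0.4, w_eye=0.6,
                 sharp_max=100.0, eye_max=50.0):
    """Normalize sharpness and eye-size values, weight them, and sum (steps 205-207)."""
    scores = {}
    for name, (sharpness, eye_value) in frames.items():
        weighted_sharp = w_sharp * (sharpness / sharp_max)  # step 206
        weighted_eye = w_eye * (eye_value / eye_max)        # step 206
        scores[name] = round(weighted_sharp + weighted_eye, 3)  # step 207
    return scores

frames = {"A1": (80, 40), "A2": (83, 41), "A3": (81, 41), "A4": (79, 39)}
scores = score_frames(frames)
print(scores)      # {'A1': 0.8, 'A2': 0.824, 'A3': 0.816, 'A4': 0.784}

# Step 208: the frame with the largest sum becomes the first image.
first_image = max(scores, key=scores.get)
print(first_image)  # A2
```

The sums reproduce the worked example: A2 has the largest sum (0.824) and is selected as the base frame.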
In 208, the terminal determines the image with the largest sum in the first group of images as the first image, and determines a second image from the second group of images, the first image and the second image being synchronously captured images.
For example, after obtaining, for each frame of image in the first group of images, the sum of its weighted sharpness and its weighted value of the preset parameter, the terminal may determine the image with the largest sum as the first image.
For example, since the sums of A1, A2, A3, A4 are 0.8, 0.824, 0.816, and 0.784, respectively, the terminal may determine A2 in the first group of images as the first image.
Then the terminal may determine the second image from the second group of images. The second image may be the image captured synchronously with the first image. The terminal may therefore determine B2 in the second group of images as the second image.
In 209, the terminal executes the following processes in parallel: acquiring depth-of-field information according to the first image and the second image, and performing noise reduction processing on the first image according to the first group of images to obtain a target image.
For example, after determining the first image and the second image, the terminal may acquire the depth-of-field information according to the first image and the second image. It can be understood that the first image and the second image are images of the same subject captured by the dual camera module of the terminal from different positions (angles), so the depth-of-field information can be acquired from the first image and the second image. It should be noted that the depth-of-field information is relative to the in-focus object in the image; it is the depth-of-field information acquired after the in-focus object is determined.
Then, the terminal may perform noise reduction processing on the first image according to the first group of images, thereby obtaining the target image. For example, the terminal may perform noise reduction on the first image according to the other images in the first group of images. For example, if the first image is image A2, the terminal may use A2 as the base frame of the noise reduction processing and denoise A2 according to the 3 frames A1, A3, A4 of the first group of images; that is, the terminal may identify and reduce the random noise in the base frame A2 according to A1, A3, A4, thereby obtaining the noise-reduced target image.
The process of acquiring the depth-of-field information according to the first image and the second image and the process of performing noise reduction processing on the first image according to the first group of images to obtain the target image may be executed in parallel. It should be noted that performing noise reduction on the first image does not affect acquiring the depth-of-field information from the first image and the second image, so the noise reduction process and the depth-of-field acquisition process can be executed in parallel.
For example, in one implementation, the terminal may use the central processing unit (CPU) to execute the process of acquiring the depth-of-field information from the first image and the second image, while using the graphics processing unit (GPU) to execute the process of performing noise reduction on the first image according to the first group of images to obtain the target image.
It can be understood that executing the above two processes in parallel can save the processing time of the terminal and improve the efficiency of image processing. In some embodiments, the terminal needs 800 ms to acquire the depth-of-field information and 400 ms for the noise reduction; by processing the depth-of-field acquisition and the noise reduction in parallel (for example, by multithreaded parallel processing), 400 ms of processing time can be saved, which improves the imaging speed of the terminal. Moreover, in some embodiments, within the 800 ms during which one thread acquires the depth-of-field information, the other thread may, in addition to the noise reduction (about 400 ms), also perform beautification processing (about 200 ms), filter processing (about 100 ms), and other processing, so that more processing of the target image has been completed by the time the depth-of-field acquisition finishes, saving more processing time and further improving the imaging speed of the terminal.
In other implementations, besides acquiring the depth-of-field information from the first image and the second image, when the capture interval between two frames is short enough or the differences between the captured frames are small enough, the terminal may also select any frame from the first group of images, select from the second group of images the image captured synchronously with that frame, and acquire the depth-of-field information from these two frames. For example, in this embodiment the first image is A2 and the second image is B2; in other implementations, the terminal may also select any frame among A1, A3, A4, for example image A4, then select from the second group of images the image B4 captured synchronously with A4, and acquire the depth-of-field information from A4 and B4.
In addition, when the capture interval between two frames is short enough or the differences between the captured frames are small enough, the terminal may also select any frame from the first group of images and any frame from the second group of images, and acquire the depth-of-field information from these two frames. For example, the terminal selects A2 and B3 and acquires the depth-of-field information from these two frames.
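The parallel execution in step 209, where one worker acquires the depth-of-field information while another denoises the base frame, can be sketched with a thread pool. The two worker functions here are stand-ins for the actual stereo-depth and multi-frame denoising routines (which the text assigns to CPU and GPU respectively), not real implementations.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_depth_info(first_image, second_image):
    # Stand-in for the ~800 ms stereo depth-of-field computation.
    return {"pair": (first_image, second_image)}

def denoise(base_frame, other_frames):
    # Stand-in for the ~400 ms multi-frame noise reduction on the base frame.
    return f"denoised({base_frame})"

# Submit both tasks so they run concurrently; the denoising result does not
# depend on the depth computation, so neither blocks the other.
with ThreadPoolExecutor(max_workers=2) as pool:
    depth_future = pool.submit(compute_depth_info, "A2", "B2")
    target_future = pool.submit(denoise, "A2", ["A1", "A3", "A4"])
    depth_info = depth_future.result()
    target_image = target_future.result()

print(target_image)  # denoised(A2)
```

With real 800 ms and 400 ms workloads, the slower of the two tasks bounds the total time, which is where the 400 ms saving described above comes from.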
In one implementation, the first group of images contains at least two frames of images, and the process in 209 of the terminal performing noise reduction processing on the first image according to the first group of images to obtain the target image may include the following:
the terminal aligns all the images in the first group of images;
in the aligned images, the terminal determines multiple sets of mutually aligned pixels, and the target pixel belonging to the first image in each set of mutually aligned pixels;
the terminal acquires the pixel value of each pixel in each set of mutually aligned pixels;
according to the pixel values, the terminal obtains the mean pixel value of each set of mutually aligned pixels;
the terminal adjusts the pixel value of the target pixel in the first image to the mean pixel value to obtain the target image.
For example, the first group of images contains A1, A2, A3, A4, where the first image is A2. The terminal may then determine A2 as the base frame for the noise reduction processing, and may use an image alignment algorithm to align the 4 frames A1, A2, A3, A4.
After aligning the 4 frames A1, A2, A3, A4, the terminal may determine mutually aligned pixels as a set of associated pixels, thereby obtaining multiple sets of mutually aligned pixels. The terminal may then determine the pixel belonging to the first image in each set of mutually aligned pixels as a target pixel. After that, the terminal may acquire the pixel value of each pixel in each set of mutually aligned pixels, and further obtain the mean pixel value of each set of mutually aligned pixels. The terminal may then adjust the pixel value of each target pixel in the first image to the mean pixel value of the set the target pixel belongs to; the adjusted first image is the target image.
For example, there is a pixel X1 in image A1, a pixel X2 in image A2, a pixel X3 in image A3, and a pixel X4 in image A4, and the image alignment algorithm shows that pixels X1, X2, X3, X4 are at the same aligned position after the 4 frames A1, A2, A3, A4 are aligned; that is, pixels X1, X2, X3, X4 are aligned. For example, the pixel value of X1 is 101, that of X2 is 102, that of X3 is 103, and that of X4 is 104; the mean of these four pixel values is then 102.5. After obtaining the mean 102.5, the terminal may adjust the pixel value of pixel X2 in image A2 from 102 to 102.5, thereby denoising pixel X2. Similarly, after the pixel values of all pixels in image A2 that have aligned pixels in images A1, A3, A4 are adjusted to the corresponding means, the resulting image is the noise-reduced target image.
In one implementation, the frame with the highest sharpness among the 4 frames A1, A2, A3, A4 may also first be determined, different weights may then be assigned to the pixel values of different frames, the mean may be computed from the weighted pixel values, and the pixel values in the base frame A2 may be adjusted according to the weighted mean.
For example, pixel Z2 in image A2 is aligned with pixel Z1 in image A1, pixel Z3 in image A3, and pixel Z4 in image A4, where the pixel value of Z1 is 101, that of Z2 is 102, that of Z3 is 103, and that of Z4 is 104, and Z2 has the highest sharpness among the 4 frames. Then, when computing the weighted mean, the terminal may assign a weight of 0.4 to the pixel value of Z2 and weights of 0.2 to the pixel values of Z1, Z3, Z4; the weighted mean is then 102.4, where 102.4 = 102*0.4 + (101+103+104)*0.2. After obtaining the weighted mean 102.4, the terminal can adjust the pixel value of Z2 from 102 to 102.4, thereby reducing the noise of that pixel.
In one implementation, if the pixel values corresponding to a certain aligned position in images A1, A2, A3, A4 differ greatly, the terminal may refrain from adjusting the pixel value at that position in image A2. For example, pixel Y2 in image A2 is aligned with pixel Y1 in image A1, pixel Y3 in image A3, and pixel Y4 in image A4, but the pixel value of Y2 is 100 while the pixel value of Y1 is 20, that of Y3 is 30, and that of Y4 is 35; that is, the pixel value of Y2 is much larger than those of Y1, Y3, Y4. In this case, the pixel value of Y2 may be left unadjusted.
In one implementation, if the 4 frames A1, A2, A3, A4 cannot be aligned, the terminal may leave the pixel values of the base frame A2 unadjusted and directly use the base frame A2 as the target image for the subsequent background blurring processing.
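The per-pixel fusion logic above — plain averaging of aligned pixels, the weighted variant that favors the sharpest frame, and skipping positions whose values diverge too much — can be sketched as follows. Image alignment itself is assumed to be done already, and the divergence threshold `max_spread` is an assumed parameter, not a value given in the text.

```python
def fuse_aligned_pixels(base_value, aligned_values, weights=None, max_spread=50):
    """Return the new value for a base-frame pixel given its aligned pixels.

    aligned_values includes the base frame's own value. If the spread of the
    aligned values exceeds max_spread, the base value is kept unchanged.
    """
    if max(aligned_values) - min(aligned_values) > max_spread:
        return base_value  # values diverge too much: leave the pixel alone
    if weights is None:
        return sum(aligned_values) / len(aligned_values)  # plain mean
    return sum(w * v for w, v in zip(weights, aligned_values))  # weighted mean

# Plain mean: X1..X4 = 101, 102, 103, 104 -> 102.5
print(fuse_aligned_pixels(102, [101, 102, 103, 104]))  # 102.5

# Weighted mean with the sharpest frame's value 102 weighted 0.4:
print(round(fuse_aligned_pixels(102, [101, 102, 103, 104],
                                weights=[0.2, 0.4, 0.2, 0.2]), 1))  # 102.4

# Divergent values (Y2 = 100 vs 20, 30, 35): keep the base value.
print(fuse_aligned_pixels(100, [20, 100, 30, 35]))  # 100
```

Applying this function to every base-frame pixel that has aligned counterparts in the other frames yields the noise-reduced target image described above.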
In 210, the terminal performs background blurring processing on the target image according to the depth-of-field information.
For example, after obtaining the target image, the terminal may perform background blurring processing on the target image according to the acquired depth-of-field information.
It should be noted that in this embodiment, the terminal can perform multi-frame noise reduction on the first image using the first group of images, so the obtained target image has little random noise. Moreover, the terminal can acquire the depth-of-field information from the synchronously captured first image and second image, so the acquired depth-of-field information is more accurate. That is, with less noise in the target image and more accurate depth-of-field information, the background blurring effect on the target image in this embodiment is also better; that is, the background-blurred target image has a good imaging effect.
In addition, since the process of performing multi-frame noise reduction on the first image using the first group of images can be executed in parallel with the process of acquiring the depth-of-field information from the first image and the second image, this embodiment can also increase the image processing speed and effectively avoid the slow processing caused by multi-frame noise reduction.
In one implementation, the process in 209 of the terminal performing noise reduction processing on the first image according to the first group of images to obtain the target image may include the following:
the terminal performs noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image;
the terminal performs tone mapping processing on the noise-reduced image to obtain the target image.
For example, if the first image is image A2, the terminal may use A2 as the base frame of the noise reduction processing and identify and reduce the random noise in the base frame A2 according to the 3 frames A1, A3, A4, thereby obtaining the noise-reduced image.
The terminal may then perform tone mapping processing on the noise-reduced image to obtain the target image.
It can be understood that performing tone mapping processing on the noise-reduced image can increase the contrast of the image, so that the target image has a higher dynamic range and a better imaging effect.
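As a minimal illustration of tone mapping compressing a wide luminance range into the display range, the sketch below applies the classic Reinhard operator L/(1+L) to normalized luminances. This is a generic operator chosen for illustration, not the specific tone mapping used by the terminal described here.

```python
def reinhard_tone_map(luminance: float) -> float:
    """Map scene luminance in [0, inf) to display range [0, 1)."""
    return luminance / (1.0 + luminance)

# Bright regions are compressed far more than dark ones, preserving
# shadow detail while keeping highlights displayable.
for lum in (0.05, 0.5, 4.0):
    print(round(reinhard_tone_map(lum), 3))  # 0.048, 0.333, 0.8
```

A per-pixel pass of such an operator (often applied to luminance only, with color reattached afterwards) is one common way tone mapping raises usable contrast in the final image.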
In another implementation, the process of the terminal determining the first image from the first group of images after acquiring the first group of images may also include the following:
the terminal acquires the sharpness of each frame of image in the first group of images and determines the image with the highest sharpness among the frames as the first image.
For example, the sharpness values of A1, A2, A3, A4 in the first group of images are 80, 83, 81, and 79, respectively. Then the terminal may directly determine image A2 as the first image; that is, the terminal may determine the first image from the first group of images based on the sharpness dimension alone.
In yet another implementation, after the terminal acquires the first group of images, the following may also be included:
if each frame of image in the first group of images contains a human face, the terminal acquires the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image;
the image with the largest value of the preset parameter in the first group of images is determined as the first image.
For example, after acquiring the first group of images, the terminal detects that each frame of image in the first group of images contains a human face; the terminal may then acquire the value of the preset parameter of each frame of image in the first group of images, where the value of the preset parameter is used to represent the eye size of the face in the image; the terminal may then directly determine the image with the largest value of the preset parameter in the first group of images as the first image.
For example, the values of the preset parameter of A1, A2, A3, A4 in the first group of images are 40, 42, 41, and 39, respectively. Then the terminal may directly determine A2 as the first image. It can be understood that image A2 is the frame with the largest eyes in the first group of images; that is, the terminal may determine the first image from the first group of images based on the eye-size dimension alone.
In one implementation, besides determining the first image from the first group of images according to the dimensions of image sharpness and eye size, the terminal may also add the dimension of the degree of the face's smile to determine the first image. For example, the terminal may combine image sharpness and the degree of the smile of the face to determine the first image; or the terminal may combine eye size and the degree of the smile of the face to determine the first image; or the terminal may combine image sharpness, eye size, and the degree of the smile of the face to determine the first image; and so on.
In some implementations, the degree of the smile of a face may be detected by image recognition of parts of the face such as the teeth and the corners of the mouth. For example, the terminal may identify the mouth-corner region in the image and the curvature of the mouth corners; the greater the curvature, the greater the degree of the smile may be considered to be, and so on.
Referring to FIG. 3 to FIG. 5, FIG. 3 to FIG. 5 are schematic diagrams of scenarios and processing flows of the image processing method according to an embodiment of the present application.
For example, as shown in FIG. 3, a dual camera module 10 is installed on the terminal, and the dual camera module 10 includes a first camera module 11 and a second camera module 12. For example, the first camera module 11 may be the primary camera, and the second camera module may be the secondary camera. In one implementation, the two cameras of the dual camera module may be arranged side by side horizontally (as shown in FIG. 3); in another implementation, the two cameras of the dual camera module may also be arranged side by side vertically. When the terminal captures images with the dual camera module 10, the first camera module 11 and the second camera module 12 may capture images synchronously.
For example, the user opens the camera application and prepares to take a photo; the terminal interface then enters the image preview interface, and images for the user to preview are displayed on the display screen of the terminal.
When the terminal captures images with the dual camera module, the first camera module and the second camera module may capture images synchronously.
Then the user taps the shutter button, as shown in FIG. 4. In this embodiment, upon detecting that the user has tapped the shutter button, the terminal may obtain from the cache queues the 4 frames most recently captured by the first camera module 11 before the user tapped the shutter button, as well as the 4 frames most recently captured by the second camera module 12. For example, the 4 frames most recently captured by the first camera module (the first group of images) are A1, A2, A3, A4 in order, and the 4 frames most recently captured synchronously by the second camera module (the second group of images) are B1, B2, B3, B4 in order. It can be understood that A1 and B1 are synchronously captured images, as are A2 and B2, A3 and B3, and A4 and B4.
The terminal may then acquire the sharpness of each frame of image in the first group of images, as well as the value of the preset parameter, where the value of the preset parameter may be used to represent the eye size of the face in the image. For example, the sharpness ranges from 0 to 100, and a larger value indicates a sharper image; the sharpness values of A1, A2, A3, A4 in the first group of images are 80, 83, 81, and 79. The value of the preset parameter ranges from 0 to 50, and a larger value indicates larger eyes in the image; the values of the preset parameter of A1, A2, A3, A4 in the first group of images are 40, 41, 41, and 39.
After that, the terminal may acquire the first weight corresponding to sharpness and the second weight corresponding to the preset parameter. For example, the first weight is 0.4 and the second weight is 0.6.
Then, for each frame of image in the first group of images, the terminal may normalize its sharpness and the value of its preset parameter to obtain the normalized sharpness and the normalized value of the preset parameter of each frame of image. Next, the terminal may weight the normalized sharpness of each frame of image by the first weight to obtain the weighted sharpness of each frame of image, and weight the normalized value of the preset parameter of each frame of image by the second weight to obtain the weighted value of the preset parameter of each frame of image. Finally, the terminal may obtain, for each frame of image, the sum of its weighted sharpness and its weighted value of the preset parameter.
For example, for image A1, the terminal may first normalize its sharpness and the value of its preset parameter. For example, the normalized sharpness is 0.8 (80/100), and the normalized value of the preset parameter is 0.8 (40/50). The terminal may then weight the normalized sharpness 0.8 by the first weight 0.4 to obtain the weighted sharpness 0.32 (0.4*0.8), and weight the normalized value 0.8 of the preset parameter by the second weight 0.6 to obtain the weighted value 0.48 (0.6*0.8). The terminal may then compute the sum of the weighted sharpness 0.32 and the weighted value 0.48 of the preset parameter of image A1, which is 0.8.
Similarly, for image A2, the weighted sharpness is 0.332 and the weighted value of the preset parameter is 0.492, and their sum is 0.824. For image A3, the weighted sharpness is 0.324 and the weighted value of the preset parameter is 0.492, and their sum is 0.816. For image A4, the weighted sharpness is 0.316 and the weighted value of the preset parameter is 0.468, and their sum is 0.784.
After obtaining the sums of the 4 frames A1, A2, A3, A4, the terminal may determine the image with the largest sum as the first image, which serves as the base frame for noise reduction. It can be understood that the first image is the image in the first group of images with larger eyes and higher sharpness. For example, since A2 has the largest sum, A2 is determined as the first image. The terminal may then determine image B2 captured by the second camera module as the second image.
Next, the terminal may use the CPU to acquire the depth-of-field information according to the first image A2 and the second image B2. Meanwhile, the terminal may use the GPU to perform noise reduction on the first image A2 according to A1, A3, A4 in the first group of images, obtain the noise-reduced A2 image, and determine it as the target image. The process of computing the depth-of-field information and the process of denoising the A2 image may be executed in parallel to increase the processing speed.
After that, the terminal may perform background blurring processing on the target image according to the acquired depth-of-field information to obtain the output image, and may then save the output image in the photo album. The entire processing flow may be as shown in FIG. 5.
This embodiment provides an image processing apparatus, applied to a terminal, the terminal including at least a first camera module and a second camera module, the apparatus including:
a first acquisition module configured to acquire a first group of images and a second group of images, the first group of images being images captured by the first camera module, and the second group of images being images captured by the second camera module;
a determination module configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously captured images;
a second acquisition module configured to acquire depth-of-field information according to the first image and the second image;
a noise reduction module configured to perform noise reduction processing on the first image according to the first group of images to obtain a target image;
a processing module configured to perform preset processing on the target image according to the depth-of-field information.
In one implementation, the processing module may be configured to perform background blurring processing on the target image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the determination module may be configured to: acquire the sharpness of each frame of image in the first group of images; and determine the image with the highest sharpness among the frames as the first image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the determination module may be configured to: if each frame of image in the first group of images contains a human face, acquire the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determine the image with the largest value of the preset parameter among the frames as the first image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the determination module may be configured to: acquire the sharpness of each frame of image in the first group of images; if each frame of image in the first group of images contains a human face, acquire the value of a preset parameter of each frame of image in the first group of images, the value of the preset parameter being used to represent the eye size of the face in the image; and determine the first image from the first group of images according to the sharpness of each frame of image and the value of the preset parameter.
In one implementation, the determination module may be configured to: acquire a first weight corresponding to sharpness and a second weight corresponding to the preset parameter; weight the sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image, and weight the value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image; acquire, for each frame of image, the sum of its weighted sharpness and its weighted value of the preset parameter; and determine the image with the largest sum in the first group of images as the first image.
In one implementation, the determination module may be configured to: normalize the sharpness and the value of the preset parameter of each frame of image to obtain the normalized sharpness and the normalized value of the preset parameter of each frame of image; weight the normalized sharpness of each frame of image according to the first weight to obtain the weighted sharpness of each frame of image; and weight the normalized value of the preset parameter of each frame of image according to the second weight to obtain the weighted value of the preset parameter of each frame of image.
In one implementation, the process of acquiring the depth-of-field information according to the first image and the second image and the process of performing noise reduction processing on the first image according to the first group of images to obtain the target image are executed in parallel.
In one implementation, the noise reduction module may be configured to: perform noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image; and perform tone mapping processing on the noise-reduced image to obtain the target image.
In one implementation, the first group of images includes at least two frames of images.
In that case, the noise reduction module may be configured to: align all the images in the first group of images;
determine, in the aligned images, multiple sets of mutually aligned pixels, and the target pixel belonging to the first image in each set of mutually aligned pixels; acquire the pixel value of each pixel in each set of mutually aligned pixels; obtain, according to the pixel values, the mean pixel value of each set of mutually aligned pixels; and adjust the pixel value of the target pixel in the first image to the mean pixel value to obtain the target image.
请参阅图6,图6为本申请实施例提供的图像的处理装置的结构示意图。图像的处理装置300可以包括:第一获取模块301,确定模块302,第二获取模块303,降噪模块304,以及处理模块305。
第一获取模块301,用于获取第一组图像和第二组图像,所述第一组图像是由所述第一摄像模组采集的图像,所述第二组图像是由所述第二摄像模组采集的图像。
比如,第一获取模块301可以先获取终端上的双摄像模组中的第一摄像模组采集的第一组图像,以及该双摄像模组中的第二摄像模组采集的第二组图像。该第一组图像中包含多帧图像,该第二组图像中也包含多帧图像。
确定模块302,用于从所述第一组图像中确定出第一图像,并从所述第二组图像中确定出第二图像,所述第一图像和所述第二图像是同步采集的图像。
比如,在第一获取模块301获取到第一摄像模组采集的第一组图像A1、A2、A3、A4,以及第二摄像模组采集的第二组图像B1、B2、B3、B4后,确定模块302可以从A1、A2、A3、A4中确定出第一图像,然后从B1、B2、B3、B4中确定出第二图像。其中,该第二图像可以是与第一图像同步采集的图像。
例如,确定模块302将A1、A2、A3、A4中的A2确定为第一图像,那么确定模块302可以相应地将B2确定为第二图像。
第二获取模块303,用于根据所述第一图像和所述第二图像获取景深信息。
比如,在确定模块302确定出第一图像和第二图像后,第二获取模块303可以根据该第一图像和第二图像获取景深信息。可以理解的是,由于该第一图像和第二图像是终端上的双摄像模组从不同拍摄位置(角度)同步采集到的图像,因此可以根据第一图像和第二图像获取景深信息。
需要说明的是,该景深信息是相对于图像中的对焦物体而言的,是在确定出对焦物体后获取到的景深信息。
降噪模块304,用于根据所述第一组图像对所述第一图像进行降噪处理,得到目标图像。
比如,降噪模块304可以根据第一组图像对第一图像进行降噪处理,从而得到目标图像。具体地,降噪模块304可以根据第一组图像中除第一图像外的其它图像,对该第一图像进行降噪处理。例如,第一图像为A2图像,那么降噪模块304可以将第一图像A2作为降噪处理的基础帧,并根据第一组图像中的A1、A3、A4这3帧图像对A2进行降噪处理。即,降噪模块304可以根据A1、A3、A4这3帧图像识别并降低基础帧A2图像中的随机噪点,从而得到经过降噪的目标图像。
处理模块305,用于根据所述景深信息,对所述目标图像进行预设处理。
比如,在得到目标图像后,处理模块305可以根据获取到的景深信息,对该目标图像进行预设处理。在一些实施方式中,该预设处理可以是诸如背景虚化以及图像的3D应用处理等。
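以背景虚化为例,下面给出一个依据景深信息区分前景与背景,并对背景像素做简单均值模糊的假设性示意。其中 focus_depth、tolerance 等参数与均值模糊的方式均为说明而设,并非文中限定的实现:

```python
def blur_background(image, depth_map, focus_depth, tolerance):
    """景深偏离对焦平面超过 tolerance 的像素视为背景,
    并用 3x3 邻域均值简单虚化;前景像素保持不变。"""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            if abs(depth_map[r][c] - focus_depth) > tolerance:
                vals = [image[i][j]
                        for i in range(max(0, r - 1), min(rows, r + 2))
                        for j in range(max(0, c - 1), min(cols, c + 2))]
                out[r][c] = sum(vals) / len(vals)
    return out

image = [[10, 200], [20, 210]]      # 示例灰度值
depth = [[1.0, 5.0], [1.1, 5.2]]    # 对应像素的景深
out = blur_background(image, depth, focus_depth=1.0, tolerance=0.5)
# 前景像素 (0,0)、(1,0) 保持不变,背景像素被邻域均值替换
```

实际产品中的虚化通常随景深偏离程度渐进加强,并采用更平滑的模糊核。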
在一种实施方式中,所述处理模块305可以用于:对所述目标图像进行背景虚化处理。
在一种实施方式中,所述确定模块302可以用于:获取所述第一组图像中各帧图像的清晰度;将各帧图像中清晰度最大的图像确定为所述第一图像。
在一种实施方式中,所述确定模块302可以用于:若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;将各帧图像中预设参数的数值最大的图像确定为所述第一图像。
在一种实施方式中,所述确定模块302可以用于:获取所述第一组图像中各帧图像的清晰度;若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;根据所述各帧图像的清晰度和预设参数的数值,从所述第一组图像中确定出所述第一图像。
在一种实施方式中,所述确定模块302可以用于:获取与清晰度对应的第一权重,以及与预设参数对应的第二权重;按照所述第一权重分别对所述各帧图像的清晰度进行加权,得到所述各帧图像加权后的清晰度,并按照所述第二权重分别对所述各帧图像的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值;分别获取所述各帧图像的加权后的清晰度和加权后的预设参数的数值的和值;将所述第一组图像中,和值最大的图像确定为所述第一图像。
在一种实施方式中,所述确定模块302可以用于:分别对所述各帧图像的清晰度和预设参数的数值进行归一化,得到所述各帧图像归一化后的清晰度与归一化后的预设参数的数值;按照所述第一权重,分别对所述各帧图像的归一化后的清晰度进行加权,得到所述各帧图像加权后的清晰度;按照所述第二权重,分别对所述各帧图像的归一化后的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值。
在一种实施方式中,所述降噪模块304可以用于:根据所述第一组图像对所述第一图像进行降噪处理,得到降噪后的图像;对所述降噪后的图像进行色调映射处理,得到所述目标图像。
在一种实施方式中,所述第一组图像至少包括两帧图像,所述降噪模块304可以用于:将所述第一组图像中的所有图像对齐;在对齐的图像中,确定出多组相互对齐的像素,及各组相互对齐的像素中属于第一图像的目标像素;获取每一组相互对齐的像素中各像素的像素值;根据所述各像素的像素值,获取每一组相互对齐的像素的像素值均值;将所述第一图像中的目标像素的像素值调整为所述像素值均值,得到所述目标图像。
关于该实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
本申请实施例提供一种计算机可读的存储介质,其上存储有计算机程序,当所述计算机程序在计算机上执行时,使得所述计算机执行如本实施例提供的图像的处理方法中的流程。
本申请实施例还提供一种电子设备,包括存储器,处理器,所述处理器通过调用所述存储器中存储的计算机程序,用于执行本实施例提供的图像的处理方法中的流程。
例如,上述电子设备可以是诸如平板电脑或者智能手机等移动终端。请参阅图7,图7为本申请实施例提供的移动终端的结构示意图。
该移动终端400可以包括摄像模组401、存储器402、处理器403等部件。本领域技术人员可以理解,图7中示出的移动终端结构并不构成对移动终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
摄像模组401可以是移动终端上安装的双摄像模组等等。其中该摄像模组401至少包括第一摄像模组和第二摄像模组,当移动终端使用双摄像模组采集图像时,该第一摄像模组和该第二摄像模组可以同步采集图像。
存储器402可用于存储应用程序和数据。存储器402存储的应用程序中包含有可执行代码。应用程序可以组成各种功能模块。处理器403通过运行存储在存储器402的应用程序,从而执行各种功能应用以及数据处理。
处理器403是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或执行存储在存储器402内的应用程序,以及调用存储在存储器402内的数据,执行移动终端的各种功能和处理数据,从而对移动终端进行整体监控。
在本实施例中,移动终端中的处理器403会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行代码加载到存储器402中,并由处理器403来运行存储在存储器402中的应用程序,从而实现如下流程:
获取第一组图像和第二组图像,所述第一组图像是由所述第一摄像模组采集的图像,所述第二组图像是由所述第二摄像模组采集的图像;从所述第一组图像中确定出第一图像,并从所述第二组图像中确定出第二图像,所述第一图像和所述第二图像是同步采集的图像;根据所述第一图像和所述第二图像获取景深信息;根据所述第一组图像对所述第一图像进行降噪处理,得到目标图像;根据所述景深信息,对所述目标图像进行预设处理。
本申请实施例还提供一种电子设备。上述电子设备中包括图像处理电路,图像处理电路可以利用硬件和/或软件组件实现,可包括定义图像信号处理(Image Signal Processing)管线的各种处理单元。图像处理电路至少可以包括:摄像头、图像信号处理器(Image Signal Processor,ISP处理器)、控制逻辑器、图像存储器以及显示器等。其中摄像头可以包括一个或多个透镜和图像传感器。
图像传感器可包括色彩滤镜阵列(如Bayer滤镜)。图像传感器可获取其每个成像像素捕捉的光强度和波长信息,并提供可由图像信号处理器处理的一组原始图像数据。
图像信号处理器可以按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,图像信号处理器可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。原始图像数据经过图像信号处理器处理后可存储至图像存储器中。图像信号处理器还可从图像存储器处接收图像数据。
图像存储器可为存储器装置的一部分、存储设备、或电子设备内的独立的专用存储器,并可包括DMA(Direct Memory Access,直接存储器存取)特征。
当接收到来自图像存储器的图像数据时,图像信号处理器可进行一个或多个图像处理操作,如时域滤波。处理后的图像数据可发送给图像存储器,以便在被显示之前进行另外的处理。图像信号处理器还可从图像存储器接收处理数据,并对所述处理数据进行原始域中以及RGB和YCbCr颜色空间中的图像数据处理。处理后的图像数据可输出给显示器,以供用户观看和/或由图形引擎或GPU(Graphics Processing Unit,图形处理器)进一步处理。此外,图像信号处理器的输出还可发送给图像存储器,且显示器可从图像存储器读取图像数据。在一种实施方式中,图像存储器可被配置为实现一个或多个帧缓冲器。
图像信号处理器确定的统计数据可发送给控制逻辑器。例如,统计数据可包括自动曝光、自动白平衡、自动聚焦、闪烁检测、黑电平补偿、透镜阴影校正等图像传感器的统计信息。
控制逻辑器可包括执行一个或多个例程(如固件)的处理器和/或微控制器。一个或多个例程可根据接收的统计数据,确定摄像头的控制参数以及ISP控制参数。例如,摄像头的控制参数可包括照相机闪光控制参数、透镜的控制参数(例如聚焦或变焦用焦距)、或这些参数的组合。ISP控制参数可包括用于自动白平衡和颜色调整(例如,在RGB处理期间)的增益水平和色彩校正矩阵等。
请参阅图8,图8为本实施例中图像处理电路的结构示意图。如图8所示,为便于说明,仅示出与本申请实施例相关的图像处理技术的各个方面。
图像处理电路可以包括:第一摄像头510、第二摄像头520、第一图像信号处理器530、第二图像信号处理器540、控制逻辑器550、图像存储器560、显示器570。其中,第一摄像头510可以包括一个或多个第一透镜511和第一图像传感器512。第二摄像头520可以包括一个或多个第二透镜521和第二图像传感器522。
第一摄像头510采集的第一图像传输给第一图像信号处理器530进行处理。第一图像信号处理器530处理第一图像后,可将第一图像的统计数据(如图像的亮度、图像的反差值、图像的颜色等)发送给控制逻辑器550。控制逻辑器550可根据统计数据确定第一摄像头510的控制参数,从而第一摄像头510可根据控制参数进行自动对焦、自动曝光等操作。第一图像经过第一图像信号处理器530进行处理后可存储至图像存储器560中。第一图像信号处理器530也可以读取图像存储器560中存储的图像以进行处理。另外,第一图像经过第一图像信号处理器530进行处理后可直接发送至显示器570进行显示。显示器570也可以读取图像存储器560中的图像以进行显示。
第二摄像头520采集的第二图像传输给第二图像信号处理器540进行处理。第二图像信号处理器540处理第二图像后,可将第二图像的统计数据(如图像的亮度、图像的反差值、图像的颜色等)发送给控制逻辑器550。控制逻辑器550可根据统计数据确定第二摄像头520的控制参数,从而第二摄像头520可根据控制参数进行自动对焦、自动曝光等操作。第二图像经过第二图像信号处理器540进行处理后可存储至图像存储器560中。第二图像信号处理器540也可以读取图像存储器560中存储的图像以进行处理。另外,第二图像经过第二图像信号处理器540进行处理后可直接发送至显示器570进行显示。显示器570也可以读取图像存储器560中的图像以进行显示。
在另一些实施方式中,第一图像信号处理器和第二图像信号处理器也可合并为统一的图像信号处理器,分别处理第一图像传感器和第二图像传感器的数据。
此外,虽然图中未示出,电子设备还可以包括CPU和供电模块。CPU与控制逻辑器、第一图像信号处理器、第二图像信号处理器、图像存储器和显示器均连接,CPU用于实现全局控制。供电模块用于为各个模块供电。
一般的,具有双摄像模组的手机,在某些拍照模式下,双摄像模组均工作。此时,CPU控制供电模块为第一摄像头和第二摄像头供电。第一摄像头中的图像传感器上电,第二摄像头中的图像传感器上电,从而可以实现图像的采集与转换。在某些拍照模式下,可以是双摄像模组中的一个摄像头工作,例如仅长焦摄像头工作。这种情况下,CPU控制供电模块给相应摄像头的图像传感器供电即可。在本申请的实施例中,由于要进行景深计算和虚化处理,因此需要两个摄像模组同时工作。
此外,双摄像模组在终端上的安装距离可根据终端的尺寸和拍摄效果确定。在一些实施例中,为了使第一摄像模组和第二摄像模组拍摄的物体重叠度高,可以将两个摄像模组安装得尽量近,例如间距在10mm以内。
以下为运用图8中图像处理技术实现本实施例提供的图像的处理方法的流程:
获取第一组图像和第二组图像,所述第一组图像是由所述第一摄像模组采集的图像,所述第二组图像是由所述第二摄像模组采集的图像;从所述第一组图像中确定出第一图像,并从所述第二组图像中确定出第二图像,所述第一图像和所述第二图像是同步采集的图像;根据所述第一图像和所述第二图像获取景深信息;根据所述第一组图像对所述第一图像进行降噪处理,得到目标图像;根据所述景深信息,对所述目标图像进行预设处理。
在一种实施方式中,电子设备执行所述对所述目标图像进行预设处理时,可以执行:对所述目标图像进行背景虚化处理。
在一种实施方式中,所述第一组图像至少包括两帧图像,电子设备执行所述从所述第一组图像中确定出第一图像时,可以执行:获取所述第一组图像中各帧图像的清晰度;将各帧图像中清晰度最大的图像确定为所述第一图像。
在一种实施方式中,所述第一组图像至少包括两帧图像,电子设备执行所述从所述第一组图像中确定出第一图像时,可以执行:若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;将各帧图像中预设参数的数值最大的图像确定为所述第一图像。
在一种实施方式中,所述第一组图像至少包括两帧图像,电子设备执行所述从所述第一组图像中确定出第一图像时,可以执行:获取所述第一组图像中各帧图像的清晰度;若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;根据所述各帧图像的清晰度和预设参数的数值,从所述第一组图像中确定出所述第一图像。
在一种实施方式中,电子设备执行所述根据所述各帧图像的清晰度和预设参数的数值从所述第一组图像中确定出第一图像时,可以执行:获取与清晰度对应的第一权重,以及与预设参数对应的第二权重;按照所述第一权重分别对所述各帧图像的清晰度进行加权,得到所述各帧图像加权后的清晰度,并按照所述第二权重分别对所述各帧图像的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值;分别获取所述各帧图像的加权后的清晰度和加权后的预设参数的数值的和值;将所述第一组图像中,和值最大的图像确定为所述第一图像。
在一种实施方式中,电子设备执行所述按照所述第一权重分别对所述各帧图像的清晰度进行加权得到所述各帧图像加权后的清晰度,并按照所述第二权重分别对所述各帧图像的预设参数的数值进行加权得到所述各帧图像加权后的预设参数的数值时,可以执行:分别对所述各帧图像的清晰度和预设参数的数值进行归一化,得到所述各帧图像归一化后的清晰度与归一化后的预设参数的数值;按照所述第一权重,分别对所述各帧图像的归一化后的清晰度进行加权,得到所述各帧图像加权后的清晰度;按照所述第二权重,分别对所述各帧图像的归一化后的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值。
在一种实施方式中,所述根据所述第一图像和所述第二图像获取景深信息的流程和所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像的流程是由电子设备并行执行的。
在一种实施方式中,电子设备执行所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像时,可以执行:根据所述第一组图像对所述第一图像进行降噪处理,得到降噪后的图像;对所述降噪后的图像进行色调映射处理,得到所述目标图像。
在一种实施方式中,所述第一组图像至少包括两帧图像,电子设备执行所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像时,可以执行:将所述第一组图像中的所有图像对齐;在对齐的图像中,确定出多组相互对齐的像素,及各组相互对齐的像素中属于第一图像的目标像素;获取每一组相互对齐的像素中各像素的像素值;根据所述各像素的像素值,获取每一组相互对齐的像素的像素值均值;将所述第一图像中的目标像素的像素值调整为所述像素值均值,得到所述目标图像。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对图像的处理方法的详细描述,此处不再赘述。
本申请实施例提供的所述图像的处理装置与上文实施例中的图像的处理方法属于同一构思,在所述图像的处理装置上可以运行所述图像的处理方法实施例中提供的任一方法,其具体实现过程详见所述图像的处理方法实施例,此处不再赘述。
需要说明的是,对本申请实施例所述图像的处理方法而言,本领域普通技术人员可以理解实现本申请实施例所述图像的处理方法的全部或部分流程,是可以通过计算机程序来控制相关的硬件来完成,所述计算机程序可存储于一计算机可读取存储介质中,如存储在存储器中,并被至少一个处理器执行,在执行过程中可包括如所述图像的处理方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)等。
对本申请实施例的所述图像的处理装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中,所述存储介质譬如为只读存储器,磁盘或光盘等。
以上对本申请实施例所提供的一种图像的处理方法、装置、存储介质以及电子设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种图像的处理方法,应用于终端,其中,所述终端至少包括第一摄像模组和第二摄像模组,所述方法包括:
    获取第一组图像和第二组图像,所述第一组图像是由所述第一摄像模组采集的图像,所述第二组图像是由所述第二摄像模组采集的图像;
    从所述第一组图像中确定出第一图像,并从所述第二组图像中确定出第二图像,所述第一图像和所述第二图像是同步采集的图像;
    根据所述第一图像和所述第二图像获取景深信息;
    根据所述第一组图像对所述第一图像进行降噪处理,得到目标图像;
    根据所述景深信息,对所述目标图像进行预设处理。
  2. 根据权利要求1所述的图像的处理方法,其中,所述对所述目标图像进行预设处理的流程,包括:
    对所述目标图像进行背景虚化处理。
  3. 根据权利要求1所述的图像的处理方法,其中,所述第一组图像至少包括两帧图像;
    所述从所述第一组图像中确定出第一图像的流程,包括:
    获取所述第一组图像中各帧图像的清晰度;
    将各帧图像中清晰度最大的图像确定为所述第一图像。
  4. 根据权利要求1所述的图像的处理方法,其中,所述第一组图像至少包括两帧图像;
    所述从所述第一组图像中确定出第一图像的流程,包括:
    若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;
    将各帧图像中预设参数的数值最大的图像确定为所述第一图像。
  5. 根据权利要求1所述的图像的处理方法,其中,所述第一组图像至少包括两帧图像;
    所述从所述第一组图像中确定出第一图像的流程,包括:
    获取所述第一组图像中各帧图像的清晰度;
    若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;
    根据所述各帧图像的清晰度和预设参数的数值,从所述第一组图像中确定出所述第一图像。
  6. 根据权利要求5所述的图像的处理方法,其中,所述根据所述各帧图像的清晰度和预设参数的数值从所述第一组图像中确定出第一图像的流程,包括:
    获取与清晰度对应的第一权重,以及与预设参数对应的第二权重;
    按照所述第一权重分别对所述各帧图像的清晰度进行加权,得到所述各帧图像加权后的清晰度,并按照所述第二权重分别对所述各帧图像的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值;
    分别获取所述各帧图像的加权后的清晰度和加权后的预设参数的数值的和值;
    将所述第一组图像中,和值最大的图像确定为所述第一图像。
  7. 根据权利要求6所述的图像的处理方法,其中,所述按照所述第一权重分别对所述各帧图像的清晰度进行加权得到所述各帧图像加权后的清晰度,并按照所述第二权重分别对所述各帧图像的预设参数的数值进行加权得到所述各帧图像加权后的预设参数的数值的流程,包括:
    分别对所述各帧图像的清晰度和预设参数的数值进行归一化,得到所述各帧图像归一化后的清晰度与归一化后的预设参数的数值;
    按照所述第一权重,分别对所述各帧图像的归一化后的清晰度进行加权,得到所述各帧图像加权后的清晰度;
    按照所述第二权重,分别对所述各帧图像的归一化后的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值。
  8. 根据权利要求1所述的图像的处理方法,其中,所述方法包括:
    所述根据所述第一图像和所述第二图像获取景深信息的流程和所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像的流程是并行执行的。
  9. 根据权利要求1所述的图像的处理方法,其中,所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像的流程,包括:
    根据所述第一组图像对所述第一图像进行降噪处理,得到降噪后的图像;
    对所述降噪后的图像进行色调映射处理,得到所述目标图像。
  10. 根据权利要求1所述的图像的处理方法,其中,所述第一组图像至少包括两帧图像;
    所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像的流程,包括:
    将所述第一组图像中的所有图像对齐;
    在对齐的图像中,确定出多组相互对齐的像素,及各组相互对齐的像素中属于第一图像的目标像素;
    获取每一组相互对齐的像素中各像素的像素值;
    根据所述各像素的像素值,获取每一组相互对齐的像素的像素值均值;
    将所述第一图像中的目标像素的像素值调整为所述像素值均值,得到所述目标图像。
  11. 一种图像的处理装置,应用于终端,其中,所述终端至少包括第一摄像模组和第二摄像模组,所述装置包括:
    第一获取模块,用于获取第一组图像和第二组图像,所述第一组图像是由所述第一摄像模组采集的图像,所述第二组图像是由所述第二摄像模组采集的图像;
    确定模块,用于从所述第一组图像中确定出第一图像,并从所述第二组图像中确定出第二图像,所述第一图像和所述第二图像是同步采集的图像;
    第二获取模块,用于根据所述第一图像和所述第二图像获取景深信息;
    降噪模块,用于根据所述第一组图像对所述第一图像进行降噪处理,得到目标图像;
    处理模块,用于根据所述景深信息,对所述目标图像进行预设处理。
  12. 一种存储介质,其上存储有计算机程序,其中,当所述计算机程序在计算机上执行时,使得所述计算机执行如权利要求1至10中任一项所述的方法。
  13. 一种电子设备,包括存储器,处理器,以及第一摄像模组和第二摄像模组,其中,所述处理器通过调用所述存储器中存储的计算机程序,用于执行:
    获取第一组图像和第二组图像,所述第一组图像是由所述第一摄像模组采集的图像,所述第二组图像是由所述第二摄像模组采集的图像;
    从所述第一组图像中确定出第一图像,并从所述第二组图像中确定出第二图像,所述第一图像和所述第二图像是同步采集的图像;
    根据所述第一图像和所述第二图像获取景深信息;
    根据所述第一组图像对所述第一图像进行降噪处理,得到目标图像;
    根据所述景深信息,对所述目标图像进行预设处理。
  14. 根据权利要求13所述的电子设备,其中,所述处理器用于执行:
    对所述目标图像进行背景虚化处理。
  15. 根据权利要求13所述的电子设备,其中,所述第一组图像至少包括两帧图像,所述处理器用于执行:
    获取所述第一组图像中各帧图像的清晰度;
    将各帧图像中清晰度最大的图像确定为所述第一图像。
  16. 根据权利要求13所述的电子设备,其中,所述第一组图像至少包括两帧图像,所述处理器用于执行:
    若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;
    将各帧图像中预设参数的数值最大的图像确定为所述第一图像。
  17. 根据权利要求13所述的电子设备,其中,所述第一组图像至少包括两帧图像,所述处理器用于执行:
    获取所述第一组图像中各帧图像的清晰度;
    若所述第一组图像的各帧图像包含人脸,则获取所述第一组图像中各帧图像的预设参数的数值,所述预设参数的数值用于表示图像中人脸的眼睛大小;
    根据所述各帧图像的清晰度和预设参数的数值,从所述第一组图像中确定出所述第一图像。
  18. 根据权利要求17所述的电子设备,其中,所述处理器用于执行:
    获取与清晰度对应的第一权重,以及与预设参数对应的第二权重;
    按照所述第一权重分别对所述各帧图像的清晰度进行加权,得到所述各帧图像加权后的清晰度,并按照所述第二权重分别对所述各帧图像的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值;
    分别获取所述各帧图像的加权后的清晰度和加权后的预设参数的数值的和值;
    将所述第一组图像中,和值最大的图像确定为所述第一图像。
  19. 根据权利要求18所述的电子设备,其中,所述处理器用于执行:
    分别对所述各帧图像的清晰度和预设参数的数值进行归一化,得到所述各帧图像归一化后的清晰度与归一化后的预设参数的数值;
    按照所述第一权重,分别对所述各帧图像的归一化后的清晰度进行加权,得到所述各帧图像加权后的清晰度;
    按照所述第二权重,分别对所述各帧图像的归一化后的预设参数的数值进行加权,得到所述各帧图像加权后的预设参数的数值。
  20. 根据权利要求13所述的电子设备,其中,所述根据所述第一图像和所述第二图像获取景深信息的流程和所述根据所述第一组图像对所述第一图像进行降噪处理得到目标图像的流程是并行执行的。
PCT/CN2018/122872 2018-01-31 2018-12-21 图像的处理方法、装置、存储介质及电子设备 WO2019148997A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810097896.8 2018-01-31
CN201810097896.8A CN108282616B (zh) 2018-01-31 2018-01-31 图像的处理方法、装置、存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2019148997A1 true WO2019148997A1 (zh) 2019-08-08

Family

ID=62807210

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122872 WO2019148997A1 (zh) 2018-01-31 2018-12-21 图像的处理方法、装置、存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN108282616B (zh)
WO (1) WO2019148997A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282616B (zh) * 2018-01-31 2019-10-25 Oppo广东移动通信有限公司 图像的处理方法、装置、存储介质及电子设备
CN109862262A (zh) * 2019-01-02 2019-06-07 上海闻泰电子科技有限公司 图像虚化方法、装置、终端及存储介质
CN116701675A (zh) * 2022-02-25 2023-09-05 荣耀终端有限公司 图像数据的处理方法和电子设备

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070189750A1 (en) * 2006-02-16 2007-08-16 Sony Corporation Method of and apparatus for simultaneously capturing and generating multiple blurred images
CN104780313A (zh) * 2015-03-26 2015-07-15 广东欧珀移动通信有限公司 一种图像处理的方法及移动终端
CN105827964A (zh) * 2016-03-24 2016-08-03 维沃移动通信有限公司 一种图像处理方法及移动终端
CN106878605A (zh) * 2015-12-10 2017-06-20 北京奇虎科技有限公司 一种基于电子设备的图像生成的方法和电子设备
CN107613199A (zh) * 2016-06-02 2018-01-19 广东欧珀移动通信有限公司 虚化照片生成方法、装置和移动终端
CN107635093A (zh) * 2017-09-18 2018-01-26 维沃移动通信有限公司 一种图像处理方法、移动终端及计算机可读存储介质
CN108024054A (zh) * 2017-11-01 2018-05-11 广东欧珀移动通信有限公司 图像处理方法、装置及设备
CN108055452A (zh) * 2017-11-01 2018-05-18 广东欧珀移动通信有限公司 图像处理方法、装置及设备
CN108282616A (zh) * 2018-01-31 2018-07-13 广东欧珀移动通信有限公司 图像的处理方法、装置、存储介质及电子设备

Also Published As

Publication number Publication date
CN108282616B (zh) 2019-10-25
CN108282616A (zh) 2018-07-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18904261

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18904261

Country of ref document: EP

Kind code of ref document: A1