WO2018228467A1 - Image exposure method, apparatus, imaging device, and storage medium - Google Patents

Image exposure method, apparatus, imaging device, and storage medium

Info

Publication number
WO2018228467A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
exposure
portrait
shooting scene
current shooting
Prior art date
Application number
PCT/CN2018/091228
Other languages
English (en)
French (fr)
Inventor
曾元清
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2018228467A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals

Definitions

  • The present application relates to the field of image processing technologies, and in particular to an image exposure method and apparatus, an imaging device, and a storage medium.
  • Controlling the camera to apply appropriate exposure compensation, so that the shot is properly exposed, is essential for obtaining a high-quality image during capture or framing display. Most current imaging devices (such as mobile terminals) support manual exposure adjustment: the user taps a settings button to make the device display an exposure bar on the screen, then slides a cursor along the bar to set the exposure compensation used when the image is shot, thereby adjusting the exposure effect.
  • However, when such a device performs exposure processing, it applies the same exposure compensation to the entire image; if the user wants to adjust the exposure of one region of the image, every other region is adjusted to the same exposure as well. Existing manual-adjustment schemes therefore cannot account for the exposure of specific regions of the image individually, so appropriate exposure compensation cannot be obtained, a high-quality exposed image cannot be produced, and the user experience suffers.
  • The purpose of the present application is to solve at least one of the above technical problems to some extent.
  • To that end, the first object of the present application is to propose an image exposure method. The method achieves portrait auto-exposure based on multi-frame fusion: it considers the exposure of each specific region of the captured image individually, so that the whole shot receives appropriate exposure compensation, yielding a high-quality exposed image and improving the user experience.
  • A second object of the present application is to provide an image exposure apparatus.
  • A third object of the present application is to provide an imaging device.
  • A fourth object of the present application is to provide a storage medium.
  • The image exposure method of the first aspect of the present application includes: when a portrait is detected in the current shooting scene, extracting a portrait contour and a background portion of the scene based on the depth-of-field information of the scene; acquiring the face region of the portrait and locating the body region of the portrait according to the face region and the portrait contour; detecting the brightness of the face region, the body region, and the background portion respectively, to obtain corresponding first, second, and third photometric values; performing exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images; and fusing the first, second, and third exposure images to obtain a fused target image.
  • With this method, when a portrait is detected in the current shooting scene, the portrait contour and the background portion are extracted based on the depth-of-field information of the scene; the brightness of the face region, the body region, and the background portion is detected to obtain the corresponding first, second, and third photometric values; exposure control and shooting are performed according to each photometric value to obtain three images; and the three differently exposed images are fused, producing a photograph in which the face, the portrait contour, and the background are all properly exposed. This achieves portrait auto-exposure based on multi-frame fusion, takes the exposure of each specific region of the captured image into account so that the whole shot receives appropriate compensation, yields a high-quality exposed image, and improves the user experience.
  • The image exposure apparatus of the second aspect of the present application includes: a first acquisition module configured to acquire depth-of-field information of the current shooting scene; an extraction module configured to extract, when a portrait is detected in the current shooting scene, a portrait contour and a background portion of the scene based on the depth-of-field information; a second acquisition module configured to acquire the face region of the portrait; a positioning module configured to locate the body region of the portrait according to the face region and the portrait contour; a detection module configured to detect the brightness of the face region, the body region, and the background portion respectively, to obtain corresponding first, second, and third photometric values; a control module configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images; and a fusion module configured to fuse the first, second, and third exposure images to obtain a fused target image.
  • With this apparatus, when a portrait is detected in the current shooting scene, the extraction module extracts the portrait contour and the background portion based on the depth-of-field information of the scene; the detection module detects the brightness of the face region, the body region, and the background portion to obtain the first, second, and third photometric values; the control module performs exposure control and shooting according to each photometric value to obtain three images; and the fusion module fuses the three differently exposed images into a photograph in which the face, the portrait contour, and the background are all properly exposed.
  • The imaging device of the third aspect of the present application includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the image exposure method of the first aspect is implemented.
  • The non-transitory computer-readable storage medium of the fourth aspect of the present application has a computer program stored thereon; when the program is executed by a processor, the image exposure method of the first aspect is implemented.
  • FIG. 1 is a flow chart of an image exposure method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an image exposure apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an image exposure apparatus according to a specific embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an imaging device according to an embodiment of the present application.
  • The imaging device may be any device with a shooting function, for example a mobile terminal (a mobile phone, tablet computer, or other hardware device running any of various operating systems) or a digital camera.
  • As shown in FIG. 1, the image exposure method may include:
  • Block 110: when a portrait is detected in the current shooting scene, the portrait contour and the background portion of the scene are extracted based on the depth-of-field information of the scene.
  • Before the extraction, the depth-of-field information of the current shooting scene may be acquired. Depth of field refers to the range of subject distances, measured in front of the camera lens or other imager, over which an acceptably sharp image can be obtained. After focusing, a sharp image is formed throughout a range before and after the focal point; that range of distances is called the depth of field. In other words, for any subject within that region of space, the blur of its image on the film plane stays within the limits of the permissible circle of confusion, and the length of that region is the depth of field.
  • In the embodiments of the present application, the order of acquiring the depth-of-field information and detecting whether the current shooting scene contains a portrait is not specifically limited. As one example, the depth-of-field information of the scene may be acquired first, after which face recognition is used to detect whether the scene contains a portrait. As another example, face recognition may be used first to detect whether the scene contains a portrait, and the depth-of-field information is acquired when a portrait is detected.
  • In an embodiment, the depth-of-field information of the current shooting scene may be acquired with a dual camera or a depth RGBD (RGB+depth, a color-depth image carrying both color information and distance/depth information) camera.
  • Taking a dual camera as an example, the acquisition may proceed as follows: an algorithm computes the first angle θ1 between the photographed object and the left camera, and the second angle θ2 between the object and the right camera; from the center distance between the left and right cameras (a fixed value), θ1, and θ2, the distance between the object and the lens can then be calculated by the triangle principle. That distance is the depth-of-field information of the current shooting scene.
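The triangle principle above can be sketched in a few lines; the function name and degree-based interface are illustrative assumptions, not part of the patent:

```python
import math

def depth_from_dual_camera(center_dist_mm, theta1_deg, theta2_deg):
    """Distance from the camera baseline to the photographed object.

    The object, the left camera, and the right camera form a triangle whose
    base is the fixed center distance; theta1/theta2 are the angles between
    the baseline and each camera's line of sight to the object.
    """
    t1 = math.tan(math.radians(theta1_deg))  # angle at the left camera
    t2 = math.tan(math.radians(theta2_deg))  # angle at the right camera
    # Perpendicular distance from the baseline to the object
    return center_dist_mm * t1 * t2 / (t1 + t2)
```

With a 100 mm baseline and both angles at 45 degrees, the object sits 50 mm in front of the baseline, midway between the two cameras.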
  • Taking a depth RGBD camera as an example, the acquisition may proceed as follows: a depth detector in the camera (for example, an infrared sensor) measures the distance between the photographed object and the camera; that distance is the depth-of-field information of the current shooting scene.
  • When a portrait is detected in the current shooting scene and its depth-of-field information has been obtained, face detection is used to locate the face, and the distance between the face and the lens is calculated from the depth-of-field information; the entire portrait contour is then found according to that distance, and the background portion is extracted from the scene using the distance-difference separation technique and the portrait contour.
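The distance-difference separation can be illustrated with a simple depth threshold around the face distance; the tolerance value and function shape are assumptions for illustration only:

```python
def separate_by_distance(depth_map, face_dist, tol=0.4):
    """Split a per-pixel depth map into portrait and background masks.

    Pixels whose depth is within tol of the face distance are treated as
    part of the portrait; everything farther (or nearer) is background.
    """
    portrait = [abs(d - face_dist) <= tol for d in depth_map]
    background = [not p for p in portrait]
    return portrait, background
```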
  • More specifically, the distance between the face and the lens can be calculated from the depth-of-field information by formula (1), the standard depth-of-field relation:

    ΔL = 2f²FσL² / (f⁴ − F²σ²L²)    (1)

    where ΔL is the depth-of-field information of the current shooting scene, f is the focal length of the lens, F is the aperture value at the time of shooting, σ is the diameter of the permissible circle of confusion, and L is the distance between the face and the lens.
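Assuming formula (1) is the standard depth-of-field relation ΔL = 2f²FσL² / (f⁴ − F²σ²L²), an assumption consistent with the variables listed above, it can be evaluated and inverted for the face distance L as follows:

```python
def depth_of_field(f, F, sigma, L):
    """Total depth of field: 2 f^2 F sigma L^2 / (f^4 - F^2 sigma^2 L^2).
    All lengths in millimetres."""
    return 2 * f**2 * F * sigma * L**2 / (f**4 - F**2 * sigma**2 * L**2)

def face_distance(f, F, sigma, dL):
    """Solve the relation for L given the measured depth of field dL:
    L = f^2 * sqrt(dL / (2 f^2 F sigma + dL F^2 sigma^2))."""
    return f**2 * (dL / (2 * f**2 * F * sigma + dL * F**2 * sigma**2)) ** 0.5
```

A round trip with, say, f = 50 mm, F = 2.8, σ = 0.03 mm, and L = 2000 mm recovers the original distance from the computed ΔL.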
  • Block 120: the face region of the portrait is acquired, and the body region of the portrait is located according to the face region and the portrait contour.
  • Specifically, after a portrait is detected in the current shooting scene, face recognition may be used to acquire the face region of the portrait (its position and size, for example); the body region of the portrait can then be determined from the face region and the portrait contour (that is, the overall outline of the portrait). In other words, the body region is the part of the portrait contour that remains after the face region is removed.
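Treating the regions as boolean masks, "the part of the portrait contour that remains after the face region is removed" is a simple mask subtraction (an illustrative sketch, not the patent's internal representation):

```python
def body_region(portrait_mask, face_mask):
    # body = portrait contour minus face region
    return [p and not f for p, f in zip(portrait_mask, face_mask)]
```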
  • Block 130: the brightness of the face region, the body region, and the background portion is detected respectively, to obtain corresponding first, second, and third photometric values.
  • That is, in this step the brightness of the face region may be detected to obtain the first photometric value, the brightness of the body region to obtain the second photometric value, and the brightness of the background portion to obtain the third photometric value. As one example, to obtain the first photometric value of the face region, the brightness of every pixel in the region may be detected and averaged; the average is taken as the brightness of the whole face region and used as its first photometric value.
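The per-region metering described above, averaging pixel brightness over the pixels of a region, can be sketched as (function and parameter names are illustrative):

```python
def photometric_value(pixels, mask):
    """Photometric value of a region: the mean brightness of the pixels
    selected by the region mask."""
    vals = [p for p, m in zip(pixels, mask) if m]
    return sum(vals) / len(vals)
```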
  • Block 140: exposure control and shooting are performed on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images.
  • That is, exposure control and shooting may be performed on the face region according to the first photometric value to obtain a first exposure image for the face region, on the body region according to the second photometric value to obtain a second exposure image for the body region, and on the background portion according to the third photometric value to obtain a third exposure image for the background portion.
  • Block 150: the first, second, and third exposure images are fused to obtain the fused target image.
  • Specifically, in the embodiments of the present application, the face region of the first exposure image, the body region of the second exposure image, and the background portion of the third exposure image may be spliced together, with a smoothing filter applied to eliminate the boundaries at the seams and obtain the fused target image. By fusing these three differently exposed images, a photograph is generated in which the face, the portrait contour, and the background are all properly exposed, achieving portrait auto-exposure based on multi-frame fusion.
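The splice-then-smooth fusion can be sketched as follows; the 3-tap moving average standing in for the smoothing filter is an assumption, since the patent does not specify a particular filter:

```python
def fuse(img_face, img_body, img_bg, face_mask, body_mask):
    """Take face pixels from the first exposure, body pixels from the
    second, and background pixels from the third, then smooth the seams."""
    spliced = [
        img_face[i] if face_mask[i] else img_body[i] if body_mask[i] else img_bg[i]
        for i in range(len(img_bg))
    ]
    # Simple 3-tap moving average as a stand-in for the smoothing filter
    n = len(spliced)
    return [
        (spliced[max(i - 1, 0)] + spliced[i] + spliced[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
```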
  • According to the image exposure method of the embodiments of the present application, when a portrait is detected in the current shooting scene, the portrait contour and the background portion are extracted based on the depth-of-field information of the scene; the brightness of the face region, the body region, and the background portion is detected to obtain the corresponding first, second, and third photometric values; exposure control and shooting are performed according to each photometric value to obtain three images; and finally the three differently exposed images are fused into a photograph in which the face, the portrait contour, and the background are all properly exposed. This achieves portrait auto-exposure based on multi-frame fusion, considers the exposure of each specific region of the captured image, gives the whole shot appropriate exposure compensation, produces a high-quality exposed image, and improves the user experience.
  • Corresponding to the image exposure methods of the above embodiments, an embodiment of the present application further provides an image exposure apparatus. Since the apparatus corresponds to those methods, the embodiments of the image exposure method described above also apply to the apparatus of this embodiment and are not described again in detail here.
  • FIG. 2 is a schematic structural diagram of an image exposure apparatus according to an embodiment of the present application.
  • As shown in FIG. 2, the image exposure apparatus may include: a first acquisition module 210, an extraction module 220, a second acquisition module 230, a positioning module 240, a detection module 250, a control module 260, and a fusion module 270.
  • The first acquisition module 210 is configured to acquire the depth-of-field information of the current shooting scene. Specifically, in an embodiment of the present application, the first acquisition module 210 may acquire the depth-of-field information through a dual camera or a depth RGBD camera.
  • The extraction module 220 is configured to extract the portrait contour and the background portion of the current shooting scene based on the depth-of-field information when a portrait is detected in the scene.
  • As shown in FIG. 3, the extraction module 220 may include a calculation unit 221 and an extraction unit 222.
  • The calculation unit 221 is configured to calculate the distance between the face and the lens from the depth-of-field information of the current shooting scene using face detection.
  • The extraction unit 222 is configured to find the portrait contour according to that distance, and to extract the background portion from the current shooting scene using the distance-difference separation technique and the portrait contour.
  • The second acquisition module 230 is configured to acquire the face region of the portrait.
  • The positioning module 240 is configured to locate the body region of the portrait according to the face region and the portrait contour.
  • The detection module 250 is configured to detect the brightness of the face region, the body region, and the background portion respectively, to obtain corresponding first, second, and third photometric values.
  • The control module 260 is configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images.
  • The fusion module 270 is configured to fuse the first, second, and third exposure images to obtain the fused target image. Specifically, in an embodiment of the present application, the fusion module 270 may splice together the face region of the first exposure image, the body region of the second exposure image, and the background portion of the third exposure image, applying a smoothing filter to eliminate the boundaries at the seams and obtain the fused target image.
  • According to the image exposure apparatus of the embodiments of the present application, when a portrait is detected in the current shooting scene, the extraction module extracts the portrait contour and the background portion based on the depth-of-field information of the scene; the detection module detects the brightness of the face region, the body region, and the background portion to obtain the first, second, and third photometric values; the control module performs exposure control and shooting according to each photometric value to obtain three images; and the fusion module fuses the three differently exposed images into a photograph in which the face, the portrait contour, and the background are all properly exposed.
  • To implement the above embodiments, the present application also proposes an imaging device.
  • FIG. 4 is a schematic structural diagram of an image pickup apparatus according to an embodiment of the present application.
  • The imaging device may be any device with a shooting function, for example a mobile terminal (a mobile phone, tablet computer, or other hardware device running any of various operating systems) or a digital camera.
  • As shown in FIG. 4, the imaging device 40 may include a memory 41, a processor 42, and a computer program 43 stored in the memory 41 and runnable on the processor 42; when the processor 42 executes the computer program 43, the image exposure method of any of the above embodiments is implemented.
  • To implement the above embodiments, the present application further provides a non-transitory computer-readable storage medium having a computer program stored thereon; when the program is executed by a processor, the image exposure method of any of the above embodiments is implemented.
  • The present application also provides a computer program product; when the instructions in the product are executed by a processor, an image exposure method comprising the following steps is performed:
  • S110': when a portrait is detected in the current shooting scene, the portrait contour and the background portion of the scene are extracted based on the depth-of-field information of the scene.
  • S120': the face region of the portrait is acquired, and the body region of the portrait is located according to the face region and the portrait contour.
  • S130': the brightness of the face region, the body region, and the background portion is detected respectively to obtain corresponding first, second, and third photometric values.
  • S140': exposure control and shooting are performed on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images.
  • S150': the first, second, and third exposure images are fused to obtain the fused target image.
  • In the description of the present application, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. A feature defined with "first" or "second" may thus explicitly or implicitly include at least one such feature. Unless specifically defined otherwise, "a plurality" means at least two, for example two or three.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • the computer readable medium may even be a paper or other suitable medium on which the program can be printed, as it may be optically scanned, for example by paper or other medium, followed by editing, interpretation or, if appropriate, other suitable The method is processed to obtain the program electronically and then stored in computer memory.
  • Portions of the present application can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented by any of the following techniques known in the art, alone or in combination: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
  • Each functional unit in the embodiments of the present application may be integrated into one processing module, each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules. If implemented as software functional modules and sold or used as stand-alone products, the integrated modules may also be stored in a computer-readable storage medium.
  • The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present application have been shown and described above, it is to be understood that they are illustrative and are not to be construed as limiting the scope of the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the embodiments within the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image exposure method and apparatus, an imaging device, and a storage medium. The method includes: when a portrait is detected in the current shooting scene, extracting a portrait contour and a background portion of the scene based on the depth-of-field information of the scene; acquiring the face region of the portrait and locating the body region of the portrait according to the face region and the portrait contour; detecting the brightness of the face region, the body region, and the background portion respectively, to obtain corresponding first, second, and third photometric values; performing exposure control and shooting according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images; and fusing the three differently exposed images to obtain a fused target image. The method gives the whole shot appropriate exposure compensation and can produce a high-quality exposed image.

Description

Image exposure method, apparatus, imaging device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201710458977.1, entitled "Image exposure method, apparatus, imaging device, and storage medium", filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd. on June 16, 2017.
TECHNICAL FIELD
The present application relates to the field of image processing technologies, and in particular to an image exposure method and apparatus, an imaging device, and a storage medium.
BACKGROUND
Controlling the camera to apply appropriate exposure compensation, so that the shot is properly exposed, is essential for obtaining a high-quality image during capture or framing display. Most current imaging devices (such as mobile terminals) support manual exposure adjustment, which works as follows: the user taps a settings button to make the device display an exposure bar on the screen, then slides a cursor along the bar to set the exposure compensation used when the image is shot, thereby adjusting the exposure effect.
At present, when an imaging device (such as a mobile terminal) performs exposure processing on an image, it applies the same exposure compensation to the entire image; that is, if the user wants to adjust the exposure of one region of the image, every other region is adjusted to the same exposure as well. Existing manual-adjustment schemes therefore cannot account for the exposure of specific regions of the image individually, so appropriate exposure compensation cannot be obtained, a high-quality exposed image cannot be produced, and the user experience suffers.
SUMMARY
The purpose of the present application is to solve at least one of the above technical problems to some extent.
To that end, the first object of the present application is to propose an image exposure method. The method achieves portrait auto-exposure based on multi-frame fusion: it considers the exposure of each specific region of the captured image individually, so that the whole shot receives appropriate exposure compensation, yielding a high-quality exposed image and improving the user experience.
A second object of the present application is to propose an image exposure apparatus.
A third object of the present application is to propose an imaging device.
A fourth object of the present application is to propose a storage medium.
To achieve the above objects, the image exposure method proposed by the embodiments of the first aspect of the present application includes: when a portrait is detected in the current shooting scene, extracting a portrait contour and a background portion of the scene based on the depth-of-field information of the scene; acquiring the face region of the portrait and locating the body region of the portrait according to the face region and the portrait contour; detecting the brightness of the face region, the body region, and the background portion respectively, to obtain corresponding first, second, and third photometric values; performing exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images; and fusing the first, second, and third exposure images to obtain a fused target image.
According to the image exposure method of the embodiments of the present application, when a portrait is detected in the current shooting scene, the portrait contour and the background portion are extracted based on the depth-of-field information of the scene; the brightness of the face region, the body region, and the background portion is detected to obtain the corresponding first, second, and third photometric values; exposure control and shooting are performed according to each photometric value to obtain three images; and finally the three differently exposed images are fused into a photograph in which the face, the portrait contour, and the background are all properly exposed. This achieves portrait auto-exposure based on multi-frame fusion, considers the exposure of each specific region of the captured image, gives the whole shot appropriate exposure compensation, produces a high-quality exposed image, and improves the user experience.
To achieve the above objects, the image exposure apparatus proposed by the embodiments of the second aspect of the present application includes: a first acquisition module configured to acquire depth-of-field information of the current shooting scene; an extraction module configured to extract, when a portrait is detected in the current shooting scene, a portrait contour and a background portion of the scene based on the depth-of-field information; a second acquisition module configured to acquire the face region of the portrait; a positioning module configured to locate the body region of the portrait according to the face region and the portrait contour; a detection module configured to detect the brightness of the face region, the body region, and the background portion respectively, to obtain corresponding first, second, and third photometric values; a control module configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images; and a fusion module configured to fuse the first, second, and third exposure images to obtain a fused target image.
According to the image exposure apparatus of the embodiments of the present application, when a portrait is detected in the current shooting scene, the extraction module extracts the portrait contour and the background portion based on the depth-of-field information of the scene; the detection module detects the brightness of the face region, the body region, and the background portion to obtain the first, second, and third photometric values; the control module performs exposure control and shooting according to each photometric value to obtain three images; and the fusion module fuses the three differently exposed images into a photograph in which the face, the portrait contour, and the background are all properly exposed. This achieves portrait auto-exposure based on multi-frame fusion, considers the exposure of each specific region of the captured image, gives the whole shot appropriate exposure compensation, produces a high-quality exposed image, and improves the user experience.
To achieve the above objects, the imaging device proposed by the embodiments of the third aspect of the present application includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the image exposure method of the embodiments of the first aspect is implemented.
To achieve the above objects, the non-transitory computer-readable storage medium proposed by the embodiments of the fourth aspect of the present application has a computer program stored thereon; when the program is executed by a processor, the image exposure method of the embodiments of the first aspect is implemented.
Additional aspects and advantages of the present application will be given in part in the following description, will in part become apparent from it, or will be learned through practice of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of the embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of an image exposure method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an image exposure apparatus according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an image exposure apparatus according to a specific embodiment of the present application;
FIG. 4 is a schematic structural diagram of an imaging device according to an embodiment of the present application.
DETAILED DESCRIPTION
Embodiments of the present application are described in detail below, with examples shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present application; they are not to be construed as limiting it.
The image exposure method and apparatus, imaging device, storage medium, and computer program product of the embodiments of the present application are described below with reference to the drawings.
FIG. 1 is a flowchart of an image exposure method according to an embodiment of the present application. Note that the method can be applied to the image exposure apparatus of the embodiments of the present application, and that apparatus can be configured in an imaging device. The imaging device may be any device with a shooting function, for example a mobile terminal (a mobile phone, tablet computer, or other hardware device running any of various operating systems) or a digital camera.
As shown in FIG. 1, the image exposure method may include:
Block 110: when a portrait is detected in the current shooting scene, the portrait contour and the background portion of the scene are extracted based on the depth-of-field information of the scene.
Before extracting the portrait contour and the background portion, the depth-of-field information of the current shooting scene may be acquired. Depth of field refers to the range of subject distances, measured in front of the camera lens or other imager, over which a sharp image can be obtained. After focusing, a sharp image is formed throughout a range before and after the focal point; that range of distances is called the depth of field. In front of the lens (before and behind the focal point) there is a region of space of a certain length; when the subject lies within this region, its image on the film falls between the two circles of confusion in front of and behind the focal point, and the length of this region of space is the depth of field. In other words, for any subject within this region, the blur of its image on the film plane stays within the limits of the permissible circle of confusion; the length of this region is the depth of field.
In the embodiments of the present application, the order of acquiring the depth-of-field information and detecting whether the current shooting scene contains a portrait is not specifically limited. As one example, the depth-of-field information of the scene may be acquired first, after which face recognition is used to detect whether the scene contains a portrait. As another example, face recognition may be used first to detect whether the scene contains a portrait, and the depth-of-field information is acquired when a portrait is detected.
Preferably, in an embodiment of the present application, the depth-of-field information of the current shooting scene may be acquired with a dual camera or a depth RGBD (RGB+depth, a color-depth image carrying both color information and distance/depth information) camera. Taking a dual camera as an example, the acquisition may proceed as follows: an algorithm computes the first angle θ1 between the photographed object and the left camera, and the second angle θ2 between the object and the right camera; from the center distance between the left and right cameras (a fixed value), θ1, and θ2, the distance between the object and the lens can then be calculated by the triangle principle. That distance is the depth-of-field information of the current shooting scene.
As another example, with a depth RGBD camera, the acquisition may proceed as follows: a depth detector in the camera (for example, an infrared sensor) measures the distance between the photographed object and the camera; that distance is the depth-of-field information of the current shooting scene.
When a portrait is detected in the current shooting scene and its depth-of-field information has been obtained, face detection can be used to calculate the distance between the face and the lens from the depth-of-field information, the entire portrait contour is found according to that distance, and the background portion is extracted from the scene using the distance-difference separation technique and the portrait contour. More specifically, face detection locates the region of the face within the portrait, and the distance between the face and the lens is calculated from the depth-of-field information by formula (1), the standard depth-of-field relation:

ΔL = 2f²FσL² / (f⁴ − F²σ²L²)    (1)

where ΔL is the depth-of-field information of the current shooting scene, f is the focal length of the lens, F is the aperture value at the time of shooting, σ is the diameter of the permissible circle of confusion, and L is the distance between the face and the lens.
Block 120: the face region of the portrait is acquired, and the body region of the portrait is located according to the face region and the portrait contour.
Specifically, after a portrait is detected in the current shooting scene, face recognition may be used to acquire the face region of the portrait (its position and size, for example); the body region of the portrait can then be determined from the face region and the portrait contour (that is, the overall outline of the portrait). In other words, the body region is the part of the portrait contour that remains after the face region is removed.
Block 130: the brightness of the face region, the body region, and the background portion is detected respectively, to obtain corresponding first, second, and third photometric values.
That is, in this step the brightness of the face region may be detected to obtain the first photometric value, the brightness of the body region to obtain the second photometric value, and the brightness of the background portion to obtain the third photometric value. As one example, to obtain the first photometric value of the face region, the brightness of every pixel in the region may be detected and averaged; the average is taken as the brightness of the whole face region and used as its first photometric value.
Block 140: exposure control and shooting are performed on the face region, the body region, and the background portion according to the first, second, and third photometric values respectively, to obtain corresponding first, second, and third exposure images.
That is, exposure control and shooting may be performed on the face region according to the first photometric value to obtain a first exposure image for the face region, on the body region according to the second photometric value to obtain a second exposure image for the body region, and on the background portion according to the third photometric value to obtain a third exposure image for the background portion.
Block 150: the first, second, and third exposure images are fused to obtain the fused target image.
Specifically, in the embodiments of the present application, the face region of the first exposure image, the body region of the second exposure image, and the background portion of the third exposure image may be spliced together, with a smoothing filter applied to eliminate the boundaries at the seams and obtain the fused target image. By fusing these three differently exposed images, a photograph is generated in which the face, the portrait contour, and the background are all properly exposed, achieving portrait auto-exposure based on multi-frame fusion.
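The Blocks 110-150 pipeline can be sketched end to end on a toy one-dimensional "image". All helper names and the simple mean-brightness, gain-based exposure model are illustrative assumptions, not the patent's implementation:

```python
# End-to-end sketch: meter each region, "expose" a shot per region, fuse.

def mean_brightness(pixels, mask):
    vals = [p for p, m in zip(pixels, mask) if m]
    return sum(vals) / len(vals)

def expose(pixels, metered, target=128):
    # Stand-in for exposure control and shooting: scale toward a target level.
    gain = target / metered
    return [min(255, p * gain) for p in pixels]

def fuse(images, masks):
    # Splice: each pixel comes from the shot exposed for its own region.
    out = []
    for i in range(len(images[0])):
        for img, mask in zip(images, masks):
            if mask[i]:
                out.append(img[i])
                break
    return out

preview = [60, 60, 90, 90, 200, 200]          # face | body | background
face = [True, True, False, False, False, False]
body = [False, False, True, True, False, False]
bg   = [False, False, False, False, True, True]

shots = [expose(preview, mean_brightness(preview, m)) for m in (face, body, bg)]
target = fuse(shots, (face, body, bg))        # every region near the target level
```

Each region ends up near the target brightness in the fused result, even though the raw preview ranged from 60 to 200; seam smoothing is omitted here for brevity.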
According to the image exposure method of the embodiments of the present application, when a portrait is detected in the current shooting scene, the portrait contour and the background portion are extracted based on the depth-of-field information of the scene; the brightness of the face region, the body region, and the background portion is detected to obtain the corresponding first, second, and third photometric values; exposure control and shooting are performed according to each photometric value to obtain three images; and finally the three differently exposed images are fused into a photograph in which the face, the portrait contour, and the background are all properly exposed. This achieves portrait auto-exposure based on multi-frame fusion, considers the exposure of each specific region of the captured image, gives the whole shot appropriate exposure compensation, produces a high-quality exposed image, and improves the user experience.
与上述几种实施例提供的图像曝光方法相对应,本申请的一种实施例还提供一种图像曝光装置,由于本申请实施例提供的图像曝光装置与上述几种实施例提供的图像曝光方法相对应,因此在前述图像曝光方法的实施方式也适用于本实施例提供的图像曝光装置,在本实施例中不再详细描述。图2是根据本申请一个实施例的图像曝光装置的结构示意图。如图2所示,该图像曝光装置可以包括:第一获取模块210、提取模块220、第二获取模块230、定位模块240、检测模块250、控制模块260和融合模块270。
具体地,第一获取模块210用于获取当前拍摄场景的景深信息。具体而言,在本申请的一个实施例中,第一获取模块210可通过双摄像头或深度RGBD摄像头,获取当前拍摄场景的景深信息。
提取模块220用于在检测到所述当前拍摄场景中包含人像时,基于所述景深信息提取所述当前拍摄场景中的人像轮廓和背景部分。具体而言,在本申请的一个实施例中,如图3所示,该提取模块220可包括:计算单元221和提取单元222。其中,计算单元221用于通过人脸检测技术,根据当前拍摄场景的景深信息计算人脸与镜头之间的距离。提取单元222用于根据距离寻找人像轮廓,并根据距离差异分离技术和人像轮廓,从当前拍摄场景中提取背景部分。
The second acquisition module 230 is configured to acquire the face region of the portrait.
The positioning module 240 is configured to locate the body region of the portrait according to the face region and the portrait contour.
The detection module 250 is configured to detect the luminance of the face region, the body region, and the background portion respectively, to obtain the corresponding first, second, and third metering values.
The control module 260 is configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third metering values respectively, to obtain the corresponding first, second, and third exposure images.
The fusion module 270 is configured to fuse the first, second, and third exposure images to obtain a fused target image. In an embodiment of the present application, the fusion module 270 may stitch together the face region of the first exposure image, the body region of the second exposure image, and the background portion of the third exposure image, while applying a smoothing filter to eliminate visible boundaries at the seams, yielding the fused target image.
According to the image exposure apparatus of embodiments of the present application, when a portrait is detected in the current shooting scene, the extraction module can extract the portrait contour and the background portion from the scene based on its depth-of-field information; the detection module detects the luminance of the face region, the body region, and the background portion to obtain the corresponding first, second, and third metering values; the control module performs exposure control and shooting according to these metering values to obtain three corresponding images; and the fusion module fuses the three differently exposed images to produce a photograph in which the face, the portrait contour, and the background are all properly exposed. This achieves automatic portrait exposure based on multi-frame fusion, takes the exposure of each specific region of the captured image into account, ensures appropriate exposure compensation for the whole shot, yields a high-quality exposed image, and improves the user experience.
To implement the above embodiments, the present application further provides an imaging device.
FIG. 4 is a schematic structural diagram of an imaging device according to an embodiment of the present application. It should be noted that the imaging device may be any device with a shooting function, for example a mobile terminal (a hardware device with any of various operating systems, such as a mobile phone or a tablet computer) or a digital camera.
As shown in FIG. 4, the imaging device 40 may include: a memory 41, a processor 42, and a computer program 43 stored in the memory 41 and executable on the processor 42. When the processor 42 executes the computer program 43, the image exposure method of any of the above embodiments of the present application is implemented.
To implement the above embodiments, the present application further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the image exposure method of any of the above embodiments of the present application.
To implement the above embodiments, the present application further provides a computer program product. When instructions in the computer program product are executed by a processor, an image exposure method is performed, the method including the following steps:
S110': when a portrait is detected in the current shooting scene, extracting the portrait contour and the background portion from the current shooting scene based on depth-of-field information of the scene.
S120': acquiring the face region of the portrait, and locating the body region of the portrait according to the face region and the portrait contour.
S130': detecting the luminance of the face region, the body region, and the background portion respectively, to obtain a corresponding first metering value, second metering value, and third metering value.
S140': performing exposure control and shooting on the face region, the body region, and the background portion according to the first, second, and third metering values respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image.
S150': fusing the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image.
In the description of the present application, it should be understood that the terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance, or as implicitly indicating the number of the technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless specifically and explicitly defined otherwise.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict one another.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing the steps of a specific logical function or process; and the scope of preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner where necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will understand that all or some of the steps carried by the above method embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present application have been shown and described above, it should be understood that they are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (10)

  1. An image exposure method, comprising the following steps:
    when a portrait is detected in a current shooting scene, extracting a portrait contour and a background portion from the current shooting scene based on depth-of-field information of the current shooting scene;
    acquiring a face region of the portrait, and locating a body region of the portrait according to the face region and the portrait contour;
    detecting luminance of the face region, the body region, and the background portion respectively, to obtain a corresponding first metering value, second metering value, and third metering value;
    performing exposure control and shooting on the face region, the body region, and the background portion according to the first metering value, the second metering value, and the third metering value respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image;
    fusing the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image.
  2. The image exposure method according to claim 1, wherein the depth-of-field information of the current shooting scene is acquired as follows:
    acquiring the depth-of-field information of the current shooting scene through dual cameras or a depth (RGBD) camera.
  3. The image exposure method according to claim 1 or 2, wherein extracting the portrait contour and the background portion from the current shooting scene based on the depth-of-field information of the current shooting scene comprises:
    calculating, by a face detection technique, a distance between a face and a lens from the depth-of-field information of the current shooting scene;
    locating the portrait contour according to the distance, and extracting the background portion from the current shooting scene according to a distance-difference separation technique and the portrait contour.
  4. The image exposure method according to any one of claims 1 to 3, wherein fusing the first exposure image, the second exposure image, and the third exposure image to obtain the fused target image comprises:
    stitching together the face region of the first exposure image, the body region of the second exposure image, and the background portion of the third exposure image, while applying a smoothing filter to eliminate boundaries at the seams, to obtain the fused target image.
  5. An image exposure apparatus, comprising:
    a first acquisition module configured to acquire depth-of-field information of a current shooting scene;
    an extraction module configured to extract a portrait contour and a background portion from the current shooting scene based on the depth-of-field information when a portrait is detected in the current shooting scene;
    a second acquisition module configured to acquire a face region of the portrait;
    a positioning module configured to locate a body region of the portrait according to the face region and the portrait contour;
    a detection module configured to detect luminance of the face region, the body region, and the background portion respectively, to obtain a corresponding first metering value, second metering value, and third metering value;
    a control module configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first metering value, the second metering value, and the third metering value respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image;
    a fusion module configured to fuse the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image.
  6. The image exposure apparatus according to claim 5, wherein the first acquisition module is configured to acquire the depth-of-field information of the current shooting scene through dual cameras or a depth (RGBD) camera.
  7. The image exposure apparatus according to claim 5 or 6, wherein the extraction module comprises:
    a calculation unit configured to calculate, by a face detection technique, a distance between a face and a lens from the depth-of-field information of the current shooting scene;
    an extraction unit configured to locate the portrait contour according to the distance, and to extract the background portion from the current shooting scene according to a distance-difference separation technique and the portrait contour.
  8. The image exposure apparatus according to any one of claims 5 to 7, wherein the fusion module is configured to:
    stitch together the face region of the first exposure image, the body region of the second exposure image, and the background portion of the third exposure image, while applying a smoothing filter to eliminate boundaries at the seams, to obtain the fused target image.
  9. An imaging device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the image exposure method according to any one of claims 1 to 4 is implemented.
  10. A non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the image exposure method according to any one of claims 1 to 4.
PCT/CN2018/091228 2017-06-16 2018-06-14 Image exposure method and apparatus, camera device, and storage medium WO2018228467A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710458977.1A CN107241557A (zh) 2017-06-16 2017-06-16 Image exposure method and apparatus, camera device, and storage medium
CN201710458977.1 2017-06-16

Publications (1)

Publication Number Publication Date
WO2018228467A1 true WO2018228467A1 (zh) 2018-12-20

Family

ID=59986386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091228 WO2018228467A1 (zh) 2017-06-16 2018-06-14 图像曝光方法、装置、摄像设备及存储介质

Country Status (2)

Country Link
CN (1) CN107241557A (zh)
WO (1) WO2018228467A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553231A (zh) * 2020-04-21 2020-08-18 上海锘科智能科技有限公司 基于信息融合的人脸抓拍与去重***、方法、终端及介质
CN111582171A (zh) * 2020-05-08 2020-08-25 济南博观智能科技有限公司 一种行人闯红灯监测方法、装置、***及可读存储介质
CN112053389A (zh) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 人像处理方法、装置、电子设备及可读存储介质
CN112085686A (zh) * 2020-08-21 2020-12-15 北京迈格威科技有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN112887612A (zh) * 2021-01-27 2021-06-01 维沃移动通信有限公司 一种拍摄方法、装置和电子设备
EP3883236A1 (en) * 2020-03-16 2021-09-22 Canon Kabushiki Kaisha Information processing apparatus, imaging apparatus, method, and storage medium
CN116112657A (zh) * 2023-01-11 2023-05-12 网易(杭州)网络有限公司 图像处理方法、装置、计算机可读存储介质及电子装置

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107241557A (zh) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 图像曝光方法、装置、摄像设备及存储介质
CN107592468B (zh) * 2017-10-23 2019-12-03 维沃移动通信有限公司 一种拍摄参数调整方法及移动终端
CN107623818B (zh) * 2017-10-30 2020-04-17 维沃移动通信有限公司 一种图像曝光方法和移动终端
CN107948519B (zh) * 2017-11-30 2020-03-27 Oppo广东移动通信有限公司 图像处理方法、装置及设备
CN107995425B (zh) * 2017-12-11 2019-08-20 维沃移动通信有限公司 一种图像处理方法及移动终端
CN109981992B (zh) * 2017-12-28 2021-02-23 周秦娜 一种在高环境光变化下提升测距准确度的控制方法及装置
CN108616689B (zh) * 2018-04-12 2020-10-02 Oppo广东移动通信有限公司 基于人像的高动态范围图像获取方法、装置及设备
CN108650466A (zh) * 2018-05-24 2018-10-12 努比亚技术有限公司 一种强光或逆光拍摄人像时提升照片宽容度的方法及电子设备
CN108683862B (zh) 2018-08-13 2020-01-10 Oppo广东移动通信有限公司 成像控制方法、装置、电子设备及计算机可读存储介质
CN109242794B (zh) * 2018-08-29 2021-05-11 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN109068060B (zh) * 2018-09-05 2021-06-08 Oppo广东移动通信有限公司 图像处理方法和装置、终端设备、计算机可读存储介质
CN108833804A (zh) * 2018-09-20 2018-11-16 Oppo广东移动通信有限公司 成像方法、装置和电子设备
CN108881701B (zh) * 2018-09-30 2021-04-02 华勤技术股份有限公司 拍摄方法、摄像头、终端设备及计算机可读存储介质
CN109360176B (zh) * 2018-10-15 2021-03-02 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备和计算机可读存储介质
CN109819176A (zh) * 2019-01-31 2019-05-28 深圳达闼科技控股有限公司 一种拍摄方法、***、装置、电子设备及存储介质
CN110211024A (zh) * 2019-03-14 2019-09-06 厦门启尚科技有限公司 一种图像智能退底的方法
CN111402135B (zh) * 2020-03-17 2023-06-20 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN112819722A (zh) * 2021-02-03 2021-05-18 东莞埃科思科技有限公司 一种红外图像人脸曝光方法、装置、设备及存储介质
CN113347369B (zh) * 2021-06-01 2022-08-19 中国科学院光电技术研究所 一种深空探测相机曝光调节方法、调节***及其调节装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6654062B1 (en) * 1997-11-13 2003-11-25 Casio Computer Co., Ltd. Electronic camera
JP2005109757A (ja) * 2003-09-29 2005-04-21 Fuji Photo Film Co Ltd 画像撮像装置、画像処理装置、画像撮像方法、及びプログラム
CN104092955A (zh) * 2014-07-31 2014-10-08 北京智谷睿拓技术服务有限公司 闪光控制方法及控制装置、图像采集方法及采集设备
CN104092954A (zh) * 2014-07-25 2014-10-08 北京智谷睿拓技术服务有限公司 闪光控制方法及控制装置、图像采集方法及采集装置
CN106161980A (zh) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 基于双摄像头的拍照方法及***
CN106851124A (zh) * 2017-03-09 2017-06-13 广东欧珀移动通信有限公司 基于景深的图像处理方法、处理装置和电子装置
CN106851123A (zh) * 2017-03-09 2017-06-13 广东欧珀移动通信有限公司 曝光控制方法、曝光控制装置及电子装置
CN107241557A (zh) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 图像曝光方法、装置、摄像设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5386793B2 (ja) * 2006-12-11 2014-01-15 株式会社リコー 撮像装置および撮像装置の露出制御方法
CN106303250A (zh) * 2016-08-26 2017-01-04 维沃移动通信有限公司 一种图像处理方法及移动终端
CN106331510B (zh) * 2016-10-31 2019-10-15 维沃移动通信有限公司 一种逆光拍照方法及移动终端


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3883236A1 (en) * 2020-03-16 2021-09-22 Canon Kabushiki Kaisha Information processing apparatus, imaging apparatus, method, and storage medium
US11575841B2 (en) 2020-03-16 2023-02-07 Canon Kabushiki Kaisha Information processing apparatus, imaging apparatus, method, and storage medium
CN111553231A (zh) * 2020-04-21 2020-08-18 上海锘科智能科技有限公司 基于信息融合的人脸抓拍与去重***、方法、终端及介质
CN111553231B (zh) * 2020-04-21 2023-04-28 上海锘科智能科技有限公司 基于信息融合的人脸抓拍与去重***、方法、终端及介质
CN111582171A (zh) * 2020-05-08 2020-08-25 济南博观智能科技有限公司 一种行人闯红灯监测方法、装置、***及可读存储介质
CN111582171B (zh) * 2020-05-08 2024-04-09 济南博观智能科技有限公司 一种行人闯红灯监测方法、装置、***及可读存储介质
CN112053389A (zh) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 人像处理方法、装置、电子设备及可读存储介质
CN112085686A (zh) * 2020-08-21 2020-12-15 北京迈格威科技有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN112887612A (zh) * 2021-01-27 2021-06-01 维沃移动通信有限公司 一种拍摄方法、装置和电子设备
CN112887612B (zh) * 2021-01-27 2022-10-04 维沃移动通信有限公司 一种拍摄方法、装置和电子设备
CN116112657A (zh) * 2023-01-11 2023-05-12 网易(杭州)网络有限公司 图像处理方法、装置、计算机可读存储介质及电子装置
CN116112657B (zh) * 2023-01-11 2024-05-28 网易(杭州)网络有限公司 图像处理方法、装置、计算机可读存储介质及电子装置

Also Published As

Publication number Publication date
CN107241557A (zh) 2017-10-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18817105

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18817105

Country of ref document: EP

Kind code of ref document: A1