WO2018228466A1 - Focus area display method and apparatus, and terminal device - Google Patents

Focus area display method and apparatus, and terminal device

Info

Publication number
WO2018228466A1
WO2018228466A1, PCT/CN2018/091227, CN2018091227W
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
area
roi
focus
Prior art date
Application number
PCT/CN2018/091227
Other languages
English (en)
French (fr)
Inventor
***
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to EP18817013.8A priority Critical patent/EP3627822B1/en
Publication of WO2018228466A1 publication Critical patent/WO2018228466A1/zh
Priority to US16/706,660 priority patent/US11283987B2/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/671Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Definitions

  • the present application relates to the field of terminal technologies, and in particular, to a method, an apparatus, and a terminal device for displaying a focus area.
  • the present application aims to solve at least one of the technical problems in the related art to some extent.
  • To this end, the present application proposes a focus area display method for displaying the focus area to the user in the captured image, so as to solve the prior-art problem that the user cannot determine whether an out-of-focus or blurred image was caused by an improper focus area setting.
  • the present application proposes a focus area display device.
  • the application proposes a terminal device.
  • the application proposes a computer program product.
  • the application proposes a non-transitory computer readable storage medium.
  • the embodiment of the first aspect of the present application provides a method for displaying a focus area, including: acquiring a target ROI region from a preview image of a target object collected by an image sensor; photographing the target object to obtain image data; and, during imaging using the image data, displaying the location of the target ROI region in the formed image.
  • With the focus area display method of the embodiment of the present application, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image.
  • When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting.
  • In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
  • the embodiment of the second aspect of the present application provides a focus area display device, including:
  • a region obtaining module configured to acquire a target ROI region from a preview image of a target object collected by an image sensor;
  • an image acquisition module configured to photograph the target object to obtain image data; and
  • a display module configured to display the location of the target ROI region in the formed image during imaging using the image data.
  • With the focus area display device of the embodiment of the present application, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image.
  • When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting.
  • In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
  • the embodiment of the third aspect of the present application provides a terminal device, including: a housing, and a processor, a memory, and a display interface located in the housing, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to perform the focus area display method according to the embodiment of the first aspect.
  • With the terminal device of the embodiment of the present application, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image.
  • When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting.
  • In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
  • the embodiment of the fourth aspect of the present application provides a computer program product; when instructions in the computer program product are executed by a processor, the focus area display method according to the first aspect is performed.
  • the embodiment of the fifth aspect of the present application provides a non-transitory computer readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the focus area display method according to the first aspect of the present application is implemented.
  • FIG. 1 is a schematic flowchart diagram of a method for displaying a focus area according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of drawing a border for a target ROI area according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of an application of border determination according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of an application for displaying a border of a target ROI area in a formed image according to an embodiment of the present application
  • FIG. 5 is a schematic flowchart of determining a target location of a target ROI region in a formed image according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a positional relationship between a start pixel and a pixel in a target ROI area in an image according to an embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart of obtaining a target ROI region from a preview image of a target object collected by an image sensor according to an embodiment of the present disclosure
  • FIG. 8 is a schematic flowchart diagram of still another method for displaying a focus area according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a focus area display device according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another focus area display device according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart diagram of a method for displaying a focus area according to an embodiment of the present application.
  • the embodiment of the present application can be applied in the process of a user taking photos with a terminal.
  • the terminal may be a hardware device with various operating systems such as a smart phone, a tablet computer, and a personal digital assistant.
  • the focus area display method includes the following steps:
  • After the user turns on a camera device, such as the camera, on the terminal, a preview image of the target object to be photographed, formed by the image sensor in the camera, may be displayed on the shooting interface of the terminal.
  • In this embodiment, the target ROI area may be acquired from the collected preview image.
  • For example, an area may be selected from the preview image as the target ROI area by autofocus; alternatively, the user may manually select an area from the preview image as the target ROI area.
  • In the actual photographing process, after the camera is aimed at the target object, the target object may not yet be in a stable state; it often takes a short time for the target object to become stable.
  • For example, when the target object is a person, the target object often needs 3 to 5 frames' time to adjust posture or stand still during photographing, although the camera continuously collects preview images of the target object throughout the process and displays the ROI area in the preview image.
  • However, to obtain a clear image, the target ROI area can be acquired from the preview image collected by the image sensor at the moment the target object is in a stable state.
  • For example, when the target object to be photographed is a human face, after the face is in a stable state, autofocus can be performed, and the area where the face is located in the preview image is used as the focus area.
  • As another example, when the target object to be photographed is a tree, after the tree is in a stable state, autofocus is performed, and the area of the tree's canopy in the preview image can be used as the focus area.
  • Specifically, the user can send a shooting instruction by pressing the shutter; when the shooting instruction is detected, the target object is photographed to obtain image data.
  • It should be noted that, in the actual shooting process, after the target object is photographed, the image data of the target object can be cached in a designated buffer.
  • For example, a storage device such as a flash memory card on the photographing device can be used as the buffer area.
  • In order to display the image of the target object to the user on the shooting interface, after the image data of the target object is obtained, the image data can be used for imaging.
  • In practice, when the focus area is set improperly, the image captured by the photographing device may be out of focus or blurred. To make it possible to identify whether such a phenomenon is caused by an improper focus area setting, in this embodiment, the target ROI area may be marked in the formed image. Specifically, a border is drawn for the target ROI area, the target position of the target ROI area in the formed image is determined, and the target ROI area carrying the border is then displayed at that target position in the formed image.
  • With the focus area display method of this embodiment, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image.
  • When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting.
  • In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
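The three steps above (S101 to S103) can be sketched end to end as follows. This is a minimal illustration only: the brightest-block heuristic stands in for autofocus, and all function names (`acquire_target_roi`, `shoot`, `mark_roi`) are hypothetical, not part of the patent.

```python
import numpy as np

def acquire_target_roi(preview):
    """S101 (stub): pick the brightest 16x16 block of the preview as the
    target ROI, standing in for autofocus / manual selection."""
    best, best_pos = -1.0, (0, 0)
    for top in range(0, preview.shape[0] - 16 + 1, 16):
        for left in range(0, preview.shape[1] - 16 + 1, 16):
            score = preview[top:top + 16, left:left + 16].mean()
            if score > best:
                best, best_pos = score, (top, left)
    return (*best_pos, 16, 16)  # (top, left, height, width)

def shoot(preview):
    """S102 (stub): "capture" image data into a buffer."""
    return preview.copy()

def mark_roi(image, roi, thickness=4, color=255):
    """S103: display the target ROI location by drawing its border."""
    top, left, h, w = roi
    image[top:top + thickness, left:left + w] = color          # first lateral edge
    image[top + h - thickness:top + h, left:left + w] = color  # second lateral edge
    image[top:top + h, left:left + thickness] = color          # first vertical edge
    image[top:top + h, left + w - thickness:left + w] = color  # second vertical edge
    return image

preview = np.zeros((64, 64), dtype=np.uint8)
preview[16:32, 32:48] = 200                      # a bright "subject"
roi = acquire_target_roi(preview)
formed = mark_roi(shoot(preview), roi)
```

The marked border survives into the formed image, so the user can compare its position against the subject after the shot.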
  • FIG. 2 is a schematic flowchart of another method for displaying a focus area according to an embodiment of the present application.
  • the drawing a border for the target ROI region specifically includes the following steps:
  • Specifically, the row in which the pixel in the upper left corner of the target ROI region is located is taken as the first row, and its column as the first column. Starting from the first row of pixels, a preset number of rows is selected as the first lateral edge of the border.
  • Similarly, the row in which the pixel in the lower right corner of the target ROI region is located is taken as the last row, and its column as the last column. Starting from the last row of pixels, a preset number of rows is selected as the second lateral edge of the border.
  • That is, each lateral edge of the border is determined by the preset number of rows. For example, the preset number of rows can be 4, and the width of each lateral edge is correspondingly 4 pixels.
  • Starting from the first column of pixels, a preset number of columns is selected as the first vertical edge of the border.
  • Starting from the last column of pixels, a preset number of columns is selected as the second vertical edge of the border.
  • That is, each vertical edge of the border is determined by the preset number of columns. For example, the preset number of columns can be 4, and the width of each vertical edge is correspondingly 4 pixels.
  • FIG. 3 is a schematic diagram of border determination provided by an embodiment of the present application.
  • In the figure, the pixel in the upper left corner is in the first row and the first column, and the pixel in the lower right corner is in the last row and the last column.
  • The 4 rows starting from the first row are used as the first lateral edge of the border; the 4 columns starting from the first column are used as the first vertical edge; the 4 rows starting from the last row are used as the second lateral edge; and the 4 columns starting from the last column are used as the second vertical edge.
  • In FIG. 3, the pixels occupied by each edge of the border are shown shaded.
  • The first lateral edge, the second lateral edge, the first vertical edge, and the second vertical edge are drawn according to a preset format to form the border of the target ROI area.
  • That is, a format may be preset, and after the lateral edges and vertical edges of the border are determined, they may be drawn according to the preset format to obtain the border of the target ROI area.
  • The format may include the color of the border, the line style of the border, and the like.
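The edge selection described above can be sketched as follows. This is an illustrative NumPy sketch: the 4-pixel edge width and the solid white fill are the example values from the text, and the function name is hypothetical.

```python
import numpy as np

def draw_roi_border(image, top, left, height, width, thickness=4, color=255):
    """Draw a border around the target ROI area in-place.

    The first `thickness` rows of the ROI form the first lateral edge and
    the last `thickness` rows the second; the first and last `thickness`
    columns form the two vertical edges.
    """
    bottom, right = top + height, left + width
    image[top:top + thickness, left:right] = color          # first lateral edge
    image[bottom - thickness:bottom, left:right] = color    # second lateral edge
    image[top:bottom, left:left + thickness] = color        # first vertical edge
    image[top:bottom, right - thickness:right] = color      # second vertical edge
    return image

# Mark a 16x16 ROI whose top-left pixel is at row 8, column 12
img = np.zeros((64, 64), dtype=np.uint8)
draw_roi_border(img, top=8, left=12, height=16, width=16)
```

The preset "format" of the text (color, line style) maps onto the `color` argument here; a dashed or colored border would only change the fill step, not the edge selection.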
  • FIG. 4 is a schematic diagram of an application of an embodiment of the present application.
  • When a face is photographed, the face area is often recognized as the ROI area.
  • In the left image, the white border marking the target ROI area is on the face area, and a clear image is formed.
  • In the right image, the white border deviates from the face area, and the formed image appears out of focus or blurred.
  • FIG. 5 is a schematic flowchart of determining a target location of a target ROI region in a formed image according to an embodiment of the present application.
  • Determining a target location of the target ROI region in the formed image includes the following steps:
  • Specifically, the first coordinate of the starting pixel in the formed image may be acquired from the image data of the formed image; the starting pixel may be the pixel in the upper left corner of the formed image. Further, the second coordinates of each pixel in the target ROI area may be acquired from the image data of the target ROI area.
  • After the first coordinate and the second coordinates are obtained, the positional relationship of each pixel in the target ROI area with respect to the starting pixel of the formed image may be calculated; the positional relationship may include the lateral distance and the longitudinal distance between the pixel and the starting pixel.
  • Based on these positional relationships, the target position of the target ROI region in the formed image is determined.
  • The positional relationship between a pixel in the target ROI region and the starting pixel of the formed image may be as shown in FIG. 6.
  • In FIG. 6, the white dot indicates the starting pixel in the formed image, and the gray dot indicates a pixel in the target ROI region.
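Under the assumption that coordinates are (column, row) pairs and the starting pixel is the top-left pixel of the formed image, the positional relationships and the resulting target position can be sketched as follows (both helper names are hypothetical):

```python
def pixel_offsets(start, roi_pixels):
    """For every pixel in the target ROI area, compute its lateral and
    longitudinal distance from the starting pixel of the formed image."""
    sx, sy = start  # (column, row) of the starting pixel, e.g. (0, 0)
    return [(x - sx, y - sy) for (x, y) in roi_pixels]

def roi_target_position(start, roi_pixels):
    """The target position of the ROI in the formed image is the bounding
    box spanned by the per-pixel offsets: (left, top, right, bottom)."""
    offsets = pixel_offsets(start, roi_pixels)
    xs = [dx for dx, _ in offsets]
    ys = [dy for _, dy in offsets]
    return (min(xs), min(ys), max(xs), max(ys))

roi_pixels = [(12, 8), (27, 8), (12, 23), (27, 23)]  # (column, row) corners of a ROI
target = roi_target_position((0, 0), roi_pixels)
```

Because the starting pixel is the image origin here, the offsets coincide with the coordinates themselves; a non-zero starting pixel would simply shift the target position.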
  • FIG. 7 is a schematic flowchart of acquiring a target ROI region from a preview image of a target object collected by an image sensor according to an embodiment of the present disclosure.
  • the acquiring the target ROI region from the preview image of the target object collected by the image sensor includes the following steps:
  • In general, the target object becomes stable within the duration of about 3 to 5 frames of preview images; the specific number of image frames is related to the performance of the terminal used by the user.
  • When the user turns on the camera and aims it at the target object to be photographed, the image sensor in the camera can pre-collect images of the target object to form preview images, and the number of image frames of the collected preview images can be counted.
  • In practice, the target object may not be in a completely static state. After the number of image frames reaches the preset number of frames, all the collected preview images may be analyzed, and the phase difference between each two adjacent frames of preview images may be acquired, thereby obtaining the moving speed of the target object.
  • After the moving speed of the target object is obtained, it can be determined whether the moving speed is within a preset range. If movement of the target object within the preset range can be considered as not affecting the shooting effect, the target object can be determined to be in a stable state.
  • The ROI area in the preview image at the time the target object is in a stable state is used as the target ROI area.
  • Specifically, the target ROI region can be obtained from the preview image collected by the image sensor at the moment the target object is in a stable state; that is, the ROI region in that preview image is taken as the target ROI area.
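The stability check can be sketched as follows. This is a simplified illustration: the mean absolute difference between adjacent preview frames stands in for the phase difference mentioned above, and the preset frame count and speed range are example values, not values from the patent.

```python
import numpy as np

def is_stable(preview_frames, min_frames=5, max_speed=2.0):
    """Decide whether the target object is in a stable state.

    Waits until `min_frames` preview frames have been collected, then
    estimates the moving speed from the difference between adjacent
    frames and checks it against the preset range.
    """
    if len(preview_frames) < min_frames:
        return False  # keep collecting preview frames
    diffs = [
        np.abs(curr.astype(np.int16) - prev.astype(np.int16)).mean()
        for prev, curr in zip(preview_frames, preview_frames[1:])
    ]
    moving_speed = max(diffs)  # worst adjacent-frame change
    return moving_speed <= max_speed

static = [np.full((4, 4), 100, dtype=np.uint8)] * 6
moving = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 40, 80, 120, 160, 200)]
```

A real implementation on a terminal would run this incrementally as each preview frame arrives, rather than over a stored list.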
  • FIG. 8 is a schematic flowchart of still another method for displaying a focus area according to an embodiment of the present application.
  • After step S103, the following steps may be further included:
  • Step 501: in the case that the target object is a human face, identify whether the face area presented in the formed image is blurred.
  • Specifically, a face recognition algorithm recognizes the face region in the formed image, the pixel values of the face region are acquired, and the color and brightness of the face region are obtained from the pixel values and compared with the preset color and brightness thresholds corresponding to a clear face; if they are below the thresholds, the face area is considered blurred.
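A minimal grayscale sketch of this check, assuming mean brightness and standard-deviation contrast as the compared quantities and illustrative threshold values (the patent refers only to preset thresholds and does not give concrete numbers):

```python
import numpy as np

def face_region_is_blurred(face_pixels, min_brightness=40.0, min_contrast=10.0):
    """Compare the face region's brightness and contrast against preset
    thresholds for a clear face; falling below either => blurred."""
    brightness = face_pixels.mean()   # overall luminance of the region
    contrast = face_pixels.std()      # blurred regions tend to have low contrast
    return brightness < min_brightness or contrast < min_contrast

rng = np.random.default_rng(0)
sharp_face = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
blurred_face = np.full((32, 32), 120, dtype=np.uint8)  # flat, low-contrast patch
```

On a terminal this would run on the face region cropped from the formed image, using the same pixel values the display step already has in the buffer.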
  • Step 502: if the presented face area is blurred, determine, according to the degree of coincidence between the target ROI area and the presented face area, whether the presented face area is blurred due to a focus error.
  • Specifically, the area coincidence degree is calculated from the target ROI area displayed in the formed image and the recognized face area. This is because, when a face is photographed, the recognized face area is by default taken as the ROI area.
  • If the coincidence degree between the ROI area and the face area is less than a preset threshold, the ROI area recognition was inaccurate; that is, a focus error caused the captured face area to be blurred, as shown in the right image of FIG. 4, which helps the user confirm that the captured face image is blurred because the ROI area does not coincide with the face area. If the ROI area coincides with the face area, the blurred face image is not due to a focus error.
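The coincidence check can be sketched as the ratio of the overlap area to the face area; the 0.8 threshold below is an illustrative value, as the text refers only to "a preset threshold".

```python
def coincidence_degree(roi, face):
    """Ratio of the overlap between the target ROI area and the recognised
    face area to the face area; boxes are (left, top, right, bottom)."""
    ix = max(0, min(roi[2], face[2]) - max(roi[0], face[0]))
    iy = max(0, min(roi[3], face[3]) - max(roi[1], face[1]))
    face_area = (face[2] - face[0]) * (face[3] - face[1])
    return (ix * iy) / face_area if face_area else 0.0

def blur_caused_by_focus_error(roi, face, threshold=0.8):
    """Step 502: a blurred face whose ROI barely overlaps the face area
    points to a focus (ROI placement) error."""
    return coincidence_degree(roi, face) < threshold

face_box = (100, 100, 200, 200)
good_roi = (100, 100, 200, 200)  # border sits on the face area (FIG. 4, left)
bad_roi = (210, 100, 310, 200)   # border deviates from the face area (FIG. 4, right)
```

Normalising by the face area rather than the union matches the text's framing: the question is how much of the recognised face the ROI actually covers.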
  • the present application also proposes a focus area display device.
  • FIG. 9 is a schematic structural diagram of a focus area display device according to an embodiment of the present application. As shown in FIG. 9, the focus area display device includes an area obtaining module 11, an image acquisition module 12, and a display module 13.
  • the area obtaining module 11 is configured to acquire a target ROI area from a preview image of the target object collected by the image sensor.
  • the image acquisition module 12 is configured to capture image data of the target object.
  • the display module 13 is configured to display a location of the target ROI region in the formed image during imaging using the image data.
  • the display module 13 of the in-focus area display device further includes: a drawing unit 131, a determining unit 132, and a display unit 133.
  • the drawing unit 131 is configured to draw a border for the target ROI area.
  • the determining unit 132 is configured to determine a target location of the target ROI region in the formed image.
  • the display unit 133 is configured to display the border of the target ROI area on the target position in the formed image.
  • drawing unit 131 is specifically configured to:
  • the first lateral edge, the second lateral edge, the first vertical edge, and the second vertical edge are drawn in a preset format to form the border.
  • determining unit 132 is specifically configured to:
  • the focus area display device further includes: a state determination module 14.
  • the state determining module 14 is configured to determine that the target object is in a stable state according to the continuously acquired preview image that meets the preset number of frames.
  • Specifically, the state determining module 14 may be configured to: after the image sensor is aimed at the target object, count the number of image frames of the preview images collected by the image sensor; after the number of image frames reaches the preset number of frames, determine the moving speed of the target object according to all the collected preview images; and, if the moving speed is within the preset range, determine that the target object is in a stable state.
  • the area obtaining module 11 is specifically configured to use, as the target ROI area, an ROI area in the preview image when the target object is in a stable state.
  • the ROI area includes a focus area.
  • the focus area display device further includes: a blur determination module 15, configured to:
  • in the case that the target object is a human face, identify whether the face region presented in the formed image is blurred; and
  • if the presented face area is blurred, determine, according to the degree of coincidence between the target ROI area and the presented face area, whether the presented face area is blurred due to a focus error.
  • With the focus area display device of this embodiment, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image.
  • When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting.
  • In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
  • the present application also proposes a terminal device.
  • FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • As shown in FIG. 11, the terminal device includes a housing 21 and a processor 22, a memory 23, and a display interface 24 located in the housing 21, wherein the processor 22 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 23, so as to perform the following steps: acquiring a target ROI region from a preview image of a target object collected by an image sensor; photographing the target object to obtain image data; and, during imaging using the image data, displaying the location of the target ROI region in the formed image.
  • With the terminal device of this embodiment, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image.
  • When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting.
  • In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
  • the present application also provides a computer program product that performs a focus area display method as described in the foregoing embodiments when instructions in the computer program product are executed by the processor.
  • the present application further provides a non-transitory computer readable storage medium having stored thereon a computer program capable of implementing a focus area display method as described in the foregoing embodiments when the computer program is executed by a processor .
  • In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • Thus, features defined with "first" or "second" may explicitly or implicitly include at least one of such features.
  • In the description of the present application, "a plurality of" means at least two, such as two or three, unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples of computer readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read only memory (CDROM).
  • In addition, the computer readable medium may even be paper or another suitable medium on which the program can be printed, as the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • The above-mentioned storage medium may be a read only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are illustrative and are not to be construed as limiting the present application; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present application.


Abstract

The present application provides a focus area display method and apparatus, and a terminal device. The method includes: acquiring a target ROI area from a preview image collected by an image sensor; photographing the target object to obtain image data; and, during imaging using the image data, displaying the location of the target ROI area in the formed image. In the present application, when the target object is imaged, the focus area determined at the time of focusing can be marked in the formed image. When the formed image is out of focus or blurred, the user can identify, from the marked ROI area, whether the image is out of focus or blurred because of an improper ROI area setting. In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.

Description

Focus area display method and apparatus, and terminal device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201710458367.1, entitled "Focus area display method and apparatus, and terminal device", filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd. on June 16, 2017.
TECHNICAL FIELD
The present application relates to the field of terminal technologies, and in particular, to a focus area display method and apparatus, and a terminal device.
BACKGROUND
With the development of smart terminals, users take photos with smart terminals more and more frequently, and users' requirements on the quality of the images captured by smart terminals are also increasingly high.
At present, most cameras of smart terminals support autofocus, so as to find the sharpest point for taking a photo. However, even when the focus area has been determined, the captured image may still be out of focus, making the captured image blurred. In practical applications, if the user's hand shakes during shooting or the photographed object is moving, the captured image may also lose the correct region of interest (ROI), commonly called the focus area.
In these cases, the user can only see an out-of-focus or blurred image and cannot determine whether the image is out of focus or blurred because of an improper ROI area setting.
SUMMARY
The present application aims to solve at least one of the technical problems in the related art to some extent.
To this end, the present application proposes a focus area display method to display the focus area to the user in the captured image, so as to solve the prior-art problem that it cannot be determined whether an out-of-focus or blurred image was caused by an improper focus area setting.
The present application proposes a focus area display apparatus.
The present application proposes a terminal device.
The present application proposes a computer program product.
The present application proposes a non-transitory computer readable storage medium.
An embodiment of a first aspect of the present application provides a focus area display method, including:
acquiring a target ROI area from a preview image of a target object collected by an image sensor;
photographing the target object to obtain image data; and
during imaging using the image data, displaying the location of the target ROI area in the formed image.
With the focus area display method of the embodiments of the present application, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image. When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting. In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
An embodiment of a second aspect of the present application provides a focus area display apparatus, including:
a region acquisition module configured to acquire a target ROI area from a preview image of a target object collected by an image sensor;
an image acquisition module configured to photograph the target object to obtain image data; and
a display module configured to display, during imaging using the image data, the location of the target ROI area in the formed image.
With the focus area display apparatus of the embodiments of the present application, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image. When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting. In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
An embodiment of a third aspect of the present application provides a terminal device, including:
a housing, and a processor, a memory, and a display interface located in the housing, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to perform the focus area display method according to the embodiment of the first aspect.
With the terminal device of the embodiments of the present application, when imaging is performed using the image data, the focus area determined at the time of focusing can be marked in the formed image. When the formed image is out of focus or blurred, if the position of the marked ROI area is unchanged from the position of the ROI area at the time of focusing, the user can quickly recognize that the image being out of focus or blurred was not caused by an improper ROI area setting. In particular, for face imaging, if the border of the marked ROI area lies within the face area and the image still appears blurred or out of focus, an improper ROI area setting can be excluded as the cause.
An embodiment of a fourth aspect of the present application provides a computer program product; when instructions in the computer program product are executed by a processor, the focus area display method according to the embodiment of the first aspect is performed.
An embodiment of a fifth aspect of the present application provides a non-transitory computer readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the focus area display method according to the embodiment of the first aspect is implemented.
Additional aspects and advantages of the present application will be given in part in the following description, and in part will become apparent from the following description or be learned by practice of the present application.
附图说明
本申请上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本申请实施例提供的一种对焦区域显示方法的流程示意图;
图2为本申请实施例提供的一种为目标ROI区域绘制边框的流程示意图;
图3为本申请实施例提供的边框确定的应用示意图;
图4为本申请实施例提供的在所成图像中显示目标ROI区域的边框的应用示意图;
图5为本申请实施例提供的一种确定目标ROI区域在所成图像中的目标位置的流程示意图;
图6为本申请实施例提供的所成图像中起始像素与目标ROI区域中某一像素的位置关系的示意图;
图7为本申请实施例提供的一种从图像传感器所采集的目标对象的预览图像中获取目标ROI区域的流程示意图;
图8为本申请实施例提供的又一种对焦区域显示方法的流程示意图;
图9为本申请实施例提供的一种对焦区域显示装置的结构示意图;
图10为本申请实施例提供的另一种对焦区域显示装置的结构示意图;
图11为本申请实施例提供的终端设备的结构示意图。
具体实施方式
下面详细描述本申请的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本申请,而不能理解为对本申请的限制。
下面参考附图描述本申请实施例的对焦区域显示方法、装置及终端设备。
图1为本申请一实施例提供的对焦区域显示方法的流程示意图。本申请实施例可以应用在用户利用终端进行拍摄的过程中。其中,终端可以是智能手机、平板电脑、个人数字助理等具有各种操作系统的硬件设备。
如图1所示,该对焦区域显示方法包括以下步骤:
S101,从图像传感器所采集的目标对象的预览图像中获取目标ROI区域。
当用户开启终端上的摄像装置如摄像头后,可以在终端的拍摄界面显示由摄像头中的图像传感器对待拍摄的目标对象所成的预览图像。本实施例中,可以从所采集的预览图像中获取目标ROI区域。例如,可以通过自动对焦技术,从预览图像中选择一个区域作为目标ROI区域;再例如,可以由用户从预览图像中手动地选择一个区域作为目标ROI区域。
实际拍照的过程中,摄像头对准目标对象后,目标对象可能并未处于稳定状态,往往需要经过短暂的时间才能够处于稳定状态。例如,目标对象为一个人时,在拍照的过程中,目标对象往往需要经过3~5帧的时间调整姿势或者站稳。虽然在整个过程中,摄像头不断地采集目标对象的预览图像,并在预览图像中显示ROI区域,但为了得到清晰的图像,可以在目标对象处于稳定状态时,从此时图像传感器所采集的预览图像中获取目标ROI区域。
举例说明,当待拍摄的目标对象为一个人脸时,在人脸处于稳定状态后,可以进行自动对焦,将预览图像中人脸所在的区域作为对焦区域。再例如,待拍摄的目标对象为一棵树时,在这棵树处于稳定状态后进行自动对焦,可以将预览图像中树冠所在的区域作为对焦区域。
S102,对目标对象进行拍摄获取图像数据。
具体地,用户可以通过按下快门发送拍摄指令,当检测到拍摄指令后,对目标对象进行拍摄,得到图像数据。需要说明的是,实际拍摄过程中,对目标对象拍摄后,可以将目标对象的图像数据缓存到指定的缓存区(Buffer)内,例如,可以将拍摄装置上的闪存卡等存储设备作为缓存区。
S103,在利用图像数据成像过程中,在所成图像中显示目标ROI区域的所在位置。
为了能够向用户在拍摄界面显示出目标对象的图像,在获取到目标对象的图像数据后,可以利用图像数据进行成像。实际应用中,当对焦区域设置不当时,拍摄装置拍摄的图像可能会出现失焦或者拍糊的现象。为了能够识别出上述现象是否由对焦区域设置不当造成,本实施例中,可以在所成的图像中将目标ROI区域标记出来。具体地,为目标ROI区域绘制边框,确定目标ROI区域在所成图像中的目标位置,然后在所成图像中的目标位置上显示携带边框的目标ROI区域。
本实施例的对焦区域显示方法,在利用图像数据成像时,可以将对焦时确定的对焦区域在所成的图像中标记出来。当所成的图像失焦或者拍糊时,若标记出的ROI区域的所在位置与对焦时ROI区域的位置未发生变化,用户则可以很快地识别出该图像失焦或者拍糊的原因并不是ROI区域设置不当造成的。尤其针对人脸成像,如果标记出的ROI区域的边框位于人脸区域内,而图像仍出现模糊或者失焦,就可以排除ROI区域设置不当这一因素。
为了更加清楚地说明为目标ROI区域绘制边框的过程,本申请实施例提出了另一种对焦区域显示方法,图2为本申请实施例提供的另一种对焦区域显示方法的流程示意图。
如图2所示,在上述如图1所示实施例的基础上,所述为目标ROI区域绘制边框具体包括以下步骤:
S201,从目标ROI区域的第一行像素开始选择预设行数作为边框的第一横边所在位置。
具体地,以目标ROI区域左上角的像素所在的行为第一行,所在的列为第一列。本实施例中,以第一行像素即左上角像素所在行开始,选择预设行数作为边框的第一横边。
S202,从目标ROI区域的倒数第一行像素开始选择预设行数作为边框的第二横边所在位置。
具体地,以目标ROI区域右下角的像素所在的行为倒数第一行,所在的列为倒数第一列。本实施例中,以倒数第一行像素即右下角像素所在行开始,选择预设行数作为边框的第二横边。
需要说明的是,通过预设行数可以确定边框每个横边的宽度,例如,预设的行数可以为4行,相应地每个横边的宽度为4个像素的宽度。
S203,从目标ROI区域的第一列像素开始选择预设列数作为边框的第一竖边所在位置。
本实施例中,以第一列像素即左上角像素所在列开始,选择预设列数作为边框的第一竖边。
S204,从目标ROI区域的倒数第一列像素开始选择预设列数作为边框的第二竖边所在位置。
本实施例中,以倒数第一列像素即右下角像素所在列开始,选择预设列数作为边框的第二竖边。
需要说明的是,通过预设列数可以确定边框每个竖边的宽度,例如,预设的列数可以为4列,相应地每个竖边的宽度为4个像素的宽度。
如图3所示,其为本申请实施例提供的边框确定的示意图。图中,左上角的像素所在的行为第一行,所在的列为第一列;右下角像素所在的行为倒数第一行,所在的列为倒数第一列。图中将从第一行开始的4行作为边框的第一横边,从第一列开始的4列作为边框的第一竖边,从倒数第一行开始的4行作为边框的第二横边,从倒数第一列开始的4列作为边框的第二竖边。在图3中,为了与目标ROI区域中的其他像素区分,将边框中每个边所占的像素填充成灰色。
S205,按照预设的格式绘制第一横边、第二横边、第一竖边和第二竖边,形成目标ROI区域的边框。
具体地,可以预先设置一个格式,在确定了边框的横边和竖边后,可以按照预设的格式来绘制第一横边、第二横边、第一竖边和第二竖边,进而得到目标ROI区域的边框。格式可以包括边框的颜色、边框线条的形状等。
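为便于理解,上述S201~S205绘制边框的过程可以用如下示意性的 Python 草图说明(并非本申请的实际实现;其中的函数名、边框宽度与颜色均为假设,以 numpy 数组表示图像):

```python
import numpy as np

def draw_roi_border(image, roi, border=4, color=(255, 255, 255)):
    """为目标ROI区域绘制边框的示意实现。

    image: HxWx3 的 numpy 图像数组;roi: (top, left, height, width);
    border: 每条边所占的像素行数/列数,对应文中的预设行数/预设列数。
    """
    top, left, h, w = roi
    out = image.copy()
    # 第一横边:从ROI第一行像素开始的 border 行
    out[top:top + border, left:left + w] = color
    # 第二横边:从ROI倒数第一行像素开始(向上)的 border 行
    out[top + h - border:top + h, left:left + w] = color
    # 第一竖边:从ROI第一列像素开始的 border 列
    out[top:top + h, left:left + border] = color
    # 第二竖边:从ROI倒数第一列像素开始(向左)的 border 列
    out[top:top + h, left + w - border:left + w] = color
    return out
```

此处仅以纯色填充示意"预设的格式";实际的格式还可以包括线型、虚实等。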
举例说明,当目标对象为一个人物时,可以在将摄像头对准目标对象后,在对焦的过程中,将目标对象的人脸自动识别为目标ROI区域,然后拍摄得到目标对象的图像,即本实施例中的所成图像。为了能够在图像中显示出对焦时确定的目标ROI区域的所在位置,可以按照上述方法将目标ROI区域的边框绘制出来,然后将边框显示在图像中。如图4所示,为本申请实施例的一个应用示意图。在对人脸进行拍摄时,人脸区域往往被识别为ROI区域。在图4中,左侧的白色边框标记出的目标ROI区域为人脸区域,从而可以形成清晰的图像;右侧的白色边框偏离了人脸区域,形成的图像出现失焦或者模糊的现象。
进一步地,为了能够在所成图像中准确地显示出目标ROI区域的边框,在所成图像中显示目标ROI区域的边框之前,需要确定目标ROI区域在所成图像中的位置。图5为本申请实施例提供的一种确定目标ROI区域在所成图像中的目标位置的流程示意图。
所述确定目标ROI区域在所成图像中的目标位置,具体包括以下步骤:
S301,获取所成图像中起始像素的第一坐标以及目标ROI区域中各像素的第二坐标。
具体地,可以根据所成图像的图像数据获取到所成图像中起始像素的第一坐标,该起始像素可以为所成图像中左上角的像素。进一步地,可以从目标ROI区域的图像数据获取到目标ROI区域中每个像素的第二坐标。
S302,根据第一坐标和第二坐标,确定目标ROI区域在所成图像中的目标位置。
在获取到目标ROI区域中每个像素的第二坐标后,就可以计算出每个像素相对于所成图像起始像素的第一坐标的位置关系,该位置关系可以包括与起始像素之间的距离,以及与该起始像素之间的方位关系。
在得到目标ROI区域中每个像素的第二坐标相对于第一坐标的位置关系后,相应地,确定出了目标ROI区域在所成图像的目标位置。
举例说明,当设定所成图像的起始像素的第一坐标为(0,0),而目标ROI区域的一个像素坐标为(15,20),则可以确定出该像素与所成图像的起始像素的位置关系如图6所示,图中白色圆点表示所成图像中的起始像素,灰色圆点表示目标ROI区域中的一个像素。
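作为理解上述S301~S302坐标计算的一个示例,下面给出一个示意性的 Python 草图(非本申请的实际实现;函数名与坐标约定均为假设,设第一坐标为图像左上角起始像素,x 向右增大、y 向下增大):

```python
from math import hypot

def pixel_offset(first_coord, second_coord):
    """计算目标ROI区域中某一像素(第二坐标)相对于
    所成图像起始像素(第一坐标)的位置关系:偏移量、距离与方位。"""
    x0, y0 = first_coord
    x, y = second_coord
    dx, dy = x - x0, y - y0
    distance = hypot(dx, dy)  # 与起始像素之间的欧氏距离
    # 简单的方位描述,作为"方位关系"的一种示意
    horiz = "右" if dx > 0 else ("左" if dx < 0 else "")
    vert = "下" if dy > 0 else ("上" if dy < 0 else "")
    return {"offset": (dx, dy), "distance": distance,
            "direction": (vert + horiz) or "重合"}
```

例如,起始像素第一坐标为(0, 0)、目标ROI区域某像素第二坐标为(15, 20)时,偏移量为(15, 20),距离为25。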
为了使所获取的目标ROI区域更加准确、拍摄到的图像更加清晰,在获取目标ROI区域之前,需要判断待拍摄的目标对象是否处于稳定状态。图7为本申请实施例提供的从图像传感器所采集的目标对象的预览图像中获取目标ROI区域的流程示意图。
如图7所示,所述从图像传感器所采集的目标对象的预览图像中获取目标ROI区域,具体包括以下步骤:
S401,开启摄像装置。
S402,统计图像传感器所采集的预览图像的图像帧数。
S403,当图像帧数到达预设帧数后,根据采集的所有预览图像,获取目标对象的移动速度。
S404,如果移动速度在预设范围内,则确定目标对象处于稳定状态。
实际应用中,当开启摄像头之后,需要等待目标对象稳定后,才会触发对焦。一般情况下,目标对象稳定需要经过3~5帧预览图像的时长,具体的图像帧数与用户所使用的终端的性能有关。
本实施例中,当用户开启了摄像头后,在拍摄时将摄像头对准待拍摄的目标对象,摄像头中的图像传感器就可以预先采集目标对象,形成目标对象的预览图像,并且可以统计图像传感器所采集的预览图像的图像帧数。
进一步地,目标对象不一定处于完全静止的状态,当图像帧数到达预设帧数后,可以对所有的预览图像进行分析,获取相邻两帧预览图像之间的差异,进而获取目标对象的移动速度。
在获取到目标对象的移动速度后,可以判断移动速度是否在预设范围内。如果在该预设范围内,可以认为目标对象的移动不会影响拍摄效果,即可确定目标对象处于稳定状态。
S405,将目标对象处于稳定状态时的预览图像中的ROI区域作为目标ROI区域。
为了得到清晰的图像,可以在目标对象处于稳定状态时,从此时图像传感器所采集的预览图像中获取目标ROI区域,也就是说,将此时图像传感器所采集的预览图像中的ROI区域作为目标ROI区域。
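上述S401~S405的稳定状态判断可以用如下示意性的 Python 草图说明(非本申请的实际实现;其中的预设帧数、预设范围以及以相邻帧间平均像素差近似移动速度的做法均为假设):

```python
import numpy as np

PRESET_FRAMES = 5        # 预设帧数(假设值)
SPEED_THRESHOLD = 2.0    # 预设范围的上限(假设值,单位:帧间平均像素差)

def is_stable(preview_frames):
    """根据连续采集的预览图像判断目标对象是否处于稳定状态。

    preview_frames: 灰度预览帧(numpy 数组)列表。
    以相邻两帧预览图像的平均像素差异近似目标对象的移动速度。
    """
    if len(preview_frames) < PRESET_FRAMES:
        return False  # 图像帧数尚未到达预设帧数
    diffs = [
        np.abs(a.astype(float) - b.astype(float)).mean()
        for a, b in zip(preview_frames[:-1], preview_frames[1:])
    ]
    # 若每一对相邻帧的差异都在预设范围内,则认为目标对象处于稳定状态
    return max(diffs) <= SPEED_THRESHOLD
```

实际实现中,移动速度也可以通过运动估计、特征点跟踪等方式获取,这里仅取帧间差异作为最简单的近似。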
进一步地,目标对象为人脸时,在成像图像中显示目标ROI区域的所在位置后,可以根据ROI区域的位置和成像图像中人脸的清晰程度,确定是否因ROI区域确定不准确导致图像模糊。为此,本申请实施例提出了又一种对焦区域显示方法,图8为本申请实施例提供的又一种对焦区域显示方法的流程示意图。
如图8所示,在上述如图1所示实施例的基础上,步骤103之后,还可以包括如下步骤:
步骤501,在目标对象为人脸的情况下,识别所成图像中呈现人脸的区域是否模糊。
作为一种可能的实现方式,通过人脸识别算法识别出成像图像中的人脸区域,获取人脸区域的像素值,根据像素值获取人脸区域的色彩和亮度,并与预设的清晰人脸对应的色彩和亮度的阈值比较,若小于阈值,则认为人脸区域模糊。
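上述模糊识别方式可以用如下示意性的 Python 草图说明(非本申请的实际实现;其中的阈值为假设值,并以人脸区域像素的均值与标准差分别近似亮度与色彩/细节变化):

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 60.0   # 清晰人脸对应的亮度阈值(假设值)
CONTRAST_THRESHOLD = 20.0     # 清晰人脸对应的色彩/细节变化阈值(假设值)

def face_region_is_blurry(image, face_box):
    """对所成图像中呈现人脸的区域做粗略的模糊判断。

    image: HxW 灰度图;face_box: (top, left, height, width),
    假设已由人脸识别算法给出。取人脸区域的像素值,统计其亮度
    与变化幅度,小于预设阈值则认为人脸区域模糊。
    """
    top, left, h, w = face_box
    region = image[top:top + h, left:left + w].astype(float)
    brightness = region.mean()  # 以均值近似区域亮度
    contrast = region.std()     # 以标准差近似区域内色彩/细节变化
    return brightness < BRIGHTNESS_THRESHOLD or contrast < CONTRAST_THRESHOLD
```

实际产品中通常会采用更稳健的清晰度度量(如梯度能量、拉普拉斯方差等),此处仅示意"与预设阈值比较"的思路。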
步骤502,若呈现的人脸区域模糊,根据目标ROI区域和呈现的人脸区域之间的重合度,判断是否由对焦误差导致呈现的人脸区域模糊。
具体地,若呈现人脸的区域是模糊的,则根据在成像图像中显示的目标ROI区域和识别出的人脸区域进行区域重合度计算。这是因为在人脸拍摄时,默认将识别出的人脸区域作为ROI区域:若ROI区域和人脸区域的重合度较低,小于预设阈值,则说明是因ROI区域识别不准确,也就是说对焦误差导致拍摄得到的人脸区域模糊,如图4中右图所示,从而可帮助用户确认拍摄得到的人脸图像是因对焦不准导致的人脸区域模糊;若ROI区域和人脸区域的重合度较大,则说明人脸图像模糊不是因为对焦误差导致的。
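上述重合度判断可以用如下示意性的 Python 草图说明(非本申请的实际实现;其中以交并比作为重合度的一种度量,函数名与阈值均为假设):

```python
def overlap_ratio(roi_box, face_box):
    """计算目标ROI区域与呈现的人脸区域之间的重合度。

    两个区域均为 (top, left, height, width)。
    返回交集面积占并集面积的比例(即交并比,作为重合度的一种度量)。
    """
    t1, l1, h1, w1 = roi_box
    t2, l2, h2, w2 = face_box
    inter_h = max(0, min(t1 + h1, t2 + h2) - max(t1, t2))
    inter_w = max(0, min(l1 + w1, l2 + w2) - max(l1, l2))
    inter = inter_h * inter_w
    union = h1 * w1 + h2 * w2 - inter
    return inter / union if union else 0.0

OVERLAP_THRESHOLD = 0.5  # 预设的重合度阈值(假设值)

def blur_caused_by_focus_error(roi_box, face_box):
    """重合度低于预设阈值时,认为人脸模糊由对焦误差(ROI识别不准)导致。"""
    return overlap_ratio(roi_box, face_box) < OVERLAP_THRESHOLD
```

重合度也可以用交集占人脸区域面积的比例等其他方式定义,阈值需按实际效果标定。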
为了实现上述实施例,本申请还提出一种对焦区域显示装置。
图9为本申请实施例提供的对焦区域显示装置的结构示意图。如图9所示,该对焦区域显示装置包括:区域获取模块11、图像获取模块12和显示模块13。
区域获取模块11,用于从图像传感器所采集的目标对象的预览图像中获取目标ROI区域。
图像获取模块12,用于对所述目标对象进行拍摄获取图像数据。
显示模块13,用于在利用所述图像数据成像过程中,在所成图像中显示所述目标ROI区域的所在位置。
可选地,在本申请实施例一种可能的实现方式中,如图10所示,该对焦区域显示装置的显示模块13还包括:绘制单元131、确定单元132和显示单元133。
其中,绘制单元131,用于为所述目标ROI区域绘制边框。
确定单元132,用于确定所述目标ROI区域在所述所成图像中的目标位置。
显示单元133,用于在所述所成图像中的所述目标位置上显示所述目标ROI区域的所述边框。
进一步地,绘制单元131,具体用于:
从所述目标ROI区域的第一行像素开始选择预设行数作为所述边框的第一横边所在位置;
从所述目标ROI区域的倒数第一行像素开始选择预设行数作为所述边框的第二横边所在位置;
从所述目标ROI区域的第一列像素开始选择预设列数作为所述边框的第一竖边所在位置;
从所述目标ROI区域的倒数第一列像素开始选择预设列数作为所述边框的第二竖边所在位置;
按照预设的格式绘制所述第一横边、所述第二横边、所述第一竖边和所述第二竖边,形成所述边框。
进一步地,确定单元132,具体用于:
获取所述所成图像中起始像素的第一坐标以及所述目标ROI区域中各像素的第二坐标;
根据所述第一坐标和所述第二坐标,确定所述目标ROI区域在所述所成图像中的所述目标位置。
进一步地,所述的对焦区域显示装置还包括:状态确定模块14。
状态确定模块14,用于根据连续采集的符合预设帧数的预览图像,确定所述目标对象处于稳定状态。
作为一种可能的实现方式,状态确定模块14,具体可以用于在所述图像传感器对准所述目标对象后,统计所述图像传感器所采集的预览图像的图像帧数,当所述图像帧数到达预设帧数后,根据采集的所有预览图像,确定所述目标对象的移动速度,如果所述移动速度在预设范围内,则确定所述目标对象处于稳定状态。
作为一种可能的实现方式,上述区域获取模块11,具体用于将所述目标对象处于稳定状态时的所述预览图像中的ROI区域作为所述目标ROI区域。
作为一种可能的实现方式,ROI区域包括对焦区域。
进一步地,所述的对焦区域显示装置还包括:模糊确定模块15。
模糊确定模块15,用于在所述目标对象为人脸的情况下,识别所述所成图像中呈现的人脸区域是否模糊;
若所述呈现的人脸区域模糊,根据所述目标ROI区域和所述呈现的人脸区域之间的重合度,判断是否由对焦误差导致所述呈现的人脸区域模糊。
本实施例的对焦区域显示装置,在利用图像数据成像时,可以将对焦时确定的对焦区域在所成的图像中标记出来。当所成的图像失焦或者拍糊时,若标记出的ROI区域的所在位置与对焦时ROI区域的位置未发生变化,用户则可以很快地识别出该图像失焦或者拍糊的原因并不是ROI区域设置不当造成的。尤其针对人脸成像,如果标记出的ROI区域的边框位于人脸区域内,而图像仍出现模糊或者失焦,就可以排除ROI区域设置不当这一因素。
为了实现上述实施例,本申请还提出一种终端设备。
图11为本申请实施例提供的终端设备的结构示意图。
如图11所示,该终端设备包括:壳体21和位于壳体21内的处理器22、存储器23和显示界面24,其中,处理器22通过读取存储器23中存储的可执行程序代码来运行与可执行程序代码对应的程序,以用于执行以下步骤:
从图像传感器所采集的目标对象的预览图像中获取目标ROI区域;
对所述目标对象进行拍摄获取图像数据;
在利用所述图像数据成像过程中,在所成图像中显示所述目标ROI区域的所在位置。
需要说明的是,前述对对焦区域显示方法实施例的解释说明也适用于本实施例的终端设备,其实现原理类似,此处不再赘述。
本实施例的终端设备,在利用图像数据成像时,可以将对焦时确定的对焦区域在所成的图像中标记出来。当所成的图像失焦或者拍糊时,若标记出的ROI区域的所在位置与对焦时ROI区域的位置未发生变化,用户则可以很快地识别出该图像失焦或者拍糊的原因并不是ROI区域设置不当造成的。尤其针对人脸成像,如果标记出的ROI区域的边框位于人脸区域内,而图像仍出现模糊或者失焦,就可以排除ROI区域设置不当这一因素。
为了实现上述实施例,本申请还提出一种计算机程序产品,当计算机程序产品中的指令由处理器执行时,执行如前述实施例所述的对焦区域显示方法。
为了实现上述实施例,本申请还提出一种非临时性计算机可读存储介质,其上存储有计算机程序,当该计算机程序被处理器执行时能够实现如前述实施例所述的对焦区域显示方法。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现定制逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,则和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或它们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本申请各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (20)

  1. 一种对焦区域显示方法,其特征在于,包括:
    从图像传感器所采集的目标对象的预览图像中获取目标ROI区域;
    对所述目标对象进行拍摄获取图像数据;
    在利用所述图像数据成像过程中,在所成图像中显示所述目标ROI区域的所在位置。
  2. 根据权利要求1所述的对焦区域显示方法,其特征在于,所述在利用所述图像数据成像过程中,在所成图像中显示所述目标ROI区域的所在位置,包括:
    为所述目标ROI区域绘制边框;
    确定所述目标ROI区域在所述所成图像中的目标位置;
    在所述所成图像中的所述目标位置上显示所述目标ROI区域的所述边框。
  3. 根据权利要求2所述的对焦区域显示方法,其特征在于,所述为所述目标ROI区域绘制边框,包括:
    从所述目标ROI区域的第一行像素开始选择预设行数作为所述边框的第一横边所在位置;
    从所述目标ROI区域的倒数第一行像素开始选择预设行数作为所述边框的第二横边所在位置;
    从所述目标ROI区域的第一列像素开始选择预设列数作为所述边框的第一竖边所在位置;
    从所述目标ROI区域的倒数第一列像素开始选择预设列数作为所述边框的第二竖边所在位置;
    按照预设的格式绘制所述第一横边、所述第二横边、所述第一竖边和所述第二竖边,形成所述边框。
  4. 根据权利要求2或3所述的对焦区域显示方法,其特征在于,所述确定所述目标ROI区域在所述所成图像中的目标位置,包括:
    获取所述所成图像中起始像素的第一坐标以及所述目标ROI区域中各像素的第二坐标;
    根据所述第一坐标和所述第二坐标,确定所述目标ROI区域在所述所成图像中的目标位置。
  5. 根据权利要求1-4任一项所述的对焦区域显示方法,其特征在于,所述从图像传感器所采集的目标对象的预览图像中获取目标ROI区域之前,还包括:
    根据连续采集的符合预设帧数的预览图像,确定所述目标对象处于稳定状态。
  6. 根据权利要求5所述的对焦区域显示方法,其特征在于,所述根据连续采集的符合预设帧数的预览图像,确定所述目标对象处于稳定状态,包括:
    开启摄像装置;
    统计所述图像传感器所采集的预览图像的图像帧数;
    当所述图像帧数到达预设帧数后,根据采集的所有预览图像,获取所述目标对象的移动速度;
    如果所述移动速度在预设范围内,则确定所述目标对象处于稳定状态。
  7. 根据权利要求6所述的对焦区域显示方法,其特征在于,所述从图像传感器所采集的目标对象的预览图像中获取目标ROI区域,包括:
    将所述目标对象处于稳定状态时的所述预览图像中的ROI区域作为所述目标ROI区域。
  8. 根据权利要求1-7任一项所述的对焦区域显示方法,其特征在于,所述ROI区域包括对焦区域。
  9. 根据权利要求8所述的对焦区域显示方法,其特征在于,所述在所成图像中显示所述目标ROI区域的所在位置之后,还包括:
    在所述目标对象为人脸的情况下,对所述所成图像中呈现的人脸区域识别是否模糊;
    若所述呈现的人脸区域模糊,根据所述目标ROI区域和所述呈现的人脸区域之间的重合度,判断是否由对焦误差导致所述呈现的人脸区域模糊。
  10. 一种对焦区域显示装置,其特征在于,包括:
    区域获取模块,用于从图像传感器所采集的目标对象的预览图像中获取目标ROI区域;
    图像获取模块,用于对所述目标对象进行拍摄获取图像数据;
    显示模块,用于在利用所述图像数据成像过程中,在所成图像中显示所述目标ROI区域的所在位置。
  11. 根据权利要求10所述的对焦区域显示装置,其特征在于,所述显示模块,包括:
    绘制单元,用于为所述目标ROI区域绘制边框;
    确定单元,用于确定所述目标ROI区域在所述所成图像中的目标位置;
    显示单元,用于在所述所成图像中的所述目标位置上显示所述目标ROI区域的所述边框。
  12. 根据权利要求11所述的对焦区域显示装置,其特征在于,所述绘制单元,具体用于:
    从所述目标ROI区域的第一行像素开始选择预设行数作为所述边框的第一横边所在位置;
    从所述目标ROI区域的倒数第一行像素开始选择预设行数作为所述边框的第二横边所在位置;
    从所述目标ROI区域的第一列像素开始选择预设列数作为所述边框的第一竖边所在位置;
    从所述目标ROI区域的倒数第一列像素开始选择预设列数作为所述边框的第二竖边所在位置;
    按照预设的格式绘制所述第一横边、所述第二横边、所述第一竖边和所述第二竖边,形成所述边框。
  13. 根据权利要求11或12所述的对焦区域显示装置,其特征在于,所述确定单元,具体用于:
    获取所述所成图像中起始像素的第一坐标以及所述目标ROI区域中各像素的第二坐标;
    根据所述第一坐标和所述第二坐标,确定所述目标ROI区域在所述所成图像中的目标位置。
  14. 根据权利要求10-13任一项所述的对焦区域显示装置,其特征在于,所述装置,还包括:
    状态确定模块,用于根据连续采集的符合预设帧数的预览图像,确定所述目标对象处于稳定状态。
  15. 根据权利要求14所述的对焦区域显示装置,其特征在于,所述状态确定模块,具体用于:
    开启摄像装置;
    统计所述图像传感器所采集的预览图像的图像帧数;
    当所述图像帧数到达预设帧数后,根据采集的所有预览图像,获取所述目标对象的移动速度;
    如果所述移动速度在预设范围内,则确定所述目标对象处于稳定状态。
  16. 根据权利要求15所述的对焦区域显示装置,其特征在于,所述区域获取模块,具体用于:
    将所述目标对象处于稳定状态时的所述预览图像中的ROI区域作为所述目标ROI区域。
  17. 根据权利要求10-16任一项所述的对焦区域显示装置,其特征在于,所述ROI区域包括对焦区域。
  18. 一种终端设备,其特征在于,包括以下一个或多个组件:壳体和位于所述壳体内的处理器、存储器和显示界面,其中,所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于实现如权利要求1-9中任一所述的对焦区域显示方法。
  19. 一种计算机程序产品,其特征在于,当所述计算机程序产品中的指令由处理器执行时,实现如权利要求1-9中任一项所述的对焦区域显示方法。
  20. 一种非临时性计算机可读存储介质,其上存储有计算机程序,其特征在于,该计算机程序被处理器执行时,实现如权利要求1-9中任一项所述的对焦区域显示方法。
PCT/CN2018/091227 2017-06-16 2018-06-14 对焦区域显示方法、装置及终端设备 WO2018228466A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18817013.8A EP3627822B1 (en) 2017-06-16 2018-06-14 Focus region display method and apparatus, and terminal device
US16/706,660 US11283987B2 (en) 2017-06-16 2019-12-06 Focus region display method and apparatus, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710458367.1A CN107295252B (zh) 2017-06-16 2017-06-16 对焦区域显示方法、装置及终端设备
CN201710458367.1 2017-06-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/706,660 Continuation US11283987B2 (en) 2017-06-16 2019-12-06 Focus region display method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2018228466A1 true WO2018228466A1 (zh) 2018-12-20

Family

ID=60097984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091227 WO2018228466A1 (zh) 2017-06-16 2018-06-14 对焦区域显示方法、装置及终端设备

Country Status (4)

Country Link
US (1) US11283987B2 (zh)
EP (1) EP3627822B1 (zh)
CN (1) CN107295252B (zh)
WO (1) WO2018228466A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095481A (zh) * 2023-01-13 2023-05-09 杭州微影软件有限公司 一种辅助聚焦方法、装置、电子设备及存储介质

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN107295252B (zh) * 2017-06-16 2020-06-05 Oppo广东移动通信有限公司 对焦区域显示方法、装置及终端设备
CN112235503A (zh) * 2019-07-15 2021-01-15 北京字节跳动网络技术有限公司 一种对焦测试方法、装置、计算机设备及存储介质
CN112733575A (zh) * 2019-10-14 2021-04-30 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及存储介质
CN111914739A (zh) * 2020-07-30 2020-11-10 深圳创维-Rgb电子有限公司 智能跟随方法、装置、终端设备和可读存储介质
CN117652152A (zh) * 2022-06-02 2024-03-05 北京小米移动软件有限公司 一种对焦方法、装置及存储介质

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103024265A (zh) * 2011-09-21 2013-04-03 奥林巴斯映像株式会社 摄像装置和摄像装置的摄像方法
CN103780824A (zh) * 2012-10-19 2014-05-07 爱国者数码科技有限公司 调整影像构图的数码影像摄取装置及影像构图调整的方法
CN103780826A (zh) * 2012-10-19 2014-05-07 爱国者数码科技有限公司 提示影像拍摄构图效果的数码影像摄取装置
US20150022713A1 (en) * 2013-07-16 2015-01-22 Canon Kabushiki Kaisha Imaging apparatus and imaging method
CN105872363A (zh) * 2016-03-28 2016-08-17 广东欧珀移动通信有限公司 人脸对焦清晰度的调整方法及调整装置
CN107295252A (zh) * 2017-06-16 2017-10-24 广东欧珀移动通信有限公司 对焦区域显示方法、装置及终端设备

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP4172507B2 (ja) * 2006-07-13 2008-10-29 ソニー株式会社 撮像装置、および撮像装置制御方法、並びにコンピュータ・プログラム
JP4796007B2 (ja) * 2007-05-02 2011-10-19 富士フイルム株式会社 撮像装置
JP5387949B2 (ja) * 2009-03-03 2014-01-15 株式会社リコー 撮像装置、再生表示装置、撮像記録方法および再生表示方法
US10237466B2 (en) * 2014-01-17 2019-03-19 Sony Corporation Recognition of degree of focus of an image
JP6364782B2 (ja) * 2014-01-20 2018-08-01 ソニー株式会社 フォーカス制御装置、フォーカス制御方法、カメラ装置およびカメラ装置におけるフォーカス制御方法
CN104270562B (zh) 2014-08-15 2017-10-17 广东欧珀移动通信有限公司 一种拍照对焦方法和拍照对焦装置
CN104460185A (zh) 2014-11-28 2015-03-25 小米科技有限责任公司 自动对焦方法及装置
CN106060373B (zh) * 2015-04-03 2019-12-20 佳能株式会社 焦点检测装置及其控制方法
CN106161962B (zh) 2016-08-29 2018-06-29 广东欧珀移动通信有限公司 一种图像处理方法及终端


Non-Patent Citations (1)

Title
See also references of EP3627822A4 *


Also Published As

Publication number Publication date
EP3627822A1 (en) 2020-03-25
EP3627822B1 (en) 2023-07-12
US20200112685A1 (en) 2020-04-09
US11283987B2 (en) 2022-03-22
EP3627822A4 (en) 2020-06-03
CN107295252B (zh) 2020-06-05
CN107295252A (zh) 2017-10-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18817013

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018817013

Country of ref document: EP

Effective date: 20191216