WO2020038109A1 - Photographing method, apparatus, terminal and computer-readable storage medium - Google Patents

Photographing method, apparatus, terminal and computer-readable storage medium

Info

Publication number
WO2020038109A1
Authority
WO
WIPO (PCT)
Prior art keywords
photographing
terminal
camera
frame image
preset
Prior art date
Application number
PCT/CN2019/093682
Other languages
English (en)
French (fr)
Inventor
张光辉
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020038109A1 publication Critical patent/WO2020038109A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Definitions

  • the present application belongs to the field of photographing technology, and particularly relates to a photographing method, device, terminal, and computer-readable storage medium.
  • at present, a terminal such as a mobile phone generally needs to open a photo preview first, select the scene to be photographed during the preview, focus on the photographic object in that scene, and then trigger a photographing instruction to generate a photo.
  • the embodiments of the present application provide a photographing method, a device, a terminal, and a computer-readable storage medium, which can solve the technical problem that the terminal cannot quickly take clear photos.
  • a first aspect of the embodiments of the present application provides a photographing method, including:
  • receiving a camera startup instruction; detecting, according to the camera startup instruction, whether the terminal is in a preset motion state; and, if the terminal is detected to be in the preset motion state, controlling the camera to perform autofocus, acquiring a successfully focused photographing frame image, and outputting prompt information indicating that photographing is complete.
  • a second aspect of the embodiments of the present application provides a photographing device, including:
  • a receiving unit configured to receive a camera startup instruction;
  • a detection unit configured to detect, according to the camera startup instruction, whether the terminal is in a preset motion state; and
  • a photographing unit configured to, if it is detected that the terminal is in the preset motion state, control the camera to perform autofocus, acquire a successfully focused photographing frame image, and output prompt information indicating that photographing is complete.
  • a third aspect of the embodiments of the present application provides a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • when the processor executes the computer program, the following steps are implemented:
  • receiving a camera startup instruction; detecting, according to the camera startup instruction, whether the terminal is in a preset motion state; and, if the terminal is detected to be in the preset motion state, controlling the camera to perform autofocus, acquiring a successfully focused photographing frame image, and outputting prompt information indicating that photographing is complete.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, and the computer program is executed by a processor to implement the steps of the foregoing method.
  • FIG. 1 is a schematic flowchart of an implementation of a photographing method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a specific implementation of controlling a camera to perform autofocus according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of acquiring position information of a target photographing object according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a specific implementation of obtaining a photographed frame image that is successfully focused according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of a photographing device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the term “if” can be construed as “when”, “once”, “in response to a determination” or “in response to a detection”, depending on the context.
  • similarly, the phrase “if determined” or “if [the described condition or event] is detected” can be interpreted, depending on the context, to mean “once determined”, “in response to the determination”, “once [the described condition or event] is detected” or “in response to detecting [the described condition or event]”.
  • at present, a terminal such as a mobile phone generally needs to open a photo preview first, select the scene to be photographed during the preview, focus on the photographic object in that scene, and then trigger a photographing instruction to generate a photo.
  • for example, during a training session, the user may want to quickly photograph the courseware shown on a screen without disturbing his or her own attention or that of others.
  • in that case the user has to tap the shutter button immediately after focusing and quickly pull the phone back, but the photo may then come out blurred because the phone is retracted too fast, so a clear photo cannot be captured quickly.
  • in the embodiments of the present application, whether the terminal is in a preset motion state is detected according to the received camera startup instruction; when the terminal is detected to be in the preset motion state, the camera is controlled to perform autofocus, a successfully focused photographing frame image is acquired to complete the photographing, and prompt information indicating that photographing is complete is output.
  • in other words, the present application does not wait for the user to trigger a photographing instruction before taking the photo; instead, once focusing is finished, the successfully focused photographing frame image is acquired immediately to complete the photographing, and prompt information is output when the terminal finishes taking the photo.
  • this lets the user know in time that the terminal has finished taking the photo, which effectively prevents the user from changing the motion state of the terminal before the photo has been taken and thereby blurring it, so the terminal can quickly take clear photos and photographing efficiency is improved.
  • FIG. 1 shows a schematic flowchart of a photographing method according to an embodiment of the present application. The method is applied to a terminal, can be executed by a photographing device configured on the terminal, is applicable to situations where photographing efficiency needs to be improved, and includes steps 101 to 103.
  • the terminal includes a terminal device equipped with a photographing device, such as a smart phone, a tablet computer, and a learning machine.
  • the terminal device may be installed with applications such as a photographing application, a browser, and WeChat.
  • step 101 a camera start instruction is received.
  • the above camera startup instruction includes a camera startup instruction triggered by the user tapping a photographing application icon on the system desktop, a camera startup instruction triggered by the user pressing a physical button, a camera startup instruction triggered by the user's voice, or a camera startup instruction triggered in another manner.
  • step 102 it is detected whether the terminal is in a preset motion state according to the camera startup instruction.
  • the preset motion state refers to a state where the terminal is located at a shooting position and is relatively stationary.
  • the detecting whether the terminal is in a preset motion state according to the camera startup instruction includes: detecting whether the displacements of the terminal in the three directions of the X axis, Y axis and Z axis are all less than a first preset threshold; if the displacements of the terminal in the three directions of the X axis, Y axis and Z axis are all detected to be less than the first preset threshold, it is confirmed that the terminal is in the preset motion state.
  • for example, the gyroscope or accelerometer on the terminal is used to detect the displacement of the terminal in the three directions of the X, Y and Z axes. If the displacement of the terminal on the X, Y or Z axis is detected to be greater than or equal to the first preset threshold, the terminal has not yet reached the optimal shooting position, that is, the user is still moving the terminal to find what he or she considers the best shooting position; if the displacements of the terminal in all three directions are detected to be less than the first preset threshold, the terminal is already at the optimal shooting position and can enter the focusing state before shooting.
  • the first preset threshold may be set according to practical experience.
  • the first preset threshold may be set to 1 mm to 3 mm.
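  • As a minimal sketch of this motion-state check (assuming per-axis displacement estimates in millimetres are already available, for example by integrating accelerometer samples; the helper name read_displacement_mm and the 2 mm value are illustrative assumptions, not part of the original disclosure):

      FIRST_PRESET_THRESHOLD_MM = 2.0  # within the 1 mm to 3 mm range suggested above

      def is_in_preset_motion_state(read_displacement_mm) -> bool:
          """Return True when the displacement on X, Y and Z is below the threshold."""
          dx, dy, dz = read_displacement_mm()
          return all(abs(d) < FIRST_PRESET_THRESHOLD_MM for d in (dx, dy, dz))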
  • in step 103, if it is detected that the terminal is in the preset motion state, the camera is controlled to perform autofocus, a successfully focused photographing frame image is acquired, and prompt information indicating that photographing is complete is output.
  • the above-mentioned photographing frame image refers to a frame image generated by the camera by collecting an external light signal according to a photographing instruction, and the photographing frame image is used to generate a final photo.
  • in the embodiments of the present application, when it is detected that the terminal is in the preset motion state, the camera is immediately controlled to perform autofocus and a successfully focused photographing frame image is acquired.
  • at the same time, prompt information is output to remind the user that the photo has been taken, so that the user can finish photographing as quickly as possible and obtain a clear photo; this effectively prevents the user from changing the motion state of the terminal before the photo has been taken and thereby blurring it, enables the terminal to quickly take clear photos, and improves photographing efficiency.
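  • Putting steps 101 to 103 together, a hedged end-to-end sketch might look as follows (the camera, is_steady and notify interfaces are hypothetical stand-ins for the terminal's camera driver, the motion-state check and the prompt output; they are not named in the original):

      import time

      def quick_capture(camera, is_steady, notify):
          """Sketch of steps 101-103: wait until the terminal is steady, autofocus,
          grab the successfully focused frame and prompt the user."""
          while not is_steady():                   # step 102: preset motion state?
              time.sleep(0.03)                     # roughly one frame interval at 30 fps
          camera.autofocus()                       # step 103: control the camera to autofocus
          frame = camera.capture_focused_frame()   # successfully focused photographing frame image
          notify("Photo captured")                 # prompt information that photographing is complete
          return frame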
  • controlling the camera to perform autofocus in step 103 includes steps 201 to 202.
  • Step 201 Identify a target shooting object included in the current preview frame image, and obtain position information of the target shooting object.
  • the preview frame image refers to a frame image generated by the camera collecting an external light signal when the photographing application is in a preview state.
  • the data output by the camera every time it collects external light signals is called frame data.
  • the terminal obtains the preview frame image by acquiring the frame data collected by the camera and displaying it.
  • the frame data is collected at a frequency of 30 frames per second, and is generally divided into a preview frame and a photographing frame, which are used for previewing and photographing respectively.
  • a preview frame image is acquired in real time, and a target photographic object included in the preview frame image is detected, so as to obtain position information of the target photographic object.
  • the above-mentioned target photographing object refers to the object currently being photographed.
  • for example, when a person is being photographed, the target photographing object is the person; when a building is being photographed, the target photographing object is the building.
  • the above-mentioned target photographing object may be a photographic object occupying the largest area in the preview frame image.
  • the above-mentioned detection of the target photographic object included in the preview frame image includes performing target detection on the preview frame image, realizing pixel-level classification of the foreground and background, removing the background, and retaining one or more Target objects, that is, one or more of the above target photographic objects.
  • for example, the target photographic object can be detected with object-detection algorithms such as the local binary pattern (LBP) algorithm, oriented-gradient features combined with a support vector machine (SVM) model, or a convolutional neural network (CNN) model.
  • the convolutional neural network model can achieve more accurate and rapid detection of target photographic objects. Therefore, a trained convolutional neural network model can be selected to detect the target photographic objects in the preview frame image.
  • before the trained convolutional neural network model is used to detect the target photographic object in the preview frame image, the trained convolutional neural network model needs to be obtained first.
  • the trained convolutional neural network model is trained according to each sample image and the detection results corresponding to each sample image, where the detection result corresponding to each sample image is used to indicate all target photographic objects included in the sample image .
  • the training step of the convolutional neural network model may include: obtaining sample images and the detection result corresponding to each sample image; detecting the sample images with the convolutional neural network model, and adjusting the parameters of the convolutional neural network model according to the detection results until the adjusted model can detect all the target photographic objects in the sample images, or until its accuracy on the target photographic objects in the sample images is greater than a preset value; the adjusted convolutional neural network model is then used as the trained convolutional neural network model.
  • the parameters of the above convolutional neural network model may include the weights, biases and regression-function coefficients of each convolutional layer in the model, and may also include the learning rate, the number of iterations and the number of neurons in each layer.
  • the methods for detecting the above target photographic object are given only as examples and are not intended to limit the protection scope of the present application; other methods that can detect the target photographic object are equally applicable and are not listed one by one here.
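  • As a simplified, hedged stand-in for this detection step (it separates foreground from background with a plain Otsu threshold and keeps the largest region, whereas the embodiments above would use a trained detector such as a CNN; OpenCV and NumPy are assumed to be available):

      import cv2
      import numpy as np

      def largest_foreground_object(preview_bgr: np.ndarray):
          """Return the bounding box (x, y, w, h) of the largest foreground region
          in a preview frame, as a rough proxy for the target photographic object."""
          gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
          _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return None
          target = max(contours, key=cv2.contourArea)  # object occupying the largest area
          return cv2.boundingRect(target)              # position information of the target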
  • Step 202 Determine a photometric area and a focal length of the camera according to the position information.
  • in the embodiments of the present application, after the target photographic object contained in the current preview frame image has been identified, the position information of the target photographic object can be determined.
  • the position information may be the position of the target photographic object within the current preview frame image; furthermore, by obtaining the intrinsic and extrinsic parameters of the camera, that image position can be mapped to the position of the target photographic object relative to the terminal in the real environment.
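  • A minimal sketch of such a mapping, assuming a pinhole model with a known intrinsic matrix K and an assumed depth (real code would also undistort the pixel and apply the extrinsic parameters; all names here are illustrative):

      import numpy as np

      def pixel_to_camera_coords(u: float, v: float, depth_m: float, K: np.ndarray) -> np.ndarray:
          """Back-project an image point (u, v) at an assumed depth (in metres)
          into 3-D coordinates in the camera frame."""
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalised viewing ray
          return depth_m * ray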
  • the selection of the metering area is one of the important bases for accurately choosing the shutter and aperture values.
  • the metering system of the camera generally selects the metering area by measuring the brightness of the light reflected from the subject, which is also called reflective metering.
  • the camera generally automatically assumes a reflectance of 18% in the photometric area, and performs photometry through this ratio, and then determines the values of the aperture and shutter.
  • the value of 18% is based on the reflectance of midtones (gray tones) in natural scenes. If the viewfinder frame is mostly white tones, the reflected light will exceed 18%; a completely white scene can reflect about 90% of the incident light, whereas a black scene may have a reflectance of only a few percent.
  • the standard gray card is an 8×10-inch card. When this gray card is placed under the same light source as the subject, the overall reflectance of the metering area is the standard 18%; the photo then only needs to be taken at the aperture and shutter values given by the camera, and it will be accurately exposed.
  • if the overall reflectance of the entire metering area is greater than 18%, for example when the background of the metering area is mostly white, a photo taken at the aperture and shutter values determined by the camera's automatic metering will be underexposed: the white background will look gray, and a sheet of white paper may even come out looking like a black one. Therefore, when shooting a scene with a reflectance greater than 18%, the exposure compensation value EV of the camera needs to be increased. Conversely, when shooting a scene with a reflectance lower than 18%, such as a black background, the photo will tend to be overexposed and the black background will turn gray, so the EV needs to be reduced.
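  • The compensation rule above can be summarised as a one-line sketch (assuming an estimate of the mean scene reflectance is available; the log2 formula is only an illustrative assumption, as the exact amount of compensation remains a photographic judgment):

      import math

      def ev_compensation(mean_scene_reflectance: float, mid_gray: float = 0.18) -> float:
          """Positive EV (in stops) for scenes brighter than 18% gray, negative for darker scenes."""
          return math.log2(mean_scene_reflectance / mid_gray)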
  • current metering methods mainly include center-weighted average metering, partial metering, spot metering, multi-spot metering, and evaluative metering.
  • center-weighted average metering is the most commonly used metering mode.
  • in the embodiments of the present application, the selection of the metering area is illustrated using center-weighted average metering.
  • center-weighted average metering is based on the observation that ordinary photographers tend to place the subject, that is, the target that needs accurate exposure, in the middle of the viewfinder, so this part of the frame matters most. The sensor elements responsible for metering therefore divide the camera's overall metering value into regions.
  • the metering data from the central part of the frame accounts for most of the weight, while the metering data outside the center assists with a smaller weight.
  • the camera processor then takes the weighted average of the two to obtain the metering value for the shot; for example, the central part may account for 75% of the overall metering, and the non-central area extending gradually towards the edges for the remaining 25%.
  • it follows that the position of the target object needs to be determined first, and the metering area is then selected accordingly, for example by using the position of the target object as the central part of the metering area.
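  • A hedged sketch of such center-weighted metering, with the 75%/25% split mentioned above and the center placed on the detected target object (the window size and weights are illustrative assumptions; the frame is assumed to be a grayscale NumPy array):

      import numpy as np

      def center_weighted_luminance(gray: np.ndarray, cx: int, cy: int,
                                    box: int = 200, center_weight: float = 0.75) -> float:
          """Weighted average of the central window around (cx, cy) and the rest of the frame."""
          h, w = gray.shape
          x0, x1 = max(cx - box // 2, 0), min(cx + box // 2, w)
          y0, y1 = max(cy - box // 2, 0), min(cy + box // 2, h)
          center = gray[y0:y1, x0:x1].astype(np.float64)
          center_mean = float(center.mean())
          outer_count = gray.size - center.size
          if outer_count == 0:
              return center_mean
          outer_mean = (float(gray.sum()) - float(center.sum())) / outer_count
          return center_weight * center_mean + (1.0 - center_weight) * outer_mean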
  • the camera focal length is generally selected by having the camera emit a group of infrared rays or other rays, determining the distance to the subject from their reflection, and then adjusting the lens combination according to the measured distance to achieve autofocus. Therefore, the position of the target object also needs to be determined before the focal length for the photographing frame image is obtained.
  • in some embodiments of the present application, besides identifying the target photographic object contained in the current preview frame image to obtain its position information, the terminal may also receive a selection instruction, triggered by the user on the photo preview interface, for a target photographic object included in the preview frame image, and obtain the position information of the target photographic object corresponding to that selection instruction.
  • for example, as shown in FIG. 3, the user triggers a selection instruction on the photo preview interface for the target photographic object 32 contained in the preview frame image 31, and the terminal obtains the position information of the target photographic object corresponding to that selection instruction; that is, the user directly selects the target photographic object to be focused on instead of it being obtained by identifying the current preview frame image, which makes the selection of the target photographic object more accurate.
  • the above-mentioned obtaining the photographed frame image with successful focusing includes: step 401 to step 402.
  • Step 401 Calculate a difference between a pixel value of a feature point of a current preview frame image and a pixel value of a feature point at a corresponding position of a previous preview frame image.
  • the feature point of the current preview frame image refers to a pixel point at a preset position in the current preview frame image, for example, a pixel point located at the center position of the current preview frame image, or a contour edge of a target photographed object in the current preview frame image. Of pixels.
  • step 402 if the difference is smaller than the second preset threshold, the current preview frame image is used as the photographed frame image with successful focusing.
  • when the difference between the pixel value of a feature point of the current preview frame image and the pixel value of the feature point at the corresponding position of the previous preview frame image is less than the above second preset threshold, focusing on the photographed object has been completed, and the current preview frame image can be taken directly as the photographing frame image.
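  • A minimal sketch of steps 401 and 402 (the feature points and the value of the second preset threshold are illustrative assumptions; frames are assumed to be grayscale NumPy arrays):

      import numpy as np

      SECOND_PRESET_THRESHOLD = 5.0  # example value; the embodiments leave it unspecified

      def focus_settled(curr_gray: np.ndarray, prev_gray: np.ndarray, points: list) -> bool:
          """Steps 401-402: compare feature-point pixel values of the current and previous
          preview frames; focusing is treated as successful when every difference is
          below the second preset threshold."""
          diffs = [abs(float(curr_gray[y, x]) - float(prev_gray[y, x])) for (x, y) in points]
          return max(diffs) < SECOND_PRESET_THRESHOLD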
  • optionally, before detecting whether the terminal is in the preset motion state according to the camera startup instruction, the method includes detecting whether the camera's photographing mode is a preset photographing mode; if it is detected that the camera's photographing mode does not belong to the preset photographing mode, the photographing frame image is acquired only after a photographing instruction is received.
  • the foregoing preset photographing mode refers to a mode for instructing the terminal to perform quick photographing.
  • detecting whether the photographing mode of the camera is the preset photographing mode includes: reading the camera photographing mode parameters stored in the register, and determining whether the photographing mode of the camera is the preset photographing mode according to the camera photographing mode parameters.
  • for example, after the camera shooting mode parameter stored in the register is read, it is determined whether the parameter equals the preset parameter: if it does, the camera's photographing mode is determined to be the preset photographing mode; if it does not, the camera's photographing mode is determined not to be the preset photographing mode.
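  • A hedged sketch of this check (the register value, its preset meaning and the read_mode_register accessor are hypothetical; the original does not name them):

      QUICK_CAPTURE_MODE = 0x01  # hypothetical stored value for the preset (quick-photo) mode

      def is_preset_photo_mode(read_mode_register) -> bool:
          """Read the stored shooting-mode parameter and compare it with the preset value."""
          return read_mode_register() == QUICK_CAPTURE_MODE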
  • the output of the prompt information for completing the photographing includes: playing an animation for prompting the completion of photographing; or changing the background color displayed on the current display interface of the terminal.
  • while the terminal is automatically focusing and acquiring the successfully focused photographing frame image, the user does not know whether the terminal has already taken the photo; if the user changes the motion state of the terminal before the photo has actually been taken, the captured photo will be blurred.
  • changing the background color displayed on the current display interface of the terminal may mean changing the terminal's current color photo preview interface into a black-and-white one, or into a preview interface overlaid with a mask of another color.
  • in addition, in some embodiments of the present application, the prompt information indicating that photographing is complete may include displaying the acquired successfully focused photographing frame image in a small-picture control of the photo preview interface, so as to remind the user that the successfully focused photographing frame image has been obtained; the small-picture control is the control the user triggers to view the terminal's photo album, that is, it displays the terminal's album when triggered.
  • FIG. 5 shows a schematic structural diagram of a photographing apparatus 500 according to an embodiment of the present application, including a receiving unit 501, a detection unit 502, and a photographing unit 503.
  • the receiving unit 501 is configured to receive a camera startup instruction.
  • the detecting unit 502 is configured to detect whether the terminal is in a preset motion state according to the camera startup instruction.
  • the photographing unit 503 is configured to control the camera to perform auto-focusing if it is detected that the terminal is in a preset motion state, obtain a photographed frame image that is successfully focused, and output a prompt message to complete photographing.
  • the detection unit is specifically configured to detect whether the displacements of the terminal in the three directions of the X axis, Y axis and Z axis are all less than a first preset threshold, and to confirm that the terminal is in the preset motion state if the displacements in all three directions are detected to be less than the first preset threshold.
  • the photographing unit is specifically configured to identify a target photographic object included in the current preview frame image, obtain position information of the target photographic object, and determine the metering area and focal length of the camera according to the position information.
  • the photographing unit is further specifically configured to receive a selection instruction of the target photographic object included in the preview frame image triggered by the user on the photographic preview interface, and obtain position information of the target photographic object corresponding to the selection instruction.
  • the photographing unit is further specifically configured to calculate the difference between the pixel value of a feature point of the current preview frame image and the pixel value of the feature point at the corresponding position of the previous preview frame image, and to take the current preview frame image as the successfully focused photographing frame image if the difference is less than the second preset threshold.
  • the detection unit is further specifically configured to detect, before detecting whether the terminal is in the preset motion state according to the camera activation instruction, whether the camera's photographing mode is the preset photographing mode according to the camera activation instruction; if it is detected that the photographing mode of the camera does not belong to the preset photographing mode, the photographing frame image is acquired after a photographing instruction is received.
  • the detection unit is further specifically configured to read a camera shooting mode parameter stored in a register, and determine whether the camera shooting mode is a preset shooting mode according to the camera shooting mode parameter.
  • the above-mentioned photographing unit is further specifically configured to play an animation for prompting completion of photographing after acquiring a photographed frame image of successful focusing; or change the background color displayed on the current display interface of the terminal.
  • the above-mentioned photographing unit is further specifically configured to display the acquired successfully-focused photographic frame image in a small picture control of the photographic preview interface after acquiring the successfully-focused photographic frame image; the small picture The control is used to display the terminal's photo album when triggered.
  • the terminal may be a mobile terminal.
  • the mobile terminal may be a terminal such as a smart phone, a tablet computer, a personal computer (PC), or a learning machine.
  • as shown in FIG. 6, the terminal includes one or more input devices 63 (only one is shown in FIG. 6) and one or more output devices 64 (only one is shown in FIG. 6).
  • the processor 61, the memory 62, the input device 63, the output device 64, and the camera 65 are connected through a bus 66.
  • the camera is used for generating a preview frame image and a photographing frame image according to the collected external light signals.
  • the processor 61 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the input device 63 may include a virtual keyboard, a touchpad, a fingerprint sensor (for collecting the user's fingerprint information and the orientation information of the fingerprint), a microphone, and the like, and the output device 64 may include a display, a speaker, and the like.
  • the memory 62 may include a read-only memory and a random access memory, and provide instructions and data to the processor 61. A part or all of the memory 62 may further include a non-volatile random access memory. For example, the memory 62 may also store information of a device type.
  • the memory 62 stores a computer program that can be run on the processor 61.
  • the computer program is a program of a photographing method.
  • when the processor 61 executes the computer program, the steps in the foregoing photographing method embodiments are implemented, for example, steps 101 to 103 shown in FIG. 1.
  • alternatively, when the processor 61 executes the computer program, the functions of the modules/units in the foregoing device embodiments are implemented, for example, the functions of the units 501 to 503 shown in FIG. 5.
  • the computer program may be divided into one or more modules / units.
  • the one or more modules / units are stored in the memory 62 and executed by the processor 61 to complete the present application.
  • the one or more modules / units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal for taking pictures.
  • the above computer program can be divided into a receiving unit, a detecting unit, and a photographing unit.
  • the specific functions of each unit are as follows: the receiving unit is used to receive a camera startup instruction; and the detection unit is used to detect whether the terminal is in a preset according to the camera startup instruction. Motion state; a photographing unit, configured to control the camera to automatically focus if it detects that the terminal is in a preset motion state, acquire a photographed frame image that has been successfully focused, and output a prompt message to complete the photographing.
  • the disclosed devices / terminals and methods may be implemented in other ways.
  • the device / terminal embodiments described above are only schematic.
  • the division of the above modules or units is only a logical function division.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, which may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the above integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by a computer program instructing the related hardware.
  • the above computer program can be stored in a computer-readable storage medium.
  • the computer program When executed by a processor, the steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file, or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present application belongs to the field of photographing technology, and in particular relates to a photographing method, apparatus, terminal and computer-readable storage medium. The method includes: receiving a camera startup instruction; detecting, according to the camera startup instruction, whether the terminal is in a preset motion state; and, if the terminal is detected to be in the preset motion state, controlling the camera to perform autofocus, acquiring a successfully focused photographing frame image, and outputting prompt information indicating that photographing is complete. In this way, the photo does not need to be taken after the user triggers a photographing instruction; instead, once focusing is finished, the successfully focused photographing frame image is acquired immediately to complete the photographing, and when the terminal finishes taking the photo, prompt information is output so that the user learns in time that the photo has been taken. This effectively prevents the user from changing the motion state of the terminal before the photo has been taken, which would blur it, and improves photographing efficiency.

Description

拍照方法、装置、终端及计算机可读存储介质 技术领域
本申请属于拍照技术领域,尤其涉及一种拍照方法、装置、终端及计算机可读存储介质。
背景技术
目前,手机等终端实现拍照功能一般需要先进行拍照预览,并在拍照预览的过程中选择需要进行拍照的拍照场景,再对该拍照场景中的拍照对象进行对焦,然后触发拍照指令,生成照片。
发明内容
本申请实施例提供一种拍照方法、装置、终端及计算机可读存储介质,可以解决终端无法实现快速拍摄到清晰的照片的技术问题。
本申请实施例第一方面提供一种拍照方法,包括:
接收相机启动指令;
根据所述相机启动指令检测终端是否处于预设运动状态;
若检测到所述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
本申请实施例第二方面提供一种拍照装置,包括:
接收单元,用于接收相机启动指令;
检测单元,用于根据所述相机启动指令检测终端是否处于预设运动状态;
拍照单元,用于若检测到所述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
本申请实施例第三方面提供一种终端,包括存储器、处理器以及存储在存储器中并可在处理器上运行的计算机程序,处理器执行计算机程序时实现以下步骤:
接收相机启动指令;
根据所述相机启动指令检测终端是否处于预设运动状态;
若检测到所述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
本申请实施例第四方面提供一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被处理器执行时实现上述方法的步骤。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,应当理解,以下附图仅示出了本申请的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1是本申请实施例提供的一种拍照方法的实现流程示意图;
图2是本申请实施例提供的控制相机进行自动对焦的具体实现流程示意图;
图3是本申请实施例提供的获取目标拍摄对象的位置信息的示意图;
图4是本申请实施例提供的获取对焦成功的拍照帧图像的具体实现流程示意图;
图5是本申请实施例提供的拍照装置的结构示意图;
图6是本申请实施例提供的终端的结构示意图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。同时,在本申请的描述中,术语“第一”、“第二”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
应当理解,当在本说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。
还应当进一步理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响 应于检测到[所描述条件或事件]”。
为了说明本申请上述的技术方案,下面通过具体实施例来进行说明。
目前,手机等终端实现拍照功能一般需要先进行拍照预览,并在拍照预览的过程中选择需要进行拍照的拍照场景,再对该拍照场景中的拍照对象进行对焦,然后触发拍照指令,生成照片。
在这个过程中,当用户想要快速完成拍照动作,则有可能需要在对焦好之后,立即触发拍照指令,并迅速收回手机实现拍照,但是,此时有可能会因为手机收回得太快,导致拍摄到的照片出现模糊,而无法实现快速拍摄到清晰的照片。
例如,在上培训课时,用户希望快速拍下屏幕显示的课件的内容,并且不影响自己或他人听讲,此时,用户需要在对焦好之后,马上点击拍照按钮,并迅速收回手机实现拍照,但是,这有可能会因为手机收回得太快,导致照片拍摄模糊,而无法实现快速拍摄到清晰的照片。
本申请实施例中,通过根据接收到的相机启动指令对终端是否处于预设运动状态进行检测,并在检测到终端处于预设运动状态时,控制相机进行自动对焦,并得到对焦成功的拍照帧图像,完成拍照,并输出完成拍照的提示信息。也就是说,本申请不需要在用户触发拍照指令之后,进行拍照,而是在完成对焦后,第一时间获取对焦成功的拍照帧图像完成拍照,并且,在终端完成拍照时,还通过输出完成拍照的提示信息,使得用户可以及时获知终端已完成照片的拍摄,有效避免了用户在终端未完成拍照时,改变终端的运动状态,导致照片拍摄出现模糊的问题,使得终端能够快速拍摄到清晰的照片,提高了照片的拍摄效率。
如图1示出了本申请实施例提供的一种拍照方法实现流程示意图,该方法应用于终端,可以由终端上配置的拍照装置执行,适用于需提高照片拍摄效率的情形,包括步骤101至步骤103。
其中,上述终端包括智能手机、平板电脑、学习机等配置有拍照装置的终端设备。上述终端设备上可以安装有拍照应用、浏览器、微信等应用。
步骤101中,接收相机启动指令。
本申请实施例中,上述相机启动指令包括用户在***桌面中点击拍照应用图标触发的相机启动指令、用户通过点击物理按键触发的相机启动指令、用户通过语音触发的相机启动指令或者其他方式触发的相机启动指令。
步骤102中,根据上述相机启动指令检测终端是否处于预设运动状态。
其中,上述预设运动状态是指终端位于拍摄位置并且相对静止的状态。
可选的,在本申请的一些实施方式中,上述根据相机启动指令检测终端是否处于预设运动状态,包括:检测终端在X轴、Y轴和Z轴三个方向上的位移是否均小于第一预设阈值,若检测到终端在X轴、Y轴和Z轴三个方向上的位移均小于上述第一预设阈值,则确认终端处于预设运动状态。
例如,通过上述终端上设置的陀螺仪或加速度计检测终端在X轴、Y轴和Z轴三个方向上的位移大小,若检测到终端在X轴、Y轴或Z轴上的位移存在大于或等于上述第一预设阈值的情况,则表示终端还没有移动到最佳拍摄位置,即,用户还在移动上述终端,寻找用户认为的最佳拍摄位置;若检测到终端在X轴、Y轴或Z轴三个方向上的位移均小于上述第一预设阈值,则表示终端已经位于最佳拍摄位置,可以进入拍摄前的对焦状态。
其中,上述第一预设阈值可以根据实践经验进行设定,例如,上述第一预设阈值可以设置为1mm~3mm。
步骤103中,若检测到上述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
其中,选择合适的焦距进行照片拍摄,是保证拍摄出清晰的照片的重要前提。上述拍照帧图像是指摄像头根据拍照指令采集外界光信号生成的帧图像,该拍照帧图像用于生成最终的照片。
本申请实施例中,在检测到终端处于预设运动状态时,则立即控制相机进行自动对焦,并获取对焦成功的拍照帧图像,同时输出完成拍照的提示信息,提醒用户已完成拍照,使得用户可以最快速的结束拍照,并获得清晰的照片;有效地避免了用户在终端未完成拍照时,改变终端的运动状态,导致照片拍摄出现模糊的问题,使得终端能够快速拍摄到清晰的照片,提高了照片的拍摄效率。
作为本申请的一种实施方式,如图2所示,上述步骤103中控制相机进行自动对焦,包括:步骤201至步骤202。
步骤201,识别当前预览帧图像中包含的目标拍摄对象,并获取上述目标拍摄对象的位置信息。
其中,预览帧图像指拍照应用处于预览状态时,摄像头采集外界光信号生成的帧图像。摄像头每次采集外界光信号输出的数据称为帧数据,用户开启终端上的拍照应用后,进入预览模式,终端通过获取摄像头采集回来的帧数据,并进行显示得到上述预览帧图像。
一般情况下,帧数据的采集频率为1秒钟30帧,通常分为预览帧和拍照帧, 分别用于预览和拍照。
本申请实施例中,通过在预览状态下,实时获取预览帧图像,并检测预览帧图像中包含的目标拍摄对象,以便获取上述目标拍摄对象的位置信息。
具体的,上述目标拍摄对象是指当前拍照的对象,例如,当前属于人物拍摄时,则该目标拍摄对象为人,当前为建筑物拍摄时,则该目标拍摄对象为建筑物。需要说明的是,上述目标拍摄对象可以是在上述预览帧图像中占据区域最大的拍照对象。
在本申请的一些实施方式中,上述检测预览帧图像中包含的目标拍摄对象包括对该预览帧图像进行目标检测,实现像素级的对前景与背景进行分类,将背景剔除,并保留一个或多个目标物体,即,一个或多个上述目标拍摄对象。
例如,通过局部二进制模式算法、定向梯度特征结合支持向量机模型以及卷积神经网络模型等目标检测算法进行目标拍摄对象的检测。
其中,卷积神经网络模型可以实现对目标拍摄对象更为精准快速的检测,因此,可以选用训练好的卷积神经网络模型检测上述预览帧图像中的目标拍摄对象。
在上述利用训练好的卷积神经网络模型检测上述预览帧图像中的目标拍摄对象之前,需要先得到训练好的卷积神经网络模型。该训练好的卷积神经网络模型是根据各个样本图像以及各个样本图像所对应的检测结果训练得到,其中,每一个样本图像所对应的检测结果用以指示该样本图像中包含的所有目标拍摄对象。
可选的,上述卷积神经网络模型的训练步骤可以包括:获取样本图像以及样本图像对应的检测结果;利用卷积神经网络模型对上述样本图像进行检测,根据检测结果调整上述卷积神经网络模型的参数,直到调整后的上述卷积神经网络模型可以检测出上述样本图像中的所有目标拍摄对象,或者检测出上述样本图像中目标拍摄对象的准确率大于预设值,则将该调整后的卷积神经网络模型作为训练好的卷积神经网络模型。
其中,上述卷积神经网络模型的参数可以包括卷积神经网络模型中每个卷积层的权重、偏差、回归函数的系数,还可以包括学习速率、迭代次数、每层神经元的个数等。
需要说明的是,此处仅仅是对上述目标拍摄对象的检测方法进行举例说明,不表示为对本申请保护范围的限制,其他可以实现目标拍摄对象检测方法同样适用于本申请中,此处,不再一一列举。
步骤202,根据上述位置信息确定相机的测光区域和焦距。
本申请实施例中,在识别出当前预览帧图像中包含的目标拍摄对象之后,即可确认出目标拍摄对象的位置信息。该位置信息可以为目标拍摄对象在当前预览帧图像中的位置信息,并且,通过获取摄像头的内参和外参,还可以将目标拍摄对象在当前预览帧图像中的位置信息映射为目标拍摄对象在现实环境中相对于终端的位置信息。
测光区域的选取,是准确选取快门和光圈数值的重要依据之一。摄像头的测光***一般是通过测定被摄对象反射回来的光亮度进行测光区域的选取,也称之为反射式测光。
具体的,摄像头一般自动假设测光区域的反光率为18%,通过这个比例进行测光,随后确定光圈和快门的数值。
在同样的光照条件下,如果要得到相同的曝光量,光圈值越大,则需要快门值越小,而如果光圈值越小,则需要快门值越大。18%这个数值来源是根据自然景物中中间调(灰色调)的反光表现而定,如果取景画面中白色调居多,那么反射光线将超过18%,如果是全白场景,可以反射大约90%的入射光,而如果是黑色场景,可能反射率只有百分之几。
标准灰卡是一张8×10英寸的卡片,将这张灰卡放在被摄主体同一测光源,所得到的测光区域整体反光率就是标准的18%,随后只需要按摄像头给出的光圈快门值进行拍摄,拍摄出来的照片就会是曝光准确的。
如果整个测光区域的整体反射率大于18%,例如,测光区域的背景以白色调为主,这时如果按照摄像头自动测光测定的光圈快门值来拍摄的话,拍摄得到的照片将会是一张欠曝的照片,白色的背景看起来会显得发灰,如果是一张白纸的话拍摄出来的就会变成一张黑纸了。所以,拍摄反光率大于18%的场景,需要增加相机的曝光补偿值EV。反之,如果拍摄反光率低于18%的场景,例如黑色的背景,拍出的照片往往会过曝,黑色的背景也会变成灰色。所以,拍摄反光率低于18%的场景,需要减少EV曝光。
目前的测光方式主要有中央平均测光、中央局部测光、点测光、多点测光以及评价测光。其中,中央平均测光是采用最多的一种测光模式,本申请实施例以中央平均测光的方式对测光区域的选取进行举例说明。
其中,中央平均测光主要是考虑到一般摄影者***均之后的比例,得到摄像头拍摄的测光数据。例如,设置摄像头中央部分测光占据整个测光比例的75%,其他非中央部分逐渐延伸至边缘的测光数据占据了25%的比例。
由此可以看出,需要确定好目标对象的位置之后,再进行测光区域的选取,例如,将目标对象的所处的位置作为测光区域的中央部分。
另外,摄像头焦距的选取一般是由摄像头发射一组红外线或其他射线,经被摄体反射后确定被摄体的距离,然后根据测得距离调整镜头组合,实现自动对焦。因此,也需要确定好目标对象的位置之后,获得拍照帧图像的焦距。
可选的,在本申请的一些实施方式中,除了通过识别当前预览帧图像中包含的目标拍摄对象,得到预览帧图像中目标拍摄对象的位置信息之外,还可以通过接收用户在拍照预览界面触发的对上述预览帧图像中包含的目标拍摄对象的选中指令,获取上述选中指令对应的目标拍摄对象的位置信息。
例如,如图3所示,用户通过在拍照预览界面上触发的对上述预览帧图像31中包含的目标拍摄对象32的选中指令,获取上述选中指令对应的目标拍摄对象的位置信息,即,由用户直接选取需要对焦的目标拍摄对象,而不需要通过对当前预览帧图像进行识别的方式获取,使得目标拍摄对象的选取更加准确。
可选的,如图4所示,上述获取对焦成功的拍照帧图像,包括:步骤401至步骤402。
步骤401,计算当前预览帧图像的特征点的像素值与上一预览帧图像对应位置的特征点的像素值的差值。
其中,当前预览帧图像的特征点是指当前预览帧图像中预设位置的像素点,例如,位于当前预览帧图像中心位置的像素点,或者,位于当前预览帧图像中的目标拍摄对象轮廓边缘的像素点。
步骤402,若上述差值小于第二预设阈值,则将当前预览帧图像作为对焦成功的拍照帧图像。
在当前预览帧图像的特征点的像素值与上一预览帧图像对应位置的特征点的像素值的差值小于上述第二预设阈值时,表示已经完成对拍照对象的对焦,可以直接获取当前预览帧图像作为拍照帧图像。
可选的,在上述描述的实施方式中,根据相机启动指令检测终端是否处于预设运动状态之前,包括:检测相机的拍照模式是否为预设拍照模式,若检测到相机的 拍照模式不属于预设拍照模式,则在接收到拍照指令后,进行拍照帧图像的获取。
本申请实施例中,上述预设拍照模式是指用于指示终端进行快速拍照的模式。
具体的,检测相机的拍照模式是否为预设拍照模式包括:读取寄存器中存储的相机拍摄模式参数,根据相机拍摄模式参数确定相机的拍照模式是否为预设拍照模式。
例如,在读取寄存器中存储的相机拍摄模式参数之后,判断相机拍摄模式参数是否为预设参数,若相机拍摄模式参数为预设参数为预设参数,则确定相机的拍照模式为预设拍照模式;若相机拍摄模式参数不是预设参数,则确定相机的拍照模式不是预设拍照模式。
可选的,上述输出完成拍照的提示信息,包括:播放用于提示拍照已完成的动画;或者,变换终端当前显示界面显示的背景颜色。
由于终端自动进行对焦并获取对焦成功后的拍照帧图像的过程中,用户并不知道终端是否已经拍照成功,使得用户在终端未拍照成功的情况下,改变终端的运动状态,导致拍摄出的照片出现模糊。
因此,需要输出完成拍照的提示信息,使得用户可以及时获知终端已完成照片的拍摄,有效避免了用户在终端未完成拍照时,改变终端的运动状态,导致照片拍摄出现模糊的问题,并且,通过提醒用户已经完成拍照,使得用户可以将终端以最快速度收回,并同时拍摄到清晰的照片,提高了照片的拍摄效率。
上述变换终端当前显示界面显示的背景颜色可以是指将终端当前的彩色预览界面变换成黑白颜色的拍照预览界面,或者,附着有其他颜色的掩膜的拍照预览界面。
另外,本申请的一些实施方式中,上述完成拍照的提示信息可以包括:将获取到的对焦成功的拍照帧图像显示在拍照预览界面的小图片控件,以提示用户已经成功获取了对焦成功的拍照帧图像,该小图片控件为用户查看终端相册时需要触发的控件,即,该小图片控件用于在被触发时,显示终端的相册。
图5示出了本申请实施例提供的一种拍照装置500的结构示意图,包括接收单元501、检测单元502和拍照单元503。
接收单元501,用于接收相机启动指令。
检测单元502,用于根据上述相机启动指令检测终端是否处于预设运动状态。
拍照单元503,用于若检测到上述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
在本申请的一些实施方式中,上述检测单元具体用于,检测终端在X轴、Y轴和Z轴三个方向上的位移是否均小于第一预设阈值,若检测到终端在X轴、Y轴和Z轴三个方向上的位移均小于上述第一预设阈值,则确认终端处于预设运动状态。
在本申请的一些实施方式中,上述拍照单元具体用于,识别当前预览帧图像中包含的目标拍摄对象,并获取上述目标拍摄对象的位置信息;根据上述位置信息确定相机的测光区域和焦距。
可选的,上述拍照单元还具体用于,接收用户在拍照预览界面触发的对预览帧图像中包含的目标拍摄对象的选中指令,获取上述选中指令对应的目标拍摄对象的位置信息。
可选的,上述拍照单元还具体用于,计算当前预览帧图像的特征点的像素值与上一预览帧图像对应位置的特征点的像素值的差值;若上述差值小于第二预设阈值,则将当前预览帧图像作为对焦成功的拍照帧图像。
在本申请的一些实施方式中,上述检测单元还具体用于,在根据上述相机启动指令检测终端是否处于预设运动状态之前,根据上述相机启动指令检测相机的拍照模式是否为预设拍照模式,若检测到相机的拍照模式不属于预设拍照模式,则在接收到拍照指令后,进行拍照帧图像的获取。
在本申请的一些实施方式中,上述检测单元还具体用于,读取寄存器中存储的相机拍摄模式参数,根据所述相机拍摄模式参数确定相机的拍照模式是否为预设拍照模式。
在本申请的一些实施方式中,上述拍照单元还具体用于,在获取对焦成功的拍照帧图像之后播放用于提示拍照已完成的动画;或者,变换终端当前显示界面显示的背景颜色。
在本申请的一些实施方式中,上述拍照单元还具体用于,在获取对焦成功的拍照帧图像之后将获取到的对焦成功的拍照帧图像显示在拍照预览界面的小图片控件;所述小图片控件用于在被触发时,显示终端的相册。
需要说明的是,为描述的方便和简洁,上述描述的拍照装置500的具体工作过程,可以参考上述图1至图4中描述的方法的对应过程,在此不再赘述。
如图6所示,本申请提供一种用于实现上述拍照方法的终端,该终端可以为移动终端,该移动终端可以为智能手机、平板电脑、个人电脑(PC)、学习机等终端,包括:一个或多个输入设备63(图6中仅示出一个)和一个或多个输出设备64(图6中仅示出一个)。处理器61、存储器62、输入设备63、输出设备64和摄像头65 通过总线66连接。该摄像头用于根据采集外界光信号,生成预览帧图像和拍照帧图像。
应当理解,在本申请实施例中,所称处理器61可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
输入设备63可以包括虚拟键盘、触控板、指纹采传感器(用于采集用户的指纹信息和指纹的方向信息)、麦克风等,输出设备64可以包括显示器、扬声器等。
存储器62可以包括只读存储器和随机存取存储器,并向处理器61提供指令和数据。存储器62的一部分或全部还可以包括非易失性随机存取存储器。例如,存储器62还可以存储设备类型的信息。
上述存储器62存储有计算机程序,上述计算机程序可在上述处理器61上运行,例如,上述计算机程序为拍照方法的程序。上述处理器61执行上述计算机程序时实现上述拍照方法实施例中的步骤,例如图1所示的步骤101至步骤103。或者,上述处理器61执行上述计算机程序时实现上述各装置实施例中各模块/单元的功能,例如图5所示单元501至503的功能。
上述计算机程序可以被分割成一个或多个模块/单元,上述一个或者多个模块/单元被存储在上述存储器62中,并由上述处理器61执行,以完成本申请。上述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述上述计算机程序在上述进行拍照的终端中的执行过程。例如,上述计算机程序可以被分割成接收单元、检测单元和拍照单元,各单元具体功能如下:接收单元,用于接收相机启动指令;检测单元,用于根据上述相机启动指令检测终端是否处于预设运动状态;拍照单元,用于若检测到上述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将上述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上 单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述***中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/终端和方法,可以通过其它的方式实现。例如,以上所描述的装置/终端实施例仅仅是示意性的,例如,上述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,上述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,上述计算机程序包括计算机程序代码,上述计算机程序代码可以为源代码形式、对象代码形式、可执行文 件或某些中间形式等。上述计算机可读介质可以包括:能够携带上述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、电载波信号、电信信号以及软件分发介质等。需要说明的是,上述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
以上上述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种拍照方法,其特征在于,包括:
    接收相机启动指令;
    根据所述相机启动指令检测终端是否处于预设运动状态;
    若检测到所述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
  2. 如权利要求1所述的拍照方法,其特征在于,所述根据所述相机启动指令检测终端是否处于预设运动状态,包括:
    检测终端在X轴、Y轴和Z轴三个方向上的位移是否均小于第一预设阈值,若检测到终端在X轴、Y轴和Z轴三个方向上的位移均小于所述第一预设阈值,则确认终端处于预设运动状态。
  3. 如权利要求1或2所述的拍照方法,其特征在于,所述控制相机进行自动对焦,包括:
    识别当前预览帧图像中包含的目标拍摄对象,并获取所述目标拍摄对象的位置信息;
    根据所述位置信息确定相机的测光区域和焦距。
  4. 如权利要求3所述的拍照方法,其特征在于,所述识别当前预览帧图像中包含的目标拍摄对象,并获取所述目标拍摄对象的位置信息,包括:
    接收用户在拍照预览界面触发的对所述预览帧图像中包含的目标拍摄对象的选中指令,获取所述选中指令对应的目标拍摄对象的位置信息。
  5. 如权利要求1所述的拍照方法,其特征在于,所述获取对焦成功的拍照帧图像,包括:
    计算当前预览帧图像的特征点的像素值与上一预览帧图像对应位置的特征点的像素值的差值;
    若所述差值小于第二预设阈值,则将当前预览帧图像作为对焦成功的拍照帧图像。
  6. 如权利要求1所述的拍照方法,其特征在于,在根据所述相机启动指令检测终端是否处于预设运动状态之前,包括:
    根据所述相机启动指令检测相机的拍照模式是否为预设拍照模式,若检测到相机的拍照模式不属于预设拍照模式,则在接收到拍照指令后,进行拍照帧图像的获取。
  7. 如权利要求6所述的拍照方法,其特征在于,所述检测相机的拍照模式是否为预设拍照模式包括:
    读取寄存器中存储的相机拍摄模式参数,根据所述相机拍摄模式参数确定相机的拍照模式是否为预设拍照模式。
  8. 如权利要求1所述的拍照方法,其特征在于,所述输出完成拍照的提示信息,包括:
    播放用于提示拍照已完成的动画;或者,
    变换终端当前显示界面显示的背景颜色。
  9. 如权利要求1所述的拍照方法,其特征在于,所述输出完成拍照的提示信息,包括:
    将获取到的对焦成功的拍照帧图像显示在拍照预览界面的小图片控件;所述小图片控件用于在被触发时,显示终端的相册。
  10. 一种拍照装置,其特征在于,包括:
    接收单元,用于接收相机启动指令;
    检测单元,用于根据所述相机启动指令检测终端是否处于预设运动状态;
    拍照单元,用于若检测到所述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
  11. 一种终端,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现以下步骤:
    接收相机启动指令;
    根据所述相机启动指令检测终端是否处于预设运动状态;
    若检测到所述终端处于预设运动状态,则控制相机进行自动对焦,获取对焦成功的拍照帧图像,并输出完成拍照的提示信息。
  12. 如权利要求11所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    检测终端在X轴、Y轴和Z轴三个方向上的位移是否均小于第一预设阈值,若检测到终端在X轴、Y轴和Z轴三个方向上的位移均小于所述第一预设阈值,则确认终端处于预设运动状态。
  13. 如权利要求11或12所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    识别当前预览帧图像中包含的目标拍摄对象,并获取所述目标拍摄对象的位置信息;
    根据所述位置信息确定相机的测光区域和焦距。
  14. 如权利要求13所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    接收用户在拍照预览界面触发的对所述预览帧图像中包含的目标拍摄对象的选中指令,获取所述选中指令对应的目标拍摄对象的位置信息。
  15. 如权利要求11所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    计算当前预览帧图像的特征点的像素值与上一预览帧图像对应位置的特征点的像素值的差值;
    若所述差值小于第二预设阈值,则将当前预览帧图像作为对焦成功的拍照帧图像。
  16. 如权利要求11所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    在根据所述相机启动指令检测终端是否处于预设运动状态之前,根据所述相机启动指令检测相机的拍照模式是否为预设拍照模式,若检测到相机的拍照模式不属于预设拍照模式,则在接收到拍照指令后,进行拍照帧图像的获取。
  17. 如权利要求16所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    读取寄存器中存储的相机拍摄模式参数,根据所述相机拍摄模式参数确定相机的拍照模式是否为预设拍照模式。
  18. 如权利要求11所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    播放用于提示拍照已完成的动画;或者,
    变换终端当前显示界面显示的背景颜色。
  19. 如权利要求11所述的一种终端,其特征在于,所述处理器执行所述计算机程序时还实现以下步骤:
    将获取到的对焦成功的拍照帧图像显示在拍照预览界面的小图片控件;所述小图片控件用于在被触发时,显示终端的相册。
  20. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序, 其特征在于,所述计算机程序被处理器执行时实现如权利要求1至9任意一项所述方法的步骤。
PCT/CN2019/093682 2018-08-22 2019-06-28 拍照方法、装置、终端及计算机可读存储介质 WO2020038109A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810966306.0 2018-08-22
CN201810966306.0A CN108777767A (zh) 2018-08-22 2018-08-22 拍照方法、装置、终端及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020038109A1 true WO2020038109A1 (zh) 2020-02-27

Family

ID=64028866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093682 WO2020038109A1 (zh) 2018-08-22 2019-06-28 拍照方法、装置、终端及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108777767A (zh)
WO (1) WO2020038109A1 (zh)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132227A (zh) * 2020-09-30 2020-12-25 石家庄铁道大学 桥梁列车荷载作用时程提取方法、装置及终端设备
CN112184722A (zh) * 2020-09-15 2021-01-05 上海传英信息技术有限公司 图像处理方法、终端及计算机存储介质
CN113065410A (zh) * 2021-03-10 2021-07-02 广州云从鼎望科技有限公司 集装箱掏箱流程智能化控制方法、***、介质及装置
CN113497887A (zh) * 2020-04-03 2021-10-12 中兴通讯股份有限公司 拍摄方法、电子设备及存储介质
CN113562401A (zh) * 2021-07-23 2021-10-29 杭州海康机器人技术有限公司 控制目标对象传送方法、装置、***、终端和存储介质
CN113567452A (zh) * 2021-07-27 2021-10-29 北京深点视觉科技有限公司 一种毛刺检测方法、装置、设备及存储介质
CN113766119A (zh) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 虚拟形象显示方法、装置、终端及存储介质
CN113852646A (zh) * 2020-06-10 2021-12-28 漳州立达信光电子科技有限公司 一种智能设备的控制方法、装置、电子设备及***
CN113873161A (zh) * 2021-10-11 2021-12-31 维沃移动通信有限公司 拍摄方法、装置及电子设备
CN114264653A (zh) * 2021-12-15 2022-04-01 知辛电子科技(苏州)有限公司 一种用于小间距光电控制板的拍照检测方法
CN114286004A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 对焦方法、拍摄装置、电子设备及介质
CN114371696A (zh) * 2021-12-06 2022-04-19 深圳市普渡科技有限公司 移动设备、控制方法、机器人及存储介质
CN114500979A (zh) * 2020-11-12 2022-05-13 海信视像科技股份有限公司 显示设备、控制设备以及同步校准方法
CN114897762A (zh) * 2022-02-18 2022-08-12 众信方智(苏州)智能技术有限公司 一种煤矿工作面采煤机自动定位方法及装置
CN115278079A (zh) * 2022-07-27 2022-11-01 维沃移动通信有限公司 拍摄方法及其装置
CN116347212A (zh) * 2022-08-05 2023-06-27 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116631908A (zh) * 2023-05-16 2023-08-22 台州勃美科技有限公司 一种晶圆自动加工方法、装置及电子设备

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质
CN109451240B (zh) * 2018-12-04 2021-01-26 百度在线网络技术(北京)有限公司 对焦方法、装置、计算机设备和可读存储介质
CN110717452B (zh) * 2019-10-09 2022-04-19 Oppo广东移动通信有限公司 图像识别方法、装置、终端及计算机可读存储介质
CN111511002B (zh) * 2020-04-23 2023-12-05 Oppo广东移动通信有限公司 检测帧率的调节方法和装置、终端和可读存储介质
CN113170053A (zh) * 2020-07-24 2021-07-23 深圳市大疆创新科技有限公司 拍摄方法、拍摄装置及存储介质
CN112422823B (zh) * 2020-11-09 2022-08-09 广汽本田汽车有限公司 一种自动触发视觉拍照方法及装置
CN112738403B (zh) * 2020-12-30 2023-12-05 维沃移动通信(杭州)有限公司 拍摄方法、拍摄装置、电子设备和介质
CN113507549B (zh) * 2021-05-28 2022-10-14 西安闻泰信息技术有限公司 一种摄像头、拍照方法、终端及存储介质
CN114143456B (zh) * 2021-11-26 2023-10-20 青岛海信移动通信技术有限公司 拍照方法及装置
CN114241014A (zh) * 2021-12-03 2022-03-25 上海锡鼎智能科技有限公司 一种用于实验考头戴式***头智能抓拍方法
CN114339035A (zh) * 2021-12-20 2022-04-12 青岛海尔科技有限公司 图像获取的方法和装置、存储介质及电子装置
CN115118871B (zh) * 2022-02-11 2023-12-15 东莞市步步高教育软件有限公司 一种拍照像素模式切换方法、***、终端设备及存储介质
CN116723382B (zh) * 2022-02-28 2024-05-03 荣耀终端有限公司 一种拍摄方法及相关设备
CN115146805A (zh) * 2022-05-19 2022-10-04 新瑞鹏宠物医疗集团有限公司 基于宠物鼻纹的宠物游乐园入园的方法以及相关装置
CN117579938A (zh) * 2022-06-29 2024-02-20 荣耀终端有限公司 一种拍照方法和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883104A (zh) * 2011-09-02 2013-01-16 微软公司 自动图像捕捉
JP2013015806A (ja) * 2011-06-09 2013-01-24 Nikon Corp 焦点検出装置および撮像装置
CN103856709A (zh) * 2012-12-04 2014-06-11 腾讯科技(深圳)有限公司 图像获取方法及装置
CN104469001A (zh) * 2014-12-02 2015-03-25 王国忠 一种具有拍照防抖功能的手机及其在拍照中的防抖方法
CN105072331A (zh) * 2015-07-20 2015-11-18 魅族科技(中国)有限公司 一种拍照方法及终端
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4859625B2 (ja) * 2006-10-27 2012-01-25 Hoya株式会社 手ぶれ補正装置を備えたカメラ
CN103167141A (zh) * 2012-09-14 2013-06-19 深圳市金立通信设备有限公司 一种手机相机连续对焦***及方法
CN102970484B (zh) * 2012-11-27 2016-02-24 惠州Tcl移动通信有限公司 一种拍照时声音提示的方法及基于该方法的电子设备
CN104580884B (zh) * 2014-12-03 2018-03-27 广东欧珀移动通信有限公司 一种拍摄方法及终端
CN104994297B (zh) * 2015-07-09 2019-01-08 厦门美图之家科技有限公司 全景拍照的对焦测光锁定方法和***
CN107087102B (zh) * 2017-03-13 2020-07-24 联想(北京)有限公司 对焦信息处理方法及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013015806A (ja) * 2011-06-09 2013-01-24 Nikon Corp 焦点検出装置および撮像装置
CN102883104A (zh) * 2011-09-02 2013-01-16 微软公司 自动图像捕捉
CN103856709A (zh) * 2012-12-04 2014-06-11 腾讯科技(深圳)有限公司 图像获取方法及装置
CN104469001A (zh) * 2014-12-02 2015-03-25 王国忠 一种具有拍照防抖功能的手机及其在拍照中的防抖方法
CN105072331A (zh) * 2015-07-20 2015-11-18 魅族科技(中国)有限公司 一种拍照方法及终端
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113497887A (zh) * 2020-04-03 2021-10-12 中兴通讯股份有限公司 拍摄方法、电子设备及存储介质
CN113852646A (zh) * 2020-06-10 2021-12-28 漳州立达信光电子科技有限公司 一种智能设备的控制方法、装置、电子设备及***
CN112184722A (zh) * 2020-09-15 2021-01-05 上海传英信息技术有限公司 图像处理方法、终端及计算机存储介质
CN112184722B (zh) * 2020-09-15 2024-05-03 上海传英信息技术有限公司 图像处理方法、终端及计算机存储介质
CN112132227A (zh) * 2020-09-30 2020-12-25 石家庄铁道大学 桥梁列车荷载作用时程提取方法、装置及终端设备
CN112132227B (zh) * 2020-09-30 2024-04-05 石家庄铁道大学 桥梁列车荷载作用时程提取方法、装置及终端设备
CN114500979A (zh) * 2020-11-12 2022-05-13 海信视像科技股份有限公司 显示设备、控制设备以及同步校准方法
CN114500979B (zh) * 2020-11-12 2023-09-19 海信视像科技股份有限公司 显示设备、控制设备以及同步校准方法
CN113065410A (zh) * 2021-03-10 2021-07-02 广州云从鼎望科技有限公司 集装箱掏箱流程智能化控制方法、***、介质及装置
CN113065410B (zh) * 2021-03-10 2024-06-04 广州云从鼎望科技有限公司 集装箱掏箱流程智能化控制方法、***、介质及装置
CN113766119B (zh) * 2021-05-11 2023-12-05 腾讯科技(深圳)有限公司 虚拟形象显示方法、装置、终端及存储介质
CN113766119A (zh) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 虚拟形象显示方法、装置、终端及存储介质
CN113562401A (zh) * 2021-07-23 2021-10-29 杭州海康机器人技术有限公司 控制目标对象传送方法、装置、***、终端和存储介质
CN113562401B (zh) * 2021-07-23 2023-07-18 杭州海康机器人股份有限公司 控制目标对象传送方法、装置、***、终端和存储介质
CN113567452B (zh) * 2021-07-27 2024-03-15 北京深点视觉科技有限公司 一种毛刺检测方法、装置、设备及存储介质
CN113567452A (zh) * 2021-07-27 2021-10-29 北京深点视觉科技有限公司 一种毛刺检测方法、装置、设备及存储介质
CN113873161A (zh) * 2021-10-11 2021-12-31 维沃移动通信有限公司 拍摄方法、装置及电子设备
CN114371696B (zh) * 2021-12-06 2024-02-27 深圳市普渡科技有限公司 移动设备、控制方法、机器人及存储介质
CN114371696A (zh) * 2021-12-06 2022-04-19 深圳市普渡科技有限公司 移动设备、控制方法、机器人及存储介质
CN114264653B (zh) * 2021-12-15 2024-04-30 知辛电子科技(苏州)有限公司 一种用于小间距光电控制板的拍照检测方法
CN114264653A (zh) * 2021-12-15 2022-04-01 知辛电子科技(苏州)有限公司 一种用于小间距光电控制板的拍照检测方法
CN114286004A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 对焦方法、拍摄装置、电子设备及介质
CN114897762B (zh) * 2022-02-18 2023-04-07 众信方智(苏州)智能技术有限公司 一种煤矿工作面采煤机自动定位方法及装置
CN114897762A (zh) * 2022-02-18 2022-08-12 众信方智(苏州)智能技术有限公司 一种煤矿工作面采煤机自动定位方法及装置
CN115278079A (zh) * 2022-07-27 2022-11-01 维沃移动通信有限公司 拍摄方法及其装置
CN116347212A (zh) * 2022-08-05 2023-06-27 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116347212B (zh) * 2022-08-05 2024-03-08 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116631908A (zh) * 2023-05-16 2023-08-22 台州勃美科技有限公司 一种晶圆自动加工方法、装置及电子设备
CN116631908B (zh) * 2023-05-16 2024-04-26 台州勃美科技有限公司 一种晶圆自动加工方法、装置及电子设备

Also Published As

Publication number Publication date
CN108777767A (zh) 2018-11-09

Similar Documents

Publication Publication Date Title
WO2020038109A1 (zh) 拍照方法、装置、终端及计算机可读存储介质
CN108933899B (zh) 全景拍摄方法、装置、终端及计算机可读存储介质
CN108495050B (zh) 拍照方法、装置、终端及计算机可读存储介质
JP5096017B2 (ja) 撮像装置
CN101465972B (zh) 在数字图像处理装置中使图像背景模糊的设备和方法
TWI549501B (zh) An imaging device, and a control method thereof
US8508652B2 (en) Autofocus method
CN107920211A (zh) 一种拍照方法、终端及计算机可读存储介质
JP2010177894A (ja) 撮像装置、画像管理装置及び画像管理方法、並びにコンピューター・プログラム
KR20060050871A (ko) 촬상장치 및 그 제어방법
JP2003344891A (ja) 撮影モード自動設定カメラ
JP5278564B2 (ja) 撮像装置
JP2003046844A (ja) 強調表示方法、カメラおよび焦点強調表示システム
GB2467391A (en) Self-timer photography
CN108200335A (zh) 基于双摄像头的拍照方法、终端及计算机可读存储介质
CN106412423A (zh) 一种对焦方法及装置
WO2019084756A1 (zh) 一种图像处理方法、装置及飞行器
WO2021218536A1 (zh) 一种高动态范围图像合成方法和电子设备
CN106060404A (zh) 一种拍摄模式选择方法及终端
JP2009290255A (ja) 撮像装置、および撮像装置制御方法、並びにコンピュータ・プログラム
WO2023071933A1 (zh) 相机拍摄参数调整方法、装置及电子设备
JP2005223658A (ja) デジタルカメラ
US20150254856A1 (en) Smart moving object capture methods, devices and digital imaging systems including the same
JP3510063B2 (ja) スチルビデオカメラの露光量制御装置
JP2008199461A (ja) 撮像装置

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19852881; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 19852881; Country of ref document: EP; Kind code of ref document: A1)