WO2020038109A1 - Photographing method and device, terminal, and computer-readable storage medium - Google Patents

Photographing method and device, terminal, and computer-readable storage medium

Info

Publication number
WO2020038109A1
WO2020038109A1 PCT/CN2019/093682 CN2019093682W WO2020038109A1 WO 2020038109 A1 WO2020038109 A1 WO 2020038109A1 CN 2019093682 W CN2019093682 W CN 2019093682W WO 2020038109 A1 WO2020038109 A1 WO 2020038109A1
Authority
WO
WIPO (PCT)
Prior art keywords
photographing
terminal
camera
frame image
preset
Prior art date
2018-08-22
Application number
PCT/CN2019/093682
Other languages
English (en)
Chinese (zh)
Inventor
张光辉
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2018-08-22
Filing date
2019-06-28
Publication date
2020-02-27
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020038109A1 publication Critical patent/WO2020038109A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681: Motion detection
    • H04N 23/682: Vibration or motion blur correction
    • H04N 23/80: Camera processing pipelines; Components thereof

Definitions

  • the present application belongs to the field of photographing technology, and particularly relates to a photographing method, device, terminal, and computer-readable storage medium.
  • A terminal such as a mobile phone generally needs to display a photographing preview first, select the scene to be photographed during the preview, focus on the photographic object in the scene, and then trigger a photographing instruction to generate a photograph.
  • the embodiments of the present application provide a photographing method, a device, a terminal, and a computer-readable storage medium, which can solve the technical problem that the terminal cannot quickly take clear photos.
  • A first aspect of the embodiments of the present application provides a photographing method, including: receiving a camera start instruction; detecting, according to the camera start instruction, whether the terminal is in a preset motion state; and, if it is detected that the terminal is in the preset motion state, controlling the camera to perform auto-focusing, acquiring a photographed frame image that is successfully focused, and outputting a prompt message indicating that photographing is completed.
  • A second aspect of the embodiments of the present application provides a photographing device, including:
  • a receiving unit configured to receive a camera start instruction;
  • a detection unit configured to detect, according to the camera start instruction, whether the terminal is in a preset motion state; and
  • a photographing unit configured to control the camera to perform auto-focusing if it is detected that the terminal is in the preset motion state, acquire a photographed frame image that is successfully focused, and output a prompt message indicating that photographing is completed.
  • A third aspect of the embodiments of the present application provides a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the following steps are implemented: receiving a camera start instruction; detecting, according to the camera start instruction, whether the terminal is in a preset motion state; and, if it is detected that the terminal is in the preset motion state, controlling the camera to perform auto-focusing, acquiring a photographed frame image that is successfully focused, and outputting a prompt message indicating that photographing is completed.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, and the computer program is executed by a processor to implement the steps of the foregoing method.
  • FIG. 1 is a schematic flowchart of an implementation of a photographing method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a specific implementation of controlling a camera to perform autofocus according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of acquiring position information of a target photographing object according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a specific implementation of obtaining a photographed frame image that is successfully focused according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of a photographing device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • The term "if" may be construed as "when", "once", "in response to determining", or "in response to detecting", depending on the context.
  • Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • In the related art, a terminal such as a mobile phone generally needs to display a photographing preview first, select the scene to be photographed during the preview, focus on the photographic object in the scene, and then trigger a photographing instruction to generate a photograph.
  • In some scenarios, the user wants to quickly capture the content of the courseware displayed on a screen without disturbing himself or others.
  • In this case, the user needs to tap the shutter button immediately after focusing and quickly retract the phone, but the photo may turn out blurred because the phone is moved before the shot has actually been taken, so a clear photo cannot be taken quickly.
  • In the embodiments of the present application, whether the terminal is in a preset motion state is detected according to the received camera start instruction. When it is detected that the terminal is in the preset motion state, the camera is controlled to perform autofocus, a photographing frame image that is successfully focused is acquired to complete the photograph, and a prompt message indicating that photographing is completed is output.
  • That is to say, the application does not wait for the user to trigger a photographing instruction; instead, as soon as focusing is completed, the successfully focused photographing frame image is acquired immediately to complete the photograph, and the prompt is output as soon as the photograph is completed.
  • This lets the user know in time that the terminal has finished taking the photo, which effectively prevents the user from changing the motion state of the terminal before the photo has been taken and thereby blurring it, enables the terminal to quickly take clear photos, and improves photographing efficiency.
  • FIG. 1 shows a schematic flowchart of a photographing method according to an embodiment of the present application. The method is applied to a terminal, can be executed by a photographing device configured on the terminal, and is applicable to situations where photographing efficiency needs to be improved. It includes steps 101 to 103.
  • The terminal includes a terminal device equipped with a photographing device, such as a smartphone, a tablet computer, or a learning machine.
  • the terminal device may be installed with applications such as a photographing application, a browser, and WeChat.
  • In step 101, a camera start instruction is received.
  • The camera start instruction includes a camera start instruction triggered by the user tapping a photographing application icon on the system desktop, a camera start instruction triggered by the user pressing a physical button, a camera start instruction triggered by the user through voice, or a camera start instruction triggered in another manner.
  • In step 102, it is detected whether the terminal is in a preset motion state according to the camera start instruction.
  • the preset motion state refers to a state where the terminal is located at a shooting position and is relatively stationary.
  • Detecting whether the terminal is in a preset motion state according to the camera start instruction includes: detecting whether the displacement of the terminal in the three directions of the X axis, Y axis, and Z axis is less than a first preset threshold; if the displacement of the terminal in all three directions is less than the first preset threshold, it is confirmed that the terminal is in the preset motion state.
  • Specifically, the gyroscope or accelerometer on the terminal is used to detect the displacement of the terminal along the X, Y, and Z axes. If the displacement along the X, Y, or Z axis is greater than or equal to the first preset threshold, the terminal has not yet been moved to the desired shooting position, that is, the user is still moving the terminal to find the best shooting position; if the displacement along all three axes is less than the first preset threshold, the terminal is already at the shooting position and can enter the focusing state before shooting.
  • The first preset threshold may be set according to practical experience, for example, to 1 mm to 3 mm; a minimal displacement check along these lines is sketched below.
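A minimal sketch of the displacement check follows. It is only an illustration of the idea above, not the terminal's implementation: the sampling interval, the way displacement is integrated from a single acceleration sample, and the 2 mm threshold are all assumptions.

```python
import numpy as np

FIRST_PRESET_THRESHOLD_MM = 2.0   # assumed value within the 1-3 mm range above
SAMPLE_INTERVAL_S = 0.05          # assumed accelerometer sampling interval


def displacement_since_last_sample(accel_mps2, interval_s=SAMPLE_INTERVAL_S):
    """Rough per-axis displacement (mm) integrated from one acceleration sample.

    accel_mps2: (ax, ay, az) in m/s^2 with gravity already removed.
    """
    # s = 1/2 * a * t^2, converted from metres to millimetres
    return 0.5 * np.abs(np.asarray(accel_mps2)) * interval_s ** 2 * 1000.0


def is_in_preset_motion_state(accel_mps2):
    """The terminal is 'relatively stationary' if the displacement on X, Y and Z
    is below the first preset threshold."""
    disp = displacement_since_last_sample(accel_mps2)
    return bool(np.all(disp < FIRST_PRESET_THRESHOLD_MM))


# Example: an almost-still terminal passes the check, a moving one does not.
print(is_in_preset_motion_state((0.01, 0.02, 0.01)))  # True
print(is_in_preset_motion_state((3.0, 0.5, 0.2)))     # False -> still being moved
```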
  • In step 103, if it is detected that the terminal is in the preset motion state, the camera is controlled to perform autofocus, a photographed frame image that is successfully focused is acquired, and a prompt message indicating that photographing is completed is output.
  • the above-mentioned photographing frame image refers to a frame image generated by the camera by collecting an external light signal according to a photographing instruction, and the photographing frame image is used to generate a final photo.
  • When it is detected that the terminal is in the preset motion state, the camera is immediately controlled to perform autofocus and the successfully focused photographing frame image is acquired.
  • At the same time, a prompt message is output to remind the user that the photo has been taken, so that the user can finish photographing quickly and obtain a clear photo. This effectively prevents the user from changing the motion state of the terminal before the photo has been taken, which would blur the shot, enables the terminal to quickly take clear photos, and improves photographing efficiency. The overall flow of steps 101 to 103 is sketched below.
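For orientation, the flow of steps 101 to 103 can be outlined as follows. This is a sketch only: read_acceleration, autofocus_and_get_frame, save_photo, and output_prompt are hypothetical callables standing in for the terminal's own facilities, not an actual API, and is_in_preset_motion_state is reused from the displacement sketch above.

```python
def quick_photograph(read_acceleration, autofocus_and_get_frame, save_photo, output_prompt):
    """Outline of steps 101-103, assuming the camera start instruction (step 101)
    has already been received and the four callables are supplied by the terminal."""
    # Step 102: keep checking the displacement until the terminal is held still.
    while not is_in_preset_motion_state(read_acceleration()):
        pass
    # Step 103: autofocus, grab the successfully focused frame, and prompt the user.
    frame = autofocus_and_get_frame()
    save_photo(frame)
    output_prompt("photographing completed")
```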
  • As shown in FIG. 2, controlling the camera to perform autofocus in step 103 includes steps 201 to 202.
  • Step 201 Identify a target shooting object included in the current preview frame image, and obtain position information of the target shooting object.
  • the preview frame image refers to a frame image generated by the camera collecting an external light signal when the photographing application is in a preview state.
  • the data output by the camera every time it collects external light signals is called frame data.
  • the terminal obtains the preview frame image by acquiring the frame data collected by the camera and displaying it.
  • the frame data is collected at a frequency of 30 frames per second, and is generally divided into a preview frame and a photographing frame, which are used for previewing and photographing respectively.
  • a preview frame image is acquired in real time, and a target photographic object included in the preview frame image is detected, so as to obtain position information of the target photographic object.
  • the above-mentioned target photographing object refers to the object currently being photographed.
  • For example, the target photographing object may be a person or a building.
  • the above-mentioned target photographing object may be a photographic object occupying the largest area in the preview frame image.
  • Detecting the target photographic object included in the preview frame image includes performing target detection on the preview frame image, achieving pixel-level classification of foreground and background, removing the background, and retaining one or more target objects, that is, one or more of the above target photographic objects.
  • For example, target photographic objects may be detected through a local binary pattern (LBP) algorithm, histogram-of-oriented-gradients features combined with a support vector machine (SVM) model, or a convolutional neural network model.
  • the convolutional neural network model can achieve more accurate and rapid detection of target photographic objects. Therefore, a trained convolutional neural network model can be selected to detect the target photographic objects in the preview frame image.
  • Before the trained convolutional neural network model is used to detect the target photographic object in the preview frame image, a trained convolutional neural network model needs to be obtained first.
  • The trained convolutional neural network model is trained according to each sample image and the detection result corresponding to each sample image, where the detection result corresponding to each sample image is used to indicate all target photographic objects included in that sample image.
  • The training step of the convolutional neural network model may include: obtaining a sample image and the detection result corresponding to the sample image; using the convolutional neural network model to detect the sample image, and adjusting the parameters of the convolutional neural network model according to the detection result until the adjusted model can detect all the target photographic objects in the sample image, or until the detection accuracy for the target photographic objects in the sample image is greater than a preset value; the adjusted convolutional neural network model is then used as the trained convolutional neural network model.
  • The parameters of the convolutional neural network model may include the weights, biases, and regression-function coefficients of each convolutional layer, and may also include the learning rate, the number of iterations, and the number of neurons in each layer.
  • The above methods for detecting the target photographic object are given only as examples and are not intended to limit the protection scope of the present application; other methods for detecting the target photographic object are also applicable and are not listed here one by one. A minimal sketch of one such detector follows.
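As an illustration of this kind of detector, the sketch below runs a pretrained detection network on a preview frame and keeps the confident detection with the largest area as the target, following the largest-area heuristic mentioned earlier. The use of torchvision's Faster R-CNN and the 0.5 score threshold are assumptions; the patent does not specify a particular model.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained generic object detector used purely for illustration.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()


def detect_target_object(preview_rgb, score_threshold=0.5):
    """Return the bounding box (x1, y1, x2, y2) of the largest confident detection.

    preview_rgb: float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        detections = model([preview_rgb])[0]
    best_box, best_area = None, 0.0
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.tolist()
        area = (x2 - x1) * (y2 - y1)
        if area > best_area:
            best_box, best_area = (x1, y1, x2, y2), area
    return best_box  # None if nothing confident was found


# Example with a random frame; a real preview frame image would be used instead.
print(detect_target_object(torch.rand(3, 480, 640)))
```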
  • Step 202 Determine a photometric area and a focal length of the camera according to the position information.
  • After the target photographic object is identified, its position information can be determined.
  • The position information may be the position of the target photographic object in the current preview frame image; by obtaining the internal and external parameters of the camera, this image position may be mapped to the position of the target photographic object relative to the terminal in the real environment, as in the sketch below.
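A minimal pinhole-camera sketch of that mapping follows, assuming the camera intrinsic matrix and the depth of the target (for example from the focusing distance) are known. The patent does not spell out this computation, so the function and values below are illustrative only.

```python
import numpy as np


def pixel_to_camera_coords(pixel_xy, depth_m, K):
    """Map an image position to a 3D position in the camera (terminal) frame.

    pixel_xy: (u, v) pixel coordinates of the target in the preview frame.
    depth_m:  distance to the target along the optical axis (assumed known).
    K:        3x3 intrinsic matrix of the camera.
    """
    u, v = pixel_xy
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized viewing ray
    return ray * depth_m                             # (X, Y, Z) in metres


# Example with assumed intrinsics for a 640x480 preview frame.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_camera_coords((400, 260), depth_m=1.5, K=K))  # ~[0.24, 0.06, 1.5]
```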
  • The selection of the metering area is one of the important bases for accurately selecting the shutter speed and aperture value.
  • The metering system of the camera generally works by measuring the brightness of the light reflected from the subject, which is also called reflected-light metering.
  • The camera generally assumes a reflectance of 18% in the photometric area, performs metering based on this ratio, and then determines the aperture and shutter values.
  • The value of 18% is based on the reflective behaviour of midtones (gray tones) in natural scenes. If white tones dominate the viewfinder, the reflected light will exceed 18%; a completely white scene can reflect about 90% of the incident light, while for a black scene the reflectance may be only a few percent.
  • The standard gray card is an 8 × 10-inch card. When the gray card is placed under the same light source as the subject, the overall reflectance of the metering area matches the 18% standard, and a photo taken with the metered aperture and shutter values will be accurately exposed.
  • If the overall reflectance of the metering area is greater than 18%, for example because the background of the metering area is dominated by white, the photo will be underexposed: the white background will look gray, and a sheet of white paper will appear gray.
  • Therefore, when shooting a scene with a reflectance greater than 18%, the exposure compensation value (EV) of the camera needs to be increased. Conversely, when shooting a scene with a reflectance lower than 18%, such as a black background, the photo will often be overexposed and the black background will turn gray, so the EV needs to be reduced. This relationship is illustrated numerically in the sketch below.
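The compensation rule just described can be expressed numerically: the EV offset that renders a scene at its true brightness, instead of pulling it to mid-gray, is the base-2 logarithm of the ratio between the scene's average reflectance and 18%. The sketch below is a simplified illustration under the assumption that the metered average reflectance is known; it is not the camera's actual metering algorithm.

```python
import math

MID_GRAY_REFLECTANCE = 0.18


def exposure_compensation_ev(scene_reflectance):
    """Stops of exposure compensation needed so that a scene of the given average
    reflectance is rendered at its true brightness instead of at mid-gray.

    Positive result -> increase EV (bright, white-dominated scene);
    negative result -> decrease EV (dark, black-dominated scene)."""
    return math.log2(scene_reflectance / MID_GRAY_REFLECTANCE)


print(round(exposure_compensation_ev(0.90), 2))  # white scene: about +2.32 EV
print(round(exposure_compensation_ev(0.18), 2))  # mid-gray scene: 0.0 EV
print(round(exposure_compensation_ev(0.05), 2))  # dark scene: about -1.85 EV
```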
  • Current metering methods mainly include center-weighted average metering, center-partial metering, spot metering, multi-spot metering, and evaluative metering.
  • Center-weighted average metering is the most commonly used metering mode.
  • Here, the selection of the photometric area is described by taking center-weighted average metering as an example.
  • Center-weighted average metering is based on the observation that photographers usually place the subject, that is, the target that needs accurate exposure, in the middle of the viewfinder, so this part of the frame is the most important; the sensor elements responsible for metering therefore weight the camera's overall metering accordingly.
  • The metering data from the central part of the frame accounts for most of the weight, while the metering data outside the center of the frame assists with a smaller weight.
  • The camera processor then takes a weighted average of the two regions to obtain the final metering value. For example, the metering of the central part may account for 75% of the total weight, while the metering data of the non-central parts extending toward the edges accounts for the remaining 25%.
  • Therefore, the position of the target photographic object needs to be determined first, and the photometric area is then selected, for example by using the position of the target object as the central part of the photometric area, as in the sketch below.
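A minimal sketch of such center-weighted metering follows, assuming a grayscale preview frame and using the target object's bounding box as the central region. The 75%/25% weights come from the example above; everything else is illustrative.

```python
import numpy as np


def center_weighted_metering(gray_frame, center_box, center_weight=0.75):
    """Weighted-average luminance of a frame: the region around the target object
    (center_box = (x1, y1, x2, y2)) contributes `center_weight` of the result and
    the rest of the frame contributes the remainder (75% / 25% in the example above)."""
    x1, y1, x2, y2 = center_box
    mask = np.zeros(gray_frame.shape, dtype=bool)
    mask[y1:y2, x1:x2] = True
    center_mean = gray_frame[mask].mean()
    edge_mean = gray_frame[~mask].mean()
    return center_weight * center_mean + (1.0 - center_weight) * edge_mean


# Example: a bright target on a darker background.
frame = np.full((480, 640), 60.0)
frame[200:280, 280:360] = 200.0
print(center_weighted_metering(frame, (280, 200, 360, 280)))  # 0.75*200 + 0.25*60 = 165
```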
  • The camera focal length is generally determined by the camera emitting a group of infrared rays or other rays, measuring the distance of the subject from the reflected rays, and then adjusting the lens combination according to the measured distance to achieve autofocus. Therefore, after the position of the target object is determined, the focal length for the photographed frame image also needs to be obtained.
  • In addition to obtaining the position information of the target photographic object by identifying the target photographic object contained in the current preview frame image, a selection instruction triggered by the user on the photo preview interface for a target photographic object included in the preview frame image may also be received, and the position information of the target photographic object corresponding to the selection instruction is then obtained.
  • As shown in FIG. 3, the user selects the target photographic object 32 included in the preview frame image 31 on the photo preview interface, and the position information of the target photographic object corresponding to the selection instruction is acquired. That is, the user directly selects the target photographic object to be focused on instead of it being obtained by identifying the current preview frame image, which makes the selection of the target photographic object more accurate.
  • As shown in FIG. 4, obtaining the photographed frame image with successful focusing includes steps 401 to 402.
  • Step 401: Calculate the difference between the pixel value of a feature point of the current preview frame image and the pixel value of the feature point at the corresponding position of the previous preview frame image.
  • A feature point of the current preview frame image refers to a pixel at a preset position in the current preview frame image, for example a pixel located at the center of the current preview frame image, or a pixel on the contour edge of the target photographed object in the current preview frame image.
  • Step 402: If the difference is smaller than a second preset threshold, the current preview frame image is used as the photographed frame image with successful focusing.
  • When the difference between the pixel value of the feature point of the current preview frame image and the pixel value of the feature point at the corresponding position of the previous preview frame image is less than the second preset threshold, it indicates that focusing on the photographed object has been completed, and the current preview frame image is used as the photographing frame image; a minimal version of this check is sketched below.
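A minimal version of steps 401 and 402 follows. Averaging the absolute differences over several feature points and the threshold value of 3 are assumptions; the patent only requires comparing the difference against a second preset threshold.

```python
import numpy as np

SECOND_PRESET_THRESHOLD = 3.0   # assumed pixel-value difference threshold


def focus_converged(curr_frame, prev_frame, feature_points, threshold=SECOND_PRESET_THRESHOLD):
    """Steps 401-402: compare the pixel values at the same feature points in the
    current and previous preview frames; focusing is considered complete once the
    mean absolute difference drops below the second preset threshold."""
    diffs = [abs(float(curr_frame[y, x]) - float(prev_frame[y, x]))
             for (x, y) in feature_points]
    return float(np.mean(diffs)) < threshold


# Example: centre pixel plus a few assumed contour points of the target object.
prev = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(float)
curr = prev + np.random.default_rng(1).normal(0.0, 1.0, size=prev.shape)
points = [(320, 240), (300, 200), (340, 280)]
print(focus_converged(curr, prev, points))   # True once the preview stops changing
```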
  • Optionally, before detecting whether the terminal is in the preset motion state according to the camera start instruction, the method further includes detecting whether the photographing mode of the camera is a preset photographing mode. If it is detected that the photographing mode of the camera is not the preset photographing mode, the photographing frame image is acquired only after a photographing instruction is received.
  • the foregoing preset photographing mode refers to a mode for instructing the terminal to perform quick photographing.
  • Detecting whether the photographing mode of the camera is the preset photographing mode includes: reading the camera shooting mode parameter stored in a register, and determining whether the photographing mode of the camera is the preset photographing mode according to the camera shooting mode parameter.
  • Specifically, after the camera shooting mode parameter stored in the register is read, it is determined whether the parameter is the preset parameter. If the camera shooting mode parameter is the preset parameter, the shooting mode of the camera is determined to be the preset photographing mode; otherwise, the shooting mode of the camera is determined not to be the preset photographing mode. A small sketch of this check follows.
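A small sketch of the register check follows; the register name, the callable used to read it, and the preset parameter value 0x01 are all hypothetical.

```python
PRESET_QUICK_PHOTO_MODE = 0x01   # assumed value of the preset photographing-mode parameter


def is_preset_photographing_mode(read_register):
    """Read the camera shooting-mode parameter from a register-like source and
    compare it with the preset parameter (names and values here are assumptions)."""
    return read_register("camera_shooting_mode") == PRESET_QUICK_PHOTO_MODE


# Example with a dictionary standing in for the register storage.
register = {"camera_shooting_mode": 0x01}
print(is_preset_photographing_mode(register.get))  # True -> quick-photo flow is used
```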
  • the output of the prompt information for completing the photographing includes: playing an animation for prompting the completion of photographing; or changing the background color displayed on the current display interface of the terminal.
  • Otherwise, the user does not know whether the terminal has successfully taken the picture and may change the motion state of the terminal before the picture has been taken, resulting in a blurred photograph.
  • Changing the background color displayed on the current display interface of the terminal may mean changing the terminal's current color photo preview interface to black and white, or overlaying the photo preview interface with a mask of another color.
  • The prompt information for completing the photographing may also include displaying the acquired successfully focused photographing frame image in a small picture (thumbnail) control of the photo preview interface, so as to remind the user that the successfully focused photographing frame image has been acquired. The small picture control is the control the user triggers to view the terminal's album, that is, it is used to display the terminal's album when triggered.
  • FIG. 5 shows a schematic structural diagram of a photographing apparatus 500 according to an embodiment of the present application, including a receiving unit 501, a detection unit 502, and a photographing unit 503.
  • the receiving unit 501 is configured to receive a camera startup instruction.
  • the detecting unit 502 is configured to detect whether the terminal is in a preset motion state according to the camera startup instruction.
  • the photographing unit 503 is configured to control the camera to perform auto-focusing if it is detected that the terminal is in a preset motion state, obtain a photographed frame image that is successfully focused, and output a prompt message to complete photographing.
  • the detection unit is specifically configured to detect whether the displacement of the terminal in the three directions of the X-axis, Y-axis, and Z-axis is less than a first preset threshold. If it is detected that the terminal is in the X-axis, If the displacements in the three directions of the Y axis and the Z axis are smaller than the first preset threshold, it is confirmed that the terminal is in a preset motion state.
  • the photographing unit is specifically configured to identify a target photographic object included in the current preview frame image, obtain position information of the target photographic object, and determine the photometric area and focal length of the camera according to the position information.
  • the photographing unit is further specifically configured to receive a selection instruction of the target photographic object included in the preview frame image triggered by the user on the photographic preview interface, and obtain position information of the target photographic object corresponding to the selection instruction.
  • the photographing unit is further specifically configured to calculate a difference between a pixel value of a feature point of the current preview frame image and a pixel value of a feature point at a corresponding position of the previous preview frame image; and, if the difference is less than the second preset threshold, take the current preview frame image as the photographed frame image with successful focusing.
  • the detection unit is further specifically configured to detect whether the camera's photographing mode is a preset photographing mode according to the camera activation instruction before detecting whether the terminal is in a preset motion state according to the camera activation instruction, If it is detected that the photographing mode of the camera does not belong to the preset photographing mode, after receiving a photographing instruction, acquiring a photographing frame image.
  • the detection unit is further specifically configured to read a camera shooting mode parameter stored in a register, and determine whether the camera shooting mode is a preset shooting mode according to the camera shooting mode parameter.
  • the above-mentioned photographing unit is further specifically configured to play an animation for prompting completion of photographing after acquiring a photographed frame image of successful focusing; or change the background color displayed on the current display interface of the terminal.
  • the above-mentioned photographing unit is further specifically configured to display the acquired successfully focused photographing frame image in a small picture control of the photo preview interface after acquiring the successfully focused photographing frame image; the small picture control is used to display the terminal's photo album when triggered.
  • the terminal may be a mobile terminal.
  • the mobile terminal may be a terminal such as a smart phone, a tablet computer, a personal computer (PC), or a learning machine.
  • As shown in FIG. 6, the terminal includes a processor 61, a memory 62, one or more input devices 63 (only one is shown in FIG. 6), one or more output devices 64 (only one is shown in FIG. 6), and a camera 65.
  • the processor 61, the memory 62, the input device 63, the output device 64, and the camera 65 are connected through a bus 66.
  • the camera is used for generating a preview frame image and a photographing frame image according to the collected external light signals.
  • The processor 61 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the input device 63 may include a virtual keyboard, a touchpad, a fingerprint sensor (for collecting fingerprint information and orientation information of a user), a microphone, and the like, and the output device 64 may include a display, a speaker, and the like.
  • the memory 62 may include a read-only memory and a random access memory, and provide instructions and data to the processor 61. A part or all of the memory 62 may further include a non-volatile random access memory. For example, the memory 62 may also store information of a device type.
  • the memory 62 stores a computer program that can be run on the processor 61.
  • the computer program is a program of a photographing method.
  • When the processor 61 executes the computer program, the steps in the embodiments of the photographing method are implemented, for example, steps 101 to 103 shown in FIG. 1.
  • Alternatively, when the processor 61 executes the computer program, the functions of the modules/units in the foregoing device embodiments are implemented, for example, the functions of units 501 to 503 shown in FIG. 5.
  • the computer program may be divided into one or more modules / units.
  • the one or more modules / units are stored in the memory 62 and executed by the processor 61 to complete the present application.
  • the one or more modules / units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal for taking pictures.
  • the above computer program can be divided into a receiving unit, a detecting unit, and a photographing unit.
  • The specific functions of each unit are as follows: the receiving unit is configured to receive a camera start instruction; the detection unit is configured to detect whether the terminal is in a preset motion state according to the camera start instruction; and the photographing unit is configured to control the camera to perform autofocus if it is detected that the terminal is in the preset motion state, acquire a photographed frame image that is successfully focused, and output a prompt message indicating that photographing is completed.
  • the disclosed devices / terminals and methods may be implemented in other ways.
  • the device / terminal embodiments described above are only schematic.
  • the division of the above modules or units is only a logical function division.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, which may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • When the above integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may also be completed by a computer program instructing related hardware.
  • the above computer program can be stored in a computer-readable storage medium.
  • the computer program When executed by a processor, the steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, and a software distribution medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present application belongs to the technical field of photography, and in particular relates to a photographing method and device, a terminal, and a computer-readable storage medium. The method comprises the steps of: receiving a camera start instruction; detecting, according to the camera start instruction, whether a terminal is in a preset motion state; and, if it is detected that the terminal is in the preset motion state, controlling a camera to perform autofocus, acquiring a successfully focused photographing frame image, and outputting prompt information indicating that photographing is completed. The present application eliminates, in the photographing process, the need for the user to trigger a photographing instruction in order to take the photograph, and immediately acquires, upon completion of focusing, a successfully focused frame image to complete the photograph. In addition, when the terminal completes the photograph, prompt information indicating that photographing is completed is also output, so that the user is promptly informed that the terminal has completed the photograph, which effectively prevents a blurred photograph caused by the user changing the motion state of the terminal before the terminal has completed the photograph, and improves photographing efficiency.
PCT/CN2019/093682 2018-08-22 2019-06-28 Procédé et dispositif de photographie, terminal et support de stockage lisible par ordinateur WO2020038109A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810966306.0 2018-08-22
CN201810966306.0A CN108777767A (zh) 2018-08-22 2018-08-22 拍照方法、装置、终端及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020038109A1 true WO2020038109A1 (fr) 2020-02-27

Family

ID=64028866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093682 WO2020038109A1 (fr) 2018-08-22 2019-06-28 Procédé et dispositif de photographie, terminal et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN108777767A (fr)
WO (1) WO2020038109A1 (fr)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132227A (zh) * 2020-09-30 2020-12-25 石家庄铁道大学 桥梁列车荷载作用时程提取方法、装置及终端设备
CN112184722A (zh) * 2020-09-15 2021-01-05 上海传英信息技术有限公司 图像处理方法、终端及计算机存储介质
CN113065410A (zh) * 2021-03-10 2021-07-02 广州云从鼎望科技有限公司 集装箱掏箱流程智能化控制方法、***、介质及装置
CN113497887A (zh) * 2020-04-03 2021-10-12 中兴通讯股份有限公司 拍摄方法、电子设备及存储介质
CN113562401A (zh) * 2021-07-23 2021-10-29 杭州海康机器人技术有限公司 控制目标对象传送方法、装置、***、终端和存储介质
CN113567452A (zh) * 2021-07-27 2021-10-29 北京深点视觉科技有限公司 一种毛刺检测方法、装置、设备及存储介质
CN113766119A (zh) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 虚拟形象显示方法、装置、终端及存储介质
CN113852646A (zh) * 2020-06-10 2021-12-28 漳州立达信光电子科技有限公司 一种智能设备的控制方法、装置、电子设备及***
CN113873161A (zh) * 2021-10-11 2021-12-31 维沃移动通信有限公司 拍摄方法、装置及电子设备
CN114264653A (zh) * 2021-12-15 2022-04-01 知辛电子科技(苏州)有限公司 一种用于小间距光电控制板的拍照检测方法
CN114286004A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 对焦方法、拍摄装置、电子设备及介质
CN114371696A (zh) * 2021-12-06 2022-04-19 深圳市普渡科技有限公司 移动设备、控制方法、机器人及存储介质
CN114500979A (zh) * 2020-11-12 2022-05-13 海信视像科技股份有限公司 显示设备、控制设备以及同步校准方法
CN114897762A (zh) * 2022-02-18 2022-08-12 众信方智(苏州)智能技术有限公司 一种煤矿工作面采煤机自动定位方法及装置
CN115278079A (zh) * 2022-07-27 2022-11-01 维沃移动通信有限公司 拍摄方法及其装置
CN116347212A (zh) * 2022-08-05 2023-06-27 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116631908A (zh) * 2023-05-16 2023-08-22 台州勃美科技有限公司 一种晶圆自动加工方法、装置及电子设备

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质
CN109451240B (zh) * 2018-12-04 2021-01-26 百度在线网络技术(北京)有限公司 对焦方法、装置、计算机设备和可读存储介质
CN110717452B (zh) * 2019-10-09 2022-04-19 Oppo广东移动通信有限公司 图像识别方法、装置、终端及计算机可读存储介质
CN111511002B (zh) * 2020-04-23 2023-12-05 Oppo广东移动通信有限公司 检测帧率的调节方法和装置、终端和可读存储介质
CN113170053A (zh) * 2020-07-24 2021-07-23 深圳市大疆创新科技有限公司 拍摄方法、拍摄装置及存储介质
CN112422823B (zh) * 2020-11-09 2022-08-09 广汽本田汽车有限公司 一种自动触发视觉拍照方法及装置
CN112738403B (zh) * 2020-12-30 2023-12-05 维沃移动通信(杭州)有限公司 拍摄方法、拍摄装置、电子设备和介质
CN113507549B (zh) * 2021-05-28 2022-10-14 西安闻泰信息技术有限公司 一种摄像头、拍照方法、终端及存储介质
CN114143456B (zh) * 2021-11-26 2023-10-20 青岛海信移动通信技术有限公司 拍照方法及装置
CN114241014A (zh) * 2021-12-03 2022-03-25 上海锡鼎智能科技有限公司 一种用于实验考头戴式***头智能抓拍方法
CN114339035A (zh) * 2021-12-20 2022-04-12 青岛海尔科技有限公司 图像获取的方法和装置、存储介质及电子装置
CN115118871B (zh) * 2022-02-11 2023-12-15 东莞市步步高教育软件有限公司 一种拍照像素模式切换方法、***、终端设备及存储介质
CN116723382B (zh) * 2022-02-28 2024-05-03 荣耀终端有限公司 一种拍摄方法及相关设备
CN115146805A (zh) * 2022-05-19 2022-10-04 新瑞鹏宠物医疗集团有限公司 基于宠物鼻纹的宠物游乐园入园的方法以及相关装置
CN117579938A (zh) * 2022-06-29 2024-02-20 荣耀终端有限公司 一种拍照方法和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883104A (zh) * 2011-09-02 2013-01-16 微软公司 自动图像捕捉
JP2013015806A (ja) * 2011-06-09 2013-01-24 Nikon Corp 焦点検出装置および撮像装置
CN103856709A (zh) * 2012-12-04 2014-06-11 腾讯科技(深圳)有限公司 图像获取方法及装置
CN104469001A (zh) * 2014-12-02 2015-03-25 王国忠 一种具有拍照防抖功能的手机及其在拍照中的防抖方法
CN105072331A (zh) * 2015-07-20 2015-11-18 魅族科技(中国)有限公司 一种拍照方法及终端
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4859625B2 (ja) * 2006-10-27 2012-01-25 Hoya株式会社 手ぶれ補正装置を備えたカメラ
CN103167141A (zh) * 2012-09-14 2013-06-19 深圳市金立通信设备有限公司 一种手机相机连续对焦***及方法
CN102970484B (zh) * 2012-11-27 2016-02-24 惠州Tcl移动通信有限公司 一种拍照时声音提示的方法及基于该方法的电子设备
CN104580884B (zh) * 2014-12-03 2018-03-27 广东欧珀移动通信有限公司 一种拍摄方法及终端
CN104994297B (zh) * 2015-07-09 2019-01-08 厦门美图之家科技有限公司 全景拍照的对焦测光锁定方法和***
CN107087102B (zh) * 2017-03-13 2020-07-24 联想(北京)有限公司 对焦信息处理方法及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013015806A (ja) * 2011-06-09 2013-01-24 Nikon Corp 焦点検出装置および撮像装置
CN102883104A (zh) * 2011-09-02 2013-01-16 微软公司 自动图像捕捉
CN103856709A (zh) * 2012-12-04 2014-06-11 腾讯科技(深圳)有限公司 图像获取方法及装置
CN104469001A (zh) * 2014-12-02 2015-03-25 王国忠 一种具有拍照防抖功能的手机及其在拍照中的防抖方法
CN105072331A (zh) * 2015-07-20 2015-11-18 魅族科技(中国)有限公司 一种拍照方法及终端
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113497887A (zh) * 2020-04-03 2021-10-12 中兴通讯股份有限公司 拍摄方法、电子设备及存储介质
CN113852646A (zh) * 2020-06-10 2021-12-28 漳州立达信光电子科技有限公司 一种智能设备的控制方法、装置、电子设备及***
CN112184722A (zh) * 2020-09-15 2021-01-05 上海传英信息技术有限公司 图像处理方法、终端及计算机存储介质
CN112184722B (zh) * 2020-09-15 2024-05-03 上海传英信息技术有限公司 图像处理方法、终端及计算机存储介质
CN112132227A (zh) * 2020-09-30 2020-12-25 石家庄铁道大学 桥梁列车荷载作用时程提取方法、装置及终端设备
CN112132227B (zh) * 2020-09-30 2024-04-05 石家庄铁道大学 桥梁列车荷载作用时程提取方法、装置及终端设备
CN114500979A (zh) * 2020-11-12 2022-05-13 海信视像科技股份有限公司 显示设备、控制设备以及同步校准方法
CN114500979B (zh) * 2020-11-12 2023-09-19 海信视像科技股份有限公司 显示设备、控制设备以及同步校准方法
CN113065410A (zh) * 2021-03-10 2021-07-02 广州云从鼎望科技有限公司 集装箱掏箱流程智能化控制方法、***、介质及装置
CN113065410B (zh) * 2021-03-10 2024-06-04 广州云从鼎望科技有限公司 集装箱掏箱流程智能化控制方法、***、介质及装置
CN113766119B (zh) * 2021-05-11 2023-12-05 腾讯科技(深圳)有限公司 虚拟形象显示方法、装置、终端及存储介质
CN113766119A (zh) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 虚拟形象显示方法、装置、终端及存储介质
CN113562401A (zh) * 2021-07-23 2021-10-29 杭州海康机器人技术有限公司 控制目标对象传送方法、装置、***、终端和存储介质
CN113562401B (zh) * 2021-07-23 2023-07-18 杭州海康机器人股份有限公司 控制目标对象传送方法、装置、***、终端和存储介质
CN113567452B (zh) * 2021-07-27 2024-03-15 北京深点视觉科技有限公司 一种毛刺检测方法、装置、设备及存储介质
CN113567452A (zh) * 2021-07-27 2021-10-29 北京深点视觉科技有限公司 一种毛刺检测方法、装置、设备及存储介质
CN113873161A (zh) * 2021-10-11 2021-12-31 维沃移动通信有限公司 拍摄方法、装置及电子设备
CN114371696B (zh) * 2021-12-06 2024-02-27 深圳市普渡科技有限公司 移动设备、控制方法、机器人及存储介质
CN114371696A (zh) * 2021-12-06 2022-04-19 深圳市普渡科技有限公司 移动设备、控制方法、机器人及存储介质
CN114264653B (zh) * 2021-12-15 2024-04-30 知辛电子科技(苏州)有限公司 一种用于小间距光电控制板的拍照检测方法
CN114264653A (zh) * 2021-12-15 2022-04-01 知辛电子科技(苏州)有限公司 一种用于小间距光电控制板的拍照检测方法
CN114286004A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 对焦方法、拍摄装置、电子设备及介质
CN114897762B (zh) * 2022-02-18 2023-04-07 众信方智(苏州)智能技术有限公司 一种煤矿工作面采煤机自动定位方法及装置
CN114897762A (zh) * 2022-02-18 2022-08-12 众信方智(苏州)智能技术有限公司 一种煤矿工作面采煤机自动定位方法及装置
CN115278079A (zh) * 2022-07-27 2022-11-01 维沃移动通信有限公司 拍摄方法及其装置
CN116347212A (zh) * 2022-08-05 2023-06-27 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116347212B (zh) * 2022-08-05 2024-03-08 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116631908A (zh) * 2023-05-16 2023-08-22 台州勃美科技有限公司 一种晶圆自动加工方法、装置及电子设备
CN116631908B (zh) * 2023-05-16 2024-04-26 台州勃美科技有限公司 一种晶圆自动加工方法、装置及电子设备

Also Published As

Publication number Publication date
CN108777767A (zh) 2018-11-09

Similar Documents

Publication Publication Date Title
WO2020038109A1 (fr) Procédé et dispositif de photographie, terminal et support de stockage lisible par ordinateur
CN108933899B (zh) 全景拍摄方法、装置、终端及计算机可读存储介质
CN108495050B (zh) 拍照方法、装置、终端及计算机可读存储介质
JP5096017B2 (ja) 撮像装置
CN101465972B (zh) 在数字图像处理装置中使图像背景模糊的设备和方法
TWI549501B (zh) An imaging device, and a control method thereof
US8508652B2 (en) Autofocus method
CN107920211A (zh) 一种拍照方法、终端及计算机可读存储介质
JP2010177894A (ja) 撮像装置、画像管理装置及び画像管理方法、並びにコンピューター・プログラム
KR20060050871A (ko) 촬상장치 및 그 제어방법
JP2003344891A (ja) 撮影モード自動設定カメラ
JP5278564B2 (ja) 撮像装置
JP2003046844A (ja) 強調表示方法、カメラおよび焦点強調表示システム
GB2467391A (en) Self-timer photography
CN108200335A (zh) 基于双摄像头的拍照方法、终端及计算机可读存储介质
CN106412423A (zh) 一种对焦方法及装置
WO2019084756A1 (fr) Procédé et dispositif de traitement d'images, et véhicule aérien
WO2021218536A1 (fr) Procédé de synthèse d'image de plage dynamique élevée et dispositif électronique
CN106060404A (zh) 一种拍摄模式选择方法及终端
JP2009290255A (ja) 撮像装置、および撮像装置制御方法、並びにコンピュータ・プログラム
WO2023071933A1 (fr) Procédé et appareil de réglage de paramètre de photographie de caméra et dispositif électronique
JP2005223658A (ja) デジタルカメラ
US20150254856A1 (en) Smart moving object capture methods, devices and digital imaging systems including the same
JP3510063B2 (ja) スチルビデオカメラの露光量制御装置
JP2008199461A (ja) 撮像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19852881

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19852881

Country of ref document: EP

Kind code of ref document: A1