CN107888829B - Focusing method of mobile terminal, mobile terminal and storage medium - Google Patents


Info

Publication number
CN107888829B
CN107888829B (application CN201711183828.5A)
Authority
CN
China
Prior art keywords
focusing
focusing mode
mode
depth
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711183828.5A
Other languages
Chinese (zh)
Other versions
CN107888829A (en)
Inventor
姬向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201711183828.5A priority Critical patent/CN107888829B/en
Publication of CN107888829A publication Critical patent/CN107888829A/en
Application granted granted Critical
Publication of CN107888829B publication Critical patent/CN107888829B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a focusing method for a mobile terminal, a mobile terminal, and a storage medium. The method comprises the following steps: acquiring at least two candidate focusing modes available when the mobile terminal performs image acquisition; attempting to focus on the target object with each candidate focusing mode respectively, to obtain the depth-of-field value each candidate mode reports during trial focusing; determining, for each candidate focusing mode, the reliability that the depth-of-field value obtained during trial focusing reflects the actual depth of field; selecting, from the candidate focusing modes, a focusing mode whose determined reliability meets a preset condition; and focusing the target object based on the depth-of-field value output by the selected focusing mode. The method improves both focusing effect and focusing efficiency, giving the user better shooting results and an enhanced experience.

Description

Focusing method of mobile terminal, mobile terminal and storage medium
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a focusing method for a mobile terminal, a mobile terminal, and a storage medium.
Background
When a user uses a mobile terminal to shoot an image, the shot image needs to be focused. The focusing process is a process of changing the object distance and the image distance through a focusing mechanism of the mobile terminal so that the shot object can be clearly imaged.
Currently, commonly used image focusing methods fall into two types. The first relies on the emission and reception of light of a specific wavelength (infrared light), such as laser focusing; the second relies on image characteristics of the subject itself, such as Phase Detection Auto Focus (PDAF). Because the scenes a user encounters when shooting with a mobile terminal vary, neither type is universally suitable: the first type, since it depends on its own emitted light of a specific wavelength, performs poorly in strong-light scenes and with subjects made of transparent materials such as glass, while the second type has high requirements on light and is unsuitable for dark scenes. However, the related art offers no effective solution for selecting a proper focusing mode according to the requirements of the actual shooting (focusing) effect.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a focusing method for a mobile terminal, a mobile terminal and a storage medium to solve at least one problem in the prior art.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a focusing method of a mobile terminal, which comprises the following steps:
acquiring at least two candidate focusing modes when the mobile terminal carries out image acquisition;
attempting to focus on the target object with each candidate focusing mode respectively, to obtain the depth-of-field value each candidate mode reports during trial focusing;
determining, for each candidate focusing mode, the reliability that the depth-of-field value obtained during trial focusing reflects the actual depth of field;
selecting a focusing mode with the determined credibility meeting preset conditions from the candidate focusing modes;
and focusing the target object based on the depth of field value output by the selected focusing mode.
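The five claimed steps above can be sketched as follows. This is a minimal illustration under assumptions, not the patent's implementation: `try_focus`, `reliability`, and `focus` are hypothetical method names standing in for the claimed operations, and the numeric threshold is an assumed stand-in for the "preset condition" on reliability.

```python
def select_and_focus(candidate_modes, target, threshold=0.8):
    """Trial-focus every candidate mode, keep the ones whose depth-of-field
    reliability meets the preset condition, and focus with the chosen one."""
    qualified = []
    for mode in candidate_modes:
        depth = mode.try_focus(target)      # trial focus -> depth-of-field value
        conf = mode.reliability(target)     # credibility of that value
        if conf >= threshold:               # "preset condition" on reliability
            qualified.append((depth, mode))
    if not qualified:
        return None                         # caller may fall back to, e.g., CAF
    # When several modes qualify, the patent prefers the smaller depth value.
    depth, chosen = min(qualified, key=lambda pair: pair[0])
    chosen.focus(target, depth)             # focus using the selected mode's output
    return chosen
```

The fallback-to-CAF branch and the smaller-depth tiebreak correspond to the two selection rules described later in the disclosure.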
In the foregoing solution, determining the reliability that the trial-focusing depth-of-field value of a candidate focusing mode reflects the actual depth of field includes:
when the candidate focusing mode is a first focusing mode, acquiring a first image corresponding to the left pixels and a second image corresponding to the right pixels when focusing is attempted based on the first focusing mode;
extracting features from the first image and the second image respectively;
and performing feature matching between the first image and the second image based on the extracted features to obtain their image matching degree, and taking the obtained image matching degree as the reliability of the depth-of-field value obtained when the first focusing mode attempts focusing.
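One way to compute such an image matching degree between the left-pixel and right-pixel images is normalized cross-correlation. The patent does not fix a particular feature extractor or matcher, so the function below is an assumed stand-in that treats each image as a flat intensity sequence and clamps the score to [0, 1].

```python
def matching_degree(left, right):
    """Return a score in [0, 1]; higher means the two phase images agree more."""
    n = min(len(left), len(right))
    l, r = left[:n], right[:n]
    ml = sum(l) / n                         # mean intensity of each image
    mr = sum(r) / n
    num = sum((a - ml) * (b - mr) for a, b in zip(l, r))
    den = (sum((a - ml) ** 2 for a in l) *
           sum((b - mr) ** 2 for b in r)) ** 0.5
    if den == 0:                            # constant images: match iff identical
        return 1.0 if l == r else 0.0
    return max(0.0, num / den)              # clamp negative correlation to 0
```

A real implementation would match extracted features (edges, keypoints) rather than raw pixels, but the reliability interpretation is the same: a low score means the phase pair disagrees and the reported depth of field is untrustworthy.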
In the foregoing solution, determining the reliability that the trial-focusing depth-of-field value of a candidate focusing mode reflects the actual depth of field includes:
when the candidate focusing mode is a second focusing mode, acquiring the emitted light intensity and the received light intensity of the light projected onto the target object when focusing is attempted based on the second focusing mode;
and determining the ratio of the received light intensity to the emitted light intensity, and taking the determined ratio as the reliability of the depth-of-field value obtained when the second focusing mode attempts focusing.
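The reliability of the second (laser/structured-light) mode is just this intensity ratio; the clamping and the positive-emission check below are added assumptions for robustness, not part of the claim. Strong ambient light or a transparent subject drives the ratio down, which is exactly the failure case described in the background.

```python
def intensity_reliability(emitted, received):
    """Ratio of received to emitted light intensity, clamped to [0, 1]."""
    if emitted <= 0:
        raise ValueError("emitted intensity must be positive")
    return min(max(received / emitted, 0.0), 1.0)
```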
In the foregoing solution, the selecting a focusing mode with the determined reliability meeting a preset condition from the candidate focusing modes includes:
when the at least two candidate focusing modes comprise a first focusing mode and a second focusing mode, and the reliabilities corresponding to both meet the preset condition, comparing the depth-of-field value obtained when the first focusing mode attempts focusing with the depth-of-field value obtained when the second focusing mode attempts focusing;
and selecting the focusing mode with the smaller trial-focusing depth-of-field value as the target focusing mode for the target object.
In the foregoing solution, the selecting a focusing mode with the determined reliability meeting a preset condition from the candidate focusing modes includes:
when the at least two candidate focusing modes comprise a first focusing mode and a second focusing mode, neither of whose corresponding reliabilities meets the preset condition, and the mobile terminal supports a CAF mode, selecting the CAF mode as the target focusing mode for the target object.
In the above scheme, the method further comprises:
when, while the target object is being focused based on the depth-of-field value output by the selected focusing mode, the reliability corresponding to the selected mode no longer meets the preset condition, switching to a candidate focusing mode whose current reliability meets the preset condition.
An embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
the memory is used for storing a processing program of a focusing method of the mobile terminal;
the processor is used for executing the processing program of the focusing method of the mobile terminal stored in the memory so as to realize the following steps:
acquiring at least two candidate focusing modes when the mobile terminal carries out image acquisition;
attempting to focus on the target object with each candidate focusing mode respectively, to obtain the depth-of-field value each candidate mode reports during trial focusing;
determining, for each candidate focusing mode, the reliability that the depth-of-field value obtained during trial focusing reflects the actual depth of field;
selecting a focusing mode with the determined credibility meeting preset conditions from the candidate focusing modes;
and focusing the target object based on the depth of field value output by the selected focusing mode.
In the foregoing solution, the processor is further configured to, when the candidate focusing mode is a first focusing mode, obtain a first image corresponding to the left pixels and a second image corresponding to the right pixels when focusing is attempted based on the first focusing mode;
extract features from the first image and the second image respectively;
and perform feature matching between the first image and the second image based on the extracted features to obtain their image matching degree, and take the obtained image matching degree as the reliability of the depth-of-field value obtained when the first focusing mode attempts focusing.
In the foregoing solution, the processor is further configured to, when the candidate focusing mode is a second focusing mode, obtain the emitted light intensity and the received light intensity of the light projected onto the target object when focusing is attempted based on the second focusing mode;
and determine the ratio of the received light intensity to the emitted light intensity, and take the determined ratio as the reliability of the depth-of-field value obtained when the second focusing mode attempts focusing.
The embodiment of the invention also provides a storage medium, which stores an executable program, and when the executable program is executed by a processor, the focusing method of the mobile terminal is realized.
By applying the focusing method of the mobile terminal, the mobile terminal, and the storage medium of the embodiments of the present invention, the mobile terminal automatically computes the reliability of each candidate focusing mode for the current shooting scene, selects a focusing mode whose computed reliability meets the preset condition, and focuses using the selected mode. This improves both focusing effect and focusing efficiency, giving the user better shooting results and an enhanced experience.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an alternative mobile terminal 100 implementing various embodiments of the present invention;
Fig. 2 is a diagram of a wireless communication system for the mobile terminal 100 shown in Fig. 1;
Fig. 3 is a schematic view of an alternative flow of a focusing method of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a schematic view of an alternative flow of a focusing method of a mobile terminal according to an embodiment of the present invention;
Fig. 5 is an alternative schematic diagram of the composition structure of a mobile terminal in an embodiment of the present invention.
Detailed Description
It should be understood that the embodiments described herein are only for explaining the technical solutions of the present invention, and are not intended to limit the scope of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), or a navigation device, as well as a stationary terminal such as a digital TV or a desktop computer. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic hardware structure of a mobile terminal 100 for implementing various embodiments of the present invention, and as shown in fig. 1, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi is a short-range wireless transmission technology; through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processing unit 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound in operation modes such as a phone call mode, a recording mode, or a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
In some embodiments, the touch panel 1071 may cover the display panel 1061; when the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to that type. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components implementing the input and output functions of the mobile terminal, in some embodiments they may be integrated to implement both functions; no limitation is imposed here.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and charging functions Entity) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
Example one
Fig. 3 is a diagram illustrating a focusing method of a mobile terminal according to an embodiment of the present invention, which is applied to the mobile terminal, and as shown in fig. 3, the focusing method of the mobile terminal according to the embodiment of the present invention involves steps 301 to 305, which are described below.
Step 301: at least two candidate focusing modes when the mobile terminal carries out image acquisition are obtained.
In practical applications, when a user uses the mobile terminal to take a picture, the mobile terminal detects a shooting instruction triggered by the user, performs a focusing operation in image acquisition, and triggers the execution of step 301.
In an embodiment of the present invention, the image capturing device of the mobile terminal has at least two candidate focusing modes, and in an embodiment, the candidate focusing modes of the mobile terminal may include a first focusing mode and a second focusing mode different from the first focusing mode;
the first focusing mode may be a focusing mode depending on the image characteristics of the subject itself, such as a phase focusing mode, a double-shot focusing; the second focusing mode may be a focusing mode depending on the emission and reception of light (infrared light) of a specific wavelength, such as laser focusing, structured light focusing; of course, in some embodiments, the mobile terminal may have other focusing modes, such as Contrast focusing (also called inverse differential focusing) (CAF), and the like.
Next, several focusing modes mentioned above will be explained.
Laser focusing calculates the distance from the target to the device by recording the time difference between an infrared laser pulse being emitted from the device, reflected by the target surface, and finally received by the rangefinder. For example, with shooting device A and subject B, the device at A emits infrared laser (light of infrared wavelength) toward B; the light is reflected back to A, the distance between A and B is measured from the round trip, and focusing is then completed. This mode adapts well to weak-light or solid-color scenes and is an active focusing mode.
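The time-of-flight principle described above reduces to a one-line formula: the pulse travels to the subject and back, so distance = c · t / 2.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Subject distance from the measured emit-to-receive time difference."""
    return C * round_trip_seconds / 2.0
```

At typical shooting distances the round trip is a few nanoseconds, which is why this mode needs precise timing hardware rather than image analysis.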
Structured-light focusing: structured light is light with a specific pattern, which may be dots, lines, planes, and so on. Structured-light focusing is based on light coding: a known infrared pattern (coded infrared light) is projected onto the target object, and an infrared CMOS imager receives the structured-light pattern reflected by the object surface. Because of the object's three-dimensional shape, the received pattern is necessarily deformed, and the depth information is finally determined from that deformation. For focusing modes such as laser focusing and structured-light focusing that depend on the emission and reception of light of a specific wavelength, the emitted light source is known; accurate depth-of-field information can therefore be obtained as long as the light is reflected back, even if the subject has no edges, and such modes, which focus based on depth of field, suit short-distance shooting. However, they are unsuitable for transparent subjects such as glass, because light passes through the object and causes depth-of-field measurement errors, and also unsuitable for wide-spectrum scenes such as bright outdoor sunlight or a display screen.
Phase focusing: the pixels of the image sensor are arranged in left/right pairs. The mobile terminal obtains a first image through the left pixels and a second image through the right pixels, and generates an image-data waveform for each. When the two waveforms coincide, the terminal knows it is in focus and the captured image is at its sharpest; that is, the terminal guides focusing by the phase difference between the two waveforms. Masked pixels reserved on the photosensitive element perform the phase detection; the detected phase difference is converted into a defocus amount, which guides the lens to achieve accurate focusing. This mode avoids repeated lens movement and focus hunting; on the other hand, because masked pixels on the image sensor are needed for phase detection, it has high requirements on light intensity, is difficult to use in weak light, and thus has a limited application range.
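The phase-detection idea can be sketched as estimating the shift between the left-pixel and right-pixel waveforms: a zero shift means the waveforms coincide (in focus), while the signed shift gives the direction and magnitude of defocus. This is a toy brute-force estimator over 1-D intensity sequences, not the sensor-level algorithm; the gain converting shift into lens movement would be a calibration constant and is not shown.

```python
def phase_shift(left, right, max_shift=3):
    """Integer shift of `right` relative to `left` that best aligns the two
    waveforms (minimum mean squared error over the overlapping samples)."""
    best, best_err = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(n) if 0 <= i + s < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best     # 0 -> in focus; sign -> which way the lens must move
```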
Dual-camera focusing. Following the triangulation principle of human binocular vision, the difference between the horizontal coordinates at which the target object is imaged in the left and right views (the disparity) is inversely proportional to the distance between the target object and the imaging plane; the depth of field is calculated from this relationship, and focusing is then performed based on the depth of field.
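The inverse proportionality can be written as Z = f·B/d, where f is the focal length, B the baseline between the two cameras, and d the disparity. A minimal sketch; the focal length and baseline figures below are assumed values for illustration:

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Triangulated distance between the target object and the imaging plane.
    Disparity is the difference between the horizontal image coordinates of
    the object in the left and right views; depth is inversely proportional
    to it, so distant objects produce small disparities."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# Assumed 1000 px focal length and 12 mm baseline, 24 px measured disparity:
print(depth_from_disparity(1000, 12, 24))  # 500.0 (mm from the imaging plane)
```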
Contrast focusing. This mode searches for the lens position with the maximum contrast, i.e., the accurate in-focus position, according to the contrast change of the picture at the focus point. During focusing, as the focusing lens moves, the picture gradually sharpens and the contrast rises. When the picture is sharpest and the contrast highest, the lens is in focus, but the terminal does not yet know this, so it keeps moving the lens; when the contrast starts to fall, the lens is moved further and the contrast drops again, at which point the terminal knows the focus has been passed and moves the lens back to the position of highest contrast to complete focusing. The lens therefore moves continuously from the start of focusing to the end and moves back after passing the optimal focus position, which lengthens the focusing stroke; focusing is consequently slow and inefficient.
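The search just described is a hill climb over lens positions: keep stepping while the contrast rises, and back up once it falls. A simplified sketch, assuming a `contrast_at` callback and a toy contrast curve; a real implementation would confirm the drop over several further steps before retreating, which is exactly why the focusing stroke grows:

```python
def contrast_autofocus(contrast_at, lens_positions):
    """Hill-climb search: step the lens while the contrast keeps rising,
    and stop (returning the best position found) once the contrast starts
    to fall past the peak.  `contrast_at` maps a lens position to a
    contrast score."""
    best_pos = lens_positions[0]
    best_contrast = contrast_at(best_pos)
    for pos in lens_positions[1:]:
        c = contrast_at(pos)
        if c >= best_contrast:
            best_pos, best_contrast = pos, c
        else:
            break  # contrast fell: the peak has been passed, back up
    return best_pos

# Toy contrast curve peaking at lens position 6.
def toy_curve(pos):
    return -(pos - 6) ** 2

print(contrast_autofocus(toy_curve, list(range(11))))  # 6
```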
Step 302: attempt focusing on the target object with each candidate focusing mode, and obtain the depth of field value from each trial.
Here, in actual implementation, the mobile terminal focuses based on the depth of field. Before focusing, in order to select a focusing mode suitable for the current application scene, the candidate focusing modes provided by the mobile terminal are tried one by one, and the depth of field value obtained by each candidate focusing mode during trial focusing is recorded.
Step 303: determine the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field.
The confidence level mentioned in the embodiments of the present invention refers to the degree of confidence that the depth of field value output by a given focusing mode is the actual depth of field, and lies between 0 and 1. The calculation criteria for the confidence level may differ between candidate focusing modes.
In an embodiment, when the candidate focusing mode is a first focusing mode (e.g., phase focusing), the confidence level of the first focusing mode can be determined as follows:
acquiring a first image from the left pixels and a second image from the right pixels during trial focusing in the first focusing mode; extracting features from the first image and the second image respectively; performing feature matching on the two images based on the extracted features to obtain their image matching degree; and taking the obtained image matching degree as the confidence level that the depth of field value obtained during trial focusing in the first focusing mode is the actual depth of field.
Here, in practical implementation, the extracted image features may include one or more of: edges, corners, regions, etc. In an embodiment, the extracted features are edges, and feature extraction from the first and second images is achieved through edge detection. The edge of an image is the set of pixels around which the gray level changes sharply; it is the most basic feature of an image, and edges exist between objects, background, and regions. The basic idea of edge detection is to detect the edge points in an image and then connect them into contours according to some strategy, thereby forming segmented regions.
In an embodiment, performing feature matching on the first image and the second image based on the extracted features to obtain their image matching degree includes:
performing edge matching between the edges extracted from the first image and those extracted from the second image, and determining the image matching degree of the two images from the number of identical edges matched between them.
In one embodiment, when the candidate focusing mode is a second focusing mode (e.g., laser focusing), the confidence level of the second focusing mode can be determined as follows:
in response to the candidate focusing mode being a second focusing mode, acquiring the emitted light intensity and the received light intensity of the light projected onto the target object during trial focusing in the second focusing mode; determining the ratio of the received light intensity to the emitted light intensity; and taking the determined ratio as the confidence level that the depth of field value obtained during trial focusing in the second focusing mode is the actual depth of field.
Step 304: select, from the candidate focusing modes, a focusing mode whose confidence level satisfies a preset condition.
Here, in practical applications, a confidence threshold corresponding to the focusing modes (e.g., 0.7) may be preset; when the calculated confidence level of a candidate focusing mode exceeds the preset threshold, that candidate focusing mode is determined to satisfy the preset condition.
In practical implementation, when the candidate focusing modes include a first focusing mode and a second focusing mode, the target focusing mode is selected according to the confidence results as follows:
Case 1: the confidence levels of both the first focusing mode and the second focusing mode satisfy the preset condition. The depth of field value from trial focusing in the first mode is compared with that in the second mode, and the mode with the smaller depth of field value is selected as the target focusing mode for the target object. Taking laser focusing as the first mode and phase focusing as the second mode as an example: if both confidence levels satisfy the preset condition and the target object is glass, laser focusing yields a larger depth of field value because the light passes through the glass and strikes objects behind it, so phase focusing is preferably selected.
Case 2: the confidence level of only one of the first and second focusing modes satisfies the preset condition; that focusing mode is selected as the target focusing mode for the target object.
Case 3: neither confidence level satisfies the preset condition, and the mobile terminal supports the contrast autofocus (CAF) mode; CAF is selected as the target focusing mode for the target object.
It should be noted that if the first focusing mode or the second focusing mode itself comprises two or more focusing modes, and the mode whose confidence level satisfies the preset condition is that first or second focusing mode, one of its focusing modes may be selected at random as the target focusing mode. For example, if the second focusing mode comprises laser focusing and structured light focusing and it is the mode whose confidence level satisfies the preset condition, one of laser focusing and structured light focusing is selected at random as the target focusing mode.
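The three cases above can be combined into one selection routine. This is a sketch under assumptions: the 0.7 threshold, the mode names, and the (confidence, depth) tuple layout are all illustrative.

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed preset threshold

def select_focus_mode(trial_results, has_caf=True):
    """Pick the target focusing mode from trial-focus results.
    `trial_results` maps a mode name to (confidence, depth_of_field_value).
    Both pass -> the smaller depth value wins (case 1); one passes -> that
    mode (case 2); none pass -> fall back to contrast focusing (case 3)."""
    passing = {mode: depth
               for mode, (conf, depth) in trial_results.items()
               if conf > CONFIDENCE_THRESHOLD}
    if passing:
        return min(passing, key=passing.get)  # smaller depth of field value
    return "CAF" if has_caf else None

# Glass subject: laser light passes through, inflating its depth value.
trials = {"laser": (0.9, 2.5), "phase": (0.8, 1.2)}
print(select_focus_mode(trials))  # phase
print(select_focus_mode({"laser": (0.3, 2.5), "phase": (0.4, 1.2)}))  # CAF
```

Choosing the smaller depth value when both modes pass encodes the glass example: the mode whose light leaked through the subject reports an inflated depth and loses.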
Step 305: focus on the target object based on the depth of field value output by the selected focusing mode.
It should be noted that, while the target object is being focused based on the depth of field value output by the selected focusing mode, if the confidence level of the currently selected mode no longer satisfies the preset condition, the terminal switches to a candidate focusing mode whose current confidence level does satisfy it.
By applying this embodiment of the invention, the mobile terminal automatically calculates the confidence level of each candidate focusing mode for the current application scene, selects a focusing mode satisfying the preset condition based on the calculated confidence levels, and focuses with the selected mode. This improves both the focusing effect and the focusing efficiency, giving the user better shooting results and a better experience.
Example two
Fig. 4 illustrates a focusing method of a mobile terminal according to an embodiment of the present invention, applied to a mobile terminal that supports three focusing modes: laser focusing, phase focusing, and contrast focusing. As shown in Fig. 4, the focusing method includes:
step 401: and the mobile terminal receives the image acquisition instruction and acquires a focusing mode of the mobile terminal.
Step 402: and respectively adopting a laser focusing mode and a phase focusing mode to try focusing, and determining the corresponding credibility of each mode.
Here, the confidence level mentioned in the embodiments of the present invention refers to the degree of confidence that the depth of field value output by a given focusing mode is the actual depth of field, and lies between 0 and 1. The calculation criteria for the confidence level may differ between candidate focusing modes.
In the embodiment of the present invention, the confidence level of the phase focusing mode can be determined as follows:
acquiring a first image from the left pixels and a second image from the right pixels during trial focusing in the phase focusing mode; extracting features from the first image and the second image respectively; performing feature matching on the two images based on the extracted features to obtain their image matching degree; and taking the obtained image matching degree as the confidence level that the depth of field value obtained during trial focusing in the phase focusing mode is the actual depth of field.
In this embodiment, the extracted image features are edges, and feature extraction from the first and second images is achieved through edge detection. Accordingly, the image matching degree of the first and second images can be calculated as follows:
performing edge matching between the edges extracted from the first image and those extracted from the second image, and determining the image matching degree from the number of identical edges matched between the two images. For example: edge detection finds 80 edges in the first image and 100 edges in the second; feature matching shows that the 80 edges are all present in the second image, so the matching degree of the two images is calculated as 80%, and the preset mapping between image matching degree and confidence level yields a confidence level of 0.8.
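The worked example maps to a small helper. Normalizing the shared-edge count by the larger of the two edge counts is an assumption that reproduces the 80/100 to 0.8 figure above; the embodiment itself only states that the matching degree is derived from the number of matched edges and mapped to a confidence level.

```python
def edge_matching_confidence(edges_first, edges_second):
    """Image matching degree: number of identical edges shared by the two
    images, divided by the larger edge count (assumed normalization)."""
    shared = len(set(edges_first) & set(edges_second))
    largest = max(len(set(edges_first)), len(set(edges_second)))
    return shared / largest if largest else 0.0

# 80 edges in the first image, 100 in the second, all 80 matched -> 0.8.
first = [f"edge{i}" for i in range(80)]
second = [f"edge{i}" for i in range(100)]
print(edge_matching_confidence(first, second))  # 0.8
```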
In the embodiment of the present invention, the confidence level of the laser focusing mode can be determined as follows:
acquiring the emitted light intensity and the received light intensity of the light projected onto the target object during trial focusing in the laser focusing mode; determining the ratio of the received light intensity to the emitted light intensity; and taking the determined ratio as the confidence level that the depth of field value obtained during trial focusing in the laser focusing mode is the actual depth of field. Taking infrared light projected onto the target object as an example, if the emitted light intensity is 10 mW and the received light intensity is 1 mW, the ratio of received to emitted intensity is 0.1.
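The laser-mode confidence is just this intensity ratio; a one-line sketch (the function name and mW units are illustrative):

```python
def laser_confidence(emitted_mw, received_mw):
    """Confidence that the laser-focus depth value is the actual depth of
    field: ratio of received to emitted light intensity.  Transparent
    subjects let most light through, so the ratio (and confidence) drops."""
    if emitted_mw <= 0:
        raise ValueError("emitted intensity must be positive")
    return received_mw / emitted_mw

# The infrared example above: 10 mW emitted, 1 mW received back.
print(laser_confidence(10.0, 1.0))  # 0.1
```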
Step 403: judging whether a focusing mode with the reliability exceeding a preset reliability threshold exists in the laser focusing mode and the phase focusing mode, and executing a step 404 if one focusing mode with the reliability exceeding the preset reliability threshold exists; if two focusing modes with reliability exceeding the preset reliability threshold exist, executing step 405; if there is no focusing mode with the confidence level exceeding the preset confidence level threshold, step 406 is executed.
In the embodiment of the present invention, a confidence threshold for the focusing modes is preset, for example, 0.75.
Step 404: select the focusing mode whose confidence level exceeds the preset threshold as the target focusing mode, then execute step 407.
Step 405: compare the depth of field value from trial focusing in the laser mode with that in the phase mode, select the mode with the smaller value as the target focusing mode, then execute step 407.
Step 406: select the CAF focusing mode as the target focusing mode.
Step 407: focus the image based on the depth of field value output by the selected target focusing mode.
It should be noted that, while the target object is being focused based on the depth of field value output by the selected focusing mode, if the confidence level of the currently selected mode (e.g., phase focusing) no longer satisfies the preset condition, the terminal switches to a candidate focusing mode (e.g., laser focusing) whose current confidence level does satisfy it.
Step 408: end the processing flow.
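The runtime fallback noted under step 407 (re-checking confidence while focusing and switching modes when it drops) can be sketched as follows; the function name, threshold default, and mode names are assumptions:

```python
def maybe_switch_mode(current_mode, confidences, threshold=0.75):
    """Keep the current mode while its confidence still passes the preset
    threshold; otherwise switch to the best-confidence candidate that does
    pass, falling back to contrast focusing (CAF) when none do."""
    if confidences[current_mode] >= threshold:
        return current_mode
    passing = {m: c for m, c in confidences.items() if c >= threshold}
    if passing:
        return max(passing, key=passing.get)
    return "CAF"

# Phase confidence collapses mid-shot (e.g. the light dims): switch to laser.
print(maybe_switch_mode("phase", {"phase": 0.4, "laser": 0.9}))  # laser
```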
Example three
This embodiment provides a mobile terminal. As shown in Fig. 5, the mobile terminal in the embodiment of the present invention includes a processor 51, a memory 52, a communication bus 53, and an image acquisition device 54 (such as a camera), wherein:
the communication bus 53 is used for realizing connection communication between the processor 51 and the memory 52;
the memory 52 is used for storing an information processing program of a focusing method of the mobile terminal;
the processor 51 is configured to execute an information processing program of a focusing method of the mobile terminal stored in the memory 52, so as to implement the following steps:
acquiring at least two candidate focusing modes when the mobile terminal carries out image acquisition;
trying to focus the target object based on each candidate focusing mode respectively to obtain the depth of field value of each candidate focusing mode when trying to focus;
determining the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field;
selecting a focusing mode with the determined credibility meeting preset conditions from the candidate focusing modes;
and focusing the target object based on the depth of field value output by the selected focusing mode.
In an embodiment, the processor 51 is further configured to, when executing the information processing program, implement:
in response to the candidate focusing mode being a first focusing mode, acquiring a first image from the left pixels and a second image from the right pixels during trial focusing in the first focusing mode;
extracting features from the first image and the second image respectively;
and performing feature matching on the two images based on the extracted features to obtain their image matching degree, and taking the obtained image matching degree as the confidence level that the depth of field value obtained during trial focusing in the first focusing mode is the actual depth of field.
In an embodiment, the processor 51 is further configured to, when executing the information processing program, implement:
in response to the candidate focusing mode being a second focusing mode, acquiring the emitted light intensity and the received light intensity of the light projected onto the target object during trial focusing in the second focusing mode;
and determining the ratio of the received light intensity to the emitted light intensity, and taking the determined ratio as the confidence level that the depth of field value obtained during trial focusing in the second focusing mode is the actual depth of field.
In an embodiment, the processor 51 is further configured to, when executing the information processing program, implement:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode whose confidence levels both satisfy a preset condition, comparing the depth of field value from trial focusing in the first focusing mode with that in the second focusing mode;
and selecting the focusing mode with the smaller depth of field value during trial focusing as the target focusing mode for the target object.
In an embodiment, the processor 51 is further configured to, when executing the information processing program, implement:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode, neither of whose confidence levels satisfies the preset condition, and the mobile terminal supporting the contrast autofocus (CAF) mode, selecting the CAF mode as the target focusing mode for the target object.
In an embodiment, the processor 51 is further configured to, when executing the information processing program, implement:
in response to the confidence level of the selected focusing mode no longer satisfying the preset condition while the target object is being focused based on the depth of field value output by the selected mode, switching to a candidate focusing mode whose current confidence level satisfies the preset condition.
Here, it should be noted that: the above description related to the mobile terminal is similar to the above description of the method, and the description of the beneficial effects of the same method is omitted for brevity. For technical details not disclosed in the embodiments of the mobile terminal of the present invention, refer to the description of the embodiments of the method of the present invention.
In the embodiment of the present invention, the functions executed by the processor 51 in the mobile terminal may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or an Application Specific Integrated Circuit (ASIC) in the terminal.
To implement the above method embodiments, an embodiment of the present invention further provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the following steps:
acquiring at least two candidate focusing modes when the mobile terminal carries out image acquisition;
trying to focus the target object based on each candidate focusing mode respectively to obtain the depth of field value of each candidate focusing mode when trying to focus;
determining the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field;
selecting a focusing mode with the determined credibility meeting preset conditions from the candidate focusing modes;
and focusing the target object based on the depth of field value output by the selected focusing mode.
In one embodiment, the one or more programs are specifically executable by the one or more processors to perform the steps of:
in response to the candidate focusing mode being a first focusing mode, acquiring a first image from the left pixels and a second image from the right pixels during trial focusing in the first focusing mode;
extracting features from the first image and the second image respectively;
and performing feature matching on the two images based on the extracted features to obtain their image matching degree, and taking the obtained image matching degree as the confidence level that the depth of field value obtained during trial focusing in the first focusing mode is the actual depth of field.
In one embodiment, the one or more programs are specifically executable by the one or more processors to perform the steps of:
in response to the candidate focusing mode being a second focusing mode, acquiring the emitted light intensity and the received light intensity of the light projected onto the target object during trial focusing in the second focusing mode;
and determining the ratio of the received light intensity to the emitted light intensity, and taking the determined ratio as the confidence level that the depth of field value obtained during trial focusing in the second focusing mode is the actual depth of field.
In one embodiment, the one or more programs are specifically executable by the one or more processors to perform the steps of:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode whose confidence levels both satisfy a preset condition, comparing the depth of field value from trial focusing in the first focusing mode with that in the second focusing mode;
and selecting the focusing mode with the smaller depth of field value during trial focusing as the target focusing mode for the target object.
In one embodiment, the one or more programs are specifically executable by the one or more processors to perform the steps of:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode, neither of whose confidence levels satisfies the preset condition, and the mobile terminal supporting the contrast autofocus (CAF) mode, selecting the CAF mode as the target focusing mode for the target object.
In one embodiment, the one or more programs are specifically executable by the one or more processors to perform the steps of:
in response to the confidence level of the selected focusing mode no longer satisfying the preset condition while the target object is being focused based on the depth of field value output by the selected mode, switching to a candidate focusing mode whose current confidence level satisfies the preset condition.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Alternatively, if the integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention, which shall therefore be subject to the protection scope of the appended claims.

Claims (9)

1. A focusing method of a mobile terminal, the method comprising:
acquiring at least two candidate focusing modes when the mobile terminal carries out image acquisition;
trying to focus the target object based on each candidate focusing mode respectively to obtain the depth of field value of each candidate focusing mode when trying to focus;
determining the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field;
selecting a focusing mode with the determined credibility meeting preset conditions from the candidate focusing modes;
focusing the target object based on the depth of field value output by the selected focusing mode;
wherein selecting a focusing mode whose confidence level satisfies a preset condition from the candidate focusing modes comprises:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode whose confidence levels both satisfy the preset condition, comparing the depth of field value from trial focusing in the first focusing mode with that in the second focusing mode;
and selecting the focusing mode with the smaller depth of field value during trial focusing as the target focusing mode for the target object.
2. The method of claim 1, wherein determining the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field comprises:
in response to the candidate focusing mode being a first focusing mode, acquiring a first image from the left pixels and a second image from the right pixels during trial focusing in the first focusing mode;
extracting features from the first image and the second image respectively;
and performing feature matching on the two images based on the extracted features to obtain their image matching degree, and taking the obtained image matching degree as the confidence level that the depth of field value obtained during trial focusing in the first focusing mode is the actual depth of field.
3. The method of claim 1, wherein determining the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field comprises:
in response to the candidate focusing mode being a second focusing mode, acquiring the emitted light intensity and the received light intensity of the light projected onto the target object during trial focusing in the second focusing mode;
and determining the ratio of the received light intensity to the emitted light intensity, and taking the determined ratio as the confidence level that the depth of field value obtained during trial focusing in the second focusing mode is the actual depth of field.
4. The method of claim 1, wherein selecting a focusing mode whose determined confidence level satisfies a preset condition from the candidate focusing modes comprises:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode, neither of whose confidence levels satisfies the preset condition, and the mobile terminal supporting the contrast autofocus (CAF) mode, selecting the CAF mode as the target focusing mode for the target object.
5. The method of claim 1, wherein the method further comprises:
in response to the confidence level of the selected focusing mode no longer satisfying the preset condition while the target object is being focused based on the depth of field value output by the selected mode, switching to a candidate focusing mode whose current confidence level satisfies the preset condition.
6. A mobile terminal, characterized in that the mobile terminal comprises:
a memory for storing a processing program of a focusing method of the mobile terminal;
a processor for executing a processing program of a focusing method of the mobile terminal stored in the memory to implement the following steps:
acquiring at least two candidate focusing modes when the mobile terminal carries out image acquisition;
trying to focus the target object based on each candidate focusing mode respectively to obtain the depth of field value of each candidate focusing mode when trying to focus;
determining the confidence level that the depth of field value obtained during trial focusing in each candidate focusing mode is the actual depth of field;
selecting a focusing mode with the determined credibility meeting preset conditions from the candidate focusing modes;
focusing the target object based on the depth of field value output by the selected focusing mode;
the processor is further configured to execute a processing program of a focusing method of the mobile terminal stored in the memory to implement the following steps:
in response to the at least two candidate focusing modes comprising a first focusing mode and a second focusing mode, and the reliability corresponding to each of the first focusing mode and the second focusing mode meeting a preset condition, comparing the depth of field value obtained when the first focusing mode attempts focusing with the depth of field value obtained when the second focusing mode attempts focusing; and
selecting the focusing mode with the smaller depth of field value when focusing is attempted as the target focusing mode of the target object.
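The selection rule above, combined with the CAF fallback of claim 4, can be sketched as follows. This is a hedged Python sketch under assumptions: the function name `select_focus_mode`, the tuple layout, and the numeric threshold standing in for the "preset condition" are illustrative, not from the patent.

```python
THRESHOLD = 0.8  # assumed preset reliability condition

def select_focus_mode(first, second, caf_available=True):
    """Pick a target focusing mode.

    first / second: (mode_name, trial_depth, reliability) tuples for the
    first (e.g. phase-detection) and second (e.g. laser) focusing modes.
    """
    ok_first = first[2] >= THRESHOLD
    ok_second = second[2] >= THRESHOLD
    if ok_first and ok_second:
        # Both trustworthy: prefer the mode reporting the smaller
        # depth of field value when focusing was attempted (claim 6).
        return first[0] if first[1] <= second[1] else second[0]
    if ok_first:
        return first[0]
    if ok_second:
        return second[0]
    # Neither mode is reliable: fall back to contrast auto-focus (claim 4).
    return "CAF" if caf_available else None
```

The single-reliable-mode branches are an assumed generalization of claim 1's "select the mode whose reliability meets the preset condition"; only the both-reliable and neither-reliable cases are spelled out in claims 6 and 4.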
7. The mobile terminal of claim 6,
the processor is further configured to, in response to the candidate focusing mode being a first focusing mode, obtain a first image corresponding to left pixels and a second image corresponding to right pixels when focusing is attempted based on the first focusing mode;
extract features of the first image and the second image, respectively;
and perform feature matching on the first image and the second image based on the extracted features to obtain an image matching degree of the first image and the second image, and take the obtained image matching degree as the reliability that the depth of field value obtained when the first focusing mode attempts focusing reflects the actual depth of field.
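The matching-degree idea in claim 7 can be illustrated with a toy similarity score. The claim does not specify the feature extraction or matching method, so this sketch uses plain normalized cross-correlation over pixel values as a stand-in; the function name `matching_degree` and the clamping of negative correlation to zero are assumptions.

```python
from math import sqrt

def matching_degree(left, right):
    """Score agreement of two equally sized grayscale images given as 2-D
    lists; 1.0 means identical up to brightness/contrast, 0.0 means no
    usable agreement."""
    l = [float(v) for row in left for v in row]
    r = [float(v) for row in right for v in row]
    mean_l, mean_r = sum(l) / len(l), sum(r) / len(r)
    l = [v - mean_l for v in l]   # remove brightness offset
    r = [v - mean_r for v in r]
    denom = sqrt(sum(v * v for v in l)) * sqrt(sum(v * v for v in r))
    if denom == 0:
        return 0.0  # flat images carry no structure to match on
    ncc = sum(a * b for a, b in zip(l, r)) / denom  # in [-1, 1]
    return max(0.0, ncc)  # anti-correlated content is treated as unreliable
```

When the left-pixel and right-pixel images agree well, the phase-detection depth estimate is well constrained and the score is near 1; low-texture or low-light scenes drive the score down, which is exactly when phase detection tends to fail.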
8. The mobile terminal of claim 6,
the processor is further configured to, in response to the candidate focusing mode being a second focusing mode, obtain the emitted light intensity and the received light intensity of light projected onto the target object when focusing is attempted based on the second focusing mode;
and determine a ratio of the received light intensity to the emitted light intensity, and take the determined ratio as the reliability that the depth of field value obtained when the second focusing mode attempts focusing reflects the actual depth of field.
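The intensity-ratio reliability of claims 3 and 8 reduces to a one-line computation. This sketch is illustrative only: the function name and the clamping of the ratio into [0, 1] are assumptions not stated in the claims.

```python
def laser_reliability(received_intensity, emitted_intensity):
    """Reliability of a laser-assisted depth reading as the fraction of
    projected light that returns; distant or absorbing targets return
    little light, so their depth readings are treated as suspect."""
    if emitted_intensity <= 0:
        return 0.0  # nothing was emitted: no basis for a depth estimate
    ratio = received_intensity / emitted_intensity
    return min(max(ratio, 0.0), 1.0)  # clamp into [0, 1] (added assumption)
```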
9. A computer-readable storage medium, characterized in that it stores an executable program which, when executed by a processor, implements the focusing method of a mobile terminal according to any one of claims 1 to 5.
CN201711183828.5A 2017-11-23 2017-11-23 Focusing method of mobile terminal, mobile terminal and storage medium Active CN107888829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711183828.5A CN107888829B (en) 2017-11-23 2017-11-23 Focusing method of mobile terminal, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN107888829A CN107888829A (en) 2018-04-06
CN107888829B true CN107888829B (en) 2020-08-28

Family

ID=61774843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711183828.5A Active CN107888829B (en) 2017-11-23 2017-11-23 Focusing method of mobile terminal, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN107888829B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198411B (en) * 2019-05-31 2021-11-02 努比亚技术有限公司 Depth of field control method and device in video shooting process and computer readable storage medium
US11818462B2 (en) * 2019-08-30 2023-11-14 Qualcomm Incorporated Phase detection autofocus sensor apparatus and method for depth sensing
CN111131703A (en) * 2019-12-25 2020-05-08 北京东宇宏达科技有限公司 Whole-process automatic focusing method of continuous zooming infrared camera
CN113141468B (en) * 2021-05-24 2022-08-19 维沃移动通信(杭州)有限公司 Focusing method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013130759A (en) * 2011-12-22 2013-07-04 Canon Inc Focus detector and imaging apparatus
CN104954680A (en) * 2015-06-16 2015-09-30 深圳市金立通信设备有限公司 Camera focusing method and terminal
CN104994298A (en) * 2015-07-14 2015-10-21 厦门美图之家科技有限公司 Focusing triggering method and system capable of intelligently selecting focusing mode
CN105022137A (en) * 2014-04-30 2015-11-04 聚晶半导体股份有限公司 Automatic focusing system applying multiple lenses and method thereof
CN105659580A (en) * 2014-09-30 2016-06-08 华为技术有限公司 Autofocus method, device and electronic apparatus
CN105791680A (en) * 2016-02-29 2016-07-20 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN106331499A (en) * 2016-09-13 2017-01-11 努比亚技术有限公司 Focusing method and shooting equipment
CN107124555A (en) * 2017-05-31 2017-09-01 广东欧珀移动通信有限公司 Control method, device, computer equipment and the computer-readable recording medium of focusing

Also Published As

Publication number Publication date
CN107888829A (en) 2018-04-06

Similar Documents

Publication Publication Date Title
CN108024065B (en) Terminal shooting method, terminal and computer readable storage medium
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
CN107888829B (en) Focusing method of mobile terminal, mobile terminal and storage medium
CN108156393B (en) Image photographing method, mobile terminal and computer-readable storage medium
CN107231470B (en) Image processing method, mobile terminal and computer readable storage medium
CN107172349B (en) Mobile terminal shooting method, mobile terminal and computer readable storage medium
CN107124556B (en) Focusing method, focusing device, computer readable storage medium and mobile terminal
CN107465873B (en) Image information processing method, equipment and storage medium
CN107707821B (en) Distortion parameter modeling method and device, correction method, terminal and storage medium
CN107613208B (en) Focusing area adjusting method, terminal and computer storage medium
CN108921212B (en) Image matching method, mobile terminal and computer readable storage medium
CN111885307B (en) Depth-of-field shooting method and device and computer readable storage medium
CN110086993B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN112188082A (en) High dynamic range image shooting method, shooting device, terminal and storage medium
CN113888452A (en) Image fusion method, electronic device, storage medium, and computer program product
CN108022227B (en) Black and white background picture acquisition method and device and computer readable storage medium
CN110177207B (en) Backlight image shooting method, mobile terminal and computer readable storage medium
CN108848321B (en) Exposure optimization method, device and computer-readable storage medium
CN112135060B (en) Focusing processing method, mobile terminal and computer storage medium
CN108197560B (en) Face image recognition method, mobile terminal and computer-readable storage medium
CN108024013B (en) Screen control method, terminal and computer readable storage medium
CN107295262B (en) Image processing method, mobile terminal and computer storage medium
CN108198195B (en) Focusing method, terminal and computer readable storage medium
CN107809586B (en) Mobile terminal focusing mode switching method, mobile terminal and storage medium
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant