CN115802161A - Focusing method, system, terminal and medium based on self-learning - Google Patents

Focusing method, system, terminal and medium based on self-learning

Info

Publication number
CN115802161A
Authority
CN
China
Prior art keywords
focusing
value
object distance
credible
self
Prior art date
Legal status
Granted
Application number
CN202310086377.2A
Other languages
Chinese (zh)
Other versions
CN115802161B
Inventor
程景
葛天杰
洪志冰
Current Assignee
Hangzhou Xingxi Technology Co ltd
Original Assignee
Hangzhou Xingxi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Xingxi Technology Co ltd
Priority to CN202310086377.2A
Publication of CN115802161A
Application granted
Publication of CN115802161B

Landscapes

  • Focusing (AREA)

Abstract

The application provides a self-learning based focusing method, system, terminal and medium. The method comprises: acquiring object distance value data of the current photographed subject; searching the updated focusing curve for a corresponding credible value according to that object distance value data; and, if a credible value exists, using the found credible value as the lens motor control value to drive the lens motor to the specified position for focusing. Through a self-learning algorithm, the method learns and calibrates an updated focusing curve, so that in subsequent focusing the object distance value obtained from TOF is enough to drive the lens to the accurate focus position in a single step, without repeated CAF fine focusing. This eliminates the image-blurring stage produced by CAF focusing, saves the computing power CAF focusing would require, and greatly shortens the focusing time. The method is cheap to implement, has good system compatibility, and the parameters involved in the algorithm can be adjusted dynamically, giving the algorithm excellent extensibility and tunability.

Description

Focusing method, system, terminal and medium based on self-learning
Technical Field
The present application relates to the field of camera focusing algorithm technologies, and in particular, to a self-learning based focusing method, system, terminal, and medium.
Background
Currently, the mainstream focusing algorithms fall into three types: the CAF focusing algorithm, the PD focusing algorithm, and the TOF focusing algorithm. The TOF focusing algorithm is usually combined with the CAF algorithm, because TOF focusing carries a certain error caused by various factors, such as calibration error, focusing-curve fitting error, motor control error, lens consistency error, camera lens assembly consistency error, and temperature drift.
The common focusing strategy at present combines TOF coarse focusing with CAF fine focusing: in each focusing pass, TOF is used to calculate a focusing value, a motor control value is then read from a pre-calibrated focusing fitting curve, and the focusing motor is driven accordingly to complete the coarse focusing stage. After coarse focusing, the CAF algorithm performs fine focusing (until focus is stable): the motor is stepped, the sharpness index is repeatedly computed, and the motor is finally driven to the best focus position. However, this focusing strategy has the following disadvantages:
(1) The TOF coarse focusing stage may carry a large error, so the subsequent CAF search consumes considerable time;
(2) Every focusing pass requires CAF fine focusing, so the focusing time is long;
(3) The CAF focusing process repeatedly drives the focusing motor, causing the image to blur repeatedly.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present application aims to provide a self-learning based focusing method, system, terminal and medium, so as to solve the technical problems that existing focusing algorithms not only take a long time to focus but also go through repeated sharp-blurry cycles.
To achieve the above and other related objects, a first aspect of the present application provides a self-learning based focusing method, including: acquiring object distance value data of the current photographed subject; searching the updated focusing curve for a corresponding credible value according to the object distance value data of the current photographed subject; if the credible value exists, using the found credible value as the lens motor control value to drive the lens motor to a specified position for focusing; otherwise, focusing based on the TOF focusing algorithm and the CAF focusing algorithm.
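Purely as an illustration, the decision logic of this first aspect might be sketched in Python as follows; the function names, the dictionary representation of the updated focusing curve, and the drive_motor / tof_caf_focus callbacks are assumptions of the sketch rather than elements of the claimed implementation.

```python
def focus(object_distance, updated_curve, drive_motor, tof_caf_focus):
    """Sketch of the claimed focusing logic for one focusing pass.

    updated_curve: mapping from (quantized) object distance value to the
    credible lens motor control value learned by the self-learning step.
    """
    credible_value = updated_curve.get(object_distance)
    if credible_value is not None:
        # One-step focusing: the credible value is used directly as the
        # lens motor control value.
        drive_motor(credible_value)
    else:
        # No credible value yet: fall back to TOF coarse + CAF fine focusing.
        tof_caf_focus(object_distance)
```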
In some embodiments of the first aspect of the present application, the generating of the updated focusing curve includes: performing primary focusing on a target object under an object distance value based on a TOF focusing algorithm, and performing stable focusing on the target object based on a CAF focusing algorithm to obtain a lens motor control value corresponding to the object distance value; judging whether an initial focusing curve is calibrated according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between an object distance value before updating and a lens motor control value.
In some embodiments of the first aspect of the present application, the determining, according to the focusing self-learning strategy, whether to calibrate the initial focusing curve according to the obtained lens motor control value includes: searching a credible set for a credible value of the object distance value of the target object; wherein the credible set comprises a number of data groups; each data group consists of an object distance value and its corresponding statistical set; each statistical set includes a plurality of lens motor control values; the credible value is the optimal lens motor control value corresponding to the object distance value; if no credible value exists, adding the obtained lens motor control value to the statistical set corresponding to the object distance value in the credible set; otherwise, ending; judging whether the number of lens motor control values in the current statistical set exceeds a sample number threshold; if it exceeds the sample number threshold, calculating a feature value of the statistical set based on a data set feature algorithm to serve as the credible value corresponding to the object distance value; otherwise, ending; judging whether the current credible values in a preset interval meet the constitution condition of a credible segment; if so, calibrating the initial focusing curve according to the credible segment; otherwise, caching the current credible value.
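The same decision flow, condensed into a hedged Python sketch; trusted_set, credible_values, sample_threshold, feature_value, segment_ok and calibrate are illustrative names assumed for the sketch, not terms defined by the application.

```python
def learn_from_focus(u, motor_value, trusted_set, credible_values,
                     sample_threshold, feature_value, segment_ok, calibrate):
    """One self-learning step after a TOF + CAF focusing pass at object distance u."""
    if u in credible_values:
        return                                        # credible value already exists: end
    trusted_set.setdefault(u, []).append(motor_value) # grow the statistical set for u
    samples = trusted_set[u]
    if len(samples) <= sample_threshold:
        return                                        # not enough samples yet: end
    credible_values[u] = feature_value(samples)       # e.g. mean / median / K-MEANS centre
    if segment_ok(credible_values):                   # credible-segment condition satisfied?
        calibrate(credible_values)                    # calibrate the initial focusing curve
    # otherwise the new credible value simply stays cached in credible_values
```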
In some embodiments of the first aspect of the present application, the data set feature algorithm includes, but is not limited to, any one or combination of mean calculation, median calculation, and K-MEANS clustering algorithm.
In some embodiments of the first aspect of the present application, before the credible value is calculated, a step of ensuring fault tolerance based on a data set checking strategy is further performed, the step comprising: judging whether the average difference among the samples of the current statistical set exceeds a preset threshold; and if so, clearing the statistical set or increasing its sample number threshold, so that the statistical set is skipped.
In some embodiments of the first aspect of the present application, the constitution condition of the credible segment is:
Q = (u0 − u1) < a × n; where u0 and u1 denote the object distance starting point and end point of the credible segment, n denotes the number of credible values in the interval, and a denotes a scaling factor. If (u0 − u1) < a × n holds, Q = 1, meaning the interval constitutes a credible segment; if (u0 − u1) < a × n does not hold, Q = 0, meaning the interval does not constitute a credible segment.
In some embodiments of the first aspect of the present application, the method further comprises: for object distances focused on at lower frequency, calibrating and updating the initial focusing curve by updating individual scattered credible points; and/or, for object distances focused on at higher frequency, calibrating and updating the initial focusing curve by updating credible segments formed where credible points cluster.
To achieve the above and other related objects, a second aspect of the present application provides a self-learning based focusing system, comprising: a data acquisition module, used for acquiring object distance value data of the current photographed subject; a focusing logic judgment module, used for searching the updated focusing curve for a corresponding credible value according to the object distance value data of the current photographed subject; and a focusing control module, used for, when the credible value is judged to exist, taking the found credible value as the lens motor control value to drive the lens motor to a specified position for focusing, and otherwise focusing based on the TOF focusing algorithm and the CAF focusing algorithm.
In some embodiments of the second aspect of the present application, the focusing system further comprises a curve generating module, configured to perform the following: performing primary focusing on a target object under an object distance value based on a TOF focusing algorithm, and performing stable focusing on the target object based on a CAF focusing algorithm to obtain a lens motor control value corresponding to the object distance value; judging whether an initial focusing curve is calibrated according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between an object distance value before updating and a lens motor control value.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having a first computer program stored thereon, which when executed by a processor, implements the self-learning based focusing method of the first aspect of the present application.
To achieve the above and other related objects, a fourth aspect of the present application provides an electronic terminal comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored by the memory so as to enable the terminal to execute the self-learning based focusing method provided by the first aspect of the application.
As described above, the self-learning based focusing method, system, terminal and medium of the present application have the following beneficial effects: through a self-learning algorithm, a credible focusing curve can be obtained by learning and calibration, so that in subsequent focusing the object distance value obtained by TOF (time of flight) is enough to drive the lens to the accurate focus position in a single step, without repeated CAF fine focusing; the influence of the image-blurring stage produced by CAF focusing is eliminated, the computing power CAF focusing would require is saved, and the focusing time is greatly shortened. Meanwhile, the method is cheap to implement, has good system compatibility, and the parameters involved in the algorithm can be adjusted dynamically, giving the algorithm excellent extensibility and tunability.
Drawings
Fig. 1 is a flowchart illustrating a self-learning based focusing method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart illustrating generation of an updated focusing curve according to an embodiment of the present application.
Fig. 3 is a schematic flow chart illustrating a process of determining whether to calibrate an initial focusing curve according to an obtained lens motor control value according to a focusing self-learning strategy in an embodiment of the present application.
Fig. 4 is a diagram illustrating a data structure of a trusted set according to an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating an updated focusing curve according to an embodiment of the present application.
FIG. 6 is a schematic structural diagram of a self-learning based focusing system according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic terminal according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
In this application, unless expressly stated or limited otherwise, the terms "mounted," "connected," "secured," "retained," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as the case may be.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," and/or "including," when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions or operations is inherently mutually exclusive in some way.
In order to solve the problems described in the background art, the invention provides a self-learning focusing method, system, terminal and medium based on the TOF and CAF focusing algorithms. The idea is to adopt a self-learning approach: record complete, effective focusing results and calibrate the initial focusing calibration curve based on a certain learning algorithm; if, in a subsequent focusing pass, the target object distance is found to have already been relearned and calibrated, the new focusing value is used directly and the sharpest focus is reached in one step, so the subsequent CAF stage is skipped. With this technical scheme, the focusing time can be greatly shortened and the repeated sharp-blurry image changes caused by the CAF stage can be reduced.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Before the present invention is explained in further detail, the terms and expressions referred to in the embodiments of the present invention are explained; the following explanations apply to these terms and expressions:
(1) TOF (Time of Flight) ranging refers to time-of-flight techniques, which measure the time taken by an object, particle or wave to fly a certain distance in a fixed medium. TOF ranging is a two-way ranging technology: it mainly uses the round-trip flight time of a signal between two asynchronous transceivers to measure the distance between nodes;
(2) CAF (Continuous Auto Focus) refers to continuous autofocus, an autofocus mode that keeps tracking focus as the subject moves.
The embodiment of the invention provides a self-learning-based focusing method, a system of the self-learning-based focusing method, a storage medium for storing an executable program for realizing the self-learning-based focusing method and a self-learning-based focusing electronic terminal. In terms of implementation of the self-learning based focusing method, the embodiment of the present invention will describe an exemplary implementation scenario of the self-learning based focusing.
Fig. 1 shows a schematic flow chart of a focusing method based on self-learning in an embodiment of the present invention. The focusing method based on self-learning in the embodiment is used for generating a trusted focusing curve, and mainly comprises the following steps.
Step S1: object distance value data of a current subject to be photographed is acquired.
It should be understood that the object distance value refers to the distance between the subject and the optical center of the lens, and is generally indicated by the letter U. A conjugate relation exists between the object distance U and the image distance V; the farther the object distance U is, the closer the image distance V is; conversely, the closer the object distance U, the farther the image distance V.
Step S2: and searching whether a corresponding credible value exists in the updated focusing curve according to the object distance value data of the current shot subject.
In the present embodiment, the updated focus curve is relative to the initial focus curve, which is an un-updated curve describing the mapping relationship between the object distance value and the lens motor control value. According to the embodiment of the invention, the initial focusing curve is directly used after being calibrated and updated, after the object distance value of the shot main body is obtained, CAF secondary focusing is not needed, and the accurate lens motor control value can be obtained by searching and updating the focusing curve.
In this embodiment, the generation process of the updated focusing curve is as shown in fig. 2, and includes steps S21 and S22.
Step S21: and carrying out primary focusing on a target object under an object distance value based on a TOF focusing algorithm, and carrying out stable focusing on the target object based on a CAF focusing algorithm to obtain a lens motor control value corresponding to the object distance value.
The specific process of performing preliminary focusing on a target object based on the TOF focusing algorithm and performing stable focusing on the target object based on the CAF focusing algorithm is as follows.
Step S21A: image data and depth data of a subject are acquired.
It should be understood that the "subject" in this step and the "subject" in step S1 above both refer to the subject captured by the lens, but at different processing stages. The "subject" in step S1 belongs to the application stage, after the updated focusing curve has been generated; the "subject" in step S21A provides the data used to calibrate the initial focusing curve during the generation stage of the updated focusing curve.
In this embodiment, the method of acquiring the image data and the depth data of the subject includes: acquiring image data of a subject acquired by an image acquisition unit; the depth data of the subject acquired by the depth acquisition unit is acquired, and the depth data is subjected to coordinate conversion according to a coordinate system in which the image data is located, so that the depth data in the world coordinate system is converted to a pixel coordinate system to unify the coordinate system with the image data.
Optionally, the image acquisition unit may be a camera module, and the camera module includes a camera device, a storage device, and a processing device; the image capturing device includes but is not limited to: cameras, video cameras, camera modules integrated with an optical system or CCD chip, camera modules integrated with an optical system and CMOS chip, HDMI image sources, USB camera image capturing devices, etc.
Further, image data formats suitable for use with embodiments of the present invention include, but are not limited to, the Bayer picture format, the RGB picture format, and the YUV picture format. The RGB picture format obtains various colors by varying and superposing the three color channels red (R), green (G) and blue (B); the Bayer picture format is an array digital image (file suffix .raw); the YUV picture format is mainly used in the field of analog video, separates the luminance information (Y) from the color information (UV), and does not require three independent video signals to be transmitted simultaneously, so it occupies very little bandwidth.
Optionally, the depth acquisition unit may be a TOF depth acquisition unit, and the acquired depth data mainly refers to distance data from the subject to a TOF transceiver screen. The TOF depth acquisition unit may be composed of an ITOF or a DTOF, and may be a single-point TOF or a dot-matrix TOF, which is not limited in the embodiment of the present invention.
It should be understood that DTOF refers to direct TOF, i.e. directly measuring the time of flight: the time interval between the transmitted and received pulse is measured, the light signal is transmitted and received N times within a single frame measurement time, histogram statistics are made over the N recorded flight times, and the flight time with the highest frequency of occurrence is used to calculate the target distance. ITOF refers to indirect TOF, i.e. indirectly measuring the time of flight, usually by measuring a phase shift, such as the phase difference between the transmitted sine/square wave and the received sine/square wave.
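As a rough numerical illustration of the DTOF principle described above (the 0.1 ns bin width and the example flight times are made-up values used only for the sketch):

```python
from collections import Counter

C = 299_792_458.0  # speed of light in m/s

def dtof_distance(flight_times_ns):
    """Estimate the target distance from N recorded times of flight (DTOF).

    Histogram statistics are made over the N flight times and the most
    frequent one is kept; the round trip is halved for the one-way distance.
    """
    histogram = Counter(round(t, 1) for t in flight_times_ns)  # 0.1 ns bins
    t_mode_ns, _ = histogram.most_common(1)[0]
    return C * (t_mode_ns * 1e-9) / 2.0

# dtof_distance([6.7, 6.6, 6.7, 6.7, 6.8]) -> about 1.0 m
```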
In this embodiment, the depth data is subjected to coordinate transformation according to a coordinate system in which the image data is located, and the transformation process includes: and converting the depth data from a world coordinate system to a corresponding camera coordinate system, converting the depth data from the camera coordinate system to a corresponding image coordinate system, and converting the depth data from the image coordinate system to a corresponding pixel coordinate system.
Specifically, the conversion from the world coordinate system to the camera coordinate system requires a rotation matrix $R$ and a translation vector $t$. The specific conversion is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad \text{(formula 1)}$$

where $(X_c, Y_c, Z_c)$ denotes a point in the camera coordinate system, $(X_w, Y_w, Z_w)$ denotes the same point in the world coordinate system, $R$ denotes an orthogonal unit rotation matrix, and $t$ denotes a three-dimensional translation vector.
The conversion from the camera coordinate system to the image coordinate system is a central projection, that is, the perspective relationship between the two is obtained with similar triangles and computed as:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \qquad \text{(formula 2)}$$

where $(x, y)$ denotes a point in the image coordinate system, the matrix containing the focal length $f$ is the camera focal length matrix, and $(X_c, Y_c, Z_c)$ denotes the point in the camera coordinate system.
The conversion from the image coordinate system to the pixel coordinate system is a discretization. The two coordinate systems lie on the same imaging plane and differ only in their origin and measurement unit: the origin of the image coordinate system is usually the midpoint of the imaging plane and its unit is mm, a physical unit, whereas the unit of the pixel coordinate system is the pixel, and a pixel is usually described by its row and column. The conversion from the image coordinate system to the pixel coordinate system is therefore:

$$u = \frac{x}{dx} + u_0 \qquad \text{(formula 3)}$$

$$v = \frac{y}{dy} + v_0 \qquad \text{(formula 4)}$$

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad \text{(formula 5)}$$

where $(u, v)$ refers to a point in the pixel coordinate system, $(x, y)$ refers to a point in the image coordinate system, $(u_0, v_0)$ is the pixel position of the image-coordinate origin, and $dx$ and $dy$ indicate how many mm each column and each row represent respectively, i.e. 1 pixel = $dx$ mm.
Combining the above formulas (1) to (5) yields the complete one-step conversion from the TOF world coordinate system to the pixel coordinate system:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad \text{(formula 6)}$$

where $K = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}$ is the camera intrinsic parameter matrix, built from the camera intrinsic parameters $f$, $dx$, $dy$, $u_0$ and $v_0$; $\begin{bmatrix} R & t \end{bmatrix}$ is the camera extrinsic parameter matrix, with $R$ and $t$ the camera extrinsic parameters; and $M = K \begin{bmatrix} R & t \end{bmatrix}$ denotes the projection matrix.
It should be understood that the purpose of performing one-step conversion from the world coordinate system of the TOF to the pixel coordinate system is to uniformly map the world coordinate system where the TOF is located and the pixel coordinate system of the image plane, and after coordinates are unified, subsequent processing procedures can be uniformly performed in the pixel coordinate system, so that calculation processing is facilitated.
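A compact numerical sketch of this one-step conversion (formula 6); the intrinsic matrix, rotation and translation below are placeholder values chosen only to show the shape of the computation, not calibration data from the application.

```python
import numpy as np

def world_to_pixel(P_w, K, R, t):
    """Project a TOF point from the world coordinate system to the pixel
    coordinate system: Zc * [u, v, 1]^T = K [R | t] [Xw, Yw, Zw, 1]^T."""
    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 extrinsic matrix [R | t]
    M = K @ Rt                             # 3x4 projection matrix M
    uvw = M @ np.append(P_w, 1.0)          # equals Zc * [u, v, 1]
    return uvw[:2] / uvw[2]                # pixel coordinates (u, v)

# Placeholder intrinsics/extrinsics for illustration only.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(world_to_pixel(np.array([0.1, 0.05, 1.0]), K, R, t))  # -> [740. 410.]
```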
Step S21B: and determining a focusing trigger time and a focusing target object according to the image data and the depth data of the shot subject, and performing primary focusing control on the focusing target object based on a TOF (time of flight) focusing algorithm after the focusing is triggered so as to drive the focusing lens to move to a preset position.
In this embodiment, a focusing target object is selected from the subject based on an image focusing algorithm, wherein the image focusing algorithm completes a complete focusing process from triggering to focusing, which determines the triggering timing of the focusing algorithm and the selection of the focusing target object. It should be noted that, the image focusing algorithm is not limited in the embodiments of the present invention, and existing algorithms (for example, a focusing algorithm based on a portrait mode, a focusing algorithm based on an image center area, and the like) capable of realizing focusing triggering may be applied to the technical solution of the present invention.
In this embodiment, the performing of focusing control on the focusing target object based on the TOF focusing algorithm to drive the focusing lens to the preset position includes: acquiring the object distance of the focusing target object from the depth data; obtaining the corresponding image distance value from the object distance value and the lens focal length value based on the imaging formula; and then obtaining the association between the object distance value and the lens motor control value from the association between the image distance value and the lens motor control value, so as to drive the focusing lens to the preset position corresponding to the object distance value.
It should be understood that the correlation between the image distance value and the lens motor control value refers to: when a lens motor control value (e.g., a driving current value of a lens motor) changes, the lens motor rotates accordingly, the rotation of the lens motor drives a photographing lens of an image capturing device (e.g., a camera) to move, and an image distance value also changes accordingly since the image distance is a distance between the photographing lens and a sensor. Therefore, the lens motor control value and the image distance value have a corresponding relationship, and the image distance value is correspondingly changed by changing the lens motor control value.
Here, the imaging formula refers to:

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f} \qquad \text{(formula 7)}$$

where $u$ denotes the object distance, $v$ denotes the image distance, and $f$ denotes the focal length. The corresponding image distance is obtained from the currently known lens focal length and the object distance of the focusing target object derived from the processed TOF depth data.
Optionally, the relationship curve between object distance and image distance may be calibrated in advance (that is, for a given focal length f, the image distance giving the sharpest image is measured at different object distances), stored as an array or fitted into a mapping curve between object distance u and image distance v; during focusing, the mapping curve is looked up directly with the current object distance value to obtain the corresponding image distance value.
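A minimal sketch of going from object distance to image distance with formula 7; the 5 mm focal length in the usage example is an assumed placeholder, not a value from the application.

```python
def image_distance(u_m: float, f_m: float) -> float:
    """Solve the thin-lens imaging formula 1/u + 1/v = 1/f for the image distance v."""
    if u_m <= f_m:
        raise ValueError("object distance must be larger than the focal length")
    return (u_m * f_m) / (u_m - f_m)

# Assumed 5 mm lens, subject 1 m away: v is only slightly longer than f,
# consistent with the conjugate relation (larger u -> smaller v).
v = image_distance(1.0, 0.005)   # ~0.005025 m
```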
Step S21C: and performing stable focusing control on the focusing target object based on a CAF focusing algorithm to obtain a lens motor control value corresponding to the current object distance value.
Since TOF focusing may carry a certain error, the CAF algorithm is needed for fine focusing to reach stable image sharpness. Preferably, this embodiment adopts a CAF contrast focusing algorithm; the specific process is, for example: starting from the lens position set by the TOF algorithm, the lens is moved across a range of 10 motor steps in front of and behind that position with a large step size (for example 3); the image characteristic value f is calculated after every movement, an optimization method is used to find the motor position that maximizes f during the movement, and the motor is moved to that position as the best focus position.
Because of the large step size, the maximum of the image characteristic value f may be crossed during the movement: when the step is large, a maximum can lie inside a single step, and when f is seen to fall back after having increased step by step, the maximum has been crossed. At this point the motor may be backed up with a small step size (for example 1), calculating f after every movement, until the best focus position is reached; reaching the best focus position means that the current focus state has stabilized. For example: the image characteristic value at step 1 is 20, at step 4 it is 45, and at step 7 it falls back to 35, which indicates that the maximum lies between steps 4 and 7; the step size is therefore reduced from 3 to 1 and the motor steps back from step 7 until it reaches the best focus position (i.e. the maximum image characteristic value f).
The optimization method arranges reasonable test points for different problems in production and scientific research, so as to reduce the number of tests and find the optimum quickly. Such methods include, but are not limited to, the golden-section method, stepwise improvement (also known as hill climbing), batch testing, scoring, contrast, parabolic methods, and the like.
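The coarse-then-fine contrast search described above might look roughly like the following sketch; the default step sizes and the ±10-step window mirror the example values in the text, while sharpness() is an assumed callback that returns the image characteristic value f at a given motor position.

```python
def caf_fine_focus(start_pos, sharpness, coarse_step=3, fine_step=1, half_range=10):
    """Contrast-based fine focusing around a TOF-derived start position.

    Coarse pass: step across the +/- half_range window with the large step
    until the image characteristic value f falls back (maximum crossed).
    Fine pass: back up with the small step while f keeps improving.
    """
    pos = start_pos - half_range
    prev_f = sharpness(pos)
    while pos < start_pos + half_range:          # coarse pass
        pos += coarse_step
        f = sharpness(pos)
        if f < prev_f:                           # f fell back: maximum was crossed
            break
        prev_f = f
    best_pos, best_f = pos, sharpness(pos)
    for _ in range(2 * coarse_step):             # fine pass, bounded to the crossed region
        f = sharpness(best_pos - fine_step)
        if f <= best_f:
            break
        best_pos, best_f = best_pos - fine_step, f
    return best_pos                              # taken as the best focus position
```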
Step S22: judging whether an initial focusing curve is calibrated according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between an object distance value before updating and a lens motor control value.
In this embodiment, the process of determining whether to calibrate the initial focusing curve according to the obtained lens motor control value according to the focusing self-learning strategy includes the steps shown in fig. 3:
step S22A: and searching whether the object distance value of the target object has a credible value in a credible set.
In this embodiment, the credible set is composed of a plurality of data groups; each data group consists of an object distance value and its corresponding statistical set, and each statistical set contains a plurality of lens motor control values. The credible value of an object distance value is the optimal lens motor control value corresponding to that object distance value.
For ease of understanding, the credible set illustrated in fig. 4 is taken as an example: the credible set S stores object distance values U_1, U_2, …, U_n, the statistical set T_1 corresponding to U_1, the statistical set T_2 corresponding to U_2, …, and the statistical set T_n corresponding to U_n. The statistical set T_1 contains x lens motor control values (t_11, t_12, …, t_1x), the statistical set T_2 contains y lens motor control values (t_21, t_22, …, t_2y), and T_n contains z lens motor control values (t_n1, t_n2, …, t_nz).
It should be noted that the credible set S is initially empty, and the data in it is generated by each focusing pass. Taking an object distance of 1 meter as an example, the first TOF preliminary focusing plus CAF stable focusing yields a lens motor control value of 0.60, the second yields 0.59, and the third yields 0.62; the lens motor control values obtained for the same object distance value thus form the statistical set of the 1-meter object distance.
It should be understood that, corresponding to the structure of the credible set S illustrated in fig. 4, the credible set may be stored in computer memory as an array, for example S = {[U_1, t_11, t_12, …, t_1x], [U_2, t_21, t_22, …, t_2y], …, [U_n, t_n1, t_n2, …, t_nz]}. This example is given for illustration and should not be construed as limiting; in practice, besides an array, any data format that can be stored on the computer device can be applied to the technical solution of the embodiment of the present invention.
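One possible in-memory representation of the credible set S, for illustration only; the centimetre quantization of the object distance value is an assumption of the sketch.

```python
from collections import defaultdict

# Credible set S: quantized object distance value (cm) -> statistical set,
# i.e. the list of lens motor control values observed for that object distance.
trusted_set = defaultdict(list)

def record_focus_result(object_distance_m, motor_value):
    """Add one TOF + CAF focusing result to the statistical set of its object distance."""
    u_cm = round(object_distance_m * 100)
    trusted_set[u_cm].append(motor_value)

# The 1-meter example from the text: three focusing passes yield 0.60, 0.59 and 0.62.
for value in (0.60, 0.59, 0.62):
    record_focus_result(1.0, value)
# trusted_set[100] == [0.60, 0.59, 0.62]
```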
Step S22B: and if the object distance value does not exist, adding the lens motor control value corresponding to the object distance value into the statistic set corresponding to the object distance value in the credible set.
In this embodiment, for an object distance value that has no credible value yet, the currently acquired lens motor control value needs to be added to the statistical set corresponding to that object distance value, so as to update the statistical set.
Step S22C: if so, ending the process.
In this embodiment, for the object distance value with the existing confidence value, no calculation is needed, and the confidence value corresponding to the object distance value can be directly used for focusing control in the subsequent focusing process.
Step S22D: judging whether the number of lens motor control values in the current statistical set exceeds the sample number threshold.
In the present embodiment, the threshold of the number of samples is set to determine whether to perform the confidence calculation on a statistical set, which is advantageous in that if the threshold of the number of samples is not set, the confidence calculation needs to be performed every time a new lens motor control value is added to the statistical set, which results in many redundant calculations in practical applications.
It should be noted that, in practical applications, a threshold value of the number of samples with a moderate size should be selected, and setting an excessively large threshold value of the number of samples can improve the accuracy of the finally calculated confidence value Y, but will correspondingly reduce the learning speed of the algorithm and prolong the time for learning the confidence value. Conversely, setting a smaller threshold number of samples may increase the algorithm learning speed, but may decrease the accuracy of the confidence value.
Step S22E: and if the sample number exceeds the sample number threshold, calculating the characteristic value of the statistical set based on a data set characteristic algorithm to serve as a credible value corresponding to the object distance value.
The data set feature algorithm is used for performing feature calculation on a data set, extracting feature values of the data set, and the feature values can be used for representing the whole data set. The data set feature algorithm in this embodiment includes, but is not limited to, any one or combination of mean calculation, median calculation, and K-MEANS clustering algorithm.
Further, before the credible value is calculated, a step of ensuring fault tolerance based on a data set checking strategy is performed, the step comprising: judging whether the average difference among the samples of the current statistical set exceeds a preset threshold; and if so, clearing the statistical set or increasing its sample number threshold, so that the statistical set is skipped.
Some statistical sets are skipped because, although they already contain a certain number of lens motor control values, the average difference among those values is too large; even if a credible value is computed with the mean, the median, the K-MEANS clustering algorithm or the like, the value is distorted and cannot be used to control the lens motor accurately. The embodiment of the invention therefore performs the fault-tolerance step based on the data set checking strategy before calculating the credible value, so as to ensure that a credible value with a good control effect is obtained.
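A sketch of the data set check and of the feature-value computation, assuming the mean is used as the data set feature algorithm and an average-deviation threshold of 0.05 (both are illustrative choices; the text equally allows the median or a K-MEANS centre).

```python
from statistics import mean

def credible_value(samples, max_mean_deviation=0.05):
    """Return the credible value of a statistical set, or None if the set is rejected.

    If the average absolute deviation of the samples exceeds the threshold, the
    set is considered unreliable: it would be cleared, or its sample number
    threshold raised, so that it is skipped for now.
    """
    m = mean(samples)
    if mean(abs(s - m) for s in samples) > max_mean_deviation:
        return None                 # reject: clear the set / raise its threshold
    return m                        # feature value of the set = credible value

# credible_value([0.60, 0.59, 0.62]) -> about 0.603; credible_value([0.3, 0.9]) -> None
```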
Step S22F: and if the sample number threshold is not exceeded, ending the process.
Step S22G: judging whether the current credible values in the preset interval meet the constitution condition of a credible segment.
The credible value corresponding to a certain object distance appears as a single credible point in the object distance-motor control value mapping graph, and several credible points can form a credible segment.
In this embodiment, the constitution condition of the credible segment is:
Q = (u0 − u1) < a × n; (formula 8)
where u0 and u1 denote the object distance starting point and end point of the credible segment, n denotes the number of credible values in the interval, and a denotes a scaling factor. If (u0 − u1) < a × n holds, Q = 1, meaning the interval constitutes a credible segment; if it does not hold, Q = 0, meaning the interval does not constitute a credible segment.
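Formula 8 translated directly into code; the default scaling factor is an assumed value, and the interval width is taken here as the absolute span between the two endpoints.

```python
def is_credible_segment(u0, u1, n, a=0.02):
    """Constitution condition of a credible segment (formula 8): the interval is
    credible (Q = 1) when its object distance span is smaller than a * n, i.e.
    the n credible values inside it are dense enough."""
    return abs(u0 - u1) < a * n

# With the 1.0 m - 1.2 m interval and its 11 credible points (see the example later):
# abs(1.0 - 1.2) = 0.2 < 0.02 * 11 = 0.22, so the interval forms a credible segment.
```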
Step S22H: and if so, calibrating the initial focusing curve according to the credible segment.
In this embodiment, for object distances focused on at lower frequency, the initial focusing curve is calibrated and updated by updating individual scattered credible points; and/or, for object distances focused on at higher frequency, it is calibrated and updated by updating credible segments formed where credible points cluster. It should be understood that lower-frequency and higher-frequency focusing object distances are relative concepts; in practice the split between lower and higher frequency can be defined freely, and the embodiment of the present invention does not limit it.
Further explanation: after learning, the credible points may be scattered, or they may cluster in segments at frequently focused object distances. The focusing curve may be updated in stages; once the number of credible points in a candidate credible segment reaches the threshold, that credible segment can be updated, and subsequent focusing passes directly use the focusing values of the credible segment. As the learning time grows and more and more credible segments become available, the credible segments eventually combine into a complete updated focusing curve.
It will be appreciated that one credible value constitutes a single credible point, and multiple credible points can be fitted to form a credible segment. Credible points are discrete while credible segments are continuous. Although the initial focusing curve can be calibrated and updated from individual credible points, discrete points can hardly be enumerated exhaustively, which not only limits the calibration efficiency of the focusing curve but also seriously affects its practicability.
For example, there are 11 fitting points between 1.0 and 1.2 meters: the credible set S stores object distance 1.0 m with credible value a1, 1.02 m with a2, 1.04 m with a3, 1.06 m with a4, 1.08 m with a5, 1.10 m with a6, 1.12 m with a7, 1.14 m with a8, 1.16 m with a9, 1.18 m with a10, and 1.20 m with a11. Updating the curve with discrete points has the advantage of introducing no error, but the learning time of the algorithm becomes very long, because points that have not been updated provide no calibration; for instance, the point at object distance 1.11 m in the example above cannot be calibrated. Curve fitting solves this problem well: the point at 1.11 m can be calibrated because it falls within the fitted segment covering object distances from 1.0 m to 1.2 m. Specific fitting means include, but are not limited to, polynomial fitting, four-parameter equation fitting, and the like.
It is worth noting that going from credible points to credible segments only appears to be an extension from points to a curve; behind it lie the technical questions of how to make the focusing curve more practical and how to improve the efficiency of calibration updates. The calibration update of a credible segment differs essentially from that of a credible point: before a credible segment is calibrated, it must be judged whether the constitution condition of the credible segment is met, and the segment calibration can only proceed once a credible segment has actually formed; this is why the embodiment of the invention performs the credible-segment judgment before the calibration update.
Furthermore, the embodiment of the invention may record the curve as an array and update the focusing curve by updating the values of the array, or record it as a function curve and update the focusing curve by updating the underlying data set; in the updated curve, the motor control value corresponding to the current object distance value U is equal to, or very close to, the credible value Y. It should be understood that for an object distance value calibrated from a single credible point, the corresponding lens motor control value equals the credible value; for an object distance value calibrated through a credible segment fitted from several credible points, the corresponding lens motor control value is not necessarily equal to the credible value but is very close to it. Taking the fitted segment from 1.0 m to 1.2 m above as an example: within the fitted credible segment, the lens motor control value at 1.0 m equals its credible value, while the value at 1.11 m is obtained by fitting, which introduces a small error but remains very close to the credible value that would be obtained with discrete points.
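A sketch of how a credible segment might be fitted and then queried, using polynomial fitting as one of the options named above; the polynomial degree and the sample values are illustrative assumptions.

```python
import numpy as np

def fit_credible_segment(object_distances, credible_values, degree=2):
    """Fit the credible points of one segment so that any object distance inside
    [u0, u1] maps to a lens motor control value close to its credible value."""
    coefficients = np.polyfit(object_distances, credible_values, degree)
    return np.poly1d(coefficients)   # callable: motor control value = f(object distance)

# Illustrative credible points for the 1.0 m - 1.2 m segment (values made up).
u = np.linspace(1.0, 1.2, 11)
y = 0.60 - 0.10 * (u - 1.0)              # pretend the credible values lie on this line
segment = fit_credible_segment(u, y)
motor_value_at_1_11_m = segment(1.11)    # interpolated control value for 1.11 m
```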
For ease of understanding, fig. 5 illustrates the updating of the focusing curve: the focusing curve is a mapping between the object distance U and the lens motor control value V; a hollow circle in the figure represents a calibrated credible point, a solid circle represents a raw measurement point before calibration, and the credible values between the object distance starting point u0 and the object distance end point u1 are fitted to form a credible segment D.
Step S22I: if not, caching the current credible value and finishing.
Caching the current credible value serves to accumulate credible values; once enough credible values have accumulated to satisfy formula 8, a credible segment can be formed and credible-segment fitting can then be carried out.
And step S3: if the credible value exists, the found credible value is used as a lens motor control value to drive the motor to move to the specified position for focusing.
For an object distance value that has a credible value, there is no need to perform preliminary focusing control with the TOF focusing algorithm followed by stable focusing control with the CAF focusing algorithm; focusing is achieved directly by looking up the lens motor control value in the updated focusing curve. This eliminates the influence of the image-blurring stage produced by CAF focusing, saves the computing power CAF focusing would require, and greatly shortens the focusing time.
And step S4: and if the credible value does not exist, focusing is carried out based on a TOF focusing algorithm and a CAF focusing algorithm.
If the credible value does not exist, focusing is still realized by means of carrying out preliminary focusing control through a TOF focusing algorithm and carrying out stable focusing control through a CAF focusing algorithm.
Taking fig. 5 as an example, if the object distance value of the current subject falls within the range of u0 to u1, the lens motor control value can be directly searched from the updated focusing curve; and if the object distance value of the current subject falls in the range beyond u0-u1 and no corresponding credible point exists in the object distance value of the subject, focusing is realized by means of primary focusing control through a TOF focusing algorithm and stable focusing control through a CAF focusing algorithm.
Fig. 6 is a schematic structural diagram of a self-learning based focusing system according to an embodiment of the present invention. The focusing system 600 includes a data acquiring module 601, a focusing logic determining module 602, and a focusing control module 603. The data acquisition module 601 is configured to acquire object distance value data of a current subject; the focusing logic judgment module 602 is configured to search whether a corresponding trusted value exists in the updated focusing curve according to the object distance value data of the current subject; the focusing control module 603 is configured to, in a case that it is determined that the trusted value exists, use the found trusted value as a lens motor control value to drive the motor to move to an assigned position for focusing; and otherwise, focusing based on a TOF focusing algorithm and a CAF focusing algorithm.
In this embodiment, the focusing system 600 further includes a curve generation module, configured to perform the following: performing preliminary focusing on a target object at an object distance value based on the TOF focusing algorithm, and performing stable focusing on the target object based on the CAF focusing algorithm, to obtain the lens motor control value corresponding to the object distance value; and judging, according to the focusing self-learning strategy, whether to calibrate the initial focusing curve with the obtained lens motor control value, so as to generate the updated focusing curve; the initial focusing curve is the mapping relation between object distance values and lens motor control values before updating.
It should be noted that: in the focusing system based on self-learning provided in the above embodiments, when the focusing system based on self-learning is used for focusing, only the division of the above program modules is taken as an example, in practical applications, the above processing distribution may be completed by different program modules according to needs, that is, the internal structure of the system may be divided into different program modules to complete all or part of the above-described processing. In addition, the focusing system based on self-learning provided by the above embodiment and the focusing method based on self-learning belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiment and is not described herein again.
The self-learning based focusing method provided by the embodiment of the present invention can be implemented on the terminal side or the server side. Fig. 7 shows an optional hardware structure of the self-learning based focusing electronic terminal 700, which may be a live-streaming device, a mobile phone, a computer device, a tablet device, a personal digital processing device, a factory background processing device, or any other device integrating a photographing/filming function. The self-learning based focusing terminal 700 includes: at least one processor 701, a memory 702, at least one network interface 704, and a user interface 706. The components of the device are coupled together through a bus system 705. It will be appreciated that the bus system 705 is used to enable communication among these components; besides a data bus, it includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system in fig. 7.
The user interface 706 may include, among other things, a display, a keyboard, a mouse, a trackball, a click gun, keys, buttons, a touch pad, or a touch screen.
It will be appreciated that the memory 702 can be volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be, for example, a Read Only Memory (ROM) or a Programmable Read-Only Memory (PROM); the volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM) and Synchronous Static Random Access Memory (SSRAM). The memory described in the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 702 in the embodiment of the present invention is used to store various kinds of data to support the operation of the self-learning based focusing terminal 700. Examples of such data include: any executable programs for operating on the self-learning based focus terminal 700, such as an operating system 7021 and application programs 7022; the operating system 7021 includes various system programs such as a framework layer, a core library layer, a driver layer, and the like for implementing various basic services and for processing hardware-based tasks. The application 7022 may include various applications such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. The self-learning based focusing method provided by the embodiment of the invention can be embodied in the application program 7022.
The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be carried out by integrated logic circuits in hardware or by software instructions in the processor 701. The processor 701 may be a general purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 701 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor. The steps of the self-learning based focusing method provided by the embodiment of the invention may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium; the storage medium is located in the memory, and the processor reads it and completes the steps of the above method in combination with its hardware.
In an exemplary embodiment, the self-learning based focusing terminal 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), or Complex Programmable Logic Devices (CPLDs) for performing the aforementioned method.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be performed by hardware controlled by a computer program. The aforementioned computer program may be stored in a computer-readable storage medium. When executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
In the embodiments provided herein, the computer-readable and writable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be non-transitory, tangible storage media. Disk and disc, as used in this application, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In summary, the invention provides a self-learning based focusing method, system, terminal, and medium. The method and the system can learn and calibrate through a self-learning algorithm to obtain a credible focusing curve, so that in the subsequent focusing process the object distance value obtained through TOF can be focused to an accurate position in one step without repeated CAF fine focusing; this eliminates the influence of the image blurring stage produced by CAF focusing, removes the computing power the system would otherwise spend on CAF focusing, and greatly shortens the focusing time. Meanwhile, the method has low implementation cost and good system compatibility, and the parameters involved in the algorithm can be dynamically adjusted, so that the algorithm has excellent extensibility and adjustability. Therefore, the application effectively overcomes various defects in the prior art and has high industrial utilization value.
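As a minimal, non-authoritative illustration of the runtime decision summarized above, the following Python sketch looks the current object distance up in the updated focusing curve and, when a credible value exists, drives the lens motor in one step; otherwise it falls back to combined TOF plus CAF focusing. The helper names drive_lens_motor and tof_then_caf_focus, the millimetre units, and the example curve values are assumptions made for illustration only and are not part of the disclosure.

```python
from typing import Dict, Optional


def drive_lens_motor(control_value: int) -> None:
    """Placeholder for the hardware-specific lens motor driver (assumed name)."""
    print(f"moving lens motor to control value {control_value}")


def tof_then_caf_focus(object_distance_mm: int) -> int:
    """Placeholder fallback: TOF-based initial focusing followed by CAF fine focusing."""
    return 500  # dummy lens motor control value for the sketch


def focus_once(updated_focus_curve: Dict[int, int], object_distance_mm: int) -> int:
    """One focusing pass: use the credible value from the updated focusing curve
    when it exists, otherwise fall back to the TOF + CAF focusing algorithms."""
    credible: Optional[int] = updated_focus_curve.get(object_distance_mm)
    if credible is not None:
        drive_lens_motor(credible)        # one-step focus, no repeated CAF fine search
        return credible
    control_value = tof_then_caf_focus(object_distance_mm)
    drive_lens_motor(control_value)
    return control_value


if __name__ == "__main__":
    curve = {300: 412, 500: 368}          # object distance (mm) -> motor control value
    focus_once(curve, 500)                # credible value found: one-step focusing
    focus_once(curve, 800)                # no credible value: TOF + CAF fallback
```

A companion sketch of the learning and calibration side, which produces the updated focusing curve consumed here, is given after the claims below.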
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (11)

1. A focusing method based on self-learning is characterized by comprising the following steps:
acquiring object distance data of a current shot subject;
searching whether a corresponding credible value exists in the updated focusing curve according to the object distance value data of the current shot subject;
if the credible value exists, taking the found credible value as a lens motor control value to drive the lens motor to move to a specified position for focusing; and otherwise, focusing based on a TOF focusing algorithm and a CAF focusing algorithm.
2. The self-learning based focusing method of claim 1, wherein the updated focusing curve is generated in a manner comprising:
performing initial focusing on a target object at an object distance value based on a TOF focusing algorithm, and performing stable focusing on the target object based on a CAF focusing algorithm to obtain a lens motor control value corresponding to the object distance value;
determining, according to a focusing self-learning strategy, whether to calibrate an initial focusing curve with the obtained lens motor control value, so as to generate the updated focusing curve; the initial focusing curve is the mapping relation, before updating, between object distance values and lens motor control values.
3. The self-learning based focusing method of claim 2, wherein the determining, according to the focusing self-learning strategy, whether to calibrate the initial focusing curve with the lens motor control value comprises:
searching whether the object distance value of the target object has a credible value in a credible set; wherein the credible set comprises a number of data groups; each data group consists of an object distance value and a corresponding statistical set; each statistical set includes a plurality of lens motor control values; and the credible value is the optimal lens motor control value corresponding to the object distance value;
if not, adding the lens motor control value corresponding to the object distance value into the statistical set corresponding to the object distance value in the credible set; otherwise, ending;
judging whether the number of the lens motor control values in the current statistical set exceeds a sample number threshold value or not;
if the sample number exceeds the sample number threshold value, calculating a characteristic value of the statistical set based on a data set characterization algorithm to serve as the credible value corresponding to the object distance value; otherwise, ending;
judging whether the current credible value in a preset interval meets the construction conditions of a credible segment; if so, calibrating the initial focusing curve according to the credible segment; otherwise, caching the current credible value.
4. The self-learning based focusing method of claim 3, wherein the data set characterization algorithm comprises: any one or a combination of a mean calculation method, a median calculation method, and a K-MEANS clustering algorithm.
5. The self-learning based focusing method of claim 3, wherein a step of ensuring fault tolerance based on a data set inspection strategy is further performed before calculating the credible value, and the step comprises: judging whether the average deviation of the samples in the current statistical set exceeds a preset threshold value; and if so, clearing the statistical set or increasing the sample number threshold of the statistical set, so that the statistical set is skipped.
6. The self-learning based focusing method according to claim 3, wherein the construction conditions of the credible segment include:
Q = (u0 - u1) < a * n;
wherein u0 and u1 represent the object distance starting point and the object distance end point of the credible segment; n represents the number of credible values in the interval; a denotes a scaling factor; if (u0 - u1) < a * n holds, Q = 1, which indicates that the interval constitutes a credible segment; if (u0 - u1) < a * n does not hold, Q = 0, which indicates that the interval does not constitute a credible segment.
7. The self-learning based focusing method of claim 3, wherein the method further comprises: calibrating and updating the initial focusing curve at object distances that are focused with lower frequency by updating individual scattered credible points; and/or calibrating and updating the initial focusing curve at object distances that are focused with higher frequency by updating credible segments formed by aggregated credible points.
8. A self-learning based focusing system, comprising:
the data acquisition module is used for acquiring object distance value data of the current shot subject;
the focusing logic judgment module is used for searching whether a corresponding credible value exists in the updated focusing curve according to the object distance value data of the current shot subject;
the focusing control module is used for taking the searched credible value as a lens motor control value under the condition of judging that the credible value exists so as to drive the lens motor to move to a specified position for focusing; and otherwise, focusing based on a TOF focusing algorithm and a CAF focusing algorithm.
9. The self-learning based focusing system of claim 8, further comprising a curve generation module for performing the following:
performing initial focusing on a target object at an object distance value based on a TOF focusing algorithm, and performing stable focusing on the target object based on a CAF focusing algorithm to obtain a lens motor control value corresponding to the object distance value;
determining, according to a focusing self-learning strategy, whether to calibrate an initial focusing curve with the obtained lens motor control value, so as to generate the updated focusing curve; the initial focusing curve is the mapping relation, before updating, between object distance values and lens motor control values.
10. A computer-readable storage medium, on which a first computer program is stored, wherein the first computer program, when executed by a processor, implements the self-learning based focusing method of any one of claims 1 to 7.
11. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so as to cause the terminal to perform the self-learning based focusing method according to any one of claims 1 to 7.
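As a non-authoritative illustration of the self-learning strategy recited in claims 2 to 7, the following Python sketch accumulates one statistical set per object distance, promotes a set to a credible value once it is large and consistent enough, and checks the claimed credible-segment construction condition. The sample threshold, the deviation limit, the median characterization, the scaling factor value, and all names in the sketch are illustrative assumptions, not values or APIs taken from the disclosure.

```python
from statistics import median
from typing import Dict, List

SAMPLE_THRESHOLD = 8      # minimum samples per statistical set (assumed value)
MAX_MEAN_ABS_DEV = 6.0    # fault-tolerance limit on sample spread (assumed value)
SCALE_FACTOR_A = 2.0      # the scaling factor "a" in the credible-segment test (assumed)


def mean_abs_deviation(samples: List[int]) -> float:
    m = sum(samples) / len(samples)
    return sum(abs(s - m) for s in samples) / len(samples)


class SelfLearningCalibrator:
    def __init__(self) -> None:
        # credible set: object distance -> statistical set of lens motor control values
        self.statistical_sets: Dict[int, List[int]] = {}
        # learned credible values: object distance -> optimal lens motor control value
        self.credible_values: Dict[int, int] = {}

    def record_sample(self, object_distance: int, motor_value: int) -> None:
        """Add one TOF + CAF focusing result; promote the statistical set to a
        credible value once it is large enough and consistent enough."""
        if object_distance in self.credible_values:
            return                            # a credible value already exists
        samples = self.statistical_sets.setdefault(object_distance, [])
        samples.append(motor_value)
        if len(samples) < SAMPLE_THRESHOLD:
            return
        if mean_abs_deviation(samples) > MAX_MEAN_ABS_DEV:
            samples.clear()                   # fault tolerance: discard a noisy set
            return
        self.credible_values[object_distance] = int(median(samples))

    def is_credible_segment(self, u_start: int, u_end: int) -> bool:
        """Claimed construction condition: the interval forms a credible segment when
        its span is smaller than a * (number of credible values inside the interval)."""
        n = sum(1 for d in self.credible_values if u_start <= d <= u_end)
        return abs(u_end - u_start) < SCALE_FACTOR_A * n
```

In this sketch, isolated credible values could calibrate individual points of the initial focusing curve, while an interval that passes the credible-segment test could calibrate the curve segment by segment, mirroring claim 7.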
CN202310086377.2A 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning Active CN115802161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310086377.2A CN115802161B (en) 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310086377.2A CN115802161B (en) 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning

Publications (2)

Publication Number Publication Date
CN115802161A true CN115802161A (en) 2023-03-14
CN115802161B CN115802161B (en) 2023-05-09

Family

ID=85430627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310086377.2A Active CN115802161B (en) 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning

Country Status (1)

Country Link
CN (1) CN115802161B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06169423A (en) * 1992-11-30 1994-06-14 Hitachi Ltd Control method for autofocus device
WO2017107842A1 (en) * 2015-12-23 2017-06-29 北京奇虎科技有限公司 Zoom tracking curve calibration method and device
CN107071243A (en) * 2017-03-09 2017-08-18 成都西纬科技有限公司 Camera focus calibration system and focus calibration method
WO2020015754A1 (en) * 2018-07-19 2020-01-23 杭州海康威视数字技术股份有限公司 Image capture method and image capture device
CN110913129A (en) * 2019-11-15 2020-03-24 浙江大华技术股份有限公司 Focusing method, device, terminal and storage device based on BP neural network
CN112565591A (en) * 2020-11-20 2021-03-26 广州朗国电子科技有限公司 Automatic focusing lens calibration method, electronic equipment and storage medium
US20220066286A1 (en) * 2020-09-01 2022-03-03 Sorenson Ip Holdings, Llc System, Method, and Computer-Readable Medium for Autofocusing a Videophone Camera
CN114727023A (en) * 2022-06-07 2022-07-08 杭州星犀科技有限公司 Method and system for adjusting camera parameters
CN115701123A (en) * 2021-07-29 2023-02-07 华为技术有限公司 Focusing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JING ZHANG et al.: "Focusing algorithm of automatic control microscope based on digital image processing" *
CHENG Hao: "Research on auto-focusing of a novel digital micro***" *

Also Published As

Publication number Publication date
CN115802161B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
US10297034B2 (en) Systems and methods for fusing images
CN113518210B (en) Method and device for automatic white balance of image
CN106341584B (en) Camera module focusing production method, camera module and terminal
US10645364B2 (en) Dynamic calibration of multi-camera systems using multiple multi-view image frames
JP5725975B2 (en) Imaging apparatus and imaging method
WO2021204202A1 (en) Image auto white balance method and apparatus
CN109151301A (en) Electronic device including camera model
CN110207835A (en) A kind of wave front correction method based on out-of-focus image training
CN104618639B (en) Focusing control device and its control method
WO2017107842A1 (en) Zoom tracking curve calibration method and device
US20110292276A1 (en) Imaging apparatus, imaging system, control method of imaging apparatus, and program
CN109754434A (en) Camera calibration method, apparatus, user equipment and storage medium
CN106683139A (en) Fisheye-camera calibration system based on genetic algorithm and image distortion correction method thereof
CN107181918A (en) A kind of dynamic filming control method and system for catching video camera of optics
CN106896622A (en) Based on more apart from the bearing calibration of auto-focusing
CN103261939A (en) Image capture device and primary photographic subject recognition method
CN111182238B (en) High-resolution mobile electronic equipment imaging device and method based on scanning light field
CN105141872B (en) The method of video image is handled when a kind of contracting
BR112021005139A2 (en) image processing apparatus and method, and, program for causing an image processing apparatus to perform an image process
WO2023236508A1 (en) Image stitching method and system based on billion-pixel array camera
CN107517345A (en) Shooting preview method and capture apparatus
CN111062400A (en) Target matching method and device
CN111815715A (en) Method and device for calibrating zoom pan-tilt camera and storage medium
CN110060208A (en) A method of improving super-resolution algorithms reconstruction property
CN116681732B (en) Target motion recognition method and system based on compound eye morphological vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant