CN114095628B - Automatic focusing method, automatic focusing visual device and control method thereof - Google Patents

Info

Publication number
CN114095628B
CN114095628B (application CN202111192907.9A)
Authority
CN
China
Prior art keywords
convex lens
lens
guide rail
focusing
cosα
Prior art date
Legal status
Active
Application number
CN202111192907.9A
Other languages
Chinese (zh)
Other versions
CN114095628A (en)
Inventor
吴飞 (Wu Fei)
Current Assignee
Shanghai Mengfei Automation Technology Co ltd
Original Assignee
Shanghai Mengfei Automation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Mengfei Automation Technology Co ltd filed Critical Shanghai Mengfei Automation Technology Co ltd
Priority to CN202111192907.9A priority Critical patent/CN114095628B/en
Publication of CN114095628A publication Critical patent/CN114095628A/en
Application granted granted Critical
Publication of CN114095628B publication Critical patent/CN114095628B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Focusing (AREA)

Abstract

The invention discloses an automatic focusing method, an automatic focusing visual device and a control method thereof, wherein the focusing method comprises the steps A1-A8; the automatic focusing visual device comprises a bracket, a first lens, a second lens, a movable guide rail assembly, a fixed guide rail assembly and a voice coil motor, wherein a sawtooth frame matched with the movable guide rail assembly and the fixed guide rail assembly is arranged on an output shaft of the voice coil motor, and a first mounting hole for mounting the first lens and a second mounting hole for mounting the second lens are arranged on the bracket; the control method of the automatic focusing visual device comprises the steps C1-C4.

Description

Automatic focusing method, automatic focusing visual device and control method thereof
Technical Field
The invention relates to the field of automatic focusing visual devices, in particular to an automatic focusing method, an automatic focusing visual device and a control method thereof.
Background
Existing automatic focusing cameras cannot automatically and accurately measure distance, cannot automatically and accurately focus on an observed object behind glass or photograph an underwater observed object from above the water surface, and cannot focus precisely; as a result, existing cameras and mobile-phone cameras cannot realize three-dimensional identification.
Disclosure of Invention
The technical problems the invention aims to solve are that existing automatic focusing cameras cannot automatically and accurately measure distance, cannot automatically focus on an observed object behind glass or photograph an underwater observed object from above the water surface, and cannot focus precisely, so that existing cameras and mobile-phone cameras cannot realize three-dimensional identification. Three-dimensional identification can be realized because 2 convex lenses photograph the same object: putting the two images together, the characteristics of the three-dimensional structure can be confirmed through changes in the shadowed parts and in the lengths of lines. The position of maximum coincidence ratio is the position of accurate focusing, at which the accurate distance can be confirmed, thereby overcoming the defects of the prior art.
The invention provides the following technical scheme for solving the technical problems:
in a first aspect, an auto-focusing method includes the steps of:
step A1: taking the centers of the completely identical first photoelectric sensor and second photoelectric sensor respectively as origins, establish two centrally symmetric rectangular coordinate systems, namely a rectangular coordinate system XY and a rectangular coordinate system XY';
step A2: taking the origin as the center, establish in the rectangular coordinate system XY and the rectangular coordinate system XY' a centrally symmetric first grid and second grid of 2m₀×2n₀ cells, each cell in the first grid and the second grid being a characteristic unit;
step A3: taking the origin as the center, establish in the first grid and the second grid a centrally symmetric first focusing area and second focusing area of 2m×2n cells, where y = L₁ is the boundary line between the common field of view and the non-common field of view of the first photoelectric sensor and y = L₂ is the boundary line between the common field of view and the non-common field of view of the second photoelectric sensor;
step A4: determine all characteristic types of the first focusing area in 8-bit binary and record a group of characteristic data, the characteristic data being the class of values for which the corresponding red, green and blue pixel values of each characteristic unit are equal;
step A5: acquire the endpoint coordinates of each piece of characteristic data of the first focusing area in the X-axis direction of the first grid and the second grid, where the left endpoint coordinate of the first grid is (x₁, y₁), the right endpoint coordinate is (x₂, y₂), the left endpoint coordinate of the second grid is (x₃, y₃), and the right endpoint coordinate is (x₄, y₄), with x₁ ≤ L₁, x₂ ≤ L₁, x₃ ≤ L₂, x₄ ≤ L₂, ε₂ − 2μn₀ ≤ y₃ ≤ 2μn₀ + ε₁ and ε₂ − 2μn₀ ≤ y₄ ≤ 2μn₀ + ε₁; ε₁ is the integrated error of the second photoelectric sensor relative to the Y-axis direction of the first photoelectric sensor when the second convex lens corresponding to the second photoelectric sensor runs its full stroke, and ε₂ is the downward integrated error of the second photoelectric sensor relative to the Y-axis direction of the first sensor when the second convex lens corresponding to the first photoelectric sensor runs its full stroke;
step A6: substitute x₁, x₂, x₃, x₄ into formula 1 and formula 2 respectively to obtain α₁, α₂, θ₁, θ₂;
equation 1: αₙ = arctan[(L*xₙ − f*xₙ*cot α)/(L*f)];
equation 2: θₙ = arctan[(L*xₙ₊₂ − f*xₙ₊₂*cos α)/(L*f)];
wherein αₙ and θₙ are the arctangent angles of the endpoint abscissas to the current object distance, L is the distance between the optical centers of the first convex lens and the second convex lens, f is the focal length of the first convex lens and the second convex lens (the two focal lengths are identical), and α is the included angle between the line joining the optical centers of the first convex lens and the second convex lens and the axis of the second convex lens;
step A7: substitute α₁, α₂, θ₁, θ₂ into formula 3 to obtain the focusing point angle β;
equation 3:
β = arctan{[sin(α₁−α₂)*sin²(α+θ₂)*cos(α₁+α−θ₁)*cos(α₁−α₂) + sin²(α₁−α₂)*sin²(α+θ₂)*sin(α₁+α−θ₁) − sin(α₁−α₂)*sin(θ₁−θ₂)*sin(α+θ₂)*cos α₂] ÷ [sin(α₁−α₂)*sin(α+θ₂)*cos(α₁+α−θ₁)*cos α₁*cos(α+θ₂−α₂) + sin(α₁−α₂)*sin(α+θ₂)*sin(α₁+α−θ₁)*sin α₁*cos(α+θ₂−α₂) − sin(θ₁−θ₂)*sin α₁*cos α₂*cos(α+θ₂−α₂)]};
wherein β is the focusing point angle of the feature;
step A8: the second lens is adjusted to move to this angle according to the focal point angle beta.
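Steps A6–A7 can be sketched numerically. The following is a minimal illustration of formulas 1–3, working in radians; the sample values for L, f, α and the endpoint abscissas used in testing are illustrative assumptions, not values from the patent:

```python
import math

def alpha_from_x(x, L, f, a):
    # Formula 1: alpha_n = arctan[(L*x_n - f*x_n*cot(alpha)) / (L*f)]
    return math.atan((L * x - f * x / math.tan(a)) / (L * f))

def theta_from_x(x, L, f, a):
    # Formula 2: theta_n = arctan[(L*x_{n+2} - f*x_{n+2}*cos(alpha)) / (L*f)]
    return math.atan((L * x - f * x * math.cos(a)) / (L * f))

def focus_angle(a1, a2, t1, t2, a):
    # Formula 3: focusing point angle beta
    s, c = math.sin, math.cos
    num = (s(a1 - a2) * s(a + t2) ** 2 * c(a1 + a - t1) * c(a1 - a2)
           + s(a1 - a2) ** 2 * s(a + t2) ** 2 * s(a1 + a - t1)
           - s(a1 - a2) * s(t1 - t2) * s(a + t2) * c(a2))
    den = (s(a1 - a2) * s(a + t2) * c(a1 + a - t1) * c(a1) * c(a + t2 - a2)
           + s(a1 - a2) * s(a + t2) * s(a1 + a - t1) * s(a1) * c(a + t2 - a2)
           - s(t1 - t2) * s(a1) * c(a2) * c(a + t2 - a2))
    return math.atan(num / den)
```

With endpoint abscissas from step A5, the first two functions yield α₁, α₂, θ₁, θ₂ and the third yields β, which would then drive step A8.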
In the above-mentioned automatic focusing method, in the step A1, the X-axis marks of the rectangular coordinate system XY sequentially decrease from left to right, and the X-axis marks of the rectangular coordinate system XY' sequentially increase from left to right;
in the step A2, the characteristic unit is a 2×2 Bayer array formed by 1 red pixel unit, 2 green pixel units and 1 blue pixel unit, and the length of each pixel unit in the X direction is denoted μ;
the L₁ in the step A3 is calculated from equation 4, and the L₂ is calculated from equation 5;
equation 4: L₁ = (W*L²*f − W*L*f²*cos α)/(W*L²*sin α*cos α − W*L*f*sin α*cos²α + 2L²*f*sin²α − W*L*f*cos²α + W*f²*cos³α − 2L*f²*sin α*cos α);
equation 5: L₂ = (W*L²*f*sin²α − W*L*f²*sin α*cos α)/(W*L²*sin α*cos α − W*L*f*cos²α + 2L²*f*sin²α − W*L*f*sin α*cos²α + W*f²*cos³α − 2L*f²*sin²α*cos α);
wherein W is the width of the first grid and the second grid in the X direction, and f is the focal length of the first convex lens and the second convex lens (the two focal lengths are identical).
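Once W, L, f and α are known, formulas 4 and 5 can be evaluated directly. A minimal sketch follows; the sample values used in testing are illustrative assumptions:

```python
import math

def field_boundaries(W, L, f, a):
    """Evaluate formulas 4 and 5 for the boundary lines L1 and L2.
    W: grid width in X; L: optical-center distance; f: shared focal
    length; a: included angle alpha in radians (45 deg <= a <= 90 deg)."""
    s, c = math.sin(a), math.cos(a)
    L1 = (W*L**2*f - W*L*f**2*c) / (
        W*L**2*s*c - W*L*f*s*c**2 + 2*L**2*f*s**2
        - W*L*f*c**2 + W*f**2*c**3 - 2*L*f**2*s*c)
    L2 = (W*L**2*f*s**2 - W*L*f**2*s*c) / (
        W*L**2*s*c - W*L*f*c**2 + 2*L**2*f*s**2
        - W*L*f*s*c**2 + W*f**2*c**3 - 2*L*f**2*s**2*c)
    return L1, L2
```

For consistent geometry, L₁ should exceed L₂ (the first sensor's common-field boundary lies further out than the second's), which the test values below confirm.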
The automatic focusing method further comprises the step of checking the coincidence ratio of the focusing result in the focusing area, and the specific steps are as follows:
step B1: selecting the characteristic data of any characteristic in the first focusing area to subtract the characteristic data of the same position in the second focusing area to obtain the difference of the characteristic data;
step B2: when the characteristics whose characteristic-data difference is 0 account for 60%–100% of the characteristics in the first focusing area, confirm that focusing on these characteristics is completed;
step B3: when the characteristics whose characteristic-data difference is 0 do not reach 60%–100% of the characteristics in the first focusing area, run the second convex lens step by step until they do, at which point focusing is completed.
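A minimal sketch of the coincidence check in steps B1–B2; representing each focusing area as a flat list of 8-bit characteristic values is an assumption, since the patent only fixes the subtraction and the 60%–100% band:

```python
def focusing_complete(first_area, second_area, lo=0.60, hi=1.00):
    # Step B1: subtract characteristic data at matching positions.
    diffs = [a - b for a, b in zip(first_area, second_area)]
    # Step B2: focusing is confirmed when the share of zero
    # differences lies in the required 60%-100% band.
    zero_ratio = diffs.count(0) / len(diffs)
    return lo <= zero_ratio <= hi
```

Step B3 would then repeatedly move the second convex lens and re-run this check until it returns True.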
The second aspect is an automatic focusing vision device according to an automatic focusing method, wherein the automatic focusing vision device comprises a bracket, a first lens, a second lens, a movable guide rail assembly, a fixed guide rail assembly and a voice coil motor, wherein a saw-tooth rack matched with the movable guide rail assembly and the fixed guide rail assembly is arranged on an output shaft of the voice coil motor, and a first mounting hole for mounting the first lens and a second mounting hole for mounting the second lens are arranged on the bracket;
the movable guide rail assembly comprises a mounting shaft bracket and a movable guide rail, the fixed guide rail assembly comprises a connecting shaft, a fixed guide rail and a fixed shaft, the mounting shaft bracket is U-shaped, two ends of the mounting shaft bracket are respectively connected to the upper side and the lower side of a first lens and then connected to the first mounting hole of the bracket, one end of the first lens, which is far away from the first mounting hole, is provided with a first groove for placing the movable guide rail, a first sliding rod is arranged in the first groove, a first sliding groove matched with the first sliding rod is formed in the movable guide rail, one end of the movable guide rail is connected to the mounting shaft bracket, and the other end of the movable guide rail is provided with a sawtooth groove matched with the sawtooth frame;
the second lens is installed in the second installation hole of the bracket through the connecting shaft, a second groove for placing the fixed guide rail is formed in one end, deviating from the second installation hole, of the second lens, a second sliding rod is installed in the second groove, a second sliding groove matched with the second sliding rod is formed in the fixed guide rail, two ends of the fixed shaft are respectively connected with the fixed guide rail and the bracket, and a sawtooth guide rail matched and connected with the sawtooth frame is arranged at one end of the second lens;
the voice coil motor is externally connected or internally provided with a controller which is in control connection with the voice coil motor and stores the automatic focusing method of the first aspect, and the controller is respectively connected with the first lens and the second lens through wires or wirelessly for data interaction and control.
The automatic focusing visual device according to the automatic focusing method, wherein the bracket comprises a bottom plate and a vertical plate, the first mounting hole and the second mounting hole are respectively formed in the vertical plate, and one end of the fixed shaft is connected with the bottom plate;
the sawtooth rack is an I-shaped sawtooth rack, and two ends of the I-shaped sawtooth rack are respectively connected with the sawtooth groove and the sawtooth guide rail in a matching way;
the sawtooth groove and the sawtooth guide rail are arc-shaped.
The automatic focusing visual device according to the automatic focusing method, wherein a first convex lens and a first guide post are arranged in the first lens, the first sliding rod is arranged on the first guide post, the first convex lens is arranged on the front end face of the first lens, and a first photoelectric sensor is arranged at one end of the first guide post facing the first convex lens;
a second convex lens and a second guide post are arranged in the second lens, the second sliding rod is arranged on the second guide post, the second convex lens is arranged on the front end face of the second lens, and a second photoelectric sensor is arranged at one end of the second guide post facing the second convex lens;
the first photoelectric sensor and the second photoelectric sensor are connected with the controller for data transmission.
According to the automatic focusing visual device according to the automatic focusing method, the first filter is arranged on the first photoelectric sensor, and the second filter is arranged on the second photoelectric sensor.
In a third aspect, a method for controlling an autofocus vision device includes the steps of:
step C1: the voice coil motor drives the sawtooth rack to drive the movable guide rail to move, a first sliding groove on the movable guide rail constrains the first sliding rod to enable the first guide pillar to drive the first photoelectric sensor to move, meanwhile, the voice coil motor drives the sawtooth rack to drive the sawtooth guide rail to move, and a second sliding rod drives the second photoelectric sensor on the second guide pillar to move under the constraint of a second sliding groove;
step C2: acquiring detection data of the first photoelectric sensor and the second photoelectric sensor in real time and transmitting the detection data to the controller;
step C3: the controller calculates the detection data to obtain control data containing voice coil motor driving data;
step C4: the controller controls the voice coil motor to focus through control data.
The control method of the automatic focusing vision device, wherein the confirmation formula of the first chute shape on the moving guide rail is as follows:
X₁ = [Z₁ + L*f/(L − f*cot α)]*cos α;
Y₁ = [Z₁ + L*f/(L − f*cot α)]*sin α;
wherein A, the coordinate point of the optical center of the first convex lens in the plane coordinate system XY, is the circle center and Z₁ is the radial offset from A, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and the second convex lens, α is the included angle between the axis of the second convex lens and the line joining the first convex lens and the second convex lens, 45° ≤ α ≤ 90°, and substituting any value of α in this range into the formulas yields the matching X₁ and Y₁;
The confirmation formula of the second chute shape on the fixed guide rail is as follows:
X₂ = [Z₂ + L*f/(L − f*cos α)]*cos α;
Y₂ = [Z₂ + L*f/(L − f*cos α)]*sin α;
wherein B, the coordinate point of the optical center of the second convex lens in the plane coordinate system XY, is the circle center and Z₂ is the radial offset from B, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and the second convex lens, α is the included angle between the axis of the second convex lens and the line joining the first convex lens and the second convex lens, 45° ≤ α ≤ 90°, and substituting any value of α in this range into the formulas yields the matching X₂ and Y₂.
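The two chute-shape formulas can be traced numerically by sweeping α over [45°, 90°]; the values of Z₁, Z₂, L and f below are illustrative assumptions:

```python
import math

def moving_rail_point(Z1, L, f, a):
    # First chute: X1 = [Z1 + L*f/(L - f*cot(a))]*cos(a), Y1 likewise with sin(a)
    r = Z1 + L * f / (L - f / math.tan(a))
    return r * math.cos(a), r * math.sin(a)

def fixed_rail_point(Z2, L, f, a):
    # Second chute: X2 = [Z2 + L*f/(L - f*cos(a))]*cos(a), Y2 likewise with sin(a)
    r = Z2 + L * f / (L - f * math.cos(a))
    return r * math.cos(a), r * math.sin(a)

# Sweeping alpha over [45 deg, 90 deg] traces the chute profile point by point.
profile = [moving_rail_point(1.0, 10.0, 2.0, math.radians(d)) for d in range(45, 91)]
```

At α = 90° both radial terms reduce to Z + f·(L/L), i.e. the point (0, Z + f) on the Y axis, which is a quick sanity check on the geometry.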
In a fourth aspect, a computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method of the third aspect.
According to the technical scheme provided by the automatic focusing method, the automatic focusing visual device and the control method thereof, the automatic focusing visual device has the following technical effects:
the method comprises the steps that the positions of the features are confirmed, the focusing mode of the coincidence ratio is calculated to perform multi-focusing, when the object on the object and the object behind the object are observed, 2 data with high coincidence ratio on the individual features can be detected in a focusing area, and the data are observed under water and on the water, and are partially peeped, the object on the multilayer glass and the like, so that the focusing speed is higher than that of contrast focusing and phase focusing and is inferior to that of phase focusing, unlike the traditional contrast focusing and phase focusing; the three-dimensional identification can be realized, as 2 convex lenses shoot the same object, two images are put together, and the characteristics of the three-dimensional structure can be confirmed through the change of the shadow part and the change of the length of the line; after the maximum overlap ratio occurs, namely, the position of accurate focusing, the accurate distance can be confirmed.
Drawings
FIG. 1 is a schematic diagram of a rectangular coordinate system XY in an auto-focusing method according to the present invention;
FIG. 2 is a schematic diagram of a rectangular coordinate system XY' in an auto-focusing method according to the present invention;
FIG. 3 is a focusing calculation diagram for a feature;
FIG. 4 is a schematic diagram of an auto-focus vision apparatus according to an auto-focus method of the present invention;
FIG. 5 is a schematic diagram illustrating an internal structure of a first lens in an auto-focus vision apparatus according to an auto-focus method of the present invention;
FIG. 6 is a schematic diagram illustrating an internal structure of a second lens in an auto-focus vision apparatus according to an auto-focus method of the present invention;
FIG. 7 is a flow chart of a control method of an auto-focus vision apparatus according to the present invention;
fig. 8 is a diagram showing a chute position relationship of an auto-focus vision apparatus according to the present invention.
Wherein, the reference numerals are as follows:
the camera comprises a bracket 101, a first lens 102, a second lens 103, a mounting shaft bracket 104, a movable guide rail 105, a connecting shaft 106, a fixed guide rail 107, a fixed shaft 108, a voice coil motor 109, a saw-tooth bracket 110, a first mounting hole 111, a second mounting hole 112, a first groove 113, a first sliding groove 114, a saw-tooth groove 115, a second groove 116, a second sliding groove 117, a saw-tooth guide rail 118, a first sliding rod 201, a first convex lens 202, a first guide column 203, a first photoelectric sensor 204, a second sliding rod 301, a second convex lens 302, a second guide column 303 and a second photoelectric sensor 304.
Detailed Description
In order to make the technical means, the inventive features, the achievement of the purpose and the effect of the implementation of the invention easy to understand, the technical solutions in the embodiments of the invention will be clearly and completely described in conjunction with the specific drawings, and it is obvious that the described embodiments are some embodiments of the invention, not all embodiments.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the structures, proportions and sizes shown in the drawings are for illustration only and are not intended to limit the scope within which the invention can be practiced; modifications of structure, changes of proportion or adjustments of size that do not affect the efficacy or purpose the invention can achieve still fall within the spirit and scope of the invention.
Likewise, terms such as "upper", "lower", "left", "right" and "middle" recited in this specification are for descriptive clarity only and are not intended to limit the practicable scope of the invention; changes or adjustments of their relative positions, without material alteration of the technical content, are also regarded as within the practicable scope of the invention.
A first embodiment of the present invention provides an automatic focusing method that performs multi-point focusing by confirming the positions of characteristics and calculating the coincidence ratio: when observing both an object and an object behind it, 2 sets of data with a high coincidence ratio on individual characteristics can be detected in the focusing area, and the same applies to observing underwater objects from above the water surface, partially occluded objects, objects behind multilayer glass, and so on. Unlike conventional contrast focusing and phase focusing, its focusing speed is higher than that of contrast focusing, though still below that of phase focusing. Three-dimensional identification can be realized: since the 2 convex lenses photograph the same object, the two images can be put together and the characteristics of the three-dimensional structure confirmed through changes in the shadowed parts and in the lengths of lines. The position of maximum coincidence ratio is the position of accurate focusing, at which the accurate distance can be confirmed.
As shown in fig. 1-2, in a first aspect, a first embodiment, an auto-focusing method, includes the steps of:
step A1: taking the centers of the completely identical first photoelectric sensor 204 and second photoelectric sensor 304 respectively as origins, establish two centrally symmetric rectangular coordinate systems, namely a rectangular coordinate system XY and a rectangular coordinate system XY';
step A2: taking the origin as the center, establish in the rectangular coordinate system XY and the rectangular coordinate system XY' a centrally symmetric first grid and second grid of 2m₀×2n₀ cells, each cell in the first grid and the second grid being a characteristic unit;
step A3: taking the origin as the center, establish in the first grid and the second grid a centrally symmetric first focusing area and second focusing area of 2m×2n cells, where y = L₁ is the boundary line between the common field of view and the non-common field of view of the first photoelectric sensor 204 and y = L₂ is the boundary line between the common field of view and the non-common field of view of the second photoelectric sensor 304;
step A4: determine all characteristic types of the first focusing area in 8-bit binary and record a group of characteristic data, the characteristic data being the class of values for which the corresponding red, green and blue pixel values of each characteristic unit are equal;
step A5: acquire the endpoint coordinates of each piece of characteristic data of the first focusing area in the X-axis direction of the first grid and the second grid, where the left endpoint coordinate of the first grid is (x₁, y₁), the right endpoint coordinate is (x₂, y₂), the left endpoint coordinate of the second grid is (x₃, y₃), and the right endpoint coordinate is (x₄, y₄), with x₁ ≤ L₁, x₂ ≤ L₁, x₃ ≤ L₂, x₄ ≤ L₂, ε₂ − 2μn₀ ≤ y₃ ≤ 2μn₀ + ε₁ and ε₂ − 2μn₀ ≤ y₄ ≤ 2μn₀ + ε₁; ε₁ is the integrated error of the second photoelectric sensor 304 relative to the Y-axis direction of the first photoelectric sensor 204 when the second convex lens 302 corresponding to the second photoelectric sensor 304 runs its full stroke, and ε₂ is the downward integrated error of the second photoelectric sensor 304 relative to the Y-axis direction of the first sensor when the second convex lens 302 corresponding to the first photoelectric sensor 204 runs its full stroke;
step A6: substitute x₁, x₂, x₃, x₄ into formula 1 and formula 2 respectively to obtain α₁, α₂, θ₁, θ₂;
equation 1: αₙ = arctan[(L*xₙ − f*xₙ*cot α)/(L*f)];
equation 2: θₙ = arctan[(L*xₙ₊₂ − f*xₙ₊₂*cos α)/(L*f)];
wherein αₙ and θₙ are the arctangent angles of the endpoint abscissas to the current object distance, L is the distance between the optical centers of the first convex lens 202 and the second convex lens 302, f is the focal length of the first convex lens 202 and the second convex lens 302 (the two focal lengths are identical), and α is the included angle between the line joining the optical centers of the first convex lens 202 and the second convex lens 302 and the axis of the second convex lens 302;
step A7: substitute α₁, α₂, θ₁, θ₂ into formula 3 to obtain the focusing point angle β;
equation 3:
β = arctan{[sin(α₁−α₂)*sin²(α+θ₂)*cos(α₁+α−θ₁)*cos(α₁−α₂) + sin²(α₁−α₂)*sin²(α+θ₂)*sin(α₁+α−θ₁) − sin(α₁−α₂)*sin(θ₁−θ₂)*sin(α+θ₂)*cos α₂] ÷ [sin(α₁−α₂)*sin(α+θ₂)*cos(α₁+α−θ₁)*cos α₁*cos(α+θ₂−α₂) + sin(α₁−α₂)*sin(α+θ₂)*sin(α₁+α−θ₁)*sin α₁*cos(α+θ₂−α₂) − sin(θ₁−θ₂)*sin α₁*cos α₂*cos(α+θ₂−α₂)]};
wherein β is the focusing point angle of the feature;
step A8: the second lens 103 is adjusted to move to this angle according to the focusing angle β.
In the above-mentioned automatic focusing method, in the step A1, the X-axis marks of the rectangular coordinate system XY sequentially decrease from left to right, and the X-axis marks of the rectangular coordinate system XY' sequentially increase from left to right;
in the step A2, the characteristic unit is a 2×2 Bayer array formed by 1 red pixel unit, 2 green pixel units and 1 blue pixel unit, and the length of each pixel unit in the X direction is denoted μ;
the L₁ in the step A3 is calculated from equation 4, and the L₂ is calculated from equation 5;
equation 4: L₁ = (W*L²*f − W*L*f²*cos α)/(W*L²*sin α*cos α − W*L*f*sin α*cos²α + 2L²*f*sin²α − W*L*f*cos²α + W*f²*cos³α − 2L*f²*sin α*cos α);
equation 5: L₂ = (W*L²*f*sin²α − W*L*f²*sin α*cos α)/(W*L²*sin α*cos α − W*L*f*cos²α + 2L²*f*sin²α − W*L*f*sin α*cos²α + W*f²*cos³α − 2L*f²*sin²α*cos α);
wherein W is the width of the first grid and the second grid in the X direction, and f is the focal length of the first convex lens 202 and the second convex lens 302 (the two focal lengths are identical).
The automatic focusing method further comprises the step of checking the coincidence ratio of the focusing result in the focusing area, and the specific steps are as follows:
step B1: selecting the characteristic data of any characteristic in the first focusing area to subtract the characteristic data of the same position in the second focusing area to obtain the difference of the characteristic data;
step B2: when the characteristics whose characteristic-data difference is 0 account for 60%–100% of the characteristics in the first focusing area, confirm that focusing on these characteristics is completed;
step B3: when the characteristics whose characteristic-data difference is 0 do not reach 60%–100% of the characteristics in the first focusing area, run the second convex lens 302 step by step until they do, at which point focusing is completed.
As shown in fig. 3, A is the optical center of the first convex lens 202 and B is the optical center of the second convex lens 302; the axes of the two convex lenses intersect at point C. At the image side are two photoelectric sensors of the same size, whose centers lie on the axes of the convex lenses and whose faces are perpendicular to those axes. K1K2 and J1J2 are the light-receiving lines of the photoelectric sensors at angle α, and point C is the midpoint of both K1K2 and J1J2. H1 and H3 are the projections of the two endpoints of one piece of feature data in the received light, H2 is the calculated focusing point of the feature data, and β is the focusing angle. BJ3 and AK3 define the boundary between the coincident field of view and the non-coincident field of view; L1 and L2 are the dividing lines at the intersection points on the photoelectric sensors, with corresponding angles θ3 and α3. α4 and θ4 are the maximum light-receiving angles; α1 and α2 are the light-receiving angles of H1 and H3 at the convex lens at point A, and θ1 and θ2 are the light-receiving angles of H1 and H3 at the convex lens at point B. The landing points of H1 and H3 for the convex lens at point A are X1 and X2, and for the convex lens at point B are X3 and X4.
The values of α1, α2, θ1, θ2, α3 and θ3 are determined as follows. The transverse length of the photoelectric sensor is W, V1 is the image distance of the convex lens at point A, V2 is the image distance of the convex lens at point B, the focal length is f, and AB=L;
V1=L*f/(L-f*cotα); V2=L*f/(L-f*cosα); CA=L*tanα; CB=L/cosα;
tanα4=0.5W/V1=CK1/(L*tanα) (1)
tanθ4=0.5W/V2=CJ2/(L/cosα) (2)
∠J1CK1=∠J5CK5=90°-α; ∠CK1A=90°-α4, ∠CJ1B=90°-θ4; ∠K1J3C=α+α4, ∠J2K3C=α+θ4.
In △K1J3C and △J2K3C, by the law of sines:
CJ3/sin(90°-α4)=CK1/sin(α4+α) (3)
CK3/sin(90°-θ4)=CJ2/sin(θ4+α) (4)
CJ3/CB=L2/V2 (5)
CK3/CA=L1/V1 (6)
CK1 and CJ2 are obtained from (1) and (2) and substituted into (3), (4), (5) and (6), giving:
L1=(WL²f-WLf²*cosα)/(WL²sinα*cosα-WLf*sinα*cos²α+2L²f*sin²α-WLf*cos²α+Wf²*cos³α-2Lf²*sinα*cosα);
L2=(WL²f*sin²α-WLf²*sinα*cosα)/(WL²sinα*cosα-WLf*cos²α+2L²f*sin²α-WLf*sinα*cos²α+Wf²*cos³α-2Lf²*sin²α*cosα);
α1=arctan(CX1/CA)=arctan[CX1/(L*tanα)];
α2=arctan(CX2/CA)=arctan[CX2/(L*tanα)];
θ1=arctan(CX3/CB)=arctan(CX3*cosα/L);
θ2=arctan(CX4/CB)=arctan(CX4*cosα/L).
The value of β is determined as follows. In △ABH3, ∠ABH3=α+θ2 and ∠H3AB=90°-α2, so ∠AH3B=180°-(90°-α2)-(α+θ2)=90°+α2-α-θ2. By the law of sines, AB/sin(90°+α2-α-θ2)=AH3/sin(α+θ2), and since AB=L:
AH3=L*sin(α+θ2)/sin(90°+α2-α-θ2) (7)
BH3=L*sin(90°-α2)/sin(90°+α2-α-θ2) (8)
In △H1H3A, by the law of sines:
AH1/sin∠H1H3A=AH3/sin∠H3H1A=H1H3/sin(α1-α2) (9)
In △BH1H3, by the law of sines:
H1H3/sin(θ1-θ2)=BH3/sin∠BH1H3 (10)
In △H1H2A, by the law of sines:
H1A/sin(∠H1H3A+α2)=AH2/sin∠H2H1A (11)
∠H2H1A=∠BH1H3+∠AH1B=∠BH1H3+180°-90°-α1-α+θ1, which gives:
∠BH1H3=∠H2H1A-90°+α1+α-θ1 (12)
In △H1H3A, ∠H1H3A=180°-∠H2H1A-(α1-α2) (13)
Since H2 lies on segment H1H3, ∠H2H1A=∠H3H1A; dividing (11) by (9) then gives:
sin∠H1H3A/sin(∠H1H3A+α2)=AH2/AH3 (14)
Dividing (10) by (9):
sin(α1-α2)/sin(θ1-θ2)=(BH3/AH3)*(sin∠H3H1A/sin∠BH1H3) (15)
Substituting (7), (8), (12) and (13) into (14) and (15), and solving the system of equations formed by (14) and (15), yields:
β=arctan{[sin(α1-α2)*sin²(α+θ2)*cos(α1+α-θ1)*cos(α1-α2)+sin²(α1-α2)*sin²(α+θ2)*sin(α1+α-θ1)-sin(α1-α2)*sin(θ1-θ2)*sin(α+θ2)*cosα2]÷[sin(α1-α2)*sin(α+θ2)*cos(α1+α-θ1)*cosα1*cos(α+θ2-α2)+sin(α1-α2)*sin(α+θ2)*sin(α1+α-θ1)*sinα1*cos(α+θ2-α2)-sin(θ1-θ2)*sinα1*cosα2*cos(α+θ2-α2)]}.
As shown in fig. 4, in a second aspect, a second embodiment provides an auto-focusing vision device, which includes a bracket 101, a first lens 102, a second lens 103, a moving guide rail 105 assembly, a fixed guide rail 107 assembly, and a voice coil motor 109. The driving program of the voice coil motor 109 is that of the voice coil motor of a conventional mobile phone, which works in a very narrow current range, so the higher its resolution the better. A sawtooth rack 110 matching the moving guide rail 105 assembly and the fixed guide rail 107 assembly is mounted on the output shaft of the voice coil motor 109, and the bracket 101 is provided with a first mounting hole 111 for mounting the first lens 102 and a second mounting hole 112 for mounting the second lens 103;
as shown in fig. 5, the moving guide rail 105 assembly comprises a mounting shaft bracket 104 and a moving guide rail 105, and the fixed guide rail 107 assembly comprises a connecting shaft 106, a fixed guide rail 107 and a fixed shaft 108. The mounting shaft bracket 104 is U-shaped; its two ends are respectively connected to the upper and lower sides of the first lens 102 and then fitted into the first mounting hole 111 of the bracket 101. The end of the first lens 102 facing away from the first mounting hole 111 is provided with a first groove 113 for receiving the moving guide rail 105, a first sliding rod 201 is mounted in the first groove 113, and the moving guide rail 105 is provided with a first sliding groove 114 matching the first sliding rod 201. One end of the moving guide rail 105 is connected to the mounting shaft bracket 104, and the other end is provided with a sawtooth groove 115 that meshes with the sawtooth rack 110;
as shown in fig. 6, the second lens 103 is mounted in the second mounting hole 112 of the bracket 101 through the connecting shaft 106. The end of the second lens 103 facing away from the second mounting hole 112 is provided with a second groove 116 for receiving the fixed guide rail 107, a second sliding rod 301 is mounted in the second groove 116, and the fixed guide rail 107 is provided with a second sliding groove 117 matching the second sliding rod 301. The two ends of the fixed shaft 108 are respectively connected to the fixed guide rail 107 and the bracket 101, and one end of the second lens 103 is provided with a sawtooth guide rail 118 that meshes with the sawtooth rack 110. The tooth shape and size of the sawtooth groove 115 are identical to those of the sawtooth guide rail 118; the arc center of the sawtooth groove 115 is the optical center of the first convex lens 202, and the arc center of the sawtooth guide rail 118 is the optical center of the second convex lens 302;
the voice coil motor 109 is externally connected to, or internally provided with, a controller that controls the voice coil motor 109 and stores the auto-focusing method of the first aspect; the controller is connected, by wire or wirelessly, to the first lens 102 and the second lens 103 respectively for data interaction and control.
In the above auto-focusing vision device, the bracket 101 comprises a bottom plate and a vertical plate; the first mounting hole 111 and the second mounting hole 112 are both formed in the vertical plate, and one end of the fixed shaft 108 is connected to the bottom plate;
the sawtooth rack 110 is I-shaped, with its two ends respectively meshing with the sawtooth groove 115 and the sawtooth guide rail 118;
the sawtooth groove 115 and the sawtooth guide rail 118 are both arc-shaped.
As shown in fig. 4-5, in the above-mentioned vision device with auto-focusing, a first convex lens 202 and a first guide post 203 are disposed in a first lens 102, a first sliding rod 201 is mounted on the first guide post 203, the first convex lens 202 is mounted on the front end surface of the first lens 102, and a first photoelectric sensor 204 is mounted at one end of the first guide post 203 facing the first convex lens 202;
a second convex lens 302 and a second guide post 303 are arranged in the second lens 103, a second sliding rod 301 is arranged on the second guide post 303, the second convex lens 302 is arranged on the front end surface of the second lens 103, and a second photoelectric sensor 304 is arranged at one end of the second guide post 303 facing the second convex lens 302;
the first photoelectric sensor 204 and the second photoelectric sensor 304 are connected with a controller for data transmission.
In the above-mentioned vision device for automatic focusing, the first filter is mounted on the first photoelectric sensor 204, and the second filter is mounted on the second photoelectric sensor 304.
As shown in fig. 7, in a third aspect, a control method of an autofocus vision device according to a third embodiment includes the following steps:
step C1: the voice coil motor 109 drives the sawtooth rack 110 to drive the movable guide rail 105 to move, the first sliding groove 114 on the movable guide rail 105 constrains the first sliding rod 201 to enable the first guide pillar 203 to drive the first photoelectric sensor 204 to move, meanwhile, the voice coil motor 109 drives the sawtooth rack 110 to drive the sawtooth guide rail 118 to move, and the second sliding rod 301 drives the second photoelectric sensor 304 on the second guide pillar 303 to move under the constraint of the second sliding groove 117;
step C2: acquiring detection data of the first photoelectric sensor 204 and the second photoelectric sensor 304 in real time and transmitting the detection data to a controller;
step C3: the controller calculates the detection data to obtain control data including driving data of the voice coil motor 109;
step C4: the controller controls the voice coil motor 109 to perform focusing by control data.
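Steps C1 to C4 form a closed loop of drive, sense, compute and correct. A minimal control-loop sketch follows; the function names, convergence threshold and toy sensor model are assumptions, not part of the patent:

```python
def control_loop(read_sensors, compute_drive, drive_motor, max_iters=50):
    """One focusing cycle: poll both photoelectric sensors, let the
    controller turn the detection data into voice-coil-motor drive data,
    and apply it until the computed correction vanishes."""
    for _ in range(max_iters):
        first, second = read_sensors()          # step C2: detection data
        drive = compute_drive(first, second)    # step C3: control data
        if abs(drive) < 1e-6:                   # focused: no correction left
            return True
        drive_motor(drive)                      # steps C1/C4: move the rails
    return False

# Toy stand-in: the sensor offset shrinks as the motor moves toward focus.
state = {"pos": 5.0}
ok = control_loop(
    read_sensors=lambda: (state["pos"], 0.0),
    compute_drive=lambda a, b: (a - b) * 0.5,
    drive_motor=lambda d: state.__setitem__("pos", state["pos"] - d),
)
print(ok)
```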
In the above control method of the auto-focusing vision device, the shape of the first sliding groove 114 on the moving guide rail 105 is determined by the following formulas:
X1=[Z1+L*f/(L-f*cotα)]*cosα;
Y1=[Z1+L*f/(L-f*cotα)]*sinα;
wherein A, the optical center of the first convex lens 202, is the coordinate point in the plane coordinate system XY taken as the circle center; Z1 is a constant radial offset of the groove; L is the distance between the optical centers of the first convex lens 202 and the second convex lens 302; f is the focal length of the first convex lens 202 and the second convex lens 302; α is the angle between the axis of the second convex lens 302 and the line connecting the first convex lens 202 and the second convex lens 302, with 45°≤α≤90°; substituting any value of α in this range into the formulas gives the matching X1, Y1.
The shape of the second sliding groove 117 on the fixed guide rail 107 is determined by the following formulas:
X2=[Z2+L*f/(L-f*cosα)]*cosα;
Y2=[Z2+L*f/(L-f*cosα)]*sinα;
wherein B is the coordinate point of the optical center of the second convex lens 302 in the plane coordinate system XY; Z2 is a constant radial offset of the groove; L is the distance between the optical centers of the first convex lens 202 and the second convex lens 302; f is the focal length of the first convex lens 202 and the second convex lens 302; α is the angle between the axis of the second convex lens 302 and the line connecting the first convex lens 202 and the second convex lens 302, with 45°≤α≤90°; substituting any value of α in this range into the formulas gives the matching X2, Y2;
here L*f/(L-f*cotα) is the image distance VA of the first convex lens 202 as a function of the angle α, and L*f/(L-f*cosα) is the image distance VB of the second convex lens 302 as a function of the angle α;
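Both groove profiles are polar curves whose radius is a constant offset plus the angle-dependent image distance. The sketch below evaluates one point of each profile; the dimensions (Z1 = Z2 = 1 mm, L = 12 mm, f = 3 mm) are illustrative assumptions, not values from the patent.

```python
import math

def moving_groove_point(alpha, Z1, L, f):
    """Point (X1, Y1) of the first sliding groove for angle alpha (radians)."""
    r = Z1 + L * f / (L - f / math.tan(alpha))   # Z1 + image distance V_A
    return r * math.cos(alpha), r * math.sin(alpha)

def fixed_groove_point(alpha, Z2, L, f):
    """Point (X2, Y2) of the second sliding groove for angle alpha (radians)."""
    r = Z2 + L * f / (L - f * math.cos(alpha))   # Z2 + image distance V_B
    return r * math.cos(alpha), r * math.sin(alpha)

# Assumed dimensions (mm): L = 12, f = 3, offsets Z1 = Z2 = 1.
X1, Y1 = moving_groove_point(math.radians(90), 1.0, 12.0, 3.0)
print(X1, Y1)
```

At α = 90° the image distance collapses to f, so the groove point lies on the lens axis at radius Z1 + f.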
as shown in fig. 8, in the XY coordinate system, point A (the optical center of the first convex lens 202) is the origin of the coordinate system, and point B (the optical center of the second convex lens 302) is a point on the X axis. The axis of the first convex lens 202 at point A intersects the axis of the second convex lens 302 at point B at point C; the Y axis is the axis of the first convex lens 202 and AC is part of the Y axis; AB is the distance from the optical center of the first convex lens 202 to the optical center of the second convex lens 302, denoted L, and BC lies on the axis of the convex lens at point B. The angle between AB and the axis of the second convex lens 302 at point B is α. With object distance U, image distance V, and the focal lengths of the first convex lens 202 and the second convex lens 302 both equal to f, it is known that 1/U+1/V=1/f;
now, taking AC as the object distance of the convex lens at point A and BC as the object distance of the convex lens at point B gives VA=L*f/(L-f*cotα) and VB=L*f/(L-f*cosα);
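These image distances follow from the thin-lens equation applied to the geometric object distances CA = L*tanα and CB = L/cosα; a quick numeric check under assumed values of L, f and α:

```python
import math

def image_distance(u, f):
    """Thin lens: 1/u + 1/v = 1/f  =>  v = u*f/(u - f)."""
    return u * f / (u - f)

# Assumed values for the check (mm / degrees), not from the patent:
L, f, alpha = 12.0, 3.0, math.radians(60)

# Object distances from the triangle ABC (right-angled at A):
CA = L * math.tan(alpha)        # object distance of the lens at A
CB = L / math.cos(alpha)        # object distance of the lens at B

VA = image_distance(CA, f)
VB = image_distance(CB, f)

# The patent's closed forms, obtained by substituting CA and CB:
VA_closed = L * f / (L - f / math.tan(alpha))   # L*f/(L - f*cot(alpha))
VB_closed = L * f / (L - f * math.cos(alpha))
print(VA, VB)
```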
The value of the initial angle α0 is determined by the following requirements:
the trajectory of the photoelectric sensor support at point A, i.e. in the rectangular coordinate system XY, is the curve X²+Y²=[Z1+L*f/(L-f*cotα)]². A steep slope on this curve, combined with the extremely high acceleration of the voice coil motor 109, creates a large resistance that severely affects the coordinated operation of the system; this disadvantage is mitigated in two ways:
1. when L is much larger than f, the singular angle α=arccot(L/f) approaches 0°, and the later segment of the curve VA=L*f/(L-f*cotα) becomes correspondingly flatter; therefore L should be made as large relative to f as possible;
2. when alpha is 0 When the value of (2) is 45 DEG, the centrifugal increment is about 1mm when the centrifugal machine runs within the range of 45 DEG alpha less than or equal to 90 DEG; when alpha is 0 When the value of (2) is 60 DEG, the centrifugal increment is about 0.6mm when the centrifugal pump operates within the range of alpha which is more than or equal to 60 DEG and less than or equal to 90 DEG; to reduce the eccentricity, alpha 0 The value of (2) can be very high, and the ideal value is more than or equal to 60 degrees; when alpha is 0 When the value of (a) is large, the point A is taken as a reference object, which means that the distance L is smaller than the distance th alpha, but the use cannot be influenced; on the other hand, when alpha 0 When the value of (a) is large, the protruding part of the gear at the point A is small, even if alpha 0 When the lens is 45 DEG, the protruding part is the largest, the size of the photoelectric sensor is generally about 5mm x 4mm, the actual focal length of the camera for the mobile phone is about 3mm, the view of the convex lens at the point A is not blocked completely, and the convex lens at the point B rotates along with the gear and also does not block the view.
In a fourth aspect, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of any method of the third aspect.
In summary, with the auto-focusing method, auto-focusing vision device and control method thereof described above, multi-point focusing can be performed by confirming the positions of features and calculating the coincidence ratio. When observing an object and another object behind it, two pieces of data with a high coincidence ratio on individual features can be detected in the focusing area; the same applies when observing objects both under and above water, or when partially viewing objects through multilayer glass. This differs from traditional contrast focusing and phase focusing: the focusing speed is higher than contrast focusing and second only to phase focusing. Three-dimensional recognition can also be realized: since the two convex lenses photograph the same object, the two images can be combined, and the features of the three-dimensional structure can be confirmed from the changes of the shadowed parts and of the line lengths. The position of maximum coincidence ratio, i.e. the accurately focused position, also allows the exact distance to be confirmed.
The foregoing describes specific embodiments of the invention. It is to be understood that the invention is not limited to the specific embodiments described above; devices and structures not described in detail should be understood as being implemented in a manner common in the art. Numerous variations, modifications, or equivalent substitutions may be made by one skilled in the art without departing from the spirit of the invention and the scope of the claims.

Claims (10)

1. An auto-focusing method, comprising the steps of:
step A1: taking the centers of the completely identical first photoelectric sensor and second photoelectric sensor as origins respectively, establish two centrally symmetric rectangular coordinate systems, namely rectangular coordinate system XY and rectangular coordinate system XY';
step A2: with the origin as center, establish in rectangular coordinate system XY and rectangular coordinate system XY' respectively a first grid and a second grid of 2m0 x 2n0 cells that are centrally symmetric, each cell in the first grid and the second grid being a feature unit;
step A3: take y=L1 as the boundary line between the common field of view and the non-common field of view of the first photoelectric sensor, and y=L2 as the boundary line between the common field of view and the non-common field of view of the second photoelectric sensor;
step A4: determine all feature types of a first focusing area using 8-bit binary encoding, and record a group of feature data;
step A5: acquire the endpoint coordinates of each piece of feature data of the first focusing area in the X-axis direction of the first grid and the second grid, where the left endpoint coordinate in the first grid is (x1, y1), the right endpoint coordinate is (x2, y2), the left endpoint coordinate in the second grid is (x3, y3) and the right endpoint coordinate is (x4, y4), with x1≤L1, x2≤L1, x3≤L2, x4≤L2, ε2-2μn0≤y3≤2μn0+ε1 and ε2-2μn0≤y4≤2μn0+ε1, where ε1 is the comprehensive error of the second photoelectric sensor relative to the first photoelectric sensor in the Y-axis direction when the second convex lens runs through its full stroke, and ε2 is the downward comprehensive error of the second photoelectric sensor relative to the first photoelectric sensor in the Y-axis direction when the second convex lens runs through its full stroke;
step A6: from x1, x2, x3 and x4, calculate α1, α2, θ1 and θ2 using formula 1 and formula 2 respectively;
Formula 1: αn=arctan[(L*xn-f*xn*cotα)/(L*f)];
Formula 2: θn=arctan[(L*x(n+2)-f*x(n+2)*cosα)/(L*f)];
wherein αn and θn are the angles whose tangent is the ratio of the endpoint abscissa to the current image distance, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and the second convex lens, the two focal lengths being identical, and α is the angle between the line connecting the optical centers of the first convex lens and the second convex lens and the axis of the second convex lens;
step A7: substitute α1, α2, θ1 and θ2 into formula 3 to obtain the focusing point angle β;
Formula 3:
β=arctan{[sin(α1-α2)*sin²(α+θ2)*cos(α1+α-θ1)*cos(α1-α2)+sin²(α1-α2)*sin²(α+θ2)*sin(α1+α-θ1)-sin(α1-α2)*sin(θ1-θ2)*sin(α+θ2)*cosα2]÷[sin(α1-α2)*sin(α+θ2)*cos(α1+α-θ1)*cosα1*cos(α+θ2-α2)+sin(α1-α2)*sin(α+θ2)*sin(α1+α-θ1)*sinα1*cos(α+θ2-α2)-sin(θ1-θ2)*sinα1*cosα2*cos(α+θ2-α2)]};
wherein β is the focusing angle of the feature;
step A8: adjust the second lens to move to this angle according to the focusing point angle β.
2. The auto-focusing method according to claim 1, wherein in step A1, the X-axis scale values of the rectangular coordinate system XY decrease from left to right, and the X-axis scale values of the rectangular coordinate system XY' increase from left to right;
in step A2, the feature unit is a 2x2 Bayer array formed by 1 red pixel unit, 2 green pixel units and 1 blue pixel unit, and the length of each pixel unit in the X direction is denoted μ;
the L1 in step A3 is calculated from formula 4, and the L2 is calculated from formula 5;
Formula 4: L1=(WL²f-WLf²*cosα)/(WL²sinα*cosα-WLf*sinα*cos²α+2L²f*sin²α-WLf*cos²α+Wf²*cos³α-2Lf²*sinα*cosα);
Formula 5: L2=(WL²f*sin²α-WLf²*sinα*cosα)/(WL²sinα*cosα-WLf*cos²α+2L²f*sin²α-WLf*sinα*cos²α+Wf²*cos³α-2Lf²*sin²α*cosα);
Wherein W is the width value of the first grid and the second grid in the X direction, f is the focal length of the first convex lens and the second convex lens, and the focal length of the first convex lens and the focal length of the second convex lens are consistent.
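Formulas 4 and 5 are direct to evaluate; the sketch below computes both boundary lines for assumed values of W, L, f and α (the patent does not fix these numbers):

```python
import math

def boundary_lines(W, L, f, alpha):
    """Common-field-of-view boundaries L1, L2 per formulas 4 and 5."""
    s, c = math.sin(alpha), math.cos(alpha)
    L1 = (W * L**2 * f - W * L * f**2 * c) / (
        W * L**2 * s * c - W * L * f * s * c**2 + 2 * L**2 * f * s**2
        - W * L * f * c**2 + W * f**2 * c**3 - 2 * L * f**2 * s * c)
    L2 = (W * L**2 * f * s**2 - W * L * f**2 * s * c) / (
        W * L**2 * s * c - W * L * f * c**2 + 2 * L**2 * f * s**2
        - W * L * f * s * c**2 + W * f**2 * c**3 - 2 * L * f**2 * s**2 * c)
    return L1, L2

# Assumed: 5 mm sensor width, L = 12 mm, f = 3 mm, alpha = 60 degrees.
L1, L2 = boundary_lines(5.0, 12.0, 3.0, math.radians(60))
print(L1, L2)
```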
3. The auto-focusing method according to claim 1 or 2, further comprising a step of checking the coincidence ratio of the focusing result in the focusing area, which comprises:
step B1: select the feature data of any feature in the first focusing area and subtract the feature data at the same position in the second focusing area to obtain the feature-data difference;
step B2: when the number of features whose feature-data difference is approximately 0 reaches 60%-100% of the number of features in the first focusing area, confirm that focusing on these features is completed;
step B3: when the number of features whose feature-data difference is approximately 0 does not reach 60%-100% of the number of features in the first focusing area, move the second convex lens step by step until the number of features whose feature-data difference is 0 reaches 60%-100% of the number of features in the first focusing area, completing the focusing.
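The check in steps B1 to B3 reduces to counting near-zero differences between the two feature-data arrays. A minimal sketch follows; the tolerance for "difference about 0" and the sample values are assumptions:

```python
def coincidence_ratio(first, second, tol=1):
    """Fraction of features whose feature-data difference is approximately 0."""
    matches = sum(1 for a, b in zip(first, second) if abs(a - b) <= tol)
    return matches / len(first)

def check_focus(first, second, lower=0.6):
    """Step B2/B3 decision: focused once the ratio reaches the 60%-100% band."""
    return coincidence_ratio(first, second) >= lower

# 8-bit feature data from the two focusing areas (illustrative values):
area1 = [200, 17, 64, 128, 33]
area2 = [200, 17, 64, 90, 33]   # one feature still mismatched
print(check_focus(area1, area2))
```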
4. An auto-focusing vision device, characterized by comprising a bracket, a first lens, a second lens, a moving guide rail assembly, a fixed guide rail assembly and a voice coil motor, wherein a sawtooth rack matching the moving guide rail assembly and the fixed guide rail assembly is mounted on an output shaft of the voice coil motor, and the bracket is provided with a first mounting hole for mounting the first lens and a second mounting hole for mounting the second lens;
the moving guide rail assembly comprises a mounting shaft bracket and a moving guide rail, and the fixed guide rail assembly comprises a connecting shaft, a fixed guide rail and a fixed shaft; the mounting shaft bracket is U-shaped, its two ends are respectively connected to the upper and lower sides of the first lens and then fitted into the first mounting hole of the bracket; the end of the first lens facing away from the first mounting hole is provided with a first groove for receiving the moving guide rail, a first sliding rod is mounted in the first groove, and the moving guide rail is provided with a first sliding groove matching the first sliding rod; one end of the moving guide rail is connected to the mounting shaft bracket, and the other end is provided with a sawtooth groove meshing with the sawtooth rack;
the second lens is mounted in the second mounting hole of the bracket through the connecting shaft; the end of the second lens facing away from the second mounting hole is provided with a second groove for receiving the fixed guide rail, a second sliding rod is mounted in the second groove, and the fixed guide rail is provided with a second sliding groove matching the second sliding rod; the two ends of the fixed shaft are respectively connected to the fixed guide rail and the bracket, and one end of the second lens is provided with a sawtooth guide rail meshing with the sawtooth rack;
the voice coil motor is externally connected to, or internally provided with, a controller that controls the voice coil motor and stores the auto-focusing method according to any one of claims 1-3; the controller is connected, by wire or wirelessly, to the first lens and the second lens respectively for data interaction and control.
5. The auto-focusing vision device according to claim 4, wherein the bracket comprises a bottom plate and a vertical plate, the first mounting hole and the second mounting hole are both formed in the vertical plate, and one end of the fixed shaft is connected to the bottom plate;
the sawtooth rack is I-shaped, and its two ends respectively mesh with the sawtooth groove and the sawtooth guide rail;
the sawtooth groove and the sawtooth guide rail are arc-shaped.
6. The auto-focusing vision device according to claim 5, wherein a first convex lens and a first guide post are arranged in the first lens, the first sliding rod is mounted on the first guide post, the first convex lens is mounted on the front end face of the first lens, and a first photoelectric sensor is mounted at the end of the first guide post facing the first convex lens;
a second convex lens and a second guide post are arranged in the second lens, the second sliding rod is arranged on the second guide post, the second convex lens is arranged on the front end face of the second lens, and a second photoelectric sensor is arranged at one end of the second guide post facing the second convex lens;
the first photoelectric sensor and the second photoelectric sensor are connected with the controller for data transmission.
7. The auto-focusing vision device according to claim 6, wherein a first filter is mounted on the first photoelectric sensor and a second filter is mounted on the second photoelectric sensor.
8. A control method suitable for an autofocus vision device according to any one of claims 4 to 7, comprising the steps of:
step C1: the voice coil motor drives the sawtooth rack to drive the movable guide rail to move, a first sliding groove on the movable guide rail constrains the first sliding rod to enable the first guide pillar to drive the first photoelectric sensor to move, meanwhile, the voice coil motor drives the sawtooth rack to drive the sawtooth guide rail to move, and a second sliding rod drives the second photoelectric sensor on the second guide pillar to move under the constraint of a second sliding groove;
step C2: acquiring detection data of the first photoelectric sensor and the second photoelectric sensor in real time and transmitting the detection data to the controller;
step C3: the controller calculates the detection data to obtain control data containing voice coil motor driving data;
step C4: the controller controls the voice coil motor to focus through control data.
9. The method of claim 8, wherein the shape of the first sliding groove on the moving guide rail is determined by the following formulas:
X1=[Z1+L*f/(L-f*cotα)]*cosα;
Y1=[Z1+L*f/(L-f*cotα)]*sinα;
wherein A, the optical center of the first convex lens, is the coordinate point in the plane coordinate system XY taken as the circle center; Z1 is a constant radial offset of the groove; L is the distance between the optical center of the first convex lens and the optical center of the second convex lens; f is the focal length of the first convex lens and the second convex lens; α is the angle between the axis of the second convex lens and the line connecting the first convex lens and the second convex lens, with 45°≤α≤90°; substituting any value of α in this range into the formulas gives the matching X1, Y1;
the shape of the second sliding groove on the fixed guide rail is determined by the following formulas:
X2=[Z2+L*f/(L-f*cosα)]*cosα;
Y2=[Z2+L*f/(L-f*cosα)]*sinα;
wherein B is the coordinate point of the optical center of the second convex lens in the plane coordinate system XY; Z2 is a constant radial offset of the groove; L is the distance between the optical center of the first convex lens and the optical center of the second convex lens; f is the focal length of the first convex lens and the second convex lens; α is the angle between the axis of the second convex lens and the line connecting the first convex lens and the second convex lens, with 45°≤α≤90°; substituting any value of α in this range into the formulas gives the matching X2, Y2.
10. A computer readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of claims 8-9.
CN202111192907.9A 2021-10-13 2021-10-13 Automatic focusing method, automatic focusing visual device and control method thereof Active CN114095628B (en)

Publications (2)

Publication Number Publication Date
CN114095628A (en) 2022-02-25
CN114095628B (en) 2023-07-07

Family

ID=80296830


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289144A (en) * 2011-06-30 2011-12-21 浙江工业大学 Intelligent three-dimensional (3D) video camera equipment based on all-around vision sensor
CN104853105A (en) * 2015-06-15 2015-08-19 爱佩仪光电技术有限公司 Three-dimensional rapid automatic focusing method based on photographing device capable of controlling inclination of lens
WO2016067648A1 (en) * 2014-10-30 2016-05-06 オリンパス株式会社 Focal point adjustment device, camera system, and focal point adjustment method
CN110568699A (en) * 2019-08-29 2019-12-13 东莞西尼自动化科技有限公司 control method for simultaneously automatically focusing most 12 cameras




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant