CN113034591B - Tooth-shaped structure assembly-oriented addendum circle extraction algorithm - Google Patents


Info

Publication number
CN113034591B
CN113034591B
Authority
CN
China
Prior art keywords
point
corner
tooth
points
addendum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110249585.0A
Other languages
Chinese (zh)
Other versions
CN113034591A (en)
Inventor
李泷杲
黄翔
孔盛杰
李�根
周蒯
王德重
褚文敏
楼佩煌
钱晓明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Research Institute Of Nanjing University Of Aeronautics And Astronautics
Nanjing University of Aeronautics and Astronautics
Original Assignee
Suzhou Research Institute Of Nanjing University Of Aeronautics And Astronautics
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Research Institute Of Nanjing University Of Aeronautics And Astronautics, Nanjing University of Aeronautics and Astronautics filed Critical Suzhou Research Institute Of Nanjing University Of Aeronautics And Astronautics
Priority to CN202110249585.0A priority Critical patent/CN113034591B/en
Publication of CN113034591A publication Critical patent/CN113034591A/en
Application granted granted Critical
Publication of CN113034591B publication Critical patent/CN113034591B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/181: Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20164: Salient point detection; Corner detection
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an addendum circle extraction algorithm for tooth-shaped structure assembly, which comprises the following steps. S1: extract the tooth-tip corner points of the tooth-shaped structure with an adaptive-threshold Curvature Scale Space (CSS) technique; S2: locate the tooth-tip corner points precisely with a sub-pixel technique; S3: fit the addendum circle to the tooth-tip points with the hyper least squares method, finding an ideal scale normalization that eliminates the statistical bias of the second-order noise term of least squares and thereby improves the accuracy and robustness of the addendum circle; S4: compensate the elliptical quasi-eccentric error caused by lens distortion and optimize the ellipse parameters. The invention can not only extract the addendum circle from images containing all of the gear teeth, but can also handle images in which gear teeth are partially occluded with high accuracy, and can compensate the elliptical quasi-eccentric error produced by lens distortion.

Description

Tooth-shaped structure assembly-oriented addendum circle extraction algorithm
Technical Field
The invention relates to an addendum circle extraction algorithm, in particular to an addendum circle extraction algorithm for tooth-shaped structure assembly, and belongs to the field of image processing for tooth-shaped structure assembly.
Background
With the rise of digital measurement technology, digital assembly technology based on machine vision has been widely adopted in industry. Gears are key parts of transmission devices and have high assembly-accuracy requirements, yet most tooth-shaped structures are still assembled by hand. For the large tooth-shaped structural parts used in the aerospace field, manual assembly is inefficient and consumes considerable manpower, so the spatial pose of the tooth-shaped structure must be measured with machine vision, and the drive amounts of the pose-adjustment mechanism are then calculated through coordinate-system conversion and related steps, thereby realizing automatic assembly.
Therefore, there is a need for improvements in conventional image processing algorithms to address the deficiencies of the prior art.
Disclosure of Invention
The invention aims to provide an addendum circle extraction algorithm for tooth-shaped structure assembly that solves the problems raised in the background.
In order to achieve the above purpose, the present invention provides the following technical solution: an addendum circle extraction algorithm for tooth-shaped structure assembly comprises the following steps:
S1: Extract the tooth-tip corner points of the tooth-shaped structure with an adaptive-threshold Curvature Scale Space (CSS) technique;
S2: Locate the tooth-tip corner points precisely with a sub-pixel technique;
S3: Fit the addendum circle to the tooth-tip points with the hyper least squares method, finding an ideal scale normalization that eliminates the statistical bias of the second-order noise term of least squares and thereby improves the accuracy and robustness of the addendum circle;
S4: Compensate the elliptical quasi-eccentric error caused by lens distortion and optimize the ellipse parameters.
As a preferred embodiment of the present invention, step S1 includes the following:
a1: Perform Canny edge detection with an adaptive threshold on the input image Image_input to obtain the edge image Canny_edge;
a2: Apply an annular mask (from the approximate root circle to the approximate tip circle) to Canny_edge to obtain the region-of-interest image ROI_edge containing the gear teeth;
a3: Extract the tooth-tip contours in ROI_edge; the contours are the prerequisite for computing curvature and extracting corner points. Canny edge detection loses parts of the contours, so the contour gaps must be filled to make them complete, and the corner points produced by the Canny gap-joining are stored in the Canny_Point (Canny join corner) point set;
a4: Compute the curvature of every point on each contour curve in the large-scale state; the points of locally maximal curvature are candidate corner points, and a candidate is judged a correct corner point if the absolute value of its curvature is at least k times the minimum curvature among its neighbouring points; the correct corner points are stored in the Corner_Point (tooth-tip corner) point set;
a5: Traverse all points in Corner_Point in the small-scale state and reject points whose distance exceeds a threshold t₁, thereby obtaining better localization accuracy;
a6: Compare the points in Canny_Point and Corner_Point and reject those whose distance is smaller than a threshold t₂; the remaining points are the final tooth-tip corner points, and Corner_Point is updated with them.
As a preferred embodiment of the present invention, step S2 includes the following:
b1: Approximate the corner response function R(x, y) with a quadratic polynomial to obtain the sub-pixel corner coordinates; the quadratic polynomial is:
R(x, y) = a + bx + cy + dx² + exy + fy²
b2: Traverse the corner points (xᵢ, yᵢ) in Corner_Point; for each, set up an overdetermined system of equations in the six coefficients a-f from the nine pixels in its 3×3 neighbourhood and solve it by least squares; differentiating the polynomial and setting the gradient to zero gives b + 2dx + ey = 0 and c + ex + 2fy = 0;
b3: Solve these two equations for the sub-pixel corner (x′ᵢ, y′ᵢ);
b4: Update all sub-pixel corner points into the Corner_Point point set, completing the sub-pixel extraction of the tooth-tip corner points.
As a preferred embodiment of the present invention, step S3 includes the following:
c1: With ξ = (x², 2xy, y², 2f₀x, 2f₀y, f₀²)ᵀ and θ = (A, B, C, D, E, F)ᵀ, the ellipse equation and the algebraic distance can be written as (ξ, θ) = 0 and J = (1/N) Σᵢ (ξᵢ, θ)², respectively;
c2: Apply the scale normalization (θ, Zθ) = c to θ, where Z is a symmetric matrix and c is a non-zero constant, turning the problem into the generalized eigenvalue problem Mθ = λZθ; θ and λ can be expanded as θ = θ̄ + Δ₁θ + Δ₂θ + ⋯ and λ = λ̄ + Δ₁λ + Δ₂λ + ⋯, where θ̄ and λ̄ are the true values, Δ₁ and Δ₂ denote the first-order and second-order noise terms, and the ellipsis denotes terms of third order and above;
c3: Work out the first-order and second-order errors, take the expectation of the second-order noise term to obtain its bias, and substitute it into Mθ = λZθ;
c4: Compute the ellipse parameters from the resulting θ.
as a preferred embodiment of the present invention, the step S4 includes the following:
d1: ellipse center (x) ij ,y ij ) Circle center (X) under calibration plate coordinate system ij ,Y ij ) And radius r of the circle, obtaining a camera perspective projection matrix K by using a Zhang Zhengyou camera calibration method (0) And distortion coefficient P (0) Is set to an initial value of (1);
d2: calculating quasi-decentration error using mathematical model(m is the number of circles on the calibration plate, n is the number of images) and performing error compensation on the ellipse center under the pixel coordinate system to obtain compensated center coordinate +.>
d3: will beReplace->Acquiring k images at different angles updates the perspective projection matrix and the distortion coefficients using the present algorithm
d4: repeating d2 and d3 until the error change of quasi-decentration in adjacent iteration is smaller than threshold value or the highest iteration number is reached, and finally compensating the central point to be
At this time, the camera perspective projection matrix and distortion coefficient have been updated to optimal values
Compared with the prior art, the invention has the beneficial effects that:
1. Conventional corner detection uses a fixed threshold, although different edges have different optimal curvature thresholds; the invention sets an adaptive threshold for the curvature of each edge and further improves corner accuracy by applying a sub-pixel technique after corner detection.
2. The conventional least squares method gives poor accuracy and robustness in ellipse fitting; the invention fits the tooth-tip points with the hyper least squares method, finding an ideal scale normalization that eliminates the statistical bias of the second-order noise term of least squares and thereby improves the accuracy and robustness of the addendum circle.
3. Conventional addendum circle extraction algorithms ignore the quasi-eccentric error produced by lens distortion; the invention establishes a mathematical model of the quasi-eccentric error, introduces a compensation method, and improves measurement accuracy through iterative compensation.
Drawings
FIG. 1 is a tooth form structural simulation;
fig. 2 is a flowchart of an addendum circle extraction algorithm;
FIG. 3 is a schematic illustration of an input image;
fig. 4 is a schematic view of tooth tip corner extraction;
fig. 5 is a schematic diagram of a tip circle fit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIGS. 1 to 5, the present invention provides a technical solution: an addendum circle extraction algorithm for tooth-shaped structure assembly, comprising the following steps:
S1: Extract the tooth-tip corner points of the tooth-shaped structure with an adaptive-threshold Curvature Scale Space (CSS) technique;
S2: Locate the tooth-tip corner points precisely with a sub-pixel technique;
S3: Fit the addendum circle to the tooth-tip points with the hyper least squares method, finding an ideal scale normalization that eliminates the statistical bias of the second-order noise term of least squares and thereby improves the accuracy and robustness of the addendum circle;
S4: Compensate the elliptical quasi-eccentric error caused by lens distortion and optimize the ellipse parameters.
Step S1 includes the following:
a1: Perform Canny edge detection with an adaptive threshold on the input image Image_input to obtain the edge image Canny_edge;
a2: Apply an annular mask (from the approximate root circle to the approximate tip circle) to Canny_edge to obtain the region-of-interest image ROI_edge containing the gear teeth;
a3: Extract the tooth-tip contours in ROI_edge; the contours are the prerequisite for computing curvature and extracting corner points. Canny edge detection loses parts of the contours, so the contour gaps must be filled to make them complete, and the corner points produced by the Canny gap-joining are stored in the Canny_Point (Canny join corner) point set;
a4: Compute the curvature of every point on each contour curve in the large-scale state; the points of locally maximal curvature are candidate corner points, and a candidate is judged a correct corner point if the absolute value of its curvature is at least k times the minimum curvature among its neighbouring points; the correct corner points are stored in the Corner_Point (tooth-tip corner) point set;
a5: Traverse all points in Corner_Point in the small-scale state and reject points whose distance exceeds a threshold t₁, thereby obtaining better localization accuracy;
a6: Compare the points in Canny_Point and Corner_Point and reject those whose distance is smaller than a threshold t₂; the remaining points are the final tooth-tip corner points, and Corner_Point is updated with them.
Step S2 includes the following:
b1: Approximate the corner response function R(x, y) with a quadratic polynomial to obtain the sub-pixel corner coordinates; the quadratic polynomial is:
R(x, y) = a + bx + cy + dx² + exy + fy²
b2: Traverse the corner points (xᵢ, yᵢ) in Corner_Point; for each, set up an overdetermined system of equations in the six coefficients a-f from the nine pixels in its 3×3 neighbourhood and solve it by least squares; differentiating the polynomial and setting the gradient to zero gives b + 2dx + ey = 0 and c + ex + 2fy = 0;
b3: Solve these two equations for the sub-pixel corner (x′ᵢ, y′ᵢ);
b4: Update all sub-pixel corner points into the Corner_Point point set, completing the sub-pixel extraction of the tooth-tip corner points.
Step S3 includes the following:
c1: With ξ = (x², 2xy, y², 2f₀x, 2f₀y, f₀²)ᵀ and θ = (A, B, C, D, E, F)ᵀ, the ellipse equation and the algebraic distance can be written as (ξ, θ) = 0 and J = (1/N) Σᵢ (ξᵢ, θ)², respectively;
c2: Apply the scale normalization (θ, Zθ) = c to θ, where Z is a symmetric matrix and c is a non-zero constant, turning the problem into the generalized eigenvalue problem Mθ = λZθ; θ and λ can be expanded as θ = θ̄ + Δ₁θ + Δ₂θ + ⋯ and λ = λ̄ + Δ₁λ + Δ₂λ + ⋯, where θ̄ and λ̄ are the true values, Δ₁ and Δ₂ denote the first-order and second-order noise terms, and the ellipsis denotes terms of third order and above;
c3: Work out the first-order and second-order errors, take the expectation of the second-order noise term to obtain its bias, and substitute it into Mθ = λZθ;
c4: Compute the ellipse parameters from the resulting θ.
Step S4 includes the following:
d1: Given the ellipse centres (xᵢⱼ, yᵢⱼ), the circle centres (Xᵢⱼ, Yᵢⱼ) in the calibration-plate coordinate system and the circle radius r, obtain initial values of the camera perspective projection matrix K^(0) and the distortion coefficients P^(0) with Zhang Zhengyou's camera calibration method;
d2: Compute the quasi-eccentric error of each circle in each image from the mathematical model, m being the number of circles on the calibration plate and n the number of images, and compensate the ellipse centres in the pixel coordinate system to obtain the compensated centre coordinates;
d3: Replace (xᵢⱼ, yᵢⱼ) with the compensated centres, acquire k images at different angles, and update the perspective projection matrix and the distortion coefficients with the present algorithm;
d4: Repeat d2 and d3 until the change of the quasi-eccentric error between adjacent iterations is smaller than a threshold or the maximum number of iterations is reached; the finally compensated centre points are obtained, and the camera perspective projection matrix and distortion coefficients have been updated to their optimal values.
Embodiment 1:
S1 tooth-tip corner extraction of the tooth-shaped structure:
The tooth-tip contour curve is parameterized by arc length u as:
Γ(u)=(x(u),y(u))
the curve can be changed into according to different scales
Γ(u)=(X(u,ν),Y(u,ν))
Wherein:noise in a Gaussian convolution kernel Gaussian (u, v) filtering curve with a scaling factor v is adopted to smooth the curve simultaneously; x and y are the abscissa and ordinate, respectively, of the point on the curve in the pixel coordinate system. The curvature can be expressed as
The curvature of every point on each contour curve is computed with the above formula, and the points at which the absolute value of the curvature reaches a local maximum are the candidate corner points. The improved CSS-based tooth-tip corner detection algorithm comprises the following steps:
a1: Perform Canny edge detection with an adaptive threshold on the input image Image_input to obtain the edge image Canny_edge;
a2: Apply an annular mask (from the approximate root circle to the approximate tip circle) to Canny_edge to obtain the region-of-interest image ROI_edge containing the gear teeth;
a3: Extract the tooth-tip contours in ROI_edge; the contours are the prerequisite for computing curvature and extracting corner points. Canny edge detection loses parts of the contours, so the contour gaps must be filled to make them complete, and the corner points produced by the Canny gap-joining are stored in the Canny_Point (Canny join corner) point set;
a4: Compute the curvature of every point on each contour curve in the large-scale state; the points of locally maximal curvature are candidate corner points, and a candidate is judged a correct corner point if the absolute value of its curvature is at least k times the minimum curvature among its neighbouring points; the correct corner points are stored in the Corner_Point (tooth-tip corner) point set;
a5: Traverse all points in Corner_Point in the small-scale state and reject points whose distance exceeds a threshold t₁, thereby obtaining better localization accuracy;
a6: Compare the points in Canny_Point and Corner_Point and reject those whose distance is smaller than a threshold t₂; the remaining points are the final tooth-tip corner points, and Corner_Point is updated with them.
This preliminarily completes the extraction of the tooth-tip corner points of the addendum circle; an illustrative sketch of steps a1, a2 and a4 follows.
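This is a minimal sketch in Python, assuming OpenCV, NumPy and SciPy (gaussian_filter1d for the scale-space smoothing); the median-based Canny thresholds, the ring radii, the function names, the scale sigma, the factor k and the neighbourhood size are illustrative assumptions rather than values fixed by the patent.

    import cv2
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def canny_adaptive(image_input):
        # a1: Canny edge detection with thresholds derived from the image median.
        gray = cv2.cvtColor(image_input, cv2.COLOR_BGR2GRAY)
        med = float(np.median(gray))
        low, high = int(max(0, 0.66 * med)), int(min(255, 1.33 * med))
        return cv2.Canny(gray, low, high)

    def annular_roi(canny_edge, center, r_root, r_tip):
        # a2: keep only edges inside the ring between the approximate root and tip circles.
        mask = np.zeros_like(canny_edge)
        cv2.circle(mask, center, int(r_tip), 255, -1)   # fill out to the tip circle
        cv2.circle(mask, center, int(r_root), 0, -1)    # clear everything inside the root circle
        return cv2.bitwise_and(canny_edge, mask)

    def css_corners(contour, sigma=3.0, k=1.5, half_window=5):
        # a4: curvature of a Gaussian-smoothed contour and the adaptive-threshold corner test.
        # contour: (N, 2) array of (x, y) points along one closed tooth-tip contour.
        x = gaussian_filter1d(contour[:, 0].astype(float), sigma, mode="wrap")
        y = gaussian_filter1d(contour[:, 1].astype(float), sigma, mode="wrap")
        xu, yu = np.gradient(x), np.gradient(y)        # first derivatives along u
        xuu, yuu = np.gradient(xu), np.gradient(yu)    # second derivatives along u
        kappa = (xu * yuu - xuu * yu) / (xu ** 2 + yu ** 2) ** 1.5
        corners = []
        for i in range(len(kappa)):
            window = np.abs(kappa[max(0, i - half_window):i + half_window + 1])
            # Candidate: local maximum of |kappa|, accepted when it is at least
            # k times the minimum curvature magnitude in its neighbourhood.
            if np.abs(kappa[i]) == window.max() and np.abs(kappa[i]) >= k * window.min():
                corners.append(i)
        return corners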
Embodiment 2:
S2 sub-pixel positioning of the tooth-tip corner points:
The basic unit of an image is the pixel, so the improved CSS corner detection algorithm locates the tooth-tip corner points only to a precision of one pixel. The true coordinates of the tooth-tip corner points are not integers, so the coordinates stored in Corner_Point deviate from the true coordinates; to guarantee the fitting accuracy of the addendum circle, a sub-pixel technique is used to locate the tooth-tip corner points precisely.
The corner response function R(x, y) is approximated by a quadratic polynomial to obtain the sub-pixel corner coordinates; the quadratic polynomial is
R(x, y) = a + bx + cy + dx² + exy + fy²
For each corner point (xᵢ, yᵢ) in Corner_Point, an overdetermined system of equations in the six coefficients a-f is set up from the nine pixels in its 3×3 neighbourhood and solved by least squares; differentiating the polynomial and setting the gradient to zero, b + 2dx + ey = 0 and c + ex + 2fy = 0, then yields the sub-pixel corner (x′ᵢ, y′ᵢ).
All sub-pixel corner points are written back to the Corner_Point point set, which completes the sub-pixel extraction of the tooth-tip corner points; some of the tooth-tip corner points of the addendum circle are shown in FIG. 4. A sketch of this refinement is given below.
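This is a minimal sketch in Python with NumPy; R is assumed to be a corner-response map (for example a Harris response image), which is an assumption of this sketch rather than something specified by the patent.

    import numpy as np

    def subpixel_corner(R, xi, yi):
        # Fit R(x, y) = a + bx + cy + dx^2 + exy + fy^2 over the 3x3 neighbourhood
        # of the integer corner (xi, yi), with (x, y) measured relative to it.
        offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        A = np.array([[1, dx, dy, dx * dx, dx * dy, dy * dy] for dx, dy in offsets])
        rhs = np.array([R[yi + dy, xi + dx] for dx, dy in offsets])
        a, b, c, d, e, f = np.linalg.lstsq(A, rhs, rcond=None)[0]
        # Stationary point of the quadratic: b + 2dx + ey = 0 and c + ex + 2fy = 0.
        du, dv = np.linalg.solve([[2 * d, e], [e, 2 * f]], [-b, -c])
        return xi + du, yi + dv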
Embodiment 3:
S3 fitting the tooth-tip ellipse with the hyper least squares method:
When the tooth-tip face is not parallel to the image plane, the addendum circle is imaged as an ellipse; the tip ellipse is therefore fitted to the tooth-tip corner points in Corner_Point to obtain the addendum circle parameters.
The ellipse equation is:
Ax² + 2Bxy + Cy² + 2f₀(Dx + Ey) + f₀²F = 0
where f₀ is a scale constant. The least squares method computes the coefficients A to F such that the algebraic distance J between the ellipse and the points (xᵢ, yᵢ), i = 1 to N, in Corner_Point is minimized, under the constraint that A, ..., F are not all zero.
Define
ξ = (x², 2xy, y², 2f₀x, 2f₀y, f₀²)ᵀ, θ = (A, B, C, D, E, F)ᵀ
Substituting (xᵢ, yᵢ) into ξ gives the vector ξᵢ, so the ellipse equation and the algebraic distance can be written as (ξ, θ) = 0 and J = (1/N) Σᵢ (ξᵢ, θ)², respectively.
To exclude the trivial solution θ = 0, the scale normalization (θ, Zθ) = c is imposed on θ, where Z is a symmetric matrix and c is a non-zero constant, and the problem reduces to the generalized eigenvalue problem Mθ = λZθ with M = (1/N) Σᵢ ξᵢξᵢᵀ. Because Corner_Point contains noisy points that degrade the result, a statistical model is built to eliminate the ellipse-fitting error.
θ and λ can be expanded as
θ = θ̄ + Δ₁θ + Δ₂θ + ⋯, λ = λ̄ + Δ₁λ + Δ₂λ + ⋯
where θ̄ and λ̄ are the true values, Δ₁ and Δ₂ denote the first-order and second-order noise terms, and the ellipsis denotes terms of third order and above.
Rearranging yields the first-order error and the second-order error; taking the expectation of the second-order noise term gives its bias, which is substituted into Mθ = λZθ. From this the ellipse parameters are computed.
This derivation removes the bias of the second-order noise term from the ellipse equation: the tooth-tip points are fitted with the hyper least squares method, an ideal scale normalization is found, the statistical bias of the second-order noise term of least squares is eliminated, and the accuracy and robustness of the addendum circle are thereby improved. The result of the addendum circle fitting is shown in FIG. 5, in which the circled region is a partial enlargement of a tooth-tip arc segment. A sketch of the generalized-eigenvalue formulation is given below.
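This is a minimal sketch of the formulation Mθ = λZθ in Python with NumPy/SciPy. The patent's ideal, bias-cancelling normalization matrix Z is not reproduced here; as an explicitly labelled stand-in, the sketch uses the simpler Taubin-type normalization built from the covariance of ξ, so it illustrates the formulation rather than the exact normalization of the invention.

    import numpy as np
    from scipy.linalg import eig

    def fit_ellipse_gev(points, f0=1.0):
        # points: (N, 2) array of sub-pixel tooth-tip corner coordinates.
        x, y = points[:, 0], points[:, 1]
        xi = np.stack([x * x, 2 * x * y, y * y,
                       2 * f0 * x, 2 * f0 * y, f0 * f0 * np.ones_like(x)], axis=1)
        M = xi.T @ xi / len(x)
        # Stand-in normalization Z: mean covariance of xi (Taubin); the patent's Z
        # additionally cancels the bias of the second-order noise term.
        Z = np.zeros((6, 6))
        for xk, yk in zip(x, y):
            Z += 4.0 / len(x) * np.array([
                [xk * xk, xk * yk, 0.0, f0 * xk, 0.0, 0.0],
                [xk * yk, xk * xk + yk * yk, xk * yk, f0 * yk, f0 * xk, 0.0],
                [0.0, xk * yk, yk * yk, 0.0, f0 * yk, 0.0],
                [f0 * xk, f0 * yk, 0.0, f0 * f0, 0.0, 0.0],
                [0.0, f0 * xk, f0 * yk, 0.0, f0 * f0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
        # Solve M theta = lambda Z theta and keep the eigenvector whose eigenvalue
        # has the smallest finite absolute value.
        w, v = eig(M, Z)
        w = np.where(np.isfinite(w.real), np.abs(w.real), np.inf)
        theta = np.real(v[:, int(np.argmin(w))])
        return theta / np.linalg.norm(theta)   # (A, B, C, D, E, F)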
Embodiment 4:
S4 compensating the quasi-eccentric error and optimizing the ellipse parameters:
Under perspective projection the addendum circle is imaged as an ellipse, and lens distortion turns the eccentric error of the ellipse into a quasi-eccentric error. A mathematical model of the eccentric error of the distorted ellipse, i.e. the quasi-eccentric error, is established to describe the relation between the projection of the circle centre and the centre of the ellipse, and an iterative compensation method is applied to optimize the perspective projection matrix and the distortion coefficients of the camera.
The distorted pixel coordinates in the image are obtained from the lens distortion model, in which R = k₁r² + k₂r⁴ + k₃r⁶, (x_c, y_c) are the normalized coordinates on the image plane, r is the distance from the pixel to the camera principal point, and (k₁, k₂, p₁, p₂, k₃) are the distortion coefficients of the camera lens. A sketch of this distortion model is given below.
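By way of illustration, the following Python sketch writes out the standard radial and tangential (Brown) distortion model that matches the coefficient set (k₁, k₂, p₁, p₂, k₃) named above; it is an assumption of this sketch rather than a formula reproduced verbatim from the patent.

    import numpy as np

    def distort(xc, yc, k1, k2, p1, p2, k3):
        # Map normalized image-plane coordinates (xc, yc) to distorted coordinates.
        r2 = xc * xc + yc * yc
        radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3   # R = k1*r^2 + k2*r^4 + k3*r^6
        xd = xc * (1.0 + radial) + 2.0 * p1 * xc * yc + p2 * (r2 + 2.0 * xc * xc)
        yd = yc * (1.0 + radial) + p1 * (r2 + 2.0 * yc * yc) + 2.0 * p2 * xc * yc
        return xd, yd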
The real camera model is the perspective projection with this distortion, where [u v 1]ᵀ are the homogeneous coordinates of a pixel, (c_x, c_y) is the camera principal point, and f_x and f_y are the scale factors. Because of lens distortion the camera model is a nonlinear perspective projection model, so the quasi-centre of the distorted ellipse is computed as follows: k evenly distributed points with coordinates (xᵢ, yᵢ) are sampled on the addendum circle, the pixel coordinates (uᵢ, vᵢ) corresponding to (xᵢ, yᵢ) are computed from the above formulas with the world coordinates of each circumferential point taken as [x_w y_w z_w]ᵀ = [xᵢ yᵢ 0]ᵀ, and the distorted ellipse is fitted to the k projected points, which yields the parameters of the conic coefficient matrix.
The conic coefficient matrix Q is assembled from these parameters, and after the ellipse is fitted with the hyper least squares method its centre (u_d, v_d) is obtained. The projection (u_c, v_c) of the circle centre (x₀, y₀) is likewise computed from the above formulas with [x_w y_w z_w]ᵀ = [x₀ y₀ 0]ᵀ, so the quasi-eccentric error of the distorted ellipse is (u_d - u_c, v_d - v_c). A generalized compensation framework is introduced to guarantee high-accuracy measurement: according to the quasi-eccentric error model, the perspective projection matrix and the distortion coefficients of the camera are obtained by camera calibration, and the parameters are optimized iteratively to solve for the ideal addendum circle.
Since the relative position and the radius of the addendum circle are known, the specific steps of quasi-eccentric error compensation are as follows (a sketch of this iteration is given after the list):
d1: Given the ellipse centres (xᵢⱼ, yᵢⱼ), the circle centres (Xᵢⱼ, Yᵢⱼ) in the calibration-plate coordinate system and the circle radius r, obtain initial values of the camera perspective projection matrix K^(0) and the distortion coefficients P^(0) with Zhang Zhengyou's camera calibration method;
d2: Compute the quasi-eccentric error of each circle in each image from the mathematical model, m being the number of circles on the calibration plate and n the number of images, and compensate the ellipse centres in the pixel coordinate system to obtain the compensated centre coordinates;
d3: Replace (xᵢⱼ, yᵢⱼ) with the compensated centres, acquire k images at different angles, and update the perspective projection matrix and the distortion coefficients with the present algorithm;
d4: Repeat d2 and d3 until the change of the quasi-eccentric error between adjacent iterations is smaller than a threshold or the maximum number of iterations is reached; the finally compensated centre points are obtained, and the camera perspective projection matrix and distortion coefficients have been updated to their optimal values.
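This is a minimal sketch of the iteration d1-d4 in Python, assuming OpenCV for the Zhang calibration. The quasi-eccentric error is evaluated here by projecting sampled circle points through the current camera model and fitting the distorted ellipse with cv2.fitEllipse, a plain least-squares stand-in for the hyper least squares fit; the function names, sample count, tolerance and iteration limit are assumptions of this sketch.

    import numpy as np
    import cv2

    def quasi_eccentric_error(cx, cy, radius, K, dist, rvec, tvec, samples=64):
        # Centre of the distorted projected ellipse minus the projection of the circle centre.
        t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        ring = np.stack([cx + radius * np.cos(t), cy + radius * np.sin(t),
                         np.zeros(samples)], axis=1).astype(np.float32)
        ring_img, _ = cv2.projectPoints(ring, rvec, tvec, K, dist)
        (ud, vd), _, _ = cv2.fitEllipse(ring_img.reshape(-1, 2).astype(np.float32))
        ctr = np.array([[cx, cy, 0.0]], dtype=np.float32)
        uc, vc = cv2.projectPoints(ctr, rvec, tvec, K, dist)[0].reshape(2)
        return np.array([ud - uc, vd - vc])

    def compensate_centres(plate_centres, radius, image_centres, image_size,
                           max_iter=10, tol=1e-3):
        # plate_centres: (m, 2) circle centres on the calibration plate (plane Z = 0).
        # image_centres: list of n arrays of shape (m, 2) with fitted ellipse centres.
        obj = np.column_stack([plate_centres, np.zeros(len(plate_centres))]).astype(np.float32)
        centres = [c.astype(np.float32).copy() for c in image_centres]
        K = dist = None
        for _ in range(max_iter):
            # d1 / d3: Zhang calibration with the current (compensated) centres.
            _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
                [obj] * len(centres), centres, image_size, None, None)
            # d2: compensate every ellipse centre by the modelled quasi-eccentric error.
            change = 0.0
            for j, (rvec, tvec) in enumerate(zip(rvecs, tvecs)):
                err = np.array([quasi_eccentric_error(x, y, radius, K, dist, rvec, tvec)
                                for x, y in plate_centres])
                new = (image_centres[j] - err).astype(np.float32)
                change = max(change, float(np.abs(new - centres[j]).max()))
                centres[j] = new
            # d4: stop when the compensation no longer changes appreciably.
            if change < tol:
                break
        return centres, K, dist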
In the description of the present invention, it should be understood that the orientation or positional relationship indicated is based on the orientation or positional relationship shown in the drawings, and is merely for convenience in describing the present invention and simplifying the description, and does not indicate or imply that the apparatus or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In the present invention, unless explicitly specified and defined otherwise, terms describing connections are to be understood broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary; or a communication or interaction between two elements. Unless explicitly defined otherwise, the specific meaning of these terms in this application will be understood by those of ordinary skill in the art according to the specific circumstances.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (3)

1. An addendum circle extraction algorithm for tooth-shaped structure assembly, characterized by comprising the following steps:
S1: extracting the tooth-tip corner points of the tooth-shaped structure with an adaptive-threshold curvature scale space (CSS) technique; step S1 includes the following:
a1: performing Canny edge detection with an adaptive threshold on the input image Image_input to obtain the edge image Canny_edge;
a2: applying an annular mask from the approximate root circle to the approximate tip circle to Canny_edge to obtain the region-of-interest image ROI_edge containing the gear teeth;
a3: extracting the tooth-tip contours in ROI_edge, the contours being the prerequisite for computing curvature and extracting corner points; since Canny edge detection loses parts of the contours, the contour gaps must be filled to make them complete, and the corner points produced by the Canny gap-joining are stored in the Canny_Point (Canny join corner) point set;
a4: computing the curvature of every point on each contour curve in the large-scale state, the points of locally maximal curvature being candidate corner points; a candidate is judged a correct corner point if the absolute value of its curvature is at least k times the minimum curvature among its neighbouring points, and the correct corner points are stored in the Corner_Point (tooth-tip corner) point set;
a5: traversing all points in Corner_Point in the small-scale state and rejecting points whose distance exceeds a threshold t₁, thereby obtaining better localization accuracy;
a6: comparing the points in Canny_Point and Corner_Point and rejecting those whose distance is smaller than a threshold t₂, the remaining points being the final tooth-tip corner points, with which Corner_Point is updated;
S2: locating the tooth-tip corner points precisely with a sub-pixel technique;
step S2 includes the following:
b1: approximating the corner response function R(x, y) with a quadratic polynomial to obtain the sub-pixel corner coordinates, the quadratic polynomial being:
R(x, y) = a + bx + cy + dx² + exy + fy²
b2: traversing the corner points (xᵢ, yᵢ) in Corner_Point and, for each, setting up an overdetermined system of equations in the six coefficients a-f from the nine pixels in its 3×3 neighbourhood, solving it by least squares, and differentiating the polynomial and setting the gradient to zero to give b + 2dx + ey = 0 and c + ex + 2fy = 0;
b3: solving these two equations for the sub-pixel corner (x′ᵢ, y′ᵢ);
b4: updating all sub-pixel corner points into the Corner_Point point set, thereby completing the sub-pixel extraction of the tooth-tip corner points;
S3: fitting the addendum circle to the tooth-tip corner points with the hyper least squares method, using scale normalization to eliminate the statistical bias of the second-order noise term of least squares and thereby improving the accuracy and robustness of the addendum circle;
S4: compensating the elliptical quasi-eccentric error caused by lens distortion and optimizing the ellipse parameters.
2. The tooth-shaped structure assembly-oriented addendum circle extraction algorithm of claim 1, wherein step S3 includes the following:
c1: with ξ = (x², 2xy, y², 2f₀x, 2f₀y, f₀²)ᵀ and θ = (A, B, C, D, E, F)ᵀ, writing the ellipse equation and the algebraic distance as (ξ, θ) = 0 and J = (1/N) Σᵢ (ξᵢ, θ)², respectively;
c2: applying the scale normalization (θ, Zθ) = c to θ, where Z is a symmetric matrix and c is a non-zero constant, turning the problem into the generalized eigenvalue problem Mθ = λZθ; θ and λ can be expanded as θ = θ̄ + Δ₁θ + Δ₂θ + ⋯ and λ = λ̄ + Δ₁λ + Δ₂λ + ⋯, where θ̄ and λ̄ are the true values, Δ₁ and Δ₂ denote the first-order and second-order noise terms, and the ellipsis denotes terms of third order and above;
c3: working out the first-order and second-order errors, taking the expectation of the second-order noise term to obtain its bias, and substituting it into Mθ = λZθ;
c4: computing the ellipse parameters from the resulting θ.
3. The tooth-shaped structure assembly-oriented addendum circle extraction algorithm of claim 1, wherein step S4 includes the following:
d1: given the ellipse centres (xᵢⱼ, yᵢⱼ), the circle centres (Xᵢⱼ, Yᵢⱼ) in the calibration-plate coordinate system and the circle radius r, obtaining initial values of the camera perspective projection matrix K^(0) and the distortion coefficients P^(0) with Zhang Zhengyou's camera calibration method;
d2: computing the quasi-eccentric error of each circle in each image from the mathematical model, wherein m is the number of circles on the calibration plate and n is the number of images, and compensating the ellipse centres in the pixel coordinate system to obtain the compensated centre coordinates;
d3: replacing (xᵢⱼ, yᵢⱼ) with the compensated centres, acquiring m images at different angles, and updating the perspective projection matrix and the distortion coefficients using steps S1-S3;
d4: repeating d2 and d3 until the change of the quasi-eccentric error between adjacent iterations is smaller than a threshold or the maximum number of iterations is reached, the finally compensated centre points being obtained, at which point the camera perspective projection matrix and distortion coefficients have been updated to their optimal values.
CN202110249585.0A 2021-03-08 2021-03-08 Tooth-shaped structure assembly-oriented addendum circle extraction algorithm Active CN113034591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110249585.0A CN113034591B (en) 2021-03-08 2021-03-08 Tooth-shaped structure assembly-oriented addendum circle extraction algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110249585.0A CN113034591B (en) 2021-03-08 2021-03-08 Tooth-shaped structure assembly-oriented addendum circle extraction algorithm

Publications (2)

Publication Number Publication Date
CN113034591A CN113034591A (en) 2021-06-25
CN113034591B (en) 2024-01-26

Family

ID=76466992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110249585.0A Active CN113034591B (en) 2021-03-08 2021-03-08 Tooth-shaped structure assembly-oriented addendum circle extraction algorithm

Country Status (1)

Country Link
CN (1) CN113034591B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035107B (en) * 2022-08-10 2022-11-08 山东正阳机械股份有限公司 Axle gear working error detection method based on image processing
CN116451384B (en) * 2023-06-15 2023-09-05 合肥皖液液压元件有限公司 Gear forming method based on optimized reference rack tooth profile curve

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341802A (en) * 2017-07-19 2017-11-10 无锡信捷电气股份有限公司 It is a kind of based on curvature and the compound angular-point sub-pixel localization method of gray scale
CN110544276A (en) * 2019-08-19 2019-12-06 西安交通大学 Least square method ellipse fitting piston skirt maximum point size measurement method


Also Published As

Publication number Publication date
CN113034591A (en) 2021-06-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant