CN113418927A - Automobile mold visual detection system and detection method based on line structured light - Google Patents


Info

Publication number
CN113418927A
Authority
CN
China
Prior art keywords
camera
image
light
obtaining
structured light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110635122.8A
Other languages
Chinese (zh)
Inventor
胡正乙
杨东旭
单吉
刘思远
杨鹤童
张鑫
张恩奎
Current Assignee
Changchun Automobile Industry Institute
Original Assignee
Changchun Automobile Industry Institute
Priority date
Filing date
Publication date
Application filed by Changchun Automobile Industry Institute filed Critical Changchun Automobile Industry Institute
Priority to CN202110635122.8A priority Critical patent/CN113418927A/en
Publication of CN113418927A publication Critical patent/CN113418927A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01 Arrangements or apparatus for facilitating the optical investigation
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, based on image processing techniques

Landscapes

  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a line-structured-light-based automobile mold vision detection system and detection method. The system comprises a six-degree-of-freedom mechanical arm, a camera, a laser generator and a processing system, wherein a clamp is mounted at the output end of the six-degree-of-freedom mechanical arm. The detection method comprises the steps of calibrating the internal and external parameters of the camera, calibrating the spatial position of the line structured light, calibrating the spatial position of the mechanical arm end, acquiring images, obtaining the line-structured-light band image, and obtaining the mold defects. By adopting the six-degree-of-freedom mechanical arm, the invention simply and conveniently realizes all-directional image acquisition, and at the same time processes the acquired images effectively, so that mold defects are conveniently detected.

Description

Automobile mold visual detection system and detection method based on line structured light
Technical Field
The invention relates to the technical field of automobile mold detection, in particular to an automobile mold visual detection system based on line structured light.
Background
The automobile die is among the most important technical equipment in automobile production and plays an important role in ensuring the machining and assembly precision of automobile parts. Because production volumes are high and the production rhythm is fast, dies suffer heavy wear during processing: after working for a period of time, the die surface may be seriously corroded or cracked. Since the manufacturing cost of a die is extremely high, directly scrapping a damaged stamping die is extremely wasteful.
Remanufacturing is the general term for the technical measures and engineering activities that repair and upgrade waste equipment, guided by whole-life-cycle theory, aimed at improving the equipment's performance, governed by the criteria of high quality, high efficiency, energy conservation, material conservation and environmental protection, and carried out by means of advanced technology and industrial production. In remanufacturing engineering, the repair of defective parts is the most important research content, so automobile molds are frequently remanufactured to reduce cost.
Before repair, image information of the automobile mold generally needs to be collected to facilitate analysis of its cracks. Among the many three-dimensional scanning technologies, the structured-light vision system, as an active measurement system, has the characteristics of high measurement precision, a large measurement field of view and strong adaptability to the measurement environment, and is widely applied in the field of three-dimensional measurement.
However, existing structured-light systems generally realize only three-dimensional movement during detection. Since molds are generally irregular in shape, occluded regions easily appear, so that parts of the mold cannot be photographed, which in turn affects image acquisition.
Disclosure of Invention
Technical problem to be solved
The invention solves the problem that, in conventional structured-light detection, some parts of the workpiece cannot be photographed, which affects image acquisition.
(II) technical scheme
In order to achieve the purpose, the invention adopts the following technical scheme:
In a first aspect, the invention provides a line-structured-light-based automobile mold vision detection system, comprising a six-degree-of-freedom mechanical arm, a camera, a laser generator and a processing system. A clamp is mounted at the output end of the six-degree-of-freedom mechanical arm, the camera and the laser generator are respectively mounted on the side portions of the clamp, and the processing system is electrically connected with the camera and the laser generator respectively, wherein:
the processing system comprises a processor and a memory, the processor is electrically connected with the camera and the laser generator respectively, and the memory is electrically connected with the processor.
As a preferred technical scheme of the present invention, the clamp includes a connecting block and three clamping plates; the connecting block is mounted at the output end of the six-degree-of-freedom mechanical arm, the three clamping plates are mounted on the side portions of the connecting block so that two clamping cavities are formed between them, and the camera and the laser generator are respectively mounted in the two clamping cavities.
In a second aspect, the present invention further provides a line structured light-based visual inspection method for an automobile mold, which specifically includes the following steps:
s1, calibrating internal and external parameters of the camera: establishing a camera coordinate system, and obtaining internal and external parameters of the camera by using a two-step calibration method;
s2, calibrating the spatial position of the line structured light: on the basis of a camera coordinate system, obtaining a space equation of a light plane under the camera coordinate system;
s3, calibrating the space position of the tail end of the mechanical arm: establishing a position relation of a camera coordinate system relative to a mechanical arm tail end coordinate system in a hand-eye calibration mode;
s4, image acquisition: the method comprises the following steps of operating a freedom degree mechanical arm, a camera and a laser generator, wherein the laser generator emits line structured light, and the camera shoots an automobile mold to be detected to obtain an image;
s5, obtaining a structured light band image: obtaining a light band area image in the image through a template matching algorithm;
s6, obtaining the defects of the die: and obtaining a gray extreme point on the cross section or the normal direction of the optical tape by using the gray information of the optical tape area on the image through an optical tape center point detection algorithm, wherein the gray extreme point is the defect of the mold.
As a preferred technical solution of the present invention, in S1, the calibrating of the internal and external parameters of the camera specifically includes the following steps:
s101, determining optical imaging parameters and spatial pose information of an imaging plane in a camera according to the spatial position and the pixel position of a known point, and establishing a camera coordinate system;
s102, establishing a linear model of camera imaging;
s103, a nonlinear model of camera imaging;
s104, neglecting the influence of lens distortion, obtaining internal and external parameters of the camera by using the linear model, or taking the distortion of the camera into consideration, and substituting the obtained camera parameters as initial values into the nonlinear calibration model to obtain the camera parameters and distortion coefficients after the distortion is considered.
As a preferred technical solution of the present invention, in S2, the calibration of the spatial position of the line structured light specifically includes the following steps:
s201, establishing a line structured light system calibration model;
s202, establishing a light plane equation with coefficients of a light plane under a camera coordinate system according to a calibration model;
s203, substituting the camera coordinates into the light plane equation to obtain a matrix expression, and obtaining an equation set through the matrix expression;
s204, solving the equation set through a least square method to obtain specific values of the coefficients, and substituting the coefficients into the light plane equation to obtain the space equation of the light plane under the camera coordinate system.
As a preferred embodiment of the present invention, in S3, the hand-eye calibration adopts one of the following two modes:
establishing a hand-eye calibration model to obtain an external parameter matrix of the camera relative to the tail end of the mechanical arm;
and performing hand-eye self-calibration based on a single reference point to obtain the external parameters of the camera relative to the robot end.
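The purpose of the hand-eye calibration in S3 can be illustrated with a short numpy sketch (not part of the patent; the transform names T_end_cam and T_base_end are illustrative): once the external-parameter matrix of the camera relative to the arm end is known, a point measured in the camera frame maps into the robot base frame by composing homogeneous transforms.

```python
import numpy as np

def homogeneous(R, t):
    # Build a 4x4 homogeneous transform from a rotation R and translation t.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_point_in_base(p_cam, T_base_end, T_end_cam):
    # With the hand-eye result T_end_cam (camera frame relative to the arm
    # end) and the arm's forward kinematics T_base_end, a camera-frame point
    # is expressed in the base frame as p_base = T_base_end T_end_cam p_cam.
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)
    return (T_base_end @ T_end_cam @ p)[:3]
```

For example, a camera mounted 0.1 m beyond the arm end, whose end sits 1 m along the base x axis, maps the camera-frame point (0, 0, 1) to (1, 0, 1.1) in the base frame.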
As a preferred embodiment of the present invention, in S5, the obtaining of the structured light band image specifically includes the following steps:
s501, calculating a similarity coefficient between a gray value template and an image, and determining an optical band region;
s502, matching the whole image to obtain the accurate position of the light band, and increasing the template matching speed by using an image pyramid down-sampling method to obtain the light band image.
As a preferred technical solution of the present invention, in S502, the image-pyramid down-sampling method for increasing the template-matching speed specifically includes the following steps:
repeatedly halving the image and the template: the original image is layer 1, and each halving increases the number of image-pyramid layers by one, realizing the down-sampling of the light-band image;
filtering the sampled images with a Gaussian smoothing filter;
calculating the similarity values in a suitable image-pyramid layer, and determining the position of the light band in that layer;
and mapping the light-band area determined at the highest layer of the image pyramid down to the lowest layer, finally obtaining the position of the light band in the original image; each mapping multiplies the matching-point coordinates by 2, which increases the matching speed.
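The pyramid search of S502 can be sketched in numpy as follows. Normalized cross-correlation is assumed as the similarity coefficient (the patent does not specify the measure), and plain decimation stands in for Gaussian-filtered down-sampling, so this illustrates the coordinate mapping rather than the patent's exact algorithm.

```python
import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation between an image patch and the template.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def downsample(img):
    # Halve each dimension: layer k -> layer k+1 of the pyramid.
    return img[::2, ::2]

def match_template(img, tpl):
    # Exhaustive NCC search; returns the top-left corner of the best match.
    H, W = img.shape
    h, w = tpl.shape
    best, pos = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = ncc(img[r:r + h, c:c + w], tpl)
            if s > best:
                best, pos = s, (r, c)
    return pos

def pyramid_match(img, tpl, levels=2):
    # Match at the coarsest layer, then map the coordinates back to the
    # original image by multiplying by 2 per layer, as described above.
    small_img, small_tpl = img, tpl
    for _ in range(levels):
        small_img, small_tpl = downsample(small_img), downsample(small_tpl)
    r, c = match_template(small_img, small_tpl)
    return r * 2 ** levels, c * 2 ** levels
```

The coarse search touches far fewer windows; the ×2 mapping per layer recovers the light-band position in the original image.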
In a preferred embodiment of the present invention, in S5, the light-band image needs to be enhanced after it is obtained.
As a preferred technical solution of the present invention, the obtaining of the gray extreme point in S6 specifically includes the following steps:
obtaining the gray scale gravity center of each line in the light band region by using a gray scale gravity center method;
obtaining a normal vector corresponding to the gravity center point of each row by using the Hessian matrix;
and carrying out Taylor expansion on the light band gray function along the normal vector to obtain the pixel coordinate of the central point of the light band, and further determining a gray extreme point.
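The first of these steps, the gray-scale gravity-center method, can be sketched as follows; this is a minimal numpy illustration of the per-row centroid only, with the Hessian-based normal estimation and Taylor refinement omitted, and the threshold value being an assumption.

```python
import numpy as np

def row_centroids(band, threshold=10.0):
    # Gray-scale gravity-center method: for each image row, the light-band
    # center column is the intensity-weighted mean of the column indices.
    centers = []
    cols = np.arange(band.shape[1], dtype=float)
    for row in band:
        w = np.where(row >= threshold, row.astype(float), 0.0)
        total = w.sum()
        if total > 0:
            centers.append(float((w * cols).sum() / total))
        else:
            centers.append(np.nan)  # no light band detected in this row
    return centers
```

For a row with intensities (0, 0, 10, 20, 10, 0) the centroid falls at column 3.0, i.e. on the brightest pixel, with sub-pixel precision in general.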
(III) advantageous effects
1. The line structured light-based automobile mold vision detection system comprises a six-degree-of-freedom mechanical arm, a camera, a laser generator and a processing system, wherein the six-degree-of-freedom mechanical arm can effectively drive the camera and the laser generator to move so as to acquire images of all directions of a mold, so that the problem that the existing three-dimensional movement cannot detect irregular molds is avoided;
2. according to the visual detection method for the automobile mold based on the linear structured light, provided by the invention, the subsequent detection precision is ensured by calibrating the internal and external parameters of the camera, the spatial position of the linear structured light and the spatial position of the tail end of the mechanical arm;
3. according to the line structured light-based automobile mold visual detection method, the images are rapidly and effectively processed through image acquisition, the structured light band images and the mold defects are obtained, the defective parts of the mold are conveniently and rapidly detected, and subsequent repair is facilitated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic view of the present invention;
FIG. 2 is a schematic view of a portion of the clamp of the present invention;
FIG. 3 is a schematic block diagram of a processing system of the present invention;
FIG. 4 is a block schematic flow diagram of the present invention;
FIG. 5 is a schematic diagram of the position relationship of the coordinate system of the present invention;
FIG. 6 is a schematic diagram of the planar position relationship of image coordinates and pixel coordinates of the present invention;
fig. 7 is a schematic diagram of the light plane calibration of the line structure of the present invention.
In the figures: 100. six-degree-of-freedom mechanical arm; 110. clamp; 111. connecting block; 112. clamping plate; 113. clamping cavity; 200. camera; 300. laser generator; 400. processing system; 410. processor; 420. memory.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings of the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments of the invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it is to be understood that the terms "longitudinal", "upper", "lower", "left", "right", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
As shown in fig. 1 to 3, the line-structured-light-based vision inspection system for automobile molds comprises a six-degree-of-freedom mechanical arm 100, a camera 200, a laser generator 300 and a processing system 400. A clamp 110 is installed at the output end of the six-degree-of-freedom mechanical arm 100, the camera 200 and the laser generator 300 are respectively installed on the side portions of the clamp 110, and the processing system 400 is electrically connected with the camera 200 and the laser generator 300 respectively, wherein:
the processing system 400 includes a processor 410 and a memory 420, wherein the processor 410 is electrically connected to the camera 200 and the laser generator 300, respectively, and the memory 420 is electrically connected to the processor 410.
Specifically, the clamp 110 includes a connecting block 111 and three clamping plates 112; the connecting block 111 is mounted at the output end of the six-degree-of-freedom mechanical arm 100, the three clamping plates 112 are mounted on the side portions of the connecting block 111 so that two clamping cavities 113 are formed between them, and the camera 200 and the laser generator 300 are respectively mounted in the two clamping cavities 113.
As shown in fig. 4 to 7, the present invention further provides a line structured light-based automobile mold visual inspection method, which specifically includes the following steps:
s1, calibrating internal and external parameters of the camera: establishing a camera coordinate system, and obtaining internal and external parameters of the camera by using a two-step calibration method;
s2, calibrating the spatial position of the line structured light: on the basis of a camera coordinate system, obtaining a space equation of a light plane under the camera coordinate system;
s3, calibrating the space position of the tail end of the mechanical arm: establishing a position relation of a camera coordinate system relative to a mechanical arm tail end coordinate system in a hand-eye calibration mode;
s4, image acquisition: the method comprises the following steps of operating a freedom degree mechanical arm, a camera and a laser generator, wherein the laser generator emits line structured light, and the camera shoots an automobile mold to be detected to obtain an image;
s5, obtaining a structured light band image: obtaining a light band area image in the image through a template matching algorithm;
s6, obtaining the defects of the die: and obtaining a gray extreme point on the cross section or the normal direction of the optical tape by using the gray information of the optical tape area on the image through an optical tape center point detection algorithm, wherein the gray extreme point is the defect of the mold.
Specifically, in S1, the calibration of the internal and external parameters of the camera specifically includes the following steps:
s101, determining optical imaging parameters and spatial pose information of an imaging plane in a camera according to the spatial position and the pixel position of a known point, and establishing a camera coordinate system;
s102, establishing a linear model of camera imaging;
s103, a nonlinear model of camera imaging;
s104, neglecting the influence of lens distortion, obtaining internal and external parameters of the camera by using the linear model, or taking the distortion of the camera into consideration, and substituting the obtained camera parameters as initial values into the nonlinear calibration model to obtain the camera parameters and distortion coefficients after the distortion is considered.
It should be noted that, referring to fig. 5, the imaging process of the camera is essentially pinhole imaging: by the rectilinear propagation of light, the surface profile of the measured object is projected onto the imaging plane through the camera lens. A coordinate system therefore needs to be established in the imaging plane to obtain imaging information of the measured object. As known from the imaging principle of the charge-coupled device (CCD), the image information obtained on the imaging plane consists of discrete pixels, and the dimensions of pixel distance and physical distance are not consistent, so two coordinate systems need to be established on the imaging plane: a pixel coordinate system ($O_P$-uv) and an image coordinate system (O-xy). In the pixel coordinate system, the origin is at the upper left corner of the imaging plane and the unit is the pixel; in the image coordinate system, the origin is at the intersection of the camera optical axis and the CCD plane and the unit is the millimetre. To relate the two coordinate systems, the pixel coordinate of the image-coordinate-system origin O is set to $(u_0, v_0)$.
In the calibration space, the camera coordinate system ($O_C$-$X_C Y_C Z_C$) and the world coordinate system ($O_W$-$X_W Y_W Z_W$) are three-dimensional coordinate systems. The camera coordinate system represents the spatial position of the measured object relative to the optical centre of the lens: its origin is the optical centre, its $Z_C$ axis represents the depth of the measured object relative to the optical centre and is perpendicular to the imaging plane, and its $X_C$ and $Y_C$ axes are consistent with the x and y axes of the image coordinate system. The world coordinate system represents the actual spatial position of the measured object and can be chosen according to the actual detection conditions; to simplify the calibration algorithm, its origin is set at the upper left corner of the two-dimensional calibration plate, with the $Z_W$ axis perpendicular to the plane of the calibration plate.
In the linear model of camera imaging, as shown in fig. 5, P is a point in space whose world coordinates are $(X_W, Y_W, Z_W)$ and whose camera coordinates are $(X_C, Y_C, Z_C)$; $P_d$ is the actual projection point of P on the image plane and $P_u$ is the ideal projection point, with image coordinates $(x_d, y_d)$ and $(x_u, y_u)$ respectively; $(u, v)$ are the pixel coordinates of P.
According to the pose relationship between the world coordinate system and the camera coordinate system, the camera coordinates of the space point P are:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + t \qquad (1.1)$$

where $t = (t_x, t_y, t_z)^T$ is a translation vector and

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$$

is a 3 × 3 orthogonal rotation matrix;
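Equation (1.1) amounts to one matrix-vector product plus a translation; a minimal numpy sketch:

```python
import numpy as np

def world_to_camera(P_w, R, t):
    # Equation (1.1): P_C = R P_W + t maps a world-frame point into the
    # camera coordinate system.
    return np.asarray(R, dtype=float) @ np.asarray(P_w, dtype=float) \
        + np.asarray(t, dtype=float)
```

For instance, a 90-degree rotation about the Z axis sends the world point (1, 0, 0) to (0, 1, 0) before the translation is added.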
The relationship between the image coordinate system and the pixel coordinate system is shown in fig. 6, where $d_x$ and $d_y$ are the physical sizes of a pixel unit on the imaging plane and $\theta_0$ is the angle between the u axis and the v axis of the pixel coordinate system. The relationship between a point $P_f$ in the image coordinate system and its pixel coordinates can be expressed as:

$$u = \frac{x}{d_x} - \frac{y\cot\theta_0}{d_x} + u_0, \qquad v = \frac{y}{d_y\sin\theta_0} + v_0 \qquad (1.2)$$

Since there are errors in the manufacture of the photosensitive element, $\theta_0 = 90°$ only in the ideal case;
According to the pinhole imaging principle, the relationship between the point P in the camera coordinate system and its projection point $P_u$ on the image plane can be expressed as:

$$x_u = f\,\frac{X_C}{Z_C}, \qquad y_u = f\,\frac{Y_C}{Z_C} \qquad (1.3)$$
For ease of calculation, the calibration-plate plane is taken as the $X_W Y_W$ coordinate plane of the world coordinate system, i.e. $Z_W = 0$. Combining equations (1.1) to (1.3) gives:

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[\,r_1 \;\; r_2 \;\; t\,]\begin{bmatrix} X_W \\ Y_W \\ 1 \end{bmatrix} \qquad (1.4)$$

where $s$ is a scale factor; $\alpha = f/d_x$; $\beta = f/(d_y\sin\theta_0)$; $\gamma = -f\cot\theta_0/d_x$; $r_1, r_2$ are the first and second columns of the rotation matrix R; and R and t are the external parameters of the camera model. The internal-parameter matrix A of the camera model may be expressed as:

$$A = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (1.5)$$
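Equations (1.2) to (1.5) can be exercised with a short numpy sketch that builds the internal-parameter matrix A and projects a camera-frame point to pixel coordinates; the parameter values in the usage example are illustrative only, not from the patent.

```python
import numpy as np

def intrinsic_matrix(f, dx, dy, u0, v0, theta0=np.pi / 2):
    # Equation (1.5): alpha = f/dx, beta = f/(dy*sin(theta0)),
    # gamma = -f*cot(theta0)/dx; theta0 = 90 degrees for an ideal sensor.
    alpha = f / dx
    beta = f / (dy * np.sin(theta0))
    gamma = -f * np.cos(theta0) / (np.sin(theta0) * dx)
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,   1.0]])

def project(P_c, A):
    # Pinhole projection (1.3) followed by the pixel mapping (1.2),
    # written compactly as s [u, v, 1]^T = A [X_C/Z_C, Y_C/Z_C, 1]^T.
    X, Y, Z = P_c
    uvw = A @ np.array([X / Z, Y / Z, 1.0])
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With f = 8 mm and 10 µm square pixels, a camera-frame point (0.1, 0.2, 1.0) lands at pixel (400, 400) for a principal point of (320, 240).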
Nonlinear model of camera imaging: the optical components of a camera are subject to errors during manufacture and assembly, and their effect on the image is called distortion, which can be classified into radial distortion and tangential distortion. Radial distortion is caused by changes in the curvature of the lens; tangential distortion is mainly caused by the centre lines of the individual lens elements not being collinear during lens assembly. From the mathematical models of radial and tangential distortion, a total aberration model can be established:

$$\begin{cases} \delta_x = x_u\,(k_1 r^2 + k_2 r^4) + 2p_1 x_u y_u + p_2\,(r^2 + 2x_u^2) \\ \delta_y = y_u\,(k_1 r^2 + k_2 r^4) + p_1\,(r^2 + 2y_u^2) + 2p_2 x_u y_u \end{cases} \qquad (1.6)$$

where $r^2 = x_u^2 + y_u^2$ and $k_1, k_2, p_1, p_2$ are the radial and tangential distortion coefficients respectively. The relationship between the ideal image coordinates and the actual image coordinates of a spatial point is then:

$$x_d = x_u + \delta_x, \qquad y_d = y_u + \delta_y \qquad (1.7)$$
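The distortion model can be sketched as follows, assuming the standard radial-plus-tangential (Brown-Conrady) form that the text describes; the exact form of the patent's equations is not recoverable from the page, so this is an illustration of the model family, not the patent's formula.

```python
import numpy as np

def distort(xu, yu, k1, k2, p1, p2):
    # Map ideal image coordinates (xu, yu) to distorted coordinates
    # (xd, yd) with radial (k1, k2) and tangential (p1, p2) terms.
    r2 = xu * xu + yu * yu
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xu * radial + 2.0 * p1 * xu * yu + p2 * (r2 + 2.0 * xu * xu)
    yd = yu * radial + p1 * (r2 + 2.0 * yu * yu) + 2.0 * p2 * xu * yu
    return xd, yd
```

With all coefficients zero the mapping is the identity; a positive k1 pushes points radially outward (pincushion distortion).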
The two-step calibration method combines a calibration method based on a moving plane template with one based on a one-dimensional calibration object, ensuring high calibration precision while reducing dependence on test equipment. The calibration proceeds in two steps: 1. neglect the influence of lens distortion and obtain the internal and external parameters of the camera with the linear model; 2. taking camera distortion into account, substitute the obtained camera parameters as initial values into the nonlinear calibration model to obtain the camera parameters and distortion coefficients with distortion considered:
For ease of calculation, equation (1.4) is rewritten as:

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H\begin{bmatrix} X_W \\ Y_W \\ 1 \end{bmatrix} \qquad (1.8)$$

In expression (1.8), H denotes the homography matrix mapping the world coordinates of a point to its pixel coordinates.

Let $\tilde h_i^T$ be the ith row of the homography matrix H and $M = (X_W, Y_W, 1)^T$. From equation (1.8), $su = \tilde h_1^T M$, $sv = \tilde h_2^T M$ and $s = \tilde h_3^T M$; thus:

$$u\,\tilde h_3^T M = \tilde h_1^T M, \qquad v\,\tilde h_3^T M = \tilde h_2^T M \qquad (1.9)$$

Transforming equation (1.9) into matrix form:

$$\begin{bmatrix} M^T & 0^T & -uM^T \\ 0^T & M^T & -vM^T \end{bmatrix} x = 0 \qquad (1.10)$$

The vector x collects the entries of H (with $h_{33}$ fixed to 1) and so has 8 unknowns. When the number of calibration points exceeds 4, equation (1.10) is an over-determined system in x, and the vector x, i.e. the homography matrix H, can be solved by singular value decomposition.
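The over-determined system (1.10) and its SVD solution can be sketched in numpy; each point correspondence contributes two rows, and the null vector of the stacked system gives H up to scale (a minimal illustration, not the patent's implementation):

```python
import numpy as np

def estimate_homography(world_pts, pixel_pts):
    # Each correspondence (XW, YW) <-> (u, v) contributes two rows of the
    # system (1.10); the right singular vector of the smallest singular
    # value is the stacked H, which is then normalized so h33 = 1.
    rows = []
    for (X, Y), (u, v) in zip(world_pts, pixel_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

On noise-free synthetic correspondences the known homography is recovered essentially exactly; with real corner detections the SVD gives the least-squares solution.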
According to the formula (8):
[h1 h2 h3]=λA[r1 r2 t] (1.11)
(1.11) wherein hiIs the ith column vector of the homography matrix H; λ is an arbitrary constant. Since the rotation matrix R is an orthogonal matrix, it can be obtained:
Figure BDA0003105355730000135
by combining the formulas (1.11) and (1.12), it is possible to obtain:
Figure BDA0003105355730000136
The camera internal-parameter matrix A contains 6 unknowns. When the number of calibration-plate images exceeds 3, initial values of the camera internal parameters can be obtained from equation (1.13); to improve calibration precision, 9 calibration-plate images in different poses are selected when calibrating the camera internal parameters.
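The two constraints of equation (1.13) can be checked numerically: for a homography synthesized as H = A[r1 r2 t] from a known internal-parameter matrix and orthonormal columns r1, r2, both expressions must vanish. A small numpy sketch (the matrices in the usage example are synthetic):

```python
import numpy as np

def zhang_constraints(H, A):
    # Equation (1.13): with B = A^{-T} A^{-1}, the orthonormality of r1, r2
    # in (1.12) forces h1^T B h2 = 0 and h1^T B h1 = h2^T B h2.
    A_inv = np.linalg.inv(A)
    B = A_inv.T @ A_inv
    h1, h2 = H[:, 0], H[:, 1]
    return float(h1 @ B @ h2), float(h1 @ B @ h1 - h2 @ B @ h2)
```

In the actual calibration, each image yields one such pair of linear constraints on the entries of B, and with three or more images B, hence A, can be solved for.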
In order to improve the measurement accuracy of the vision system, the distortion of a lens needs to be considered, and a distortion coefficient is solved according to a nonlinear optimization model. Writing the lens distortion model formula (1.6) into a matrix form
Figure BDA0003105355730000144
(1.14) in the formula,
Figure BDA0003105355730000141
K=[k1 k2 p1 p2]Tis a 1 x 4 column vector. The equation (1.14) can be written as a matrix form, LK ═ F, where there are N index points, L is a 2N × 4 matrix and F is a 2N × 1 column vector, and the vector K can be obtained according to least squares:
K=(LTL)-1LTF (1.15)
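Equation (1.15) is a one-line closed-form least-squares solve; a numpy sketch (the test matrix in the usage example is synthetic, not derived from real calibration data):

```python
import numpy as np

def solve_distortion_coefficients(L, F):
    # Equation (1.15): closed-form least-squares solution of L K = F,
    # K = (L^T L)^{-1} L^T F, for K = [k1, k2, p1, p2]^T.
    return np.linalg.inv(L.T @ L) @ (L.T @ F)
```

When F is generated exactly from a known K the solution is recovered to machine precision; with noisy measurements it minimizes the squared residual of (1.14).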
The camera intrinsic and extrinsic parameters and the distortion coefficients can be obtained through the above steps. Taking these camera parameters as initial values, the optimization objective function is established:

$$ \min\sum_{i}\sum_{j}\left\|m_{ij}-\hat m\left(A,k_1,k_2,p_1,p_2,R_i,t_i,M_{ij}\right)\right\|^{2} \tag{1.16} $$

The objective function is optimized with the Levenberg-Marquardt algorithm to obtain the optimized solution of the camera extrinsic parameters and distortion coefficients, where

$$ \hat m\left(A,k_1,k_2,p_1,p_2,R_i,t_i,M_{ij}\right) $$

is the pixel coordinate obtained by projecting the world coordinate $M_{ij}$ of the j-th corner point on the i-th calibration board through the nonlinear camera model, and $m_{ij}$ is the corner pixel coordinate obtained by corner detection.
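A minimal sketch of this refinement step, using SciPy's Levenberg-Marquardt solver: for brevity the pose is held fixed and only (fx, fy, cx, cy, k1) are refined over a reduced single-radial-term model, which is an illustrative assumption; the full objective (1.16) would also refine every R_i, t_i and all four distortion coefficients:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
# synthetic points already expressed in the camera frame
P = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(30, 3))

def project(params, pts):
    # pinhole projection with a single radial distortion term k1
    fx, fy, cx, cy, k1 = params
    x, y = pts[:, 0] / pts[:, 2], pts[:, 1] / pts[:, 2]
    d = 1.0 + k1 * (x * x + y * y)
    return np.column_stack([fx * x * d + cx, fy * y * d + cy])

true_params = np.array([800.0, 820.0, 320.0, 240.0, -0.15])
m_obs = project(true_params, P)        # plays the role of the detected corners m_ij

def residuals(params):
    # reprojection error: the summand of objective (1.16)
    return (project(params, P) - m_obs).ravel()

init = true_params * np.array([1.05, 0.95, 1.02, 0.98, 0.0])  # perturbed "linear" estimate
sol = least_squares(residuals, init, method='lm')
```

`method='lm'` selects the Levenberg-Marquardt implementation; it requires at least as many residuals as parameters, which the 30 points (60 residuals) satisfy here.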
Specifically, in S2, the calibration of the spatial position of the line structured light specifically includes the following steps:
s201, establishing a line structured light system calibration model;
s202, establishing a light plane equation with coefficients of a light plane under a camera coordinate system according to a calibration model;
s203, substituting the camera coordinates into the light plane equation to obtain a matrix expression, and obtaining an equation set through the matrix expression;
s204, solving the equation set through a least square method to obtain specific values of the coefficients, and substituting the coefficients into the light plane equation to obtain the space equation of the light plane under the camera coordinate system.
It should be noted that the calibration model of the line structured light plane equation is shown in FIG. 7, where l is the light band projected by the light plane onto the planar target and P is a center point of the light band l; images of multiple light bands can be obtained by rotating the planar target. The camera coordinates of the light band center points can be obtained using the calibrated camera imaging model and the corner points of the black-and-white squares on the calibration board. Let the camera coordinate of the i-th center point on the j-th light band image be

$$ P_{ij}=\left(X_C^{ij},\ Y_C^{ij},\ Z_C^{ij}\right) $$
The space plane equation of the light plane in the camera coordinate system is
$$ b_1X_C+b_2Y_C+b_3Z_C-1=0 \tag{1.17} $$
Substituting the camera coordinates of the light band center points into equation (1.17) and writing the result in matrix form:

$$ \begin{bmatrix} X_C^{11}&Y_C^{11}&Z_C^{11}\\ \vdots&\vdots&\vdots\\ X_C^{NK}&Y_C^{NK}&Z_C^{NK} \end{bmatrix}\begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix}=\begin{bmatrix} 1\\ \vdots\\ 1 \end{bmatrix} \tag{1.18} $$

In equation (1.18), N is the number of light band images obtained by rotating the target, and K is the number of center points on each band. Solving this overdetermined system by least squares gives:

$$ \begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix}=(M^{T}M)^{-1}M^{T}\mathbf{1} \tag{1.19} $$

where M denotes the (NK)×3 coefficient matrix of equation (1.18).
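The plane fit of (1.17)-(1.19) reduces to one least-squares call. An illustrative numpy sketch:

```python
import numpy as np

def fit_light_plane(points_cam):
    """Least-squares fit of b1*Xc + b2*Yc + b3*Zc = 1 to the light-band
    centre points expressed in camera coordinates (eqs. 1.17-1.19)."""
    M = np.asarray(points_cam, float)              # (N*K) x 3 stack of centre points
    b, *_ = np.linalg.lstsq(M, np.ones(len(M)), rcond=None)
    return b                                        # [b1, b2, b3]
```

Note this parameterisation cannot represent planes through the camera origin (the right-hand side is 1); for the light plane of a mounted laser that case does not arise.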
specifically, in S3, the hand-eye calibration adopts one of the following two ways:
establishing a hand-eye calibration model to obtain an external parameter matrix of the camera relative to the tail end of the mechanical arm;
or performing hand-eye self-calibration based on a single reference point to obtain the extrinsic parameters of the camera relative to the robot end.
Specifically, in S5, the step of obtaining the structured light band image specifically includes the following steps:
s501, calculating a similarity coefficient between a gray-value template and the image to determine the light band region;
s502, matching over the whole image to obtain the accurate position of the light band, using an image pyramid down-sampling method to increase the template matching speed, and obtaining the light band image.
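The patent does not name the similarity coefficient; a common choice for this step is normalised cross-correlation, sketched here with numpy as an assumption for illustration:

```python
import numpy as np

def ncc_map(image, template):
    """Normalised cross-correlation score of `template` at every valid
    offset of `image`; the maximum marks the light band region."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            # flat (zero-variance) windows get score 0
            scores[r, c] = (wz * t).sum() / denom if denom > 0 else 0.0
    return scores
```

An exact match scores 1.0; production code would replace the double loop with an FFT-based or integral-image formulation.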
Specifically, in S502, the method for image pyramid downsampling to improve the template matching speed specifically includes the following steps:
repeatedly reducing the image and the template by a factor of 2; the original image is taken as layer 1, and each reduction by a factor of 2 adds one layer to the image pyramid, realizing down-sampling of the light band image;
filtering the sampled images with a Gaussian smoothing filter;
calculating the similarity values at a suitable image pyramid layer and determining the position of the light band in that layer;
mapping the light band region determined at the highest image pyramid layer down to the lowest layer to obtain the position of the light band in the original image, where each mapping step multiplies the matching point coordinates by 2; this increases the matching speed.
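The steps above can be sketched as a coarse-to-fine search. This illustrative numpy version substitutes a 2×2 box average for the Gaussian smoothing and a sum-of-absolute-differences score for the similarity coefficient, both simplifying assumptions:

```python
import numpy as np

def downsample(img):
    # 2x2 block average: one pyramid level up
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

def sad_match(img, tpl):
    # brute-force best offset by sum of absolute differences
    th, tw = tpl.shape
    best, pos = np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            s = np.abs(img[r:r + th, c:c + tw] - tpl).sum()
            if s < best:
                best, pos = s, (r, c)
    return pos

def pyramid_match(img, tpl, levels=2):
    """Match at the top pyramid level, then map the position down one
    level at a time (coordinates x2) with a small local refinement."""
    imgs, tpls = [img], [tpl]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tpls.append(downsample(tpls[-1]))
    r, c = sad_match(imgs[-1], tpls[-1])           # exhaustive search, coarsest level only
    for lvl in range(levels - 1, -1, -1):
        r, c = 2 * r, 2 * c                        # map one level down
        th, tw = tpls[lvl].shape
        best, (r2, c2) = np.inf, (r, c)
        for dr in range(-2, 3):                    # refine in a +/-2 window
            for dc in range(-2, 3):
                rr, cc = r + dr, c + dc
                if 0 <= rr <= imgs[lvl].shape[0] - th and 0 <= cc <= imgs[lvl].shape[1] - tw:
                    s = np.abs(imgs[lvl][rr:rr + th, cc:cc + tw] - tpls[lvl]).sum()
                    if s < best:
                        best, (r2, c2) = s, (rr, cc)
        r, c = r2, c2
    return r, c
```

The exhaustive search runs only on the small top-level image; the finer levels cost a constant 25 comparisons each, which is the source of the speed-up described above.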
Specifically, in S5, after the light band image is obtained, the light band image needs to be enhanced.
Specifically, in S6, the obtaining of the gray extreme point specifically includes the following steps:
obtaining the gray-scale center of gravity of each row in the light band region using the gray-scale center-of-gravity method;
obtaining the normal vector at each row's center-of-gravity point using the Hessian matrix;
and performing a Taylor expansion of the light band gray function along the normal vector to obtain the pixel coordinates of the light band center points, thereby determining the gray extreme points.
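The first of these steps, the per-row gray-scale center of gravity, can be sketched as follows (illustrative numpy; the Hessian-based normal estimation and Taylor refinement are omitted, and the threshold value is an assumption):

```python
import numpy as np

def row_gray_centroids(band, threshold=10.0):
    """Sub-pixel column position of the light band in each row, as the
    intensity-weighted centroid of pixels above `threshold`."""
    cols = np.arange(band.shape[1])
    centroids = []
    for r, row in enumerate(band):
        w = np.where(row >= threshold, row, 0.0)   # suppress background
        if w.sum() > 0:
            centroids.append((r, (w * cols).sum() / w.sum()))
    return centroids
```

For a roughly symmetric intensity profile this already locates the band centre to a fraction of a pixel; the Hessian/Taylor step then corrects for band curvature and orientation.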
In summary: the six-degree-of-freedom mechanical arm allows images to be acquired from all directions simply and conveniently, and the acquired images are processed effectively, which facilitates the detection of mold defects.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. Line structured light-based automobile mold vision detection system, comprising a six-degree-of-freedom mechanical arm (100), a camera (200), a laser generator (300) and a processing system (400), characterized in that: a clamp (110) is mounted at the output end of the six-degree-of-freedom mechanical arm (100), the camera (200) and the laser generator (300) are respectively mounted at the sides of the clamp (110), and the processing system (400) is electrically connected with the camera (200) and the laser generator (300) respectively, wherein:
the processing system (400) comprises a processor (410) and a memory (420), wherein the processor (410) is electrically connected with the camera (200) and the laser generator (300) respectively, and the memory (420) is electrically connected with the processor (410).
2. The line structured light based vision inspection system for automotive molds of claim 1, wherein: the clamp (110) comprises a connecting block (111) and three clamping plates (112), the connecting block (111) is installed at the output end of the six-degree-of-freedom mechanical arm (100), the three clamping plates (112) are respectively installed at the side parts of the connecting block (111), two clamping cavities (113) are formed between the three clamping plates (112), and the camera (200) and the laser generator (300) are respectively installed at the two clamping cavities (113).
3. The automobile mold visual detection method based on line structured light is characterized by comprising the following steps:
s1, calibrating internal and external parameters of the camera: establishing a camera coordinate system, and obtaining internal and external parameters of the camera by using a two-step calibration method;
s2, calibrating the spatial position of the line structured light: on the basis of a camera coordinate system, obtaining a space equation of a light plane under the camera coordinate system;
s3, calibrating the space position of the tail end of the mechanical arm: establishing a position relation of a camera coordinate system relative to a mechanical arm tail end coordinate system in a hand-eye calibration mode;
s4, image acquisition: operating the six-degree-of-freedom mechanical arm, the camera and the laser generator, wherein the laser generator emits line structured light and the camera photographs the automobile mold to be detected to obtain an image;
s5, obtaining a structured light band image: obtaining a light band area image in the image through a template matching algorithm;
s6, obtaining the defects of the mold: using the gray information of the light band region in the image, obtaining gray extreme points on the cross section or along the normal direction of the light band through a light band center point detection algorithm, wherein the gray extreme points correspond to the defects of the mold.
4. The line structured light based automobile mold visual inspection method of claim 3, wherein: in S1, the calibrating of the internal and external parameters of the camera specifically includes the following steps:
s101, determining optical imaging parameters and spatial pose information of an imaging plane in a camera according to the spatial position and the pixel position of a known point, and establishing a camera coordinate system;
s102, establishing a linear model of camera imaging;
s103, establishing a nonlinear model of camera imaging;
s104, neglecting the influence of lens distortion and obtaining the internal and external parameters of the camera using the linear model; or, taking camera distortion into account, substituting the obtained camera parameters as initial values into the nonlinear calibration model to obtain the camera parameters and distortion coefficients with distortion considered.
5. The line structured light based automobile mold visual inspection method of claim 3, wherein: in S2, the calibration of the spatial position of the line structured light specifically includes the following steps:
s201, establishing a line structured light system calibration model;
s202, establishing a light plane equation with coefficients of a light plane under a camera coordinate system according to a calibration model;
s203, substituting the camera coordinates into the light plane equation to obtain a matrix expression, and obtaining an equation set through the matrix expression;
s204, solving the equation set through a least square method to obtain specific values of the coefficients, and substituting the coefficients into the light plane equation to obtain the space equation of the light plane under the camera coordinate system.
6. The line structured light based automobile mold visual inspection method of claim 3, wherein: in S3, the hand-eye calibration adopts one of the following two ways:
establishing a hand-eye calibration model to obtain an external parameter matrix of the camera relative to the tail end of the mechanical arm;
or performing hand-eye self-calibration based on a single reference point to obtain the extrinsic parameters of the camera relative to the robot end.
7. The line structured light based automobile mold visual inspection method of claim 3, wherein: in S5, the obtaining the structured light band image specifically includes the following steps:
s501, calculating a similarity coefficient between a gray-value template and the image to determine the light band region;
s502, matching the whole image to obtain the accurate position of the light band, and increasing the template matching speed by using an image pyramid down-sampling method to obtain the light band image.
8. The line structured light based automobile mold visual inspection method of claim 7, wherein: in S502, the method for improving the template matching speed by image pyramid downsampling specifically includes the following steps:
continuously reducing the image and the template by 2 times, setting the original image as a layer 1, and increasing the layer number of an image pyramid by one layer when the image is reduced by 2 times, so as to realize the down-sampling of the light band image;
filtering the sampled image by adopting a Gaussian smoothing filter;
calculating the similarity value of the image in a proper image pyramid layer, and determining the position of a light band in the layer;
and mapping the light band region determined at the highest image pyramid layer down to the lowest layer to obtain the position of the light band in the original image, wherein each mapping step multiplies the matching point coordinates by 2, thereby increasing the matching speed.
9. The line structured light based automobile mold visual inspection method of claim 3, wherein: in S5, after the light band image is obtained, the light band image needs to be enhanced.
10. The line structured light based automobile mold visual inspection method of claim 3, wherein: in S6, the obtaining of the gray extreme point specifically includes the following steps:
obtaining the gray scale gravity center of each line in the light band region by using a gray scale gravity center method;
obtaining a normal vector corresponding to the gravity center point of each row by using the Hessian matrix;
and carrying out Taylor expansion on the light band gray function along the normal vector to obtain the pixel coordinate of the central point of the light band, and further determining a gray extreme point.
CN202110635122.8A 2021-06-08 2021-06-08 Automobile mold visual detection system and detection method based on line structured light Pending CN113418927A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110635122.8A CN113418927A (en) 2021-06-08 2021-06-08 Automobile mold visual detection system and detection method based on line structured light


Publications (1)

Publication Number Publication Date
CN113418927A true CN113418927A (en) 2021-09-21

Family

ID=77788025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110635122.8A Pending CN113418927A (en) 2021-06-08 2021-06-08 Automobile mold visual detection system and detection method based on line structured light

Country Status (1)

Country Link
CN (1) CN113418927A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115343289A (en) * 2022-07-18 2022-11-15 中国第一汽车股份有限公司 Automatic scratch detection system and method for whole automobile assembly pit package
CN117969550A (en) * 2024-03-29 2024-05-03 长春汽车工业高等专科学校 Automobile defect analysis method and system based on image recognition
CN117969550B (en) * 2024-03-29 2024-06-04 长春汽车工业高等专科学校 Automobile defect analysis method and system based on image recognition

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102794763A (en) * 2012-08-31 2012-11-28 江南大学 Systematic calibration method of welding robot guided by line structured light vision sensor
CN102927908A (en) * 2012-11-06 2013-02-13 中国科学院自动化研究所 Robot eye-on-hand system structured light plane parameter calibration device and method
CN105303560A (en) * 2015-09-22 2016-02-03 中国计量学院 Robot laser scanning welding seam tracking system calibration method
CN105783726A (en) * 2016-04-29 2016-07-20 无锡科技职业学院 Curve-welding-seam three-dimensional reconstruction method based on line structure light vision detection
CN106425181A (en) * 2016-10-24 2017-02-22 南京工业大学 Curve welding seam welding technology based on line structured light
CN106524945A (en) * 2016-10-13 2017-03-22 无锡科技职业学院 Plane included angle online measurement method based on mechanical arm and structured light vision
CN109015632A (en) * 2018-07-11 2018-12-18 云南电网有限责任公司电力科学研究院 A kind of robot hand end localization method
CN110136208A (en) * 2019-05-20 2019-08-16 北京无远弗届科技有限公司 A kind of the joint automatic calibration method and device of Visual Servoing System
CN110434516A (en) * 2019-08-28 2019-11-12 浙江大学城市学院 A kind of Intelligent welding robot system and welding method
CN110530877A (en) * 2019-09-16 2019-12-03 西安中科光电精密工程有限公司 A kind of welding shape quality inspection robot and its detection method
CN111402411A (en) * 2020-04-10 2020-07-10 贵刚 Scattered object identification and grabbing method based on line structured light


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
廉凤慧 (Lian Fenghui): "Research on Vision Measurement Technology of Rectangular Spline Shafts Based on Line Structured Light", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *
徐德 (Xu De) et al., National Defense Industry Press *


Similar Documents

Publication Publication Date Title
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN110146038B (en) Distributed monocular camera laser measuring device and method for assembly corner of cylindrical part
CN108759714B (en) Coordinate system fusion and rotating shaft calibration method for multi-line laser profile sensor
CN109612390B (en) Large-size workpiece automatic measuring system based on machine vision
CN109242908B (en) Calibration method for underwater binocular vision measurement system
CN109029299B (en) Dual-camera measuring device and method for butt joint corner of cabin pin hole
CN110296667B (en) High-reflection surface three-dimensional measurement method based on line structured light multi-angle projection
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN105716527B (en) Laser seam tracking transducer calibration method
CN110411346B (en) Method for quickly positioning surface micro-defects of aspheric fused quartz element
CN111369630A (en) Method for calibrating multi-line laser radar and camera
CN113205593B (en) High-light-reflection surface structure light field three-dimensional reconstruction method based on point cloud self-adaptive restoration
CN113418927A (en) Automobile mold visual detection system and detection method based on line structured light
CN111191625A (en) Object identification and positioning method based on laser-monocular vision fusion
KR102248197B1 (en) Large reflector 3D surface shape measuring method by using Fringe Pattern Reflection Technique
CN111707187B (en) Measuring method and system for large part
CN110455198B (en) Rectangular spline shaft key width and diameter measuring method based on line structure light vision
Zhong et al. Stereo-rectification and homography-transform-based stereo matching methods for stereo digital image correlation
Liu et al. Measuring method for micro-diameter based on structured-light vision technology
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN113989199A (en) Binocular narrow butt weld detection method based on deep learning
CN115854921A (en) Method and device for measuring surface shape of object based on structured light
CN116765936A (en) Honeycomb material processing surface profile precision measuring equipment and measuring method thereof
Zhang et al. Improved camera calibration method and accuracy analysis for binocular vision
CN114963981A (en) Monocular vision-based cylindrical part butt joint non-contact measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210921