CN115876166A - Precision evaluation method of visual detection system - Google Patents

Precision evaluation method of visual detection system

Info

Publication number: CN115876166A
Application number: CN202211706341.1A
Authority: CN (China)
Prior art keywords: coordinates, control field, coordinate system, dimensional control, conversion relation
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 郭寅, 尹仕斌, 郭磊, 吴雨祥, 姜硕
Current assignee: Isvision Hangzhou Technology Co Ltd
Original assignee: Isvision Hangzhou Technology Co Ltd

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a precision evaluation method for a visual detection system, comprising the following steps. A calibration plate bearing mark points is fixed within the field of view of the visual sensor. A photogrammetric system II establishes a three-dimensional control field II. A plane Q is fitted to the mark points. The conversion relation between the visual sensor coordinate system and the three-dimensional control field II is obtained by calibration with the photogrammetric system II. The visual sensor captures a two-dimensional image of the calibration plate and extracts the pixel coordinates of the mark points. A spatial line is constructed through the origin of the camera coordinate system and the pixel coordinates of a mark point. Using the conversion relation, the spatial line is transformed into the three-dimensional control field II, and its intersection with plane Q is calculated. The coordinates of the corresponding mark point are looked up in the three-dimensional control field II and differenced with the intersection coordinates; if the difference is smaller than a threshold, the precision of the visual detection system meets the requirement, otherwise it does not. The method is applicable to various types of visual detection systems and is highly universal.

Description

Precision evaluation method of visual detection system
Technical Field
The invention relates to the field of precision verification, in particular to a precision evaluation method for a visual detection system.
Background
At present, visual detection systems are widely used in the processing and manufacturing field. To control product quality accurately during inspection, different levels of detection precision are required of the visual detection system; in the precision manufacturing industry in particular, the requirements are stricter.
Therefore, how to verify and evaluate the detection precision of a visual detection system accurately has become an urgent technical problem. Existing accuracy evaluation methods generally proceed as follows: the visual detection system captures geometric features on the measured object, such as holes and corner points, computes their measured coordinates, and then evaluates the error of the measured coordinates against the theoretical coordinates of those features in the digital model (CAD model) of the measured object, which are taken as ground truth. However, only special positions on the digital model (holes and corner points) have theoretical coordinates; positions without distinctive features, such as an arbitrary point on a plane, have no theoretical coordinates that can be looked up. When the visual detection system measures such positions, its detection precision therefore cannot be evaluated.
Disclosure of Invention
In view of these problems, the invention provides a precision evaluation method for a visual detection system that is applicable to various types of visual detection systems and features an ingenious design, good accuracy, and high universality.
A precision evaluation method of a visual detection system uses a photogrammetric system II to verify the precision of the visual detection system through the following steps:
1) Fix a calibration plate within the field of view of the visual sensor in the visual detection system; the calibration plate is a flat plate provided with at least 3 mark points. Fix coding points and a scale bar around the calibration plate;
the photogrammetric system II collects images of the coding points, the scale bar and the mark points from different poses and establishes a three-dimensional control field II;
fit a spatial plane to the coordinates of the mark points in the three-dimensional control field II and denote it as plane Q;
2) Obtain the conversion relation between the visual sensor coordinate system and the three-dimensional control field II by calibration with the photogrammetric system II;
3) The visual sensor captures a two-dimensional image of the calibration plate, and the pixel coordinates of the mark points are obtained from the two-dimensional image; a spatial line is constructed through the origin of the camera coordinate system in the visual sensor and the pixel coordinates of a mark point;
4) Transform the spatial line into the three-dimensional control field II using the conversion relation between the visual sensor coordinate system and the three-dimensional control field II, and calculate the intersection of the spatial line with plane Q; look up the coordinates of the mark point corresponding to the spatial line in the three-dimensional control field II and difference them with the intersection coordinates; if the difference is smaller than a threshold, the precision of the visual detection system meets the requirement; otherwise it does not.
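Steps 3) and 4) amount to back-projecting a pixel to a ray from the camera origin and intersecting that ray with plane Q in the control field. A minimal numeric sketch, assuming hypothetical values for the intrinsics K, the sensor-to-control-field transform (R, t), plane Q, and the looked-up reference mark point (none of these values are from the patent):

```python
import numpy as np

# Hypothetical camera intrinsics and sensor -> control field II transform
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # rotation, sensor -> control field II
t = np.array([0.0, 0.0, 0.0])      # translation, sensor -> control field II

def ray_plane_intersection(pixel, K, R, t, n, d):
    """Back-project a pixel to a ray from the camera origin (0,0,0) and
    intersect it with the plane n.p + d = 0 expressed in control field II."""
    # Ray direction in camera coordinates: K^-1 [u, v, 1]
    dir_cam = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
    # Transform the ray into the control field
    origin = t                     # image of the camera origin
    dir_cf = R @ dir_cam
    # Solve n.(origin + s*dir_cf) + d = 0 for the scalar s
    s = -(n @ origin + d) / (n @ dir_cf)
    return origin + s * dir_cf

# Plane Q: z = 1000 mm, i.e. n = (0,0,1), d = -1000; one mark point on it
n, d = np.array([0.0, 0.0, 1.0]), -1000.0
measured = ray_plane_intersection((400.0, 300.0), K, R, t, n, d)
reference = np.array([100.0, 75.0, 1000.0])   # looked-up control-field coordinate
error = np.linalg.norm(measured - reference)
print(error < 2.0)   # threshold check, e.g. a 2 mm requirement
```

With these illustrative values the ray pierces the plane exactly at the reference point, so the difference is zero and the check passes.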
Further, in step 2), the conversion relation between the visual sensor coordinate system and the three-dimensional control field II can be obtained by calibration with the photogrammetric system II in either of the following two ways:
Method one: the visual sensor is mounted at the end of a robot, and a photogrammetric target ball is mounted at the robot end as well; the pose of the robot is adjusted, and the photogrammetric system II acquires the target ball coordinates at the different poses. From the coordinates of the photogrammetric target ball in the robot base coordinate system and its coordinates in the three-dimensional control field II at the different poses, the conversion relation between the robot base coordinate system and the three-dimensional control field II is solved; combining this with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, yields the conversion relation between the visual sensor coordinate system and the three-dimensional control field II;
Method two: the visual sensor is fixed in the detection station and photographs the calibration plate to obtain the coordinates of the mark points in the image; the conversion relation between the visual sensor coordinate system and the three-dimensional control field II is solved from the coordinates of the mark points in the image and their coordinates in the three-dimensional control field II.
Further, when the visual detection system is externally calibrated with a laser tracker before use, in step 2) the conversion relation between the visual sensor coordinate system and the three-dimensional control field II is obtained by calibration with the photogrammetric system II as follows:
fix at least four ball seats around the calibration plate; each ball seat can mount either a laser tracker target ball or a photogrammetric target ball;
mount photogrammetric target balls on the ball seats; the photogrammetric system II acquires the coordinates of each photogrammetric target ball in the three-dimensional control field II and stores them in point set A;
replace the photogrammetric target balls on the ball seats with laser tracker target balls; the laser tracker acquires the coordinates of each laser tracker target ball in the tracker coordinate system and stores them in point set B;
solve the conversion relation between the three-dimensional control field II and the laser tracker coordinate system from the coordinates of the points in point set A and point set B;
then combine this with the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system, calibrated in advance with the laser tracker,
to obtain the conversion relation between the visual sensor coordinate system and the three-dimensional control field II.
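Solving the conversion relation from point sets A and B is a rigid-body registration between two coordinate systems. A sketch using the standard Kabsch (SVD) method, with hypothetical, non-collinear ball-seat coordinates standing in for the two point sets (the patent does not specify a solver; Kabsch is one common choice):

```python
import numpy as np

def rigid_transform(A, B):
    """Solve R, t with B ≈ R @ A + t by the Kabsch/SVD method, as one way
    to register the target-ball point sets measured in two coordinate systems."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Hypothetical ball-seat coordinates in control field II (point set A)
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
# The same seats seen by the tracker: rotated 90 degrees about z and shifted
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
B = (Rz @ A.T).T + np.array([10.0, 20.0, 30.0])
R, t = rigid_transform(A, B)
print(np.allclose(R, Rz), np.allclose(t, [10.0, 20.0, 30.0]))
```

The non-collinearity requirement on the ball seats is what makes the cross-covariance matrix well conditioned, so the rotation is uniquely recoverable.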
Further, the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system is calibrated in advance with the laser tracker as follows:
the visual sensor is mounted at the end of the robot, and a laser tracker target ball is mounted at the robot end; the pose of the robot is adjusted, and the laser tracker acquires the target ball coordinates at the different poses. From the coordinates of the laser tracker target ball in the robot base coordinate system and its coordinates in the laser tracker coordinate system at the different poses, the conversion relation between the robot base coordinate system and the laser tracker coordinate system is solved; combining this with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, yields the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system.
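The combination described above is a composition of homogeneous transforms: sensor-to-flange (from hand-eye calibration), flange-to-base (the current robot pose), and base-to-tracker. A sketch with purely illustrative transforms (the rotations and translations below are hypothetical, not calibration results):

```python
import numpy as np

def homog(R, t):
    """Pack a rotation R and translation t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical links of the chain (identity rotations for readability):
T_base_to_tracker  = homog(np.eye(3), [500.0, 0.0, 0.0])   # solved from target-ball poses
T_flange_to_base   = homog(np.eye(3), [0.0, 300.0, 0.0])   # current robot pose
T_sensor_to_flange = homog(np.eye(3), [0.0, 0.0, 50.0])    # from hand-eye calibration

# Composing the chain gives the sensor -> tracker conversion relation
T_sensor_to_tracker = T_base_to_tracker @ T_flange_to_base @ T_sensor_to_flange
print(T_sensor_to_tracker[:3, 3])   # with identity rotations the translations add
```

The same composition pattern applies to every "combine the conversion relations" step in this document; only the links of the chain change.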
Further, when the visual detection system is externally calibrated with a photogrammetric system I before use, in step 2) the conversion relation between the visual sensor coordinate system and the three-dimensional control field II is obtained by calibration with the photogrammetric system II as follows:
the photogrammetric system I and the photogrammetric system II each collect images of the coding points, the scale bar and the mark points, and establish a three-dimensional control field I and a three-dimensional control field II respectively;
store the coordinates of each mark point in the three-dimensional control field I in point set I;
store the coordinates of each mark point in the three-dimensional control field II in point set II;
solve the conversion relation between the three-dimensional control field I and the three-dimensional control field II from the coordinates of the points in point set I and point set II;
then combine this with the conversion relation between the visual sensor coordinate system and the three-dimensional control field I, calibrated in advance with the photogrammetric system I,
to obtain the conversion relation between the visual sensor coordinate system and the three-dimensional control field II.
Further, the conversion relation between the visual sensor coordinate system and the three-dimensional control field I can be calibrated in advance with the photogrammetric system I in either of the following two ways:
Method A: the visual sensor is mounted at the end of the robot, and a photogrammetric target ball is mounted at the robot end; the pose of the robot is adjusted, and the photogrammetric system I acquires the target ball coordinates at the different poses. From the coordinates of the photogrammetric target ball in the robot base coordinate system and its coordinates in the three-dimensional control field I at the different poses, the conversion relation between the robot base coordinate system and the three-dimensional control field I is solved; combining this with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, yields the conversion relation between the visual sensor coordinate system and the three-dimensional control field I.
Method B: the visual sensor is fixed in the detection station and photographs the calibration plate to obtain the coordinates of the mark points in the image; the conversion relation between the visual sensor coordinate system and the three-dimensional control field I is solved from the coordinates of the mark points in the image and their coordinates in the three-dimensional control field I.
For more accurate evaluation of the precision, preferably, in step 3) the pixel coordinates of a plurality of mark points are obtained from the two-dimensional image, and a spatial line is constructed through the origin of the camera coordinate system in the visual sensor and the pixel coordinates of each mark point, yielding a plurality of spatial lines;
in step 4), each spatial line is transformed into the three-dimensional control field II using the conversion relation between the visual sensor coordinate system and the three-dimensional control field II, and the intersection of each transformed spatial line with plane Q is obtained;
the coordinates of the mark point corresponding to each spatial line are looked up in the three-dimensional control field II and differenced with the corresponding intersection coordinates;
the mean or standard deviation of the differences is taken; if the resulting value is smaller than the threshold, the precision of the visual detection system meets the requirement; otherwise it does not.
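The multi-point acceptance test above reduces to aggregating the per-point differences and comparing the aggregate against the threshold. A sketch with hypothetical per-point difference values in millimetres:

```python
import numpy as np

# Hypothetical per-mark-point differences (mm) between looked-up
# control-field coordinates and ray/plane intersection coordinates
diffs = np.array([0.8, 1.1, 0.9, 1.2, 0.7])
threshold = 2.0   # mm, within the 0.5-5 mm range given in the text

# Accept if the chosen aggregate (mean or standard deviation) is below threshold
print(diffs.mean() < threshold, diffs.std() < threshold)
```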
Preferably, in step 3), in the two-dimensional image the edges of the mark points are extracted with the Canny or Sobel method, edge fitting is performed, and the geometric center coordinates of each mark point so obtained are recorded as its pixel coordinates.
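As an illustration of the Sobel variant, the sketch below computes a Sobel gradient magnitude with plain NumPy on a synthetic mark point and takes the centroid of the strong edge pixels as the center (a production system would instead use an optimized edge detector and fit an ellipse to the extracted edge):

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a grayscale image (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (kx * patch).sum()
            gy[i, j] = (ky * patch).sum()
    return np.hypot(gx, gy)

# Synthetic image with a bright square "mark point" centred at row 10, col 14
img = np.zeros((21, 29))
img[8:13, 12:17] = 1.0
edges = sobel_edges(img)
ys, xs = np.nonzero(edges > edges.max() * 0.5)   # strong edge pixels
center = (ys.mean(), xs.mean())                  # geometric centre of the edge
print(center)
```

Because the synthetic square is symmetric, the edge-pixel centroid recovers the centre exactly; on real images the subsequent edge fitting provides the sub-pixel refinement.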
Preferably, the plane Q in step 1) is fitted with the RANSAC method or the least squares method.
Preferably, 50 to 300 mark points are provided, distributed over the calibration plate; the threshold is 0.5 mm to 5 mm.
The method has the following characteristics:
it determines the true coordinates of the mark points on the calibration plate with the photogrammetric system II; the visual detection system obtains the measured coordinates of the mark points by ray intersection; and the precision of the visual detection system is evaluated from the deviation between the measured coordinates and the true coordinates. The whole scheme is simple, effective and practical to implement.
It places no requirement on the type of visual detection system: even when the system measures an arbitrary point on the measured object (one without theoretical CAD coordinates), the method can still evaluate the precision of the visual detection system effectively, providing data support for its normal use.
It can evaluate not only the detection precision of the visual sensor alone, but also the detection precision of the visual sensor together with the entire calibration device (laser tracker or photogrammetric system), taking calibration errors into account. The method has a wide application range, strong universality and an ingenious design.
Drawings
Fig. 1 is a schematic view showing the positional relationship among the robot, the laser tracker, the photogrammetric system II and the calibration plate in Example 2.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
Example 1
In this embodiment, only the detection precision of the visual detection system is evaluated. The visual detection system comprises a visual sensor, which is arranged in the detection station and detects measuring point information on the measured object.
the scheme is as follows:
a precision evaluation method of a vision detection system utilizes a photogrammetric system II to verify the precision of the vision detection system through the following steps:
1) Fixing a calibration plate in the field of view of the vision sensor, wherein the calibration plate is a plane plate and is provided with at least 3 marking points (preferably 50-300 marking points which are scattered on the calibration plate); fixing coding points and a scale around the calibration plate;
the photogrammetric system II collects images of the coding points, the scaleplates and the marking points from different poses and establishes a three-dimensional control field II;
the three-dimensional control field of the photogrammetric system is established according to an equipment instruction manual and a method disclosed in the prior art by adopting an existing scheme.
Marking a coordinate fitting space plane of each mark point in the three-dimensional control field II as a plane Q;
the preferred implementation is as follows: the method of fitting the plane Q is the RANSAC method or the least squares method.
2) Obtain the conversion relation between the visual sensor coordinate system and the three-dimensional control field II by calibration with the photogrammetric system II;
3) The visual sensor captures a two-dimensional image of the calibration plate, and the pixel coordinates of the mark points are obtained from the two-dimensional image;
a spatial line is constructed through the origin (0,0,0) of the camera coordinate system in the visual sensor and the pixel coordinates of a mark point;
4) Transform the spatial line into the three-dimensional control field II using the conversion relation between the visual sensor coordinate system and the three-dimensional control field II, and calculate the intersection of the spatial line with plane Q; look up the coordinates of the mark point corresponding to the spatial line in the three-dimensional control field II and difference them with the intersection coordinates; if the difference is smaller than a threshold, the precision of the visual detection system meets the requirement; otherwise it does not.
For example, if there are 10 mark points, number them 1 to 10; in step 3), a spatial line is constructed through the origin (0,0,0) of the camera coordinate system in the visual sensor and the pixel coordinates of mark point No. 3; in step 4), after the intersection coordinates are obtained, the coordinates of mark point No. 3 in the three-dimensional control field II are looked up and differenced with the intersection coordinates.
The threshold is set according to the precision requirement of the actual detection scene, generally 0.5 mm to 5 mm.
In detail, in step 2), the conversion relation between the visual sensor coordinate system and the three-dimensional control field II can be obtained by calibration with the photogrammetric system II in either of the following two ways:
Method one: the visual sensor is mounted at the end of a robot, and a photogrammetric target ball is mounted at the robot end; the pose of the robot is adjusted, and the photogrammetric system II acquires the target ball coordinates at the different poses. From the coordinates of the photogrammetric target ball in the robot base coordinate system and its coordinates in the three-dimensional control field II at the different poses, the conversion relation between the robot base coordinate system and the three-dimensional control field II is solved; combining this with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, yields the conversion relation between the visual sensor coordinate system and the three-dimensional control field II;
Method two: the visual sensor is fixed in the detection station and photographs the calibration plate to obtain the coordinates of the mark points in the image; the conversion relation between the visual sensor coordinate system and the three-dimensional control field II is solved from the coordinates of the mark points in the image and their coordinates in the three-dimensional control field II.
Specifically, in step 3), in the two-dimensional image the edges of the mark points are extracted with the Canny or Sobel method, edge fitting is performed, and the geometric center coordinates of each mark point so obtained are recorded as its pixel coordinates.
In order to evaluate the precision of the visual detection system more accurately, as a preferred embodiment, in step 3) the pixel coordinates of a plurality of mark points are obtained from the two-dimensional image, and a spatial line is constructed through the origin of the camera coordinate system in the visual sensor and the pixel coordinates of each mark point, yielding a plurality of spatial lines;
in step 4), each spatial line is transformed into the three-dimensional control field II using the conversion relation between the visual sensor coordinate system and the three-dimensional control field II, and the intersection of each transformed spatial line with plane Q is obtained;
the coordinates of the mark point corresponding to each spatial line are looked up in the three-dimensional control field II and differenced with the corresponding intersection coordinates;
the mean or standard deviation of the differences is taken; if the resulting value is smaller than the threshold, the precision of the visual detection system meets the requirement; otherwise it does not.
Example 2
In this embodiment, the visual detection system is externally calibrated with a laser tracker before use.
This embodiment evaluates both the detection error of the visual detection system and the error introduced by the external calibration process of the laser tracker.
The visual detection system comprises a visual sensor, which is arranged in the detection station and detects measuring point information on the measured object; the laser tracker is arranged near the visual sensor and calibrates the conversion relation between the visual sensor coordinate system and the coordinate system of the measured object.
the scheme is as follows:
A precision evaluation method of a visual detection system uses a photogrammetric system II to verify the precision of the visual detection system through the following steps:
1) Fix a calibration plate within the field of view of the visual sensor in the visual detection system; as shown in Fig. 1, the calibration plate 4 is a flat plate provided with at least 3 mark points 5. Fix coding points and a scale bar around the calibration plate;
the photogrammetric system II 3 collects images of the coding points, the scale bar and the mark points from different poses and establishes a three-dimensional control field II;
fit a spatial plane to the coordinates of the mark points in the three-dimensional control field II and denote it as plane Q;
2) Obtain the conversion relation between the visual sensor coordinate system and the three-dimensional control field II by calibration with the photogrammetric system II;
3) The visual sensor captures a two-dimensional image of the calibration plate, and the pixel coordinates of the mark points are obtained from the two-dimensional image; a spatial line is constructed through the origin of the camera coordinate system in the visual sensor and the pixel coordinates of a mark point;
4) Transform the spatial line into the three-dimensional control field II using the conversion relation between the visual sensor coordinate system and the three-dimensional control field II, and calculate the intersection of the spatial line with plane Q; look up the coordinates of the mark point corresponding to the spatial line in the three-dimensional control field II and difference them with the intersection coordinates; if the difference is smaller than a threshold, the precision of the visual detection system meets the requirement; otherwise it does not.
In this embodiment, in step 2), the conversion relation between the visual sensor coordinate system and the three-dimensional control field II is obtained by calibration with the photogrammetric system II as follows:
as shown in Fig. 1, at least four ball seats 6 are fixed around the calibration plate; each ball seat 6 can mount either a laser tracker target ball or a photogrammetric target ball, i.e. the ball seats are compatible with both types of target ball. The mounting positions of the ball seats 6 are not collinear.
Mount photogrammetric target balls on the ball seats 6; the photogrammetric system II 3 acquires the coordinates of each photogrammetric target ball in the three-dimensional control field II and stores them in point set A;
replace the photogrammetric target balls on the ball seats 6 with laser tracker target balls; the laser tracker 2 acquires the coordinates of each laser tracker target ball in the tracker coordinate system and stores them in point set B;
solve the conversion relation between the three-dimensional control field II and the laser tracker coordinate system (RT2 in Fig. 1) from the coordinates of the points in point set A and point set B;
then combine this with the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system, calibrated in advance with the laser tracker 2,
to obtain the conversion relation between the visual sensor coordinate system and the three-dimensional control field II.
The conversion relation between the visual sensor coordinate system and the laser tracker coordinate system (RT1 in Fig. 1) is calibrated in advance with the laser tracker as follows:
the visual sensor is mounted at the end of the robot 1, and a laser tracker target ball is mounted at the robot end; the pose of the robot 1 is adjusted, and the laser tracker 2 acquires the target ball coordinates at the different poses. From the coordinates of the laser tracker target ball in the robot base coordinate system and its coordinates in the laser tracker coordinate system at the different poses, the conversion relation between the robot base coordinate system and the laser tracker coordinate system is solved; combining this with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, yields the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system.
In order to evaluate the precision of the visual detection system more accurately, 300 mark points are used in this embodiment; the visual sensor is used for detecting surface defects of the product.
In step 3), the pixel coordinates of the 300 mark points are obtained from the two-dimensional image, and a spatial line is constructed through the origin of the camera coordinate system in the visual sensor and the pixel coordinates of each of the 300 mark points, yielding 300 spatial lines;
in step 4), each spatial line is transformed into the three-dimensional control field II using the conversion relation between the visual sensor coordinate system and the three-dimensional control field II, and the 300 intersections of the transformed spatial lines with plane Q are obtained;
the coordinates of the mark point corresponding to each spatial line are looked up in the three-dimensional control field II and differenced with the corresponding intersection coordinates, yielding 300 difference values;
the mean of the differences is 0.9732 mm and their standard deviation is 0.285 mm;
both values are smaller than the threshold of 2 mm, so the precision of the visual detection system meets the requirement.
Example 3
In this embodiment, the visual detection system is externally calibrated with a photogrammetric system I before use.
This embodiment evaluates both the detection error of the visual detection system and the error introduced by the external calibration process of the photogrammetric system I.
The visual detection system comprises a visual sensor, which is arranged in the detection station and detects measuring point information on the measured object; the photogrammetric system I is arranged near the visual sensor and calibrates the conversion relation between the visual sensor coordinate system and the coordinate system of the measured object.
The scheme is as follows:
A precision evaluation method for a vision detection system verifies the precision of the system using photogrammetric system II, through the following steps:
1) A calibration plate is fixed within the field of view of the vision sensor of the vision detection system; as shown in Figure 1, the calibration plate 4 is a flat plate provided with at least 3 marking points 5; coding points and a scale bar are fixed around the calibration plate;
photogrammetric system II 3 collects images of the coding points, the scale bar and the marking points from different poses and establishes three-dimensional control field II;
a spatial plane is fitted to the coordinates of the marking points in three-dimensional control field II and recorded as plane Q;
2) Photogrammetric system II is used to calibrate the conversion relation between the vision sensor coordinate system and three-dimensional control field II;
3) The vision sensor collects a two-dimensional image of the calibration plate, and the pixel coordinates of the marking points are obtained from the image; a spatial line is constructed from the origin of the camera coordinate system of the vision sensor through the pixel coordinates of each marking point;
4) The spatial line is transformed into three-dimensional control field II using the conversion relation between the vision sensor coordinate system and three-dimensional control field II, and its intersection with plane Q is calculated; the coordinates of the marking point corresponding to the spatial line are looked up in three-dimensional control field II, and the difference between the looked-up coordinates and the intersection coordinates is computed; if the difference is smaller than a threshold, the precision of the vision detection system meets the requirement, otherwise it does not.
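The geometry behind steps 1), 3) and 4) can be sketched as follows. This is a minimal NumPy illustration under assumed names: `K` stands for the camera intrinsic matrix, and the calibrated conversion relation is assumed to act as a rotation and translation applied to the ray before the intersection; none of these identifiers come from the patent.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane Q through 3-D marker coordinates:
    returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    # the normal is the singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def pixel_to_ray(K, uv):
    """Unit direction of the spatial line through the camera origin
    and pixel uv, in camera coordinates."""
    d = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))
    return d / np.linalg.norm(d)

def intersect(origin, direction, plane_point, plane_normal):
    """Intersection of the line origin + t*direction with plane Q."""
    t = plane_normal @ (plane_point - origin) / (plane_normal @ direction)
    return origin + t * direction
```

To carry the line into control field II, one would map the camera origin as `R @ origin + t` and the direction as `R @ direction` with the calibrated rotation `R` and translation `t`, then call `intersect` with the fitted plane.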
In this embodiment, in step 2), the conversion relation between the vision sensor coordinate system and three-dimensional control field II is obtained by photogrammetric system II calibration as follows:
photogrammetric system I and photogrammetric system II each collect images of the coding points, the scale bar and the marking points, establishing three-dimensional control field I and three-dimensional control field II respectively;
the coordinates of each marking point in three-dimensional control field I are stored in point set I;
the coordinates of each marking point in three-dimensional control field II are stored in point set II;
the conversion relation between three-dimensional control field I and three-dimensional control field II is solved from the coordinates of the points in point set I and point set II;
this is then combined with the conversion relation between the vision sensor coordinate system and three-dimensional control field I, calibrated in advance with photogrammetric system I,
to obtain the conversion relation between the vision sensor coordinate system and three-dimensional control field II.
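The patent only states that the conversion relation between the two control fields is "resolved" from the paired marker coordinates; one standard way to do this for two sets of corresponding 3-D points is SVD-based (Kabsch) rigid registration, sketched here as an assumption rather than the patent's prescribed algorithm.

```python
import numpy as np

def solve_rigid_transform(P, Q):
    """Rigid transform (R, t) minimizing ||R @ P_i + t - Q_i|| over paired
    points, e.g. point set I -> point set II (Kabsch / SVD method)."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)        # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # reject a reflection solution
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

At least three non-collinear corresponding markers are needed; with the 50-300 markers of the calibration plate the solution is heavily over-determined, which averages down the per-point measurement noise.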
The conversion relation between the vision sensor coordinate system and three-dimensional control field I is calibrated in advance with photogrammetric system I by either of the following two methods:
Method A: the vision sensor is mounted at the end of a robot; a photogrammetric target ball is also mounted at the robot end, the robot pose is adjusted, and photogrammetric system I acquires the target ball coordinates at each pose; from the target ball coordinates in the robot base coordinate system and in three-dimensional control field I at the different poses, the conversion relation between the robot base coordinate system and three-dimensional control field I is solved; this is combined with the conversion relation between the vision sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, to yield the conversion relation between the sensor coordinate system and three-dimensional control field I.
Method B: the vision sensor is fixed in the inspection station and photographs the calibration plate to obtain the coordinates of the marking points in the image; the conversion relation between the vision sensor coordinate system and three-dimensional control field I is solved from the marking point coordinates in the image and in three-dimensional control field I.
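Method A above chains three calibrated transforms. Writing each as a 4x4 homogeneous matrix makes the composition explicit; the matrix names below are assumptions chosen for illustration, not terms from the patent.

```python
import numpy as np

def homogeneous(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Method A composes (names assumed):
#   T_field1_base   : robot base -> control field I  (from the target-ball poses)
#   T_base_flange   : robot flange -> base           (from the robot pose)
#   T_flange_sensor : sensor -> flange               (from hand-eye calibration)
def sensor_to_field1(T_field1_base, T_base_flange, T_flange_sensor):
    """Chained conversion relation: sensor coordinates -> control field I."""
    return T_field1_base @ T_base_flange @ T_flange_sensor
```

Keeping the three factors separate mirrors the calibration steps: each factor can be re-estimated independently without redoing the others.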
The method places no requirement on the type of vision detection system; monocular and binocular vision sensors as well as structured-light sensors can all be evaluated for precision. When the vision sensor comprises multiple cameras, each camera collects images and its error is solved separately, and the precision of the vision detection system is then evaluated comprehensively from the error results. The method can be used for performance testing before the vision detection system leaves the factory, providing data support for its normal use.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable others skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications thereof. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (10)

1. A precision evaluation method of a visual inspection system, characterized in that the precision of the visual inspection system is verified using photogrammetric system II through the following steps:
1) A calibration plate is fixed within the field of view of a visual sensor of the visual inspection system; the calibration plate is a flat plate provided with at least 3 marking points; coding points and a scale bar are fixed around the calibration plate;
photogrammetric system II collects images of the coding points, the scale bar and the marking points from different poses and establishes three-dimensional control field II;
a spatial plane is fitted to the coordinates of the marking points in three-dimensional control field II and recorded as plane Q;
2) Photogrammetric system II is used to calibrate the conversion relation between the visual sensor coordinate system and three-dimensional control field II;
3) The visual sensor collects a two-dimensional image of the calibration plate, and the pixel coordinates of the marking points are obtained from the two-dimensional image; a spatial line is constructed from the origin of the camera coordinate system of the visual sensor through the pixel coordinates of each marking point;
4) The spatial line is transformed into three-dimensional control field II using the conversion relation between the visual sensor coordinate system and three-dimensional control field II, and its intersection with plane Q is calculated; the coordinates of the marking point corresponding to the spatial line are looked up in three-dimensional control field II, and the difference between the looked-up coordinates and the intersection coordinates is computed; if the difference is smaller than a threshold, the precision of the visual inspection system meets the requirement, otherwise it does not.
2. The precision evaluation method of a visual inspection system according to claim 1, characterized in that: in step 2), the conversion relation between the visual sensor coordinate system and three-dimensional control field II is obtained by photogrammetric system II calibration using either of the following two methods:
Method one: the visual sensor is mounted at the end of a robot; a photogrammetric target ball is also mounted at the robot end, the robot pose is adjusted, and photogrammetric system II acquires the target ball coordinates at each pose; from the target ball coordinates in the robot base coordinate system and in three-dimensional control field II at the different poses, the conversion relation between the robot base coordinate system and three-dimensional control field II is solved; this is combined with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, to yield the conversion relation between the visual sensor coordinate system and three-dimensional control field II;
Method two: the visual sensor is fixed in the inspection station and photographs the calibration plate to obtain the coordinates of the marking points in the image; the conversion relation between the visual sensor coordinate system and three-dimensional control field II is solved from the marking point coordinates in the image and in three-dimensional control field II.
3. The precision evaluation method of a visual inspection system according to claim 1, characterized in that: when the visual inspection system is calibrated for extrinsic parameters with a laser tracker before use, the conversion relation between the visual sensor coordinate system and three-dimensional control field II is obtained in step 2) by photogrammetric system II calibration as follows:
at least four ball seats, each able to hold either a laser tracker target ball or a photogrammetric target ball, are fixed around the calibration plate;
photogrammetric target balls are mounted on the ball seats, and photogrammetric system II acquires their coordinates in three-dimensional control field II, which are stored in point set A;
the photogrammetric target balls on the ball seats are then replaced with laser tracker target balls, and the laser tracker acquires the coordinates of each target ball in the tracker coordinate system, which are stored in point set B;
the conversion relation between three-dimensional control field II and the laser tracker coordinate system is solved from the coordinates of the points in point set A and point set B;
this is then combined with the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system, calibrated in advance with the laser tracker,
to obtain the conversion relation between the visual sensor coordinate system and three-dimensional control field II.
4. The precision evaluation method of a visual inspection system according to claim 3, characterized in that the conversion relation between the visual sensor coordinate system and the laser tracker coordinate system is calibrated in advance with the laser tracker as follows:
the visual sensor is mounted at the end of a robot, a laser tracker target ball is also mounted at the robot end, the robot pose is adjusted, and the laser tracker acquires the target ball coordinates at each pose; from the target ball coordinates in the robot base coordinate system and in the laser tracker coordinate system at the different poses, the conversion relation between the robot base coordinate system and the laser tracker coordinate system is solved; this is combined with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, to yield the conversion relation between the sensor coordinate system and the laser tracker coordinate system.
5. The precision evaluation method of a visual inspection system according to claim 1, characterized in that: when the visual inspection system is calibrated for extrinsic parameters with photogrammetric system I before use, the conversion relation between the visual sensor coordinate system and three-dimensional control field II is obtained in step 2) by photogrammetric system II calibration as follows:
photogrammetric system I and photogrammetric system II each collect images of the coding points, the scale bar and the marking points, establishing three-dimensional control field I and three-dimensional control field II respectively;
the coordinates of each marking point in three-dimensional control field I are stored in point set I;
the coordinates of each marking point in three-dimensional control field II are stored in point set II;
the conversion relation between three-dimensional control field I and three-dimensional control field II is solved from the coordinates of the points in point set I and point set II;
this is then combined with the conversion relation between the visual sensor coordinate system and three-dimensional control field I, calibrated in advance with photogrammetric system I,
to obtain the conversion relation between the visual sensor coordinate system and three-dimensional control field II.
6. The precision evaluation method of a visual inspection system according to claim 5, characterized in that the conversion relation between the visual sensor coordinate system and three-dimensional control field I is calibrated in advance with photogrammetric system I by either of the following two methods:
Method A: the visual sensor is mounted at the end of a robot; a photogrammetric target ball is also mounted at the robot end, the robot pose is adjusted, and photogrammetric system I acquires the target ball coordinates at each pose; from the target ball coordinates in the robot base coordinate system and in three-dimensional control field I at the different poses, the conversion relation between the robot base coordinate system and three-dimensional control field I is solved; this is combined with the conversion relation between the visual sensor coordinate system and the robot base coordinate system, obtained from hand-eye calibration and the robot pose, to yield the conversion relation between the sensor coordinate system and three-dimensional control field I.
Method B: the visual sensor is fixed in the inspection station and photographs the calibration plate to obtain the coordinates of the marking points in the image; the conversion relation between the visual sensor coordinate system and three-dimensional control field I is solved from the marking point coordinates in the image and in three-dimensional control field I.
7. The precision evaluation method of a visual inspection system according to any one of claims 1 to 6, characterized in that: in step 3), the pixel coordinates of a plurality of marking points are obtained from the two-dimensional image, and a spatial line is constructed from the origin of the camera coordinate system of the visual sensor through the pixel coordinates of each marking point, yielding a plurality of spatial lines;
in step 4), each spatial line is transformed into three-dimensional control field II using the conversion relation between the visual sensor coordinate system and three-dimensional control field II, and the intersection of each transformed spatial line with plane Q is obtained;
the coordinates of the marking point corresponding to each spatial line are looked up in three-dimensional control field II, and the difference between the looked-up coordinates and the corresponding intersection coordinates is computed;
the mean or standard deviation of the differences is taken; if the obtained value is smaller than the threshold, the precision of the visual inspection system meets the requirement, otherwise it does not.
8. The precision evaluation method of a visual inspection system according to any one of claims 1 to 6, characterized in that: in step 3), the edges of the marking points in the two-dimensional image are extracted with the Canny or Sobel method and fitted, and the geometric center coordinates of the resulting marking points are recorded as the pixel coordinates of the marking points.
9. The precision evaluation method of a visual inspection system according to any one of claims 1 to 6, characterized in that the plane Q in step 1) is fitted with the RANSAC method or the least-squares method.
10. The precision evaluation method of a visual inspection system according to any one of claims 1 to 6, characterized in that 50 to 300 marking points are distributed on the calibration plate, and the threshold is 0.5 mm to 5 mm.
CN202211706341.1A 2022-12-29 2022-12-29 Precision evaluation method of visual detection system Pending CN115876166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211706341.1A CN115876166A (en) 2022-12-29 2022-12-29 Precision evaluation method of visual detection system

Publications (1)

Publication Number Publication Date
CN115876166A 2023-03-31



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination