CN115578446A - Unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision - Google Patents


Info

Publication number: CN115578446A
Application number: CN202211285876.6A
Authority: CN (China)
Prior art keywords: mechanical arm, bucket, tail end, camera, engineering vehicle
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 朱本龙
Original and current assignee: Xuzhou Inspection And Testing Center
Application filed by Xuzhou Inspection And Testing Center; priority to CN202211285876.6A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • E: FIXED CONSTRUCTIONS
    • E02: HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F: DREDGING; SOIL-SHIFTING
    • E02F 9/00: Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F 9/20: Drives; Control devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/12: Computing arrangements based on biological models using genetic models
    • G06N 3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming


Abstract

The invention discloses a method for identifying and calibrating the kinematic parameters of an unmanned engineering vehicle based on multi-view vision, comprising the following steps: firstly, planning the camera array of the multi-view vision system; secondly, calibrating the errors of the working space of the multi-view cameras; thirdly, positioning the tail end of the mechanical arm; fourthly, identifying the error parameters; and fifthly, evaluating the motion precision of the engineering vehicle. The invention can measure the position of the actuating mechanisms of engineering vehicles of different sizes with high reliability; it greatly reduces the measurement error of the multi-view vision system so that the measurement precision is high, calibrates the mechanical arm of the engineering vehicle quickly and conveniently, and provides a standard for the evaluation of unmanned engineering vehicles.

Description

Unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision
Technical Field
The invention belongs to the technical field of unmanned engineering vehicles, and particularly relates to a multi-view vision-based method for identifying and calibrating kinematic parameters of an unmanned engineering vehicle.
Background
Engineering vehicles are widely used in mining, traffic construction, military construction and the like, and unmanned engineering vehicle technology has emerged with the development of these applications. Compared with a traditional engineering vehicle, an unmanned engineering vehicle places higher demands on the control precision of the on-board mechanical arm. There are two ways to improve the positioning precision of the tail end of the mechanical arm: (1) using a better production process; (2) calibrating the kinematic parameters of the mechanical arm. The first method greatly increases production cost, so calibration of the kinematic parameters of the mechanical arm becomes the best choice. When calibrating the kinematic parameters of an engineering vehicle, the pose of the tail-end actuating mechanism must be measured with high precision. Common measuring instruments include pull-wire sensors, laser trackers and three-coordinate measuring machines. These measurement methods have significant limitations: laser trackers are expensive; pull-wire sensors require many operation steps; and three-coordinate measuring machines are large and heavy, difficult to move, and require much auxiliary equipment.
Camera-based vision measurement has the advantages of low cost, high detection precision and high detection speed, but current commercial multi-view vision detection systems cannot be applied directly to pose detection of an engineering vehicle. When multi-view cameras are used, there is at present no highly integrated, convenient and effective solution for planning the camera positions; the cameras can only be adjusted manually little by little, which is inefficient, and an irregular camera placement can cause large errors in the measurement result. A method for evaluating the measurement precision of the working space of the multi-view cameras is lacking, and there is no method to assess the accuracy of unmanned operation.
Disclosure of Invention
The invention aims to provide a method for identifying and calibrating the kinematic parameters of an unmanned engineering vehicle based on multi-view vision. The method can measure the position of the actuating mechanisms of engineering vehicles of different sizes with high reliability; it greatly reduces the measurement error of the multi-view vision system so that the measurement precision is high, calibrates the mechanical arm of the engineering vehicle quickly and conveniently, and provides a standard for the evaluation of unmanned engineering vehicles.
In order to achieve the purpose, the invention discloses an unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision, which comprises the following steps:
firstly, planning a multi-view vision system camera array:
designing a scheme based on a genetic algorithm to adjust the layout of the multi-view vision system according to a camera view field projection equation;
secondly, calibrating errors of a working space of the multi-view camera:
calibrating the multi-view vision system by using a one-dimensional calibration rod calibration method;
thirdly, positioning the tail end of the mechanical arm:
performing kinematic modeling on a mechanical arm of the engineering vehicle by using a D-H parameter method, and solving a pose matrix of the tail end of a bucket of the working device so as to control the bucket to move on a movement plane;
after the kinematic modeling of the mechanical arm is completed, measuring the position of the tail end of a bucket of the mechanical arm by using a multi-view vision system;
fourthly, identifying error parameters:
modeling a kinematic parameter error model of the mechanical arm of the engineering vehicle, combining the measured position data of the bucket by using a least square method to finish the identification of the kinematic parameter error of the mechanical arm, and performing error compensation on the kinematic parameter;
and fifthly, evaluating the motion precision of the engineering vehicle:
after the D-H parameters are corrected, the mechanical-arm joint-angle data of several points on the motion plane are input into the engineering vehicle controller; the multi-view vision system then measures the position of the tail-end actuating mechanism to obtain the corresponding spatial point coordinates; these are compared with the given positions of the points to obtain the position errors, and the mean of the point errors is taken as the absolute positioning error after calibration of the mechanical arm.
As a further scheme of the invention: in the first step, the multi-view vision system camera array planning step is as follows:
(1) Constraining the motion of the tail end of a mechanical arm bucket of the unmanned engineering vehicle on a plane, and determining the motion track of the mechanical arm bucket;
(2) Obtaining the projection of each camera view field on a motion plane according to a camera projection equation;
(3) Encoding the camera poses with a genetic algorithm, taking the maximum coverage of the track as the index, ensuring that every point on the track to be measured lies in the overlapping fields of view of at least two cameras, while reducing the distance between the cameras and the projection plane and increasing the overlap of the camera fields of view, so as to improve the measurement precision and finally obtain the pose information of each camera.
As a further scheme of the invention: in the second step, the one-dimensional calibration rod method calibrates the multi-view vision system as follows:
(1) Performing binarization processing on a one-dimensional calibration rod image shot by a camera to extract edges of the calibration rod;
(2) Using the extracted edges of the mark points, computing the pixel coordinates of the centers of the mark points with the least-squares method;
(3) And optimizing internal and external parameters of the camera by taking the reprojection error as an index.
As a further scheme of the invention: in the third step, the measuring process of the position of the tail end of the bucket is as follows:
(1) Selecting a plurality of spatial coordinate points on a motion plane at the tail end of a bucket of a mechanical arm as a given track;
(2) Solving kinematic inverse solutions of the points by using coordinate transformation of the tail end of the bucket relative to a base coordinate system of the engineering vehicle, and solving joint angles of the mechanical arm corresponding to the space points on the motion plane;
(3) Controlling the mechanical arm by using the joint angles, and finally obtaining the actual position data of the light-reflecting target spot at the tail end of the bucket of the engineering vehicle by using multi-view vision;
(4) The position of the reflective target differs from the position of the tail end of the bucket by a fixed coordinate transformation; applying this coordinate transformation to the reflective-target position gives the actual position of the tail end of the bucket.
Compared with the prior art, the invention has the following beneficial effects:
by planning the positions of the multi-view cameras, the position measurement of the executing mechanisms of the engineering vehicles with different sizes can be realized, and the reliability is high;
the measuring error of the multi-view vision system can be greatly reduced by using the method taking the reprojection error as an index, so that the measuring precision of the system is high;
the method based on the D-H parameter error model is used for quickly and conveniently calibrating the mechanical arm of the engineering vehicle, so that the cost can be greatly saved;
a method for evaluating the operation precision of the unmanned engineering vehicle is used, and a standard is provided for evaluation of the unmanned engineering vehicle.
Drawings
FIG. 1 is a schematic view of the multi-view cameras measuring the bucket position.
FIG. 2 shows the projection of the camera fields of view onto the bucket-tip motion plane.
FIG. 3 is a schematic view of the calibration rod.
FIG. 4 is a schematic diagram of the projection relationship of the one-dimensional calibration rod under the multi-view cameras.
FIG. 5 is a simplified schematic of the mechanical arm.
Detailed Description
The invention is further explained below with reference to the drawings, taking a small unmanned excavator as the calibration object.
The unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision comprises the following steps:
firstly, planning a multi-view vision system camera array:
when measuring the position of the tail end of the mechanical-arm actuating mechanism, the camera positions need to be planned so that the multi-view vision system has suitable measuring positions for unmanned engineering vehicles of different sizes; a scheme based on a genetic algorithm is therefore designed according to the camera field-of-view projection equation to adjust the layout of the multi-view vision system. Further, the camera position planning steps are as follows:
(1) The motion of the tail end of the mechanical arm bucket of the unmanned engineering vehicle is constrained on a plane, and the motion track of the mechanical arm bucket is determined.
(2) And obtaining the projection of each camera view field on the motion plane according to the camera projection equation.
(3) Encoding the camera poses with a genetic algorithm, taking the maximum coverage of the track as the index, ensuring that every point on the track to be measured lies in the overlapping fields of view of at least two cameras, while reducing the distance between the cameras and the projection plane and increasing the overlap of the camera fields of view, so as to improve the measurement precision and finally obtain the pose information of each camera.
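The planning loop above can be sketched with a toy genetic algorithm. This is a hedged, minimal illustration rather than the patent's implementation: the cameras are reduced to positions along a line parallel to the motion plane, and the field-of-view half-angle, sampled trajectory, population size and GA operators are all assumed values.

```python
import math
import random

# Illustrative assumptions throughout: cameras sit on a line parallel to the
# motion plane at distance D, and each projects a field of view of half-angle
# HFOV onto the plane; we evolve the camera positions so that every trajectory
# point lies inside at least two overlapping projections.
D = 3.5                               # camera-to-plane distance (m), as in the text
HFOV = math.radians(25)               # assumed half field-of-view
HALF_W = D * math.tan(HFOV)           # half-width of one projected field of view
TRAJ = [i * 0.25 for i in range(25)]  # sampled bucket-tip track, 0..6 m (assumed)

def fitness(cams):
    """Fraction of trajectory points seen by at least two cameras."""
    ok = sum(sum(abs(p - c) <= HALF_W for c in cams) >= 2 for p in TRAJ)
    return ok / len(TRAJ)

def plan_cameras(n_cams=4, pop=40, gens=60, span=6.0, seed=1):
    rng = random.Random(seed)
    popn = [sorted(rng.uniform(0.0, span) for _ in range(n_cams)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 4]                        # selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(g) for g in zip(a, b)]  # uniform crossover
            if rng.random() < 0.3:                      # mutation
                i = rng.randrange(n_cams)
                child[i] = min(span, max(0.0, child[i] + rng.gauss(0.0, 0.3)))
            children.append(sorted(child))
        popn = elite + children
    best = max(popn, key=fitness)
    return best, fitness(best)

best, coverage = plan_cameras()
print(best, coverage)  # camera positions and double-coverage fraction
```

A real system would encode full camera poses and add the distance and overlap terms to the fitness; this sketch only shows the encode-select-crossover-mutate loop on the coverage criterion.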
The invention uses 4 cameras to measure the position of the tail end of the mechanical-arm bucket of the small unmanned excavator. After the cameras are planned using the above steps, as shown in Fig. 1, the 4 cameras are spaced 1.5 meters apart and 1.5 meters above the ground; they lie on a straight line parallel to the motion plane of the bucket tip, at a distance of 3.5 meters from the motion plane. The projections of the 4 camera fields of view on the motion plane then overlap pairwise, so the detection range of the multi-view camera system completely covers the motion range of the bucket tip. The projections of the 4 camera fields of view on the motion plane are shown in Fig. 2: area C1 is the projection of the 1st camera's field of view on the motion plane; C1&C2 is the region where the projections of the 1st and 2nd cameras' fields of view overlap; C2&C3 is the region where the projections of the 2nd and 3rd cameras' fields of view overlap; C3&C4 is the region where the projections of the 3rd and 4th cameras' fields of view overlap; area C4 is the projection of the 4th camera's field of view on the motion plane; the dotted line is the movement locus of the bucket tip on the motion plane.
Secondly, calibrating errors of a working space of the multi-view camera:
after the camera position is planned, the present invention calibrates the multi-view vision system using a one-dimensional calibration bar as shown in fig. 3.
Further, the steps of calibrating the multi-view vision system by using the one-dimensional calibration rod method are mainly as follows:
(1) The one-dimensional calibration rod image captured by the camera is binarized so that the edges of the marker points on the rod can be extracted. Because the retro-reflective targets are much brighter than the surrounding environment and the grey-level contrast is strong, binarizing the grey image of the calibration rod highlights the reflective targets. For the pixel in row r and column c with grey value f(r, c), if the grey value is less than 60 it is changed to 0, and if it is greater than or equal to 60 it is changed to 255:

g(r, c) = 0,    f(r, c) < 60
g(r, c) = 255,  f(r, c) ≥ 60

After the grey image is binarized, the marker-point edges are extracted.
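A minimal numpy sketch of this fixed-threshold binarization (the threshold 60 is the value stated above; the sample image is made up):

```python
import numpy as np

def binarize(gray, thresh=60):
    """Pixels with grey value below the threshold become 0, the rest 255."""
    return np.where(np.asarray(gray) < thresh, 0, 255).astype(np.uint8)

img = np.array([[10, 59, 60],
                [200, 61, 0]])
print(binarize(img))  # reflective pixels (>= 60) become 255, background becomes 0
```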
(2) Using the extracted edges of the mark points, the pixel coordinates of the centre of each mark point are computed with the least-squares method. Because of the influence of noise and distortion, the mark points obtained after binarization are uneven, nearly circular blobs, so the centre must be estimated by fitting.
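The centre extraction can be sketched with an algebraic least-squares circle fit. The patent only states that least squares is applied to the edge pixels, so the Kasa-style fit below is an assumption:

```python
import numpy as np

def circle_center(pts):
    """Least-squares (Kasa) circle fit: x^2 + y^2 = 2*cx*x + 2*cy*y + c."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    rhs = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (cx, cy), np.sqrt(c + cx**2 + cy**2)

# synthetic "edge pixels" on a circle of centre (5, -2) and radius 3
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
edge = np.column_stack([5 + 3 * np.cos(t), -2 + 3 * np.sin(t)])
(cx, cy), r = circle_center(edge)
print(round(cx, 3), round(cy, 3), round(r, 3))  # 5.0 -2.0 3.0
```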
(3) Optimize the intrinsic and extrinsic parameters of the cameras with the reprojection error as the criterion. The projection relationship of the one-dimensional calibration rod under the multi-view cameras is shown in Fig. 4. The three collinear marker points are A, B and C, and A_j, B_j, C_j (j = 1, 2, 3, ..., N) denote the positions of the marker points at the j-th placement of the rod in space. O_i (i = 1, 2, 3, ..., M) is the optical centre of the i-th camera. Let L denote the distances between the points: L_1 = ||A - B||, L_2 = ||B - C||, L_3 = ||A - C||. The image points of the marker points in the imaging plane of camera i are a_ij, b_ij and c_ij.
Given an estimated projection matrix P'_i for camera i, the image point a'_ij = P'_i A_j obtained from it is called the reprojection point; calibration minimizes the distance between the reprojection point and the observed point, as in Eq. (1):

min Σ_i Σ_j d(a_ij, a'_ij)^2    (1)
The function d(x, y) is the geometric distance between x and y, called the reprojection error. Using the collinearity of A_j, B_j and C_j, a spherical coordinate system is established with A_j as its centre, and the positions of B_j and C_j are described relative to A_j. The relation between the spatial points B_j, C_j and A_j is Eq. (2):

n_j = (sin φ_j cos θ_j, sin φ_j sin θ_j, cos φ_j)^T,
B_j = A_j + L_1 n_j,   C_j = A_j + L_3 n_j    (2)

where (θ_j, φ_j) are the spherical angles of the rod direction at the j-th placement. The expression of Eq. (2) is substituted into the projection-coordinate equation (3):

Z_C (u, v, 1)^T = K M (X_W, Y_W, Z_W, 1)^T = H (X_W, Y_W, Z_W, 1)^T    (3)

In Eq. (3), the parameters of the matrix K are determined by the internal structure of the camera, and K is called the intrinsic parameter matrix; the matrix M is the pose of the camera relative to the world coordinate system and is called the extrinsic parameter matrix; H = K M can be seen as a projective transformation matrix, called the homography matrix. X_W, Y_W, Z_W are the coordinates along the three axes of the world coordinate system, u and v are the coordinates along the two axes of the pixel coordinate system, and Z_C is the depth of the point along the optical axis in the camera coordinate system. The reprojection of the three points A_j, B_j, C_j is then minimized as a whole; the minimized reprojection error is Eq. (4):

min_x Σ_j Σ_i [ d(a_ij, a'_ij)^2 + d(b_ij, b'_ij)^2 + d(c_ij, c'_ij)^2 ]    (4)

In Eq. (4), x is the variable containing the intrinsic and extrinsic parameters; because the number of parameters to be optimized is large, the sparse Levenberg-Marquardt (LM) algorithm is used for the solution.
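The projection model of Eq. (3) can be exercised directly. The intrinsics K and the pose M below are assumed example values, not calibration results:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # assumed intrinsic matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
M = np.hstack([np.eye(3), [[0.0], [0.0], [3.5]]])  # assumed pose: plane 3.5 m ahead

def project(Xw):
    """World point (3,) -> pixel (u, v): Z_C [u, v, 1]^T = K M [Xw, 1]^T."""
    x = K @ M @ np.append(Xw, 1.0)
    return x[:2] / x[2]            # divide out the depth Z_C

def reproj_error(uv_observed, Xw):
    """Geometric distance d between an observed point and its reprojection."""
    return float(np.linalg.norm(uv_observed - project(Xw)))

Xw = np.array([0.5, -0.2, 0.0])
uv = project(Xw)
print(uv, reproj_error(uv, Xw))  # for a self-consistent model the error is 0.0
```

In the real calibration the same residual is summed over all markers and cameras and driven down by sparse LM over the stacked intrinsic/extrinsic parameter vector.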
Thirdly, positioning the tail end of the mechanical arm:
firstly, kinematic parameter modeling is carried out on the mechanical arm of the unmanned excavator. Ignoring the horizontal slewing of the vehicle's turret, the mechanical arm can be approximated as a 3-degree-of-freedom linkage, as shown in Fig. 5. The arm is modeled kinematically with the D-H parameter model to obtain the pose matrix of the bucket tip of the working device, so that the bucket can be controlled to move on a motion plane. Specifically, a base coordinate system is established on the base of the mechanical arm; each joint is transformed in turn towards the base coordinate system, each transformation yielding a transformation matrix, and finally the transformation of the actuating-mechanism coordinates relative to the base coordinates is obtained.
The D-H coordinate transformation from coordinate system i to coordinate system i+1 is Eq. (5):

A_i = [ cos θ_i    -sin θ_i cos α_i    sin θ_i sin α_i    a_i cos θ_i ]
      [ sin θ_i     cos θ_i cos α_i   -cos θ_i sin α_i    a_i sin θ_i ]
      [ 0           sin α_i            cos α_i            d_i         ]
      [ 0           0                  0                  1           ]    (5)

In Eq. (5), i is the number of each joint; α_i is the joint twist angle; θ_i is the joint angle; a_i is the link length; d_i is the offset of the two adjacent coordinate systems i and i+1 along the Z-axis of coordinate system i. Multiplying the transformation matrices of the working devices in sequence gives the coordinate transformation matrix of the tail end of the mechanical-arm bucket relative to the base coordinate system, Eq. (6):

T = A_0 A_1 A_2    (6)
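Eq. (5) translates directly into code; a small sketch following the classic D-H convention (the link lengths in the example are made-up values):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous link transform of Eq. (5) for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# chaining the link transforms as in Eq. (6); zero joint angles stretch the
# planar arm out along x, so the tip position is just the sum of link lengths
T = dh_transform(0, 0, 2.0, 0) @ dh_transform(0, 0, 1.5, 0) @ dh_transform(0, 0, 0.8, 0)
print(T[:2, 3])  # tip position of the fully stretched arm: x = 4.3, y = 0
```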
According to Eq. (6), restricted to the motion plane, the pose matrix of the tail end of the bucket of the working device is Eq. (7):

T = [ c_012   -s_012   a_1 c_0 + a_2 c_01 + a_3 c_012 ]
    [ s_012    c_012   a_1 s_0 + a_2 s_01 + a_3 s_012 ]
    [ 0        0       1                              ]    (7)

wherein: c_0 = cos θ_0, c_01 = cos(θ_0 + θ_1), c_012 = cos(θ_0 + θ_1 + θ_2), s_0 = sin θ_0, s_01 = sin(θ_0 + θ_1), s_012 = sin(θ_0 + θ_1 + θ_2), and a_1, a_2, a_3 are the link lengths.

In Eq. (7), the upper-left 2×2 block

[ c_012   -s_012 ]
[ s_012    c_012 ]

is the pose (orientation) matrix, and the last column is the position matrix.

The position of the tail end of the arm bucket is Eq. (8):

x = a_1 c_0 + a_2 c_01 + a_3 c_012
z = a_1 s_0 + a_2 s_01 + a_3 s_012    (8)

In Eq. (8), x represents the horizontal distance of the bucket tip from the centre of the arm base, and z represents the vertical distance of the bucket tip from the centre of the arm base.
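A sketch of Eq. (8); the link lengths a_1..a_3 are made-up values for a small excavator, not taken from the patent:

```python
import math

def bucket_tip(a1, a2, a3, th0, th1, th2):
    """Planar forward kinematics of Eq. (8): joint angles -> tip (x, z)."""
    x = a1 * math.cos(th0) + a2 * math.cos(th0 + th1) + a3 * math.cos(th0 + th1 + th2)
    z = a1 * math.sin(th0) + a2 * math.sin(th0 + th1) + a3 * math.sin(th0 + th1 + th2)
    return x, z

x, z = bucket_tip(2.0, 1.5, 0.8, 0.0, 0.0, 0.0)
print(x, z)  # fully stretched arm: 4.3 0.0
```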
After the robotic arm kinematics modeling is completed, its bucket tip position is measured using a multi-vision system.
Further, the measurement process is as follows: several spatial coordinate points on the motion plane of the bucket tip are selected as the given track. Using the coordinate transformation of the bucket tip relative to the base coordinate system of the unmanned excavator, the inverse kinematic solution of each point is computed, giving the mechanical-arm joint angles corresponding to the spatial points on the motion plane. The mechanical arm is controlled with these joint angles, and the actual position of the reflective target at the bucket tip of the unmanned excavator is finally obtained with multi-view vision. The reflective-target position differs from the bucket-tip position by a fixed coordinate transformation; applying this coordinate transformation to the measured target position gives the actual bucket-tip position.
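The inverse-kinematics step can be sketched in closed form for the planar 3-link model: fix the bucket orientation phi = θ_0 + θ_1 + θ_2, subtract the bucket link to get the "wrist", and solve the remaining two-link problem. The elbow branch and the link lengths are assumptions for illustration, not the patent's solver:

```python
import math

def ik_planar(x, z, phi, a1, a2, a3):
    """Joint angles (th0, th1, th2) reaching tip (x, z) with orientation phi."""
    xw, zw = x - a3 * math.cos(phi), z - a3 * math.sin(phi)  # wrist position
    c1 = (xw * xw + zw * zw - a1 * a1 - a2 * a2) / (2 * a1 * a2)
    th1 = math.acos(max(-1.0, min(1.0, c1)))                 # elbow angle (one branch)
    th0 = math.atan2(zw, xw) - math.atan2(a2 * math.sin(th1), a1 + a2 * math.cos(th1))
    return th0, th1, phi - th0 - th1

a1, a2, a3 = 2.0, 1.5, 0.8
th0, th1, th2 = ik_planar(2.8, 1.2, 0.2, a1, a2, a3)
# round-trip through the forward model of Eq. (8)
x = a1 * math.cos(th0) + a2 * math.cos(th0 + th1) + a3 * math.cos(th0 + th1 + th2)
z = a1 * math.sin(th0) + a2 * math.sin(th0 + th1) + a3 * math.sin(th0 + th1 + th2)
print(round(x, 6), round(z, 6))  # 2.8 1.2
```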
Fourthly, error parameter identification:
a kinematic parameter error model is built for the mechanical arm of the engineering vehicle; combined with the measured bucket position data, the least-squares method is used to complete the identification of the kinematic parameter errors of the mechanical arm, and the kinematic parameters are then error-compensated.
Specifically, according to the kinematic model of the mechanical arm in step three, the pose of the tail end of the arm is:

P = F(a, d, α, θ)

Because of the influence of the manufacturing process and assembly, the four kinematic parameter sets of the mechanical arm deviate somewhat from their theoretical values. Since the rotation angle of each joint is read from the encoder disc, the joint angle is taken to be error-free; the three kinematic parameter sets a, d, α then carry errors Δa, Δd, Δα, and the actually measured pose of the mechanical arm is:

P' = F(a + Δa, d + Δd, α + Δα, θ)
ΔP = P - P'
For small kinematic errors, the above equation can be approximated by the linear equation (9):

ΔP ≈ (∂F/∂a) Δa + (∂F/∂d) Δd + (∂F/∂α) Δα    (9)

At an arbitrary spatial position P_i the error equation is Eq. (10):

(ΔP_ix, ΔP_iy, ΔP_iz)^T = J_i ΔX    (10)

In Eq. (10), ΔP_ix, ΔP_iy and ΔP_iz are the errors of the arm tip position in the X, Y and Z directions respectively, J_i is the Jacobian of F with respect to the kinematic parameters at P_i, and ΔX is the vector of parameter errors. After measuring the coordinates of n points in the working space of the mechanical arm, the above equations can be stacked into the form:
AΔX=b (11)
In Eq. (11), A is the Jacobian matrix corresponding to the stacked equations, grouped in blocks of three rows (one block J_i per measured point):

A = (J_1^T, J_2^T, ..., J_n^T)^T

b is the error vector of the n sets of data:

b = (ΔP_1x, ΔP_1y, ΔP_1z, ΔP_2x, ΔP_2y, ΔP_2z, ..., ΔP_nx, ΔP_ny, ΔP_nz)^T

ΔX is the error of the D-H parameters:

ΔX = (Δa_1, Δa_2, Δd_1, Δd_2, Δα_1, Δα_2, Δθ_1, Δθ_2)^T

After the error model of the D-H parameters is obtained, the optimal values of the D-H parameters can be found with the least-squares method so as to minimize the error of the arm tip position. The specific expression is:

ΔX = (A^T A)^{-1} A^T b
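The identification step can be sketched with synthetic data: a random stacked Jacobian A (three rows per measured point, eight D-H error parameters as above) and a known error vector that is recovered in the least-squares sense. The normal-equation form ΔX = (AᵀA)⁻¹Aᵀb is computed here with numpy's numerically safer lstsq:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_params = 10, 8                        # 8 D-H error parameters
A = rng.normal(size=(3 * n_points, n_params))     # synthetic stacked Jacobian
dX_true = rng.normal(scale=1e-3, size=n_params)   # "true" parameter errors
b = A @ dX_true                                   # stacked tip-position errors

dX = np.linalg.lstsq(A, b, rcond=None)[0]         # least-squares solution
print(np.allclose(dX, dX_true))  # True: the injected errors are recovered
```

With real measurements b is noisy and A comes from differentiating the D-H model, but the solve is the same overdetermined least-squares problem.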
and fifthly, evaluating the motion precision of the engineering vehicle:
after the data are processed by using the least square one-time completion method, the D-H parameter error vector which is in accordance with the expectation can be obtained. And compensating the original D-H parameter by using the D-H parameter error amount to obtain the calibrated D-H parameter. After D-H parameter correction, mechanical arm joint angle data of the previous 5 points on the motion plane are input into a controller of the small unmanned excavator, the position of the tail end actuating mechanism is measured by using a multi-view vision system to obtain 5 space point coordinates, the space point coordinates are compared with the given positions of the 5 points to obtain errors in the positions, and the average value of the 5 point errors is taken as the absolute positioning error after mechanical arm calibration.

Claims (4)

1. The unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision is characterized by comprising the following steps of:
firstly, planning a multi-view vision system camera array:
designing a scheme based on a genetic algorithm to adjust the layout of the multi-view vision system according to a camera view field projection equation;
secondly, calibrating errors of a working space of the multi-view camera:
calibrating the multi-view vision system by using a one-dimensional calibration rod calibration method;
thirdly, positioning the tail end of the mechanical arm:
performing kinematic modeling on a mechanical arm of the engineering vehicle by using a D-H parameter method, and solving a pose matrix of the tail end of a bucket of the working device so as to control the bucket to move on a movement plane;
after the mechanical arm kinematics modeling is completed, measuring the position of the tail end of a bucket of the mechanical arm kinematics modeling by using a multi-view vision system;
fourthly, error parameter identification:
modeling a kinematic parameter error model of the mechanical arm of the engineering vehicle, combining the measured position data of the bucket by using a least square method to finish the identification of the kinematic parameter error of the mechanical arm, and performing error compensation on the kinematic parameter;
and fifthly, evaluating the motion precision of the engineering vehicle:
after the D-H parameters are corrected, the mechanical-arm joint-angle data of several points on the motion plane are input into the controller of the engineering vehicle; the multi-view vision system then measures the position of the tail-end actuating mechanism to obtain the corresponding spatial point coordinates; these are compared with the given positions of the points to obtain the position errors, and the mean of the point errors is taken as the absolute positioning error after calibration of the mechanical arm.
2. The method for identification and calibration of kinematic parameters of an unmanned vehicle based on multi-vision according to claim 1, wherein in the first step, the multi-vision system camera array planning step is as follows:
(1) Constraining the motion of the tail end of a mechanical arm bucket of the unmanned engineering vehicle on a plane, and determining the motion track of the mechanical arm bucket;
(2) Obtaining the projection of each camera view field on a motion plane according to a camera projection equation;
(3) Encoding the camera poses with a genetic algorithm, taking the maximum coverage of the track as the index, ensuring that every point on the track to be measured lies in the overlapping fields of view of at least two cameras, while reducing the distance between the cameras and the projection plane and increasing the overlap of the camera fields of view, so as to improve the measurement precision and finally obtain the pose information of each camera.
3. The method for identifying and calibrating the kinematic parameters of the unmanned vehicle based on the multi-vision according to claim 1 or 2, wherein in the second step, the step of calibrating the multi-vision system by the one-dimensional calibration rod method comprises the following steps:
(1) Performing binarization processing on a one-dimensional calibration rod image shot by a camera to extract edges of the marker points on the calibration rod;
(2) Using the extracted edges of the mark points, computing the pixel coordinates of the centers of the mark points with the least-squares method;
(3) And optimizing internal and external parameters of the camera by taking the reprojection error as an index.
4. The multi-view-vision-based kinematic parameter identification and calibration method for an unmanned engineering vehicle according to claim 3, wherein in the third step, the bucket tail-end position is measured as follows:
(1) Selecting a plurality of spatial coordinate points on the motion plane of the mechanical arm's bucket tail end as the given trajectory;
(2) Solving the inverse kinematics of these points via the coordinate transformation of the bucket tail end relative to the engineering vehicle's base coordinate system, obtaining the mechanical arm joint angles corresponding to each spatial point on the motion plane;
(3) Driving the mechanical arm with these joint angles and measuring, by multi-view vision, the actual position of the reflective target point at the bucket tail end of the engineering vehicle;
(4) Since the reflective target point and the bucket tail end differ by a fixed coordinate transformation, applying that transformation to the measured target-point position to obtain the actual bucket tail-end position.
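The fixed coordinate transformation of step (4) can be sketched as a rigid-body offset: the bucket tail end is recovered from the measured target position given the bucket's orientation and a once-calibrated offset. The rotation, offset, and numbers below are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def target_to_bucket_tip(p_target, R_bucket, t_offset):
    """Recover the bucket tail-end position from the reflective target.

    p_target : target position in the world frame (multi-view triangulation)
    R_bucket : bucket orientation in the world frame (3x3 rotation matrix)
    t_offset : fixed tip-minus-target offset in the bucket frame (calibrated)
    """
    return np.asarray(p_target) + np.asarray(R_bucket) @ np.asarray(t_offset)

# Hypothetical example: bucket rotated 90° about z,
# tail end 0.2 m ahead of the target along the bucket's x-axis
Rz90 = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
tip = target_to_bucket_tip([1.0, 2.0, 0.5], Rz90, [0.2, 0.0, 0.0])
print(tip)  # [1.0, 2.2, 0.5]
```

The same relation, applied point by point along the trajectory, yields the actual tail-end positions compared against the commanded ones during parameter identification.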
CN202211285876.6A 2022-10-20 2022-10-20 Unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision Pending CN115578446A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211285876.6A CN115578446A (en) 2022-10-20 2022-10-20 Unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision


Publications (1)

Publication Number Publication Date
CN115578446A true CN115578446A (en) 2023-01-06

Family

ID=84587221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211285876.6A Pending CN115578446A (en) 2022-10-20 2022-10-20 Unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision

Country Status (1)

Country Link
CN (1) CN115578446A (en)

Similar Documents

Publication Publication Date Title
CN111505606B (en) Method and device for calibrating relative pose of multi-camera and laser radar system
CN111095355B (en) Real-time positioning and orientation tracker
JP4267005B2 (en) Measuring apparatus and calibration method
CN113001535B (en) Automatic correction system and method for robot workpiece coordinate system
CN111531547B (en) Robot calibration and detection method based on vision measurement
CN108827264B (en) Mobile workbench and its mechanical arm optics target positioning device and localization method
CN113211431B (en) Pose estimation method based on two-dimensional code correction robot system
CN106457562A (en) Method for calibrating a robot and a robot system
JP2005300230A (en) Measuring instrument
CN106737859B (en) External parameter calibration method for sensor and robot based on invariant plane
TWI762371B (en) Automated calibration system and method for the relation between a profile scanner coordinate frame and a robot arm coordinate frame
CN110488838B (en) Accurate repeated positioning method for indoor autonomous navigation robot
CN112894209A (en) Automatic plane correction method for intelligent tube plate welding robot based on cross laser
JP6855491B2 (en) Robot system, robot system control device, and robot system control method
CN113028990B (en) Laser tracking attitude measurement system and method based on weighted least square
CN114283391A (en) Automatic parking sensing method fusing panoramic image and laser radar
CN114474003A (en) Vehicle-mounted construction robot error compensation method based on parameter identification
CN112958960A (en) Robot hand-eye calibration device based on optical target
CN109712198B (en) Calibration method of advanced driving assistance system
CN110211175B (en) Method for calibrating space pose of collimated laser beam
CN113781558A (en) Robot vision locating method with decoupled posture and position
CN115578446A (en) Unmanned engineering vehicle kinematic parameter identification and calibration method based on multi-view vision
US20230100182A1 (en) Alignment Of A Radar Measurement System With A Test Target
CN113960564B (en) Laser comprehensive reference system for underwater detection and ranging and calibrating method
CN113276115A (en) Hand-eye calibration method and device without robot movement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination