CN114663517A - Simulated target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method


Info

Publication number
CN114663517A
CN114663517A
Authority
CN
China
Prior art keywords: simulation, plane, target, obtaining, code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210115285.8A
Other languages
Chinese (zh)
Inventor
Wei Cheng (魏承)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202210115285.8A
Publication of CN114663517A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/17 Mechanical parametric or variational design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F 2119/14 Force analysis or force optimisation, e.g. static or dynamic forces

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

A simulated target pose acquisition method, a target capture method, and a target capture device based on MBDyn simulation, relating to the field of spacecraft pose measurement and capture control methods. Addressing the problem that the prior art offers no visual simulation technology for verifying the feasibility of target attitude estimation and capture methods and for providing a reference for the design of visual-servo-based on-orbit capture schemes, the application provides a simulated target capture scheme, which specifically comprises: a simulated target pose acquisition method that establishes a multi-body dynamics simulation in the MBDyn environment and outputs an image of the object to be captured, a planar Aruco code being attached to the object to be captured, comprising the following steps: a camera calibration step, acquiring the internal parameters of the camera used in the simulation; a code identification step, identifying the planar Aruco code to obtain its corresponding serial number; and a pose solving step, obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number. The method is suitable for simulating Aruco-code-based target attitude estimation and capture methods in the MBDyn simulation environment.

Description

Simulation target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method
Technical Field
The invention relates to the field of spacecraft pose measurement and capture control methods, and in particular to a method for estimating the pose of, and capturing, a target based on Aruco codes in an MBDyn simulation environment.
Background
With the increasing complexity of space mission requirements, ensuring the stable on-orbit operation of satellites has become one of the urgent problems in space engineering and scientific research. To prolong the on-orbit service life of satellites and avoid abandoning failed satellites whenever possible, on-orbit despinning, capture, and maintenance by space robots have become hot topics in the aerospace field. In the early days of on-orbit servicing, astronauts usually performed extravehicular work; but as aerospace technology has developed, the high cost and high risk of that approach mean that more and more on-orbit servicing operations are completed by space robots. Improving the service capability of space robots and combining them with the leading technologies of various industries and disciplines has become one of the key themes in the development of the aerospace industry. Robot visual servoing is the key to unmanned autonomous on-orbit servicing by space robots, and the technology of providing command parameters for on-orbit servicing through a robot visual servo system has extremely high research value.
Optical assist codes are commonly used as an aid to feature point extraction. In the unstable illumination environment of space, attaching an Aruco code to the target can overcome the problems of variable space illumination and indistinct spacecraft surface features, facilitating feature point extraction and reducing extraction and matching errors. However, no visual simulation technology currently exists for verifying the feasibility of target attitude estimation and capture methods and for providing a reference for the design of visual-servo-based on-orbit capture schemes.
Disclosure of Invention
Aiming at the problem that the prior art lacks a visual simulation technology for verifying the feasibility of target attitude estimation and capture methods and for providing a reference for the design of visual-servo-based on-orbit capture schemes, this application takes on-orbit capture by a space robot as the task background. It computes and solves through the mapping relation between the visual image and the real world, predicts the position and pose of the target, provides the state of the object to be captured to the mechanical arm, and then designs a capture scheme from that state. The scheme specifically comprises:
the method for acquiring the simulated target pose is based on establishing multi-body dynamic simulation through an MBDyn environment and outputting an image of an object to be captured, wherein a plane Aruco code is attached to the object to be captured, and comprises the following steps:
a camera calibration step, in which internal parameters of a camera used in simulation are acquired;
and (3) code identification: obtaining a serial number corresponding to the plane Aruco code by identifying the plane Aruco code;
pose solving step: and obtaining the attitude transformation matrix of the object to be captured through the internal reference and the serial number.
Further, the camera calibration step specifically comprises:
acquiring the internal parameters of the camera used in the simulation by the Zhang Zhengyou checkerboard calibration method.
Further, the code identification step specifically comprises the following steps:
a preprocessing step: preprocessing the image to eliminate illumination interference;
a contour extraction step: excluding contours that are not planar Aruco codes to obtain candidate contours;
a feature point extraction step: extracting corners with the Shi-Tomasi corner detection method, and obtaining the homography matrix between the collected image and the target image from the corners;
a matrix solving step: obtaining the serial number corresponding to the planar Aruco code from the candidate contours and the homography matrix.
Further, the pose solving step specifically comprises:
obtaining the pixel coordinates of the planar Aruco code and their corresponding world coordinates through the correspondence between the planar Aruco code and the standard Aruco code, and from these coordinates obtaining the rotation matrix R and translation vector t between the pixel coordinate system of the planar Aruco code and the world coordinate system.
Based on the same inventive concept, the application also provides a target capture method based on MBDyn simulation, the method comprising:
an image output step: establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of the object to be captured, a planar Aruco code being attached to the object to be captured;
a camera calibration step: acquiring the internal parameters of the camera used in the simulation;
a code identification step: identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving step: obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design step: obtaining the mechanical arm joint angle plan from the attitude transformation matrix;
a capture step: inputting the mechanical arm joint angle plan to the mechanical arm.
Further, the control design step comprises:
a planning step: obtaining the mechanical arm joint angle plan through joint Matlab and MBDyn simulation;
a control step: obtaining the control law by the Lagrange method.
Based on the same inventive concept, the application also provides a target capture device based on MBDyn simulation, the device comprising:
an image output module: used for establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of the object to be captured, a planar Aruco code being attached to the object to be captured;
a camera calibration module: used for acquiring the internal parameters of the camera used in the simulation;
a code identification module: used for identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving module: used for obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design module: used for obtaining the mechanical arm joint angle plan from the attitude transformation matrix;
a capture module: used for inputting the mechanical arm joint angle plan to the mechanical arm.
Based on the same inventive concept, the application also provides an aerospace on-orbit target capture method, a planar Aruco code being attached to the object to be captured, the method comprising:
a camera calibration step: acquiring the internal parameters of the camera used;
a code identification step: identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving step: obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design step: obtaining the mechanical arm joint angle plan from the attitude transformation matrix.
A computer device comprises a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, it executes the above simulated target pose acquisition method.
A computer device comprises a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, it executes the above MBDyn-simulation-based target capture method.
The advantages of the invention are:
The MBDyn-simulation-based target capture method completes target pose estimation with planar Aruco codes, uses the visual servo system to compute the attitude of the target spacecraft in the simulation environment, designs an autonomous capture scheme, and completes capture control of the captured target.
The method computes and solves through the mapping relation between the visual image and the real world, predicts the position and pose of the target, provides the state of the object to be captured to the mechanical arm, and designs the capture scheme from the obtained state. It thereby solves the problem that the prior art has no visual simulation technology for verifying the feasibility of target attitude estimation and capture methods, and provides a reference for the design of visual-servo-based on-orbit capture schemes.
The method is suitable for simulating Aruco-code-based target attitude estimation and capture methods in the MBDyn simulation environment.
Drawings
Fig. 1 is a flowchart of the simulated target pose acquisition method according to the first embodiment;
Fig. 2 is a flowchart of the MBDyn-simulation-based target capture method according to the fifth embodiment;
Fig. 3 is a diagram illustrating the establishment of a multi-body dynamics simulation in the MBDyn environment and the output of an image of the object to be captured according to an embodiment;
Fig. 4 is a schematic diagram of the simulation process of the MBDyn-simulation-based target capture method according to the fifth embodiment;
where a and b show successive states of the mechanical arm approaching the target, and c shows the state in which the mechanical arm has completed the grasp.
Detailed Description
To fully present the advantages and benefits of the technical solutions provided in the present application, several embodiments are now described further with reference to the accompanying drawings.
First embodiment: this embodiment provides a simulated target pose acquisition method, based on establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of the object to be captured, a planar Aruco code being attached to the object to be captured, the method comprising the following steps:
a camera calibration step: acquiring the internal parameters of the camera used in the simulation;
a code identification step: identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving step: obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number.
Specifically, a space robot capture system is established based on MBDyn, vision-fused simulation is carried out, and the photographs taken by the space robot are output.
1. The multi-body dynamics software supports the automatic generation and solution of dynamics models of space mechanisms with various topologies. It provides a flexible and convenient preprocessing configuration mode, automatically checks the topology type and connection mode of the space multi-body system, and can visually display the configured multi-body system. It also supports the description of multi-body systems of various topologies with automatic assembly and solution of the dynamics models, and offers three-dimensional visualization during the solution process or after the simulation finishes;
2. the geometric constraint relations in the system are realized as corresponding mathematical algebraic models, and assembling them yields a solvable differential-algebraic dynamic system; in addition, for intuitive and convenient input and output, the dynamics module provides a three-dimensional visualization post-processing function.
in a second embodiment, the present embodiment is further limited to the method for acquiring a pose of a simulation target provided in the first embodiment, and the camera calibration step specifically includes:
and acquiring internal parameters of a camera used in simulation by a Zhang Zhengyou chessboard lattice calibration method.
Specifically, the conversion relation between the pixel coordinate system and the world coordinate system is obtained from the pinhole camera model, as shown in formula 1:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{1} $$

where $[u\ v\ 1]^T$ is the pixel coordinate of an arbitrary point $P$ in the image, $[X_W\ Y_W\ Z_W\ 1]^T$ is the world coordinate of $P$, $dx$ and $dy$ are the physical dimensions of a single pixel on the camera imaging plane, $f$ is the camera focal length, $(u_0, v_0)$ is the pixel coordinate of the optical center, and $s$ is a scale factor whose value equals $Z_W$; $R$ denotes the rotation matrix between the world coordinate system and the camera coordinate system, and $t$ the position vector between them. Suppose an arbitrary point $P$ on the normalized plane has coordinates $[x, y]^T$; introducing the radial distortion parameters $k_1$, $k_2$, $k_3$ of the distortion model gives the relationship:
$$ \begin{aligned} x_{dis} &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\ y_{dis} &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \end{aligned} \tag{2} $$
where $[x_{dis}, y_{dis}]^T$ denotes the normalized coordinates of a point affected by radial distortion in the actual scene, and $r$ denotes the distance of the point $P$ from the origin. The tangential distortion parameters $p_1$, $p_2$ are then introduced for correction, giving the corrected distorted-point coordinates:
$$ \begin{aligned} x'_{dis} &= x_{dis} + 2 p_1 x y + p_2 (r^2 + 2 x^2) \\ y'_{dis} &= y_{dis} + p_1 (r^2 + 2 y^2) + 2 p_2 x y \end{aligned} \tag{3} $$
Substituting formula 2 into formula 3 gives the full distortion model:
$$ \begin{aligned} x'_{dis} &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2) \\ y'_{dis} &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y \end{aligned} \tag{4} $$
The original correct position of a distorted point is recovered by mapping its coordinates to the camera pixel plane through the camera internal parameters, as expressed by formula 5:
$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x'_{dis} \\ y'_{dis} \\ 1 \end{bmatrix} \tag{5} $$
where $f_x$ denotes the focal length in the $x$ direction ($f_y$ likewise in the $y$ direction), $C_x$ the $x$ coordinate of the optical center, and $C_y$ the $y$ coordinate of the optical center. Following this principle, the internal parameters of the camera used in the simulation experiment are obtained with the Zhang Zhengyou checkerboard calibration method, as in the sketch below.
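As an illustration of this step, a minimal sketch of checkerboard calibration with opencv follows; the board dimensions, square size, and image file pattern are assumptions for illustration rather than values from the patent:

```cpp
// A minimal sketch of Zhang Zhengyou checkerboard calibration with opencv
// (assumed board size, square size, and file pattern, not the patent's code).
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);  // inner corners per row and column (assumed)
    const float squareSize = 0.025f; // checkerboard square edge, meters (assumed)

    // World coordinates of the board corners on the planar target (Z = 0).
    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.emplace_back(c * squareSize, r * squareSize, 0.f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<cv::String> files;
    cv::glob("calib_*.png", files); // calibration views rendered by the simulation
    cv::Size imageSize;
    for (const auto& f : files) {
        cv::Mat gray = cv::imread(f, cv::IMREAD_GRAYSCALE);
        if (gray.empty()) continue;
        imageSize = gray.size();
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(gray, boardSize, corners)) continue;
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(board);
    }

    // K holds the fx, fy, Cx, Cy of formula 5; distCoeffs holds the
    // k1, k2, p1, p2, k3 of formulas 2 and 3.
    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    std::cout << "RMS error: " << rms << "\nK =\n" << K
              << "\ndistCoeffs = " << distCoeffs << std::endl;
    return 0;
}
```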
Third embodiment: this embodiment is a further limitation of the simulated target pose acquisition method provided in the second embodiment, in which the code identification step specifically comprises the following steps:
a preprocessing step: preprocessing the image to eliminate illumination interference;
a contour extraction step: excluding contours that are not planar Aruco codes to obtain candidate contours;
a feature point extraction step: extracting corners with the Shi-Tomasi corner detection method, and obtaining the homography matrix between the collected image and the target image from the corners;
a matrix solving step: obtaining the serial number corresponding to the planar Aruco code from the candidate contours and the homography matrix.
Specifically, the Aruco code is a binary square marker: inside it is a matrix of black and white squares, and its outermost ring is a black border. In a real, complex environment the outer black border allows the object to be captured to be located quickly, while the internal binary matrix encodes the marker's serial number in a dictionary. The visual servo system is built on opencv; for the experimental requirements, the Aruco dictionary of 6x6-bit markers with 250 entries is selected. The main identification process is as follows:
1. image preprocessing: for the target pose identification, the undistorted RGB image needs to be subjected to image preprocessing. In order to extract gradient information in a picture acquired by a camera more accurately, the acquired three-channel color image needs to be changed into a single-channel gray image. After graying processing, the image is subjected to adaptive thresholding, and an Otsu (maximum inter-class variance) adaptive thresholding method is adopted in the stage. Unlike general fixed value thresholding, the threshold determined for each pixel by adaptive thresholding is not identical throughout but is determined according to the pixel distribution adjacent to each pixel. By adopting the method, the interference of illumination can be eliminated to a certain degree, and more accurate gradient information can be obtained. There is equation 6 as follows:
Figure BDA0003496074110000061
where dst (x, y) represents the output array, src (x, y) represents the original array, and T (x, y) represents the threshold. And when the original array is larger than the threshold value, the original array takes the designed maximum array value and outputs the maximum array value, and when the original array is other than the designed maximum array value, the maximum array value is set as 0.
2. Contour extraction and screening: after preprocessing, contours are extracted from the image. The contours are first screened by area and perimeter to exclude those that cannot belong to the captured object, and contours that are not square are then screened out using the topological invariance of the square. Only the possible contours are finally retained, which reduces the computational load and improves identification accuracy, as in the sketch below.
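A minimal sketch of such screening with opencv follows; the area threshold and the polygon-approximation tolerance are assumed values:

```cpp
// A minimal sketch of contour extraction and screening (assumed thresholds,
// not the patent's code): keep only contours that are large enough and
// approximately square, i.e. convex four-vertex polygons.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point2f>> findCandidates(const cv::Mat& bin) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    std::vector<std::vector<cv::Point2f>> candidates;
    for (const auto& c : contours) {
        if (cv::contourArea(c) < 400.0) continue; // first screening: area (assumed threshold)
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 0.03 * cv::arcLength(c, true), true);
        if (poly.size() != 4 || !cv::isContourConvex(poly)) continue; // not square-like
        std::vector<cv::Point2f> quad;
        for (const auto& p : poly) quad.emplace_back((float)p.x, (float)p.y);
        candidates.push_back(quad);
    }
    return candidates;
}
```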
3. Feature point extraction: feature points are pixels with definite information in an image, and their position coordinates in the target image are usually well defined. Once a sufficient number of feature points are obtained, the homography matrix between the collected image and the target image can be computed, and with this matrix stretching, compression, perspective transformation, and other operations on the image can be realized.
Corners are used as the feature points. A corner is a local point feature of the image: it lies at the intersection of two contour lines, its gradient information differs strongly from its surroundings, it is easy to extract, and it can be located accurately, down to sub-pixel accuracy. Considering the required precision and the limit on the number of corners, the Shi-Tomasi corner detection algorithm is adopted for corner extraction.
Assume the pixel gray value at a point $(x, y)$ in the window is $I(x, y)$; when the window moves by a small displacement $(u, v)$, the gray value becomes $I(x+u, y+v)$, and the gray change is $I(x+u, y+v) - I(x, y)$. Summing over the window gives:

$$ E(u, v) = \sum_{x, y} w(x, y)\,\left[ I(x+u, y+v) - I(x, y) \right]^2 \tag{7} $$

where $w(x, y)$ is a window weighting function of $(x, y)$, usually taken as 1, although a Gaussian distribution centered at the window center may also be used.
Applying a first-order Taylor expansion to $I(x+u, y+v)$ and taking partial derivatives gives formula 8:

$$ E(u, v) \approx \sum_{x, y} w(x, y)\,\left[ I_x u + I_y v \right]^2 \tag{8} $$

where $I_x = \partial I / \partial x$ and $I_y = \partial I / \partial y$ are the partial derivatives of $I$.
Extracting $u$ and $v$ into vector form gives formula 9:

$$ E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \tag{9} $$

where the eigenvalues $\lambda_1$ and $\lambda_2$ of $M$ are the variation components in two orthogonal directions.
Given the expression for $E(u, v)$, a window causing a large change in $E(u, v)$ is sought. Since the amount of change depends mainly on the matrix $M$, the change in pixel gray level can be scored with a response computed from the eigenvalues:

$$ R = \det(M) - k\,(\operatorname{trace}(M))^2 \tag{10} $$

where $R$ is the corner response function, $\det(M)$ the determinant of $M$, $\operatorname{trace}(M)$ the trace of $M$, and $k$ an empirical constant in the range $(0.04, 0.06)$.
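As an illustration, opencv packages this corner analysis in cv::goodFeaturesToTrack, which scores each window by the smaller eigenvalue min(λ1, λ2), the Shi-Tomasi variant of the response above; the corner count, quality level, and spacing below are assumed values:

```cpp
// A minimal sketch of Shi-Tomasi corner extraction with opencv (assumed
// parameters, not the patent's code).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> extractCorners(const cv::Mat& gray) {
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            /*maxCorners=*/16,     // cap on corner count (assumed)
                            /*qualityLevel=*/0.05, // fraction of the best response (assumed)
                            /*minDistance=*/10.0); // minimum spacing in pixels (assumed)
    // Refine to the sub-pixel accuracy the text attributes to corners.
    cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
    return corners;
}
```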
Opencv, the Open Source Computer Vision Library, is an open-source computer vision library originated by Intel. This scheme uses opencv together with C++ for image processing, which computes faster than languages such as Matlab and Python and enables real-time image processing. In addition, opencv combines easily with openGL to visualize the image simulation results.
4. Solving the homography matrix: once the corner information is known, the perspective transformation matrix can be solved. The perspective transformation matrix is composed of a 3x3 rotation part and a 3x1 displacement part, and the images before and after the perspective keep vectors straight and parallel. By definition, the general formula for a perspective transformation from the real coordinate system to the plane coordinate system is:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{11} $$

where $[u\ v\ 1]^T$ represents a point in the image coordinate system, $[x\ y\ z\ 1]^T$ the corresponding point in the real coordinate system, and $m_{11} \sim m_{34}$ the parameters of the projective matrix to be solved.
When the perspective transformation is between two planes, every point in the real coordinate system has $z = 0$ and the three-dimensional point set degenerates into a two-dimensional one, so the column of the projective matrix in formula 11 that multiplies $z$ contributes nothing, and the simplified plane-to-plane perspective transformation is given by formula 12:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{14} \\ m_{21} & m_{22} & m_{24} \\ m_{31} & m_{32} & m_{34} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{12} $$
Multiplying both sides by a common coefficient so that the bottom-right entry equals 1 ($m_{34} = 1$) gives formula 13:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{14} \\ m_{21} & m_{22} & m_{24} \\ m_{31} & m_{32} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{13} $$
The plane-to-plane perspective transformation thus has only eight unknowns, so the projective matrix can be solved from just 4 corresponding feature points. The projective matrix can then map every point in the real coordinate system to the image pixel coordinate system, the mapping relationship being described by formula 14:

$$ u = \frac{m_{11} x + m_{12} y + m_{14}}{m_{31} x + m_{32} y + 1}, \qquad v = \frac{m_{21} x + m_{22} y + m_{24}}{m_{31} x + m_{32} y + 1} \tag{14} $$
since perspective transformation requires that points before and after transformation correspond to each other in pairs, the angular point screened from the previous section needs to be sorted, two points with the minimum pixel coordinate line number in four points are found out by using a bubbling algorithm, the line number of the two points is compared, the point with the small line number is used as a starting point, and the two points with the large line number are similarly processed. Experiments prove that a better transformation effect can be obtained by enclosing four points in the perspective transformation into a closed loop in the arrangement sequence, so that the angular points are arranged in the sequence of upper left-upper right-lower left relative to an image pixel coordinate system, and then the perspective transformation is carried out.
5. After the candidate contours have been inverse-perspective transformed, the bits of each candidate are analyzed to determine the code of the object to be captured. Because the perspective transformation involves pixel-precision operations, both black and white pixels can appear within a single cell. Experiments showed that a better recognition result is obtained when 2 pixels are removed from the edge of each cell. Finally, the extracted bits of the object to be captured are compared against the dictionary to obtain the corresponding serial number.
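For reference, the stages described above (thresholding, contour screening, inverse perspective warp, bit extraction, and dictionary lookup) are also packaged in opencv's aruco module; a sketch of the equivalent call, using the classic opencv_contrib API (pre-4.7 signatures) and the 6x6, 250-entry dictionary, is:

```cpp
// A sketch of the equivalent detection with opencv's aruco module; the custom
// pipeline in the text implements the same stages explicitly.
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <iostream>
#include <vector>

void detectSerialNumbers(const cv::Mat& image) {
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    std::vector<int> ids;                          // serial numbers in the dictionary
    std::vector<std::vector<cv::Point2f>> corners; // four image corners per marker
    cv::aruco::detectMarkers(image, dict, corners, ids);
    for (int id : ids) std::cout << "detected marker id " << id << '\n';
}
```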
Fourth embodiment: this embodiment is a further limitation of the simulated target pose acquisition method provided in the second embodiment, in which the pose solving step specifically comprises:
obtaining the pixel coordinates of the planar Aruco code and their corresponding world coordinates through the correspondence between the planar Aruco code and the standard Aruco code, and from these coordinates obtaining the rotation matrix R and translation vector t between the pixel coordinate system of the planar Aruco code and the world coordinate system.
Specifically, after the target Aruco code is identified, its correspondence with the standard Aruco code is obtained; this yields the pixel coordinates and their corresponding world coordinates, from which the rotation matrix R and translation vector t between the pixel coordinate system and the world coordinate system are solved.
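As an illustration of this step: given the marker's four image corners, their known world coordinates on the marker plane, and the calibrated intrinsics, cv::solvePnP yields the rotation and translation directly; the marker-centered world frame and corner order below are assumptions:

```cpp
// A minimal sketch of the pose solving step (assumed marker-centered world
// frame and corner order, not the patent's code): recover the rotation
// matrix R and translation vector t from the four marker corners and the
// calibrated intrinsics.
#include <opencv2/opencv.hpp>
#include <vector>

void solveMarkerPose(const std::vector<cv::Point2f>& imageCorners, float side,
                     const cv::Mat& K, const cv::Mat& distCoeffs,
                     cv::Mat& R, cv::Mat& t) {
    const float h = side / 2.f;
    // World coordinates of the corners (marker centered at the origin, Z = 0),
    // ordered like imageCorners: upper left, upper right, lower right, lower left.
    std::vector<cv::Point3f> objectCorners = {
        {-h,  h, 0.f}, { h,  h, 0.f}, { h, -h, 0.f}, {-h, -h, 0.f}};
    cv::Mat rvec, tvec;
    cv::solvePnP(objectCorners, imageCorners, K, distCoeffs, rvec, tvec);
    cv::Rodrigues(rvec, R); // rotation vector -> rotation matrix R
    t = tvec;               // translation vector t
}
```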
Fifth embodiment: this embodiment provides a target capture method based on MBDyn simulation, the method comprising:
an image output step: establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of the object to be captured, a planar Aruco code being attached to the object to be captured;
a camera calibration step: acquiring the internal parameters of the camera used in the simulation;
a code identification step: identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving step: obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design step: obtaining the mechanical arm joint angle plan from the attitude transformation matrix;
a capture step: inputting the mechanical arm joint angle plan to the mechanical arm.
Specifically, the control design step is as follows:
The control system uses joint Matlab and MBDyn simulation. It obtains the target state through the visual servo system, computes the end-effector trajectory by quintic polynomial interpolation, and obtains the joint angle plan through the inverse kinematics solution. A dynamics equation of the floating-base space robot is then established by the Lagrange method, and the control law is designed with the computed torque method as:
$$ \tau = D(q)\left( \ddot{q}_d + k_v\,\dot{e} + k_p\,e \right) + C(q, \dot{q})\,\dot{q} \tag{15} $$

where $D(q)$ denotes the generalized inertia matrix, $\ddot{q}_d$ the desired joint angular acceleration, $k_v$ the derivative-gain coefficient matrix, $k_p$ the proportional-gain coefficient matrix, $C(q, \dot{q})\,\dot{q}$ the nonlinear term, $\dot{q}$ the joint angular velocity, $\tau$ the control torque acting on each joint of the mechanical arm, and $e$ the joint angle error.
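A single-joint numerical sketch of this control design follows; the patent works with the full multi-joint matrices D(q) and C(q, q̇) of the floating-base Lagrangian model, which are reduced here to scalar placeholders for illustration:

```cpp
// A minimal single-joint sketch (assumed placeholder values, not the patent's
// code): quintic polynomial trajectory plus the computed-torque law of
// formula 15, tau = D(q)(qdd_d + kv*de + kp*e) + C(q, qd)*qd.

// Quintic interpolation from q0 to qf over duration T with zero boundary
// velocity and acceleration: q(s) = q0 + (qf - q0)(10s^3 - 15s^4 + 6s^5), s = t/T.
void quinticPlan(double q0, double qf, double T, double t,
                 double& qd, double& qd_dot, double& qd_ddot) {
    double s = t / T, d = qf - q0;
    qd      = q0 + d * (10*s*s*s - 15*s*s*s*s + 6*s*s*s*s*s);
    qd_dot  = d * (30*s*s - 60*s*s*s + 30*s*s*s*s) / T;
    qd_ddot = d * (60*s - 180*s*s + 120*s*s*s) / (T * T);
}

// Computed-torque law of formula 15 for one joint; D and C stand in for the
// generalized inertia matrix and the nonlinear term of the full model.
double computedTorque(double q, double q_dot,
                      double qd, double qd_dot, double qd_ddot,
                      double D, double C, double kp, double kv) {
    double e = qd - q;             // joint angle error
    double e_dot = qd_dot - q_dot; // joint angular velocity error
    return D * (qd_ddot + kv * e_dot + kp * e) + C * q_dot;
}
```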
Sixth embodiment: this embodiment is a further limitation of the target capture method provided in the fifth embodiment, in which the control design step comprises:
a planning step: obtaining the mechanical arm joint angle plan through joint Matlab and MBDyn simulation;
a control step: obtaining the control law by the Lagrange method.
Seventh embodiment: this embodiment provides a target capture device based on MBDyn simulation, the device comprising:
an image output module: used for establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of the object to be captured, a planar Aruco code being attached to the object to be captured;
a camera calibration module: used for acquiring the internal parameters of the camera used in the simulation;
a code identification module: used for identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving module: used for obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design module: used for obtaining the mechanical arm joint angle plan from the attitude transformation matrix;
a capture module: used for inputting the mechanical arm joint angle plan to the mechanical arm.
Eighth embodiment: this embodiment provides an aerospace on-orbit target capture method, a planar Aruco code being attached to the object to be captured, the method comprising:
a camera calibration step: acquiring the internal parameters of the camera used;
a code identification step: identifying the planar Aruco code to obtain its corresponding serial number;
a pose solving step: obtaining the attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design step: obtaining the mechanical arm joint angle plan from the attitude transformation matrix;
a capture step: inputting the mechanical arm joint angle plan to the mechanical arm.
Ninth embodiment: this embodiment provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, it executes the simulated target pose acquisition method provided in any one of the first to fourth embodiments.
Tenth embodiment: this embodiment provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, it executes the MBDyn-simulation-based target capture method provided in any one of the fifth to sixth embodiments.
The foregoing detailed description is given for clearness of understanding, and no unnecessary limitations should be inferred from it; modifications and improvements of the embodiments described above, including combinations and equivalents of the embodiments, should be considered within the scope of the present disclosure.

Claims (10)

1. A simulated target pose acquisition method, characterized in that a multi-body dynamics simulation is established in an MBDyn environment and an image of an object to be captured is output, a planar Aruco code being attached to the object to be captured, the method comprising the following steps:
a camera calibration step: acquiring internal parameters of a camera used in the simulation;
a code identification step: identifying the planar Aruco code to obtain a serial number corresponding to the planar Aruco code;
a pose solving step: obtaining an attitude transformation matrix of the object to be captured from the internal parameters and the serial number.
2. The simulated target pose acquisition method according to claim 1, characterized in that the camera calibration step specifically comprises:
acquiring the internal parameters of the camera used in the simulation by the Zhang Zhengyou checkerboard calibration method.
3. The simulated target pose acquisition method according to claim 1, characterized in that the code identification step specifically comprises the following steps:
a preprocessing step: preprocessing the image to eliminate illumination interference;
a contour extraction step: excluding contours that are not planar Aruco codes to obtain candidate contours;
a feature point extraction step: extracting corners with the Shi-Tomasi corner detection method, and obtaining a homography matrix between the collected image and the target image from the corners;
a matrix solving step: obtaining the serial number corresponding to the planar Aruco code from the candidate contours and the homography matrix.
4. The simulated target pose acquisition method according to claim 1, characterized in that the pose solving step specifically comprises:
obtaining pixel coordinates of the planar Aruco code and their corresponding world coordinates through the correspondence between the planar Aruco code and the standard Aruco code, and from these coordinates obtaining a rotation matrix R and a translation vector t between the pixel coordinate system of the planar Aruco code and the world coordinate system.
5. A target capture method based on MBDyn simulation, characterized by comprising:
an image output step: establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of an object to be captured, a planar Aruco code being attached to the object to be captured;
a camera calibration step: acquiring internal parameters of a camera used in the simulation;
a code identification step: identifying the planar Aruco code to obtain a serial number corresponding to the planar Aruco code;
a pose solving step: obtaining an attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design step: obtaining a mechanical arm joint angle plan from the attitude transformation matrix;
a capture step: inputting the mechanical arm joint angle plan to the mechanical arm.
6. The MBDyn-simulation-based target capture method according to claim 5, characterized in that the control design step comprises:
a planning step: obtaining the mechanical arm joint angle plan through joint Matlab and MBDyn simulation;
a control step: obtaining a control law by the Lagrange method.
7. A target capture device based on MBDyn simulation, characterized in that the device comprises:
an image output module: used for establishing a multi-body dynamics simulation in an MBDyn environment and outputting an image of an object to be captured, a planar Aruco code being attached to the object to be captured;
a camera calibration module: used for acquiring internal parameters of a camera used in the simulation;
a code identification module: used for identifying the planar Aruco code to obtain a serial number corresponding to the planar Aruco code;
a pose solving module: used for obtaining an attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design module: used for obtaining a mechanical arm joint angle plan from the attitude transformation matrix;
a capture module: used for inputting the mechanical arm joint angle plan to the mechanical arm.
8. An aerospace on-orbit target capture method, characterized in that a planar Aruco code is attached to an object to be captured, the method comprising:
a camera calibration step: acquiring internal parameters of the camera used;
a code identification step: identifying the planar Aruco code to obtain a serial number corresponding to the planar Aruco code;
a pose solving step: obtaining an attitude transformation matrix of the object to be captured from the internal parameters and the serial number;
a control design step: obtaining a mechanical arm joint angle plan from the attitude transformation matrix;
a capture step: inputting the mechanical arm joint angle plan to the mechanical arm.
9. A computer device, characterized by comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the simulated target pose acquisition method according to any one of claims 1 to 4.
10. A computer device, characterized by comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the MBDyn-simulation-based target capture method according to any one of claims 5 to 6.
CN202210115285.8A 2022-02-07 2022-02-07 Simulated target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method Pending CN114663517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210115285.8A CN114663517A (en) 2022-02-07 2022-02-07 Simulated target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210115285.8A CN114663517A (en) 2022-02-07 2022-02-07 Simulated target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method

Publications (1)

Publication Number Publication Date
CN114663517A (en)

Family

ID=82025903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210115285.8A Pending CN114663517A (en) 2022-02-07 2022-02-07 Simulated target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method

Country Status (1)

Country Link
CN (1) CN114663517A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101726296A (en) * 2009-12-22 2010-06-09 哈尔滨工业大学 Vision measurement, path planning and GNC integrated simulation system for space robot
CN103049728A (en) * 2012-12-30 2013-04-17 成都理想境界科技有限公司 Method, system and terminal for augmenting reality based on two-dimension code
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN109658461A (en) * 2018-12-24 2019-04-19 中国电子科技集团公司第二十研究所 A kind of unmanned plane localization method of the cooperation two dimensional code based on virtual simulation environment
CN110276808A (en) * 2019-06-11 2019-09-24 合肥工业大学 A kind of method of one camera combination two dimensional code measurement glass plate unevenness
CN110689579A (en) * 2019-10-18 2020-01-14 华中科技大学 Rapid monocular vision pose measurement method and measurement system based on cooperative target
CN111462236A (en) * 2020-04-02 2020-07-28 集美大学 Method and system for detecting relative pose between ships
CN112307786A (en) * 2020-10-13 2021-02-02 上海迅邦电子科技有限公司 Batch positioning and identifying method for multiple irregular two-dimensional codes
CN113297950A (en) * 2021-05-20 2021-08-24 首都师范大学 Dynamic target detection method
CN113792564A (en) * 2021-09-29 2021-12-14 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Yibo, "Target detection and motion state estimation based on a vehicle-mounted binocular camera", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 February 2021 (2021-02-15), pages 1-79 *
XU Peizhi; XU Guili; WANG Biao; GUO Ruipeng; TIAN Yupeng; YE Yongqiang, "Pose measurement of non-cooperative targets based on stereo vision", Computer and Modernization, no. 8, 15 August 2013 (2013-08-15), pages 85-91 *
YANG Haigen; RUI Xiaoting; LIU Yixin; ZHANG Jianshu; HE Bin, "Visual simulation software for multibody system dynamics based on MBDyn", Journal of Nanjing University of Science and Technology, vol. 37, no. 6, 30 December 2013 (2013-12-30), pages 785-791 *

Similar Documents

Publication Publication Date Title
Zhang et al. Vision-based pose estimation for textureless space objects by contour points matching
Sharma Comparative assessment of techniques for initial pose estimation using monocular vision
JP5839971B2 (en) Information processing apparatus, information processing method, and program
Jiang et al. An overview of hand-eye calibration
De Luca et al. On-line estimation of feature depth for image-based visual servoing schemes
JP2013050947A (en) Method for object pose estimation, apparatus for object pose estimation, method for object estimation pose refinement and computer readable medium
CN113269840A (en) Combined calibration method for camera and multi-laser radar and electronic equipment
Yan et al. Joint camera intrinsic and lidar-camera extrinsic calibration
CN110009689B (en) Image data set rapid construction method for collaborative robot pose estimation
JPH06131420A (en) Method and device for supporting construction
Lim Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation
Harvard et al. Spacecraft pose estimation from monocular images using neural network based keypoints and visibility maps
Shangguan et al. Vision‐Based Object Recognition and Precise Localization for Space Body Control
CN114581632A (en) Method, equipment and device for detecting assembly error of part based on augmented reality technology
CN117934721A (en) Space robot reconstruction method and system for target spacecraft based on vision-touch fusion
CN113295171A (en) Monocular vision-based attitude estimation method for rotating rigid body spacecraft
CN112629565A (en) Method, device and equipment for calibrating rotation relation between camera and inertial measurement unit
Marchionne et al. GNC architecture solutions for robust operations of a free-floating space manipulator via image based visual servoing
CN114663517A (en) Simulated target pose acquisition method, target capture method and device based on MBDyn simulation, and aerospace on-orbit target capture method
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance
Lim et al. Pose estimation using a flash lidar
CN115760984A (en) Non-cooperative target pose measurement method based on monocular vision by cubic star
Yan et al. Horizontal velocity estimation via downward looking descent images for lunar landing
Villa et al. Autonomous navigation and dense shape reconstruction using stereophotogrammetry at small celestial bodies
Oumer Visual tracking and motion estimation for an on-orbit servicing of a satellite

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wei Cheng

Inventor after: Liu Tianxi

Inventor after: Gu Haiyu

Inventor before: Wei Cheng