CN106651949B - Space manipulator target capturing teleoperation method and system based on simulation - Google Patents

Space manipulator target capturing teleoperation method and system based on simulation

Info

Publication number
CN106651949B
Authority
CN
China
Prior art keywords
target
image
image data
module
Prior art date
Legal status
Active
Application number
CN201610903204.5A
Other languages
Chinese (zh)
Other versions
CN106651949A
Inventor
刘传凯
王晓雪
王保丰
王镓
唐歌实
郭祥艳
卜彦龙
Current Assignee
Unit 63920 Of Pla
Original Assignee
Unit 63920 Of Pla
Priority date
Filing date
Publication date
Application filed by Unit 63920 Of Pla
Priority to CN201610903204.5A
Publication of CN106651949A
Application granted
Publication of CN106651949B

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a simulation-based teleoperation system for target capture by a space manipulator, comprising a data source acquisition module, a middleware module, a target tracking and positioning module, an operation planning module and a motion simulation module. The system acquires many different types of data sources in real time through a plug-in construction strategy, reconciles mismatched data acquisition and data processing speeds, ensures real-time tracking of the target by the camera through a fast tracking algorithm, and keeps the pose of the simulated target synchronized with the real target through a pose measurement algorithm. The invention can provide a verification platform for the visual positioning algorithm of a space manipulator during dynamic target capture, and can demonstrate the capture process in three-dimensional visualization to assist ground operators in controlling the capture process.

Description

Space manipulator target capturing teleoperation method and system based on simulation
Technical Field
The invention relates to the field of teleoperation of mechanical arms, in particular to a space mechanical arm target capturing teleoperation method and system based on simulation.
Background
The space manipulator is essential key equipment for building and maintaining a space station. It can complete various tasks such as on-orbit spacecraft assembly and disassembly, maintenance and repair, fuel delivery, satellite release and recovery, and scientific experiments in the space station, and it can reduce the extravehicular activities of astronauts, avoiding risk to life and saving the cost of extravehicular work. Around 2022, China plans to complete its manned space station engineering system, break through and master the technology of long-term manned flight in near-Earth space, carry out scientific and technological experiments in near-Earth space, and enhance its capability to comprehensively develop and utilize space resources. With the development of the aerospace industry, the space manipulator plays an increasingly important role.
Ground teleoperation is one of the three control modes of the space manipulator, and an important control method on which the space manipulator must rely to execute various complex space tasks. When the space manipulator executes tasks such as aircraft capture and component module replacement, the vision system carried by the manipulator photographs the state of the operated object. Ground operators observe the operation process in space from the downlinked images; however, because the transmission bandwidth between the spacecraft and the ground teleoperation center is limited and the transmission involves a certain time delay, operators cannot observe the operation process in real time and cannot accurately predict the relative position of the manipulator and the operated object. When the manipulator approaches the operated object, unpredictable collisions easily occur, damaging the manipulator, the aircraft and the component modules.
Disclosure of Invention
The invention aims to solve the technical problem of providing a simulation method that measures the pose of the manipulator's end relative to the captured target from images taken by the vision system carried by the space manipulator, predicts the motion trajectory of the manipulator accordingly, and assists ground teleoperators in making control decisions, so as to solve the problem that teleoperators can hardly make effective decisions because the relative state of the space manipulator and the target cannot be predicted.
The technical scheme for solving the technical problems is as follows: a space manipulator capturing target teleoperation method based on simulation is characterized by comprising the following steps:
step 1, a data source acquisition module acquires image data of a target from data sources of the corresponding types through multiple types of plug-ins, uniformly converts the image data into the same format and sends it to a middleware module;
step 2, the middleware module receives and stores the converted image data and sends the image data to the target tracking and positioning module for processing, and the difference value between the speed of the data source acquisition module for acquiring the image data and the speed of the target tracking and positioning module for processing the image data is kept within a preset range;
step 3, the target tracking and positioning module processes the converted image data to obtain the pose of the target;
step 4, an operation planning module plans the motion of each joint of the space manipulator according to the pose of the target and the acquired pose of the tail end of the manipulator;
step 5, the motion simulation module establishes a three-dimensional model of the space manipulator and a three-dimensional model of the target, and changes the pose of the three-dimensional model of the space manipulator and the pose of the three-dimensional model of the target in real time according to the motion of each joint of the manipulator planned by the operation planning module;
and 6, repeating the steps until the mechanical arm captures the target in the motion simulation module.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, the step 2 specifically includes the following steps:
step 2.1, the plug-in management unit manages and allocates various types of plug-ins, receives the converted image data and stores the image data in the self-adaptive buffer unit;
2.2, the self-adaptive buffer unit enables the difference value between the data acquisition speed of the data source acquisition module and the processing speed of the target tracking and positioning module to be kept within a preset range;
and 2.3, the data output interface unit sends the image data in the self-adaptive buffer area to a target tracking and positioning module.
Further, the step 2.2 specifically comprises the following steps:
Let the maximum frame difference allowed by the buffer be F_BMAX, the upper limit of the maximum frame difference allowed by the buffer be F_MAX, the discarded-frame flag count be F_Tf, and the frame difference between image acquisition and image processing be F_P-C.
Step 2.2.1, if F_P-C > F_BMAX, execute F_Tf = F_Tf + 1 and jump to step 2.2.2;
if F_P-C < F_BMAX/2, jump to step 2.2.3;
if F_BMAX/2 ≤ F_P-C ≤ F_BMAX, jump to step 2.2.4;
Step 2.2.2, if F_Tf ≥ F_BMAX/2, execute F_BMAX = 2*F_BMAX and jump to step 2.2.6; if F_Tf < F_BMAX/2, jump directly to step 2.2.6;
Step 2.2.3, if F_P-C ≤ 1, let F_Tf = 0; otherwise execute F_Tf = (F_Tf + 1)/2 repeatedly until 2*F_Tf ≤ F_P-C; after the calculation, jump to step 2.2.5;
Step 2.2.4, if F_Tf > 0, execute F_Tf = F_Tf - 1 repeatedly until 2*F_Tf ≤ F_P-C;
Step 2.2.5, if F_BMAX > F_MAX, let F_BMAX = F_BMAX/2;
Step 2.2.6, if F_Tf > 0, discard the first 2*F_Tf - 1 frames of image data stored in the buffer and push the 2*F_Tf-th frame of image data to the target tracking and positioning module, i.e. F_P-C = F_P-C - 2*F_Tf.
Further, the step 3 specifically includes the following steps:
step 3.1, receiving the converted image data and preprocessing the image data to obtain each edge of a target in the current image frame;
step 3.2, calculating through an edge outer frame to obtain a target area in the current image frame, wherein the geometric center of the target area is the position of the target, and the size of the target area is the size of the target;
3.3, processing the target area in the current image frame through a target tracking algorithm to obtain a target area in the next image frame;
step 3.4, applying a gray-weighted geometric center coordinate calculation method to pixel points in the target area in the next image frame to obtain a target tracking position;
and 3.5, calculating the projection position of the target and obtaining the pose of the target in each frame of image through a measurement model with minimized offset error between the projection position of the target and the tracking position of the target.
Further, the step 3.1 specifically comprises the following steps:
step 3.1.1, Gaussian filtering is performed on the image, and a threshold T1 is selected such that pixels whose gray level in the image is greater than T1 take the value 255 and pixels whose gray level is less than T1 take the value 0; the Gaussian-filtered image is binarized in this way to obtain a binarized image;
3.1.2, processing the binary image by adopting a corrosion expansion algorithm to eliminate isolated bright spots in the binary image, so that the gray value of only the target area in the binary image is 255, and the gray values of other positions are 0;
and 3.1.3, performing edge extraction on the binary image after corrosion expansion.
Further, the target tracking algorithm in the step 3.3 is a tracking algorithm fusing Camshift and Kalman filter.
Further, in the step 3.4, a threshold T0 is set, and for all pixels in the target area whose gray value is greater than or equal to T0, a gray-weighted average operation is performed according to the positions of the pixels in the Gaussian-filtered image to obtain the gray-weighted geometric center coordinates, calculated as:
x_c = Σ x·I(x,y) / Σ I(x,y),  y_c = Σ y·I(x,y) / Σ I(x,y)
with both sums taken over the pixels (x, y) of the target area R whose gray value I(x, y) is greater than or equal to T0, where x_c represents the x coordinate of the gray-weighted geometric center, y_c the y coordinate of the gray-weighted geometric center, x and y the coordinates of a pixel in the image, and I(x, y) the gray value at coordinate (x, y).
Further, the step 3.5 specifically includes the following steps:
step 3.5.1, estimating the three-dimensional coordinate position of the target in a world coordinate system;
3.5.2, calculating the projection coordinates of the target in the image by using a perspective projection model according to the three-dimensional coordinate position to obtain a target projection position;
step 3.5.3, comparing the target projection position with the target tracking position, and calculating an offset error;
step 3.5.4, correcting the position of the target in the world coordinate system by using a least square method according to the offset error;
and 3.5.5, repeating the steps for multiple times until convergence, namely the offset error is smaller than a preset threshold value.
Further, the step 4 specifically includes the following steps:
4.1, planning a moving path of the tail end of the mechanical arm by a mechanical arm tail end motion trail planning unit according to the pose of the tail end of the mechanical arm calculated by the mechanical arm configuration and the pose of the target obtained by the target tracking and positioning module;
and 4.2, the collaborative motion planning unit of each joint of the mechanical arm solves the motion angle and the angular velocity of each joint of the mechanical arm corresponding to each discrete track point through an inverse kinematics algorithm according to the discrete track point of the movement path of the tail end of the mechanical arm obtained through planning.
A space manipulator capture target teleoperation system based on simulation comprises:
the data source acquisition module is used for acquiring image data of a target from data sources of corresponding types through various types of plug-ins, uniformly converting the image data into the same format and then sending the image data to the middleware module;
the middleware module is used for receiving and storing the converted image data, sending the image data to the target tracking and positioning module for processing, and keeping the difference value between the speed of the data source acquisition module for acquiring the image data and the speed of the target tracking and positioning module for processing the image data within a preset range;
the target tracking and positioning module is used for processing the converted image data to obtain the pose of the target;
the operation planning module is used for planning the motion of each joint of the space manipulator according to the pose of the target and the acquired pose of the tail end of the manipulator;
and the motion simulation module is used for establishing a three-dimensional model of the space manipulator and a three-dimensional model of the target, and changing the pose of the three-dimensional model of the space manipulator and the pose of the three-dimensional model of the target in real time according to the motion of each joint of the manipulator planned by the operation planning module.
Further, the middleware module comprises a plug-in management unit, an adaptive buffer unit and a unified data output interface unit;
the plug-in management unit is used for loading, managing and allocating various types of plug-ins, receiving the converted image data and storing the converted image data in the self-adaptive buffer unit;
the adaptive buffer unit is used for keeping the difference value between the data acquisition speed of the data source acquisition module and the processing speed of the target tracking and positioning module within a preset range;
the unified data output interface unit is used for sending the image data in the self-adaptive buffer area to a target tracking and positioning module;
further, the target tracking and positioning module comprises an image preprocessing unit, a target detection unit, a target tracking unit, a position settlement unit and a pose resolving unit;
the image preprocessing unit is used for receiving the converted image data and preprocessing the image data to obtain each edge of a target in the current image frame;
the target detection unit is used for obtaining a target area in the current image frame through edge outer frame calculation, wherein the geometric center of the target area is the position of a target, and the size of the target area is the size of the target;
the target tracking unit is used for processing a target area in the current image frame through a target tracking algorithm to obtain a target area in the next image frame;
the position calculating unit is used for applying a gray-weighted geometric center coordinate calculation method to pixel points in a target area in the next image frame to obtain a target tracking position;
and the pose resolving unit is used for obtaining the pose of the target in each frame of image through a measurement model with minimized offset error between the target projection position and the target tracking position.
Further, the operation planning module comprises a planning unit for the motion trail of the tail end of the mechanical arm and a planning unit for the coordinated motion of each joint of the mechanical arm;
the mechanical arm tail end motion trail planning unit is used for planning a moving path of the mechanical arm tail end according to the pose of the mechanical arm tail end and the pose of the target obtained by the target tracking and positioning module;
and the mechanical arm joint collaborative motion planning unit is used for solving the motion angle and the angular velocity of each joint of the mechanical arm corresponding to each discrete track point through an inverse kinematics algorithm according to the discrete track point of the movement path of the tail end of the mechanical arm obtained through planning.
The invention has the beneficial effects that: compared with existing teleoperation simulation systems, the teleoperation simulation system designed by the invention can obtain the relative change of the tracked target from the images of the manipulator's end vision system, measure and estimate the position and attitude of the captured target, and display them visually to ground operators through the simulation system to assist decision-making. Compared with the existing teleoperation method in which an operator judges the relative poses of the manipulator and the captured target by browsing images, the invention greatly reduces the operator's burden of analyzing the operation state, lets the operator concentrate on controlling the space manipulator, and greatly improves the efficiency of teleoperation.
Drawings
FIG. 1 is a diagram of the constituent modules of the teleoperation system for capturing a target with a space manipulator and the data flow between the modules;
FIG. 2 is a flow chart of a teleoperation method for a space manipulator to capture a target;
FIG. 3 is a flowchart of an algorithm for realizing space target tracking by a method of integrating Camshift and Kalman filtering;
fig. 4 shows the spatial relationship between the three-dimensional model of the space manipulator and the three-dimensional model of the space target, together with the steps of their motion control.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, the teleoperation system for capturing a target by a space manipulator based on simulation provided by the present invention is composed of a data source acquisition module, a middleware module, a target tracking and positioning module, an operation planning module and a motion simulation module, and the implementation method thereof is shown in fig. 2, and includes the following steps:
s1, the data source acquisition module acquires image data of a target from the data source of the corresponding type through various types of plug-ins, uniformly converts the image data into the same format and sends it to the middleware module;
s2, the middleware module receives and stores the converted image data and sends the image data to the target tracking and positioning module for processing, and the difference between the speed of the data source acquisition module for acquiring the image data and the speed of the target tracking and positioning module for processing the image data is kept within a preset range;
s3, the target tracking and positioning module processes the converted image data to obtain the pose of the target;
s4, the operation planning module plans the motion of each joint of the space manipulator according to the pose of the target and the pose of the tail end of the manipulator, and captures the target;
s5, the motion simulation module establishes three-dimensional models of the space manipulator and the target, and changes the pose of the three-dimensional model of the space manipulator and the pose of the three-dimensional model of the target in real time according to the motion of each joint of the manipulator planned by the operation planning module;
and repeating the steps until the target is captured by the mechanical arm in the motion simulation module.
The following describes embodiments of the various modules of the system in detail:
1. description of implementation of data source acquisition module
As shown in fig. 1, the data source acquisition module includes multiple types of data source acquisition plug-ins such as camera image stream acquisition, disk video stream acquisition, and network data stream acquisition, and each type of plug-in is further explained as follows.
Each type of plug-in is mainly used for reading a certain type of data source, for example, a camera image stream acquisition plug-in can only acquire image stream data of camera equipment, a disk video stream acquisition plug-in can only acquire video files (such as avi format video files) stored in a disk, and a network data stream acquisition plug-in can only read image data streams received by a network port.
The function of each type of plug-in is: the method comprises the steps of analyzing data collected from a certain type of data source into image data in a single frame format according to a pre-agreed data format, repackaging the image data into image data frames in a uniform format, and outputting and storing the image data frames in a self-adaptive buffer area unit of a middleware module, wherein the image data frames in the uniform format can be in a bitmap format.
Each type of plug-in provides a uniform data and information output format and a function interface for the middleware module, so that all plug-ins can be uniformly managed by the middleware module. The functional interfaces of each plug-in mainly comprise interfaces of data source selection, data acquisition control, data output buffer area setting and the like.
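To make this plug-in contract concrete, the following is a minimal Python sketch of what such a unified acquisition interface could look like. All names here (DataSourcePlugin, ImageFrame, select_source, start, read_frame) are illustrative assumptions for the sketch, not interfaces defined by the patent.

```python
# Hedged sketch of the unified acquisition interface described above.
# All names are illustrative assumptions, not the patent's actual API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ImageFrame:
    """One frame repackaged in the unified format (e.g. a bitmap payload)."""
    index: int
    width: int
    height: int
    pixels: bytes

class DataSourcePlugin(ABC):
    """Uniform function interface every acquisition plug-in exposes."""

    @abstractmethod
    def select_source(self, path: str) -> None:
        """Data source selection: camera id, disk video path, or network port."""

    @abstractmethod
    def start(self) -> None:
        """Data acquisition control: begin reading the selected source."""

    @abstractmethod
    def read_frame(self) -> "ImageFrame | None":
        """Parse one frame from the source into the unified frame format."""

class DiskVideoPlugin(DataSourcePlugin):
    """Concrete plug-in type that reads a video file (e.g. .avi) from disk."""

    def select_source(self, path: str) -> None:
        self.path, self.index = path, 0

    def start(self) -> None:
        import cv2
        self.cap = cv2.VideoCapture(self.path)

    def read_frame(self) -> "ImageFrame | None":
        ok, img = self.cap.read()
        if not ok:
            return None          # end of the video file
        self.index += 1
        h, w = img.shape[:2]
        return ImageFrame(self.index, w, h, img.tobytes())
```

A camera-stream or network-stream plug-in would subclass DataSourcePlugin in the same way, so the middleware's plug-in management unit can load and drive every source type through the same calls and store the resulting frames in the adaptive buffer.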
2. Description of the implementation of the middleware Module
The middleware module is an intermediary for interaction between the data source acquisition module and the target tracking and positioning module, and comprises a plug-in management unit, a self-adaptive buffer unit and a data output interface unit. The middleware module completes the management and allocation of different types of plugins by using the plugin management unit. The data output interface unit provides a universal data and functional interface for the target tracking and positioning module, so that the target tracking and positioning module can correctly read image data of different types of data sources. The middleware module is provided with a self-adaptive buffer area, and can automatically discard partial image data frames by calculating the number of the stored image data in the buffer area, so that the data acquisition end and the tracking and positioning processing end are synchronous as much as possible. The three units are specifically described below.
The plug-in management unit can read the data source description file and load and call plug-ins according to the information in the data source description file to read various data sources. The data source description file mainly records information such as data source type, data source path, data source output data format and the like. The data source type is used for assisting in selecting the type of the loaded and called plug-in, information such as a data source path and a data source output data format is used for transmitting the plug-in, and the plug-in is assisted in finishing correct data source reading and data format analysis.
The self-adaptive buffer area unit can automatically store data and discard partial data frames according to the difference between the data acquisition speed and the processing speed of the target tracking and positioning module, so that the difference between the image frame currently processed by the target tracking and positioning module and the latest image frame input by the data source acquisition module stays within a range of a few frames, ensuring the synchronism of the data source acquisition end and the tracking and positioning processing end. The specific implementation method is to count the image data stored in the buffer and automatically select between reading images frame by frame and reading with frame skipping; when reading with frame skipping, the skipped image frames are discarded. The specific implementation process is as follows:
Let the maximum frame difference allowed by the buffer be F_BMAX, the upper limit of the maximum frame difference allowed by the buffer be F_MAX, the discarded-frame flag count be F_Tf, and the frame difference between image acquisition and image processing be F_P-C.
Step 1, if F_P-C > F_BMAX, execute F_Tf = F_Tf + 1 and jump to step 2;
if F_P-C < F_BMAX/2, jump to step 3;
if F_BMAX/2 ≤ F_P-C ≤ F_BMAX, jump to step 4;
Step 2, if F_Tf ≥ F_BMAX/2, execute F_BMAX = 2*F_BMAX and jump to step 6; if F_Tf < F_BMAX/2, jump directly to step 6;
Step 3, if F_P-C ≤ 1, let F_Tf = 0; otherwise execute F_Tf = (F_Tf + 1)/2 repeatedly until 2*F_Tf ≤ F_P-C; after the calculation, jump to step 5;
Step 4, if F_Tf > 0, execute F_Tf = F_Tf - 1 repeatedly until 2*F_Tf ≤ F_P-C;
Step 5, if F_BMAX > F_MAX, let F_BMAX = F_BMAX/2;
Step 6, if F_Tf > 0, discard the first 2*F_Tf - 1 frames of image data stored in the buffer and push the 2*F_Tf-th frame of image data to the target tracking and positioning module, i.e. F_P-C = F_P-C - 2*F_Tf.
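The six-step regulation above maps directly onto a small control routine. The following is a hedged Python sketch of that logic: a plain list stands in for the adaptive buffer, and the function name and list representation are assumptions of the sketch rather than the patent's code; the variables mirror F_BMAX, F_MAX, F_Tf and F_P-C.

```python
# Hedged sketch of the six-step frame-skipping regulation described above.

def regulate(buffer, f_bmax, f_max, f_tf):
    """One regulation pass; returns (frame_to_push, f_bmax, f_tf).

    buffer : list of frames awaiting processing
    f_bmax : maximum frame difference currently allowed by the buffer
    f_max  : upper limit of the allowed maximum frame difference
    f_tf   : discarded-frame flag count
    """
    f_pc = len(buffer)                    # frame difference F_P-C

    if f_pc > f_bmax:                     # step 1 -> step 2
        f_tf += 1
        if f_tf >= f_bmax / 2:            # step 2: acquisition far ahead,
            f_bmax = 2 * f_bmax           # widen the allowed frame difference
    else:
        if f_pc < f_bmax / 2:             # step 1 -> step 3
            if f_pc <= 1:
                f_tf = 0
            else:
                while 2 * f_tf > f_pc:    # shrink the skip width
                    f_tf = (f_tf + 1) // 2
        else:                             # step 1 -> step 4
            while f_tf > 0 and 2 * f_tf > f_pc:
                f_tf -= 1
        if f_bmax > f_max:                # step 5: cap the buffer growth
            f_bmax = f_bmax // 2

    if f_tf > 0 and len(buffer) >= 2 * f_tf:   # step 6: skip-read
        del buffer[: 2 * f_tf - 1]             # discard 2*F_Tf - 1 frames
        return buffer.pop(0), f_bmax, f_tf     # push the 2*F_Tf-th frame
    if buffer:                                 # otherwise read frame by frame
        return buffer.pop(0), f_bmax, f_tf
    return None, f_bmax, f_tf
```

Run once per processing cycle, this keeps F_P-C bounded: when acquisition runs ahead, F_Tf grows and frames are skipped; when processing catches up, F_Tf decays back toward frame-by-frame reading.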
The data output interface unit is an interface of the middleware module and the target tracking and positioning module, and is responsible for reading a frame of image from the self-adaptive buffer unit and transmitting the frame of image to the target tracking and positioning module for processing and use by the target tracking and positioning module. The interfaces of the data output interface unit and the target tracking and positioning module mainly refer to reading and transmitting interfaces of image frame data, including description of image frame formats and reading and transmitting of image frame data and the like.
3. Description of the implementation of the object tracking and localization Module
And the target tracking and positioning module is used for completing the real-time tracking and positioning of the space target mainly by processing and analyzing the sequence images acquired by the data source acquisition module. The method mainly comprises the functions of image preprocessing, space target detection and tracking, space target pose resolving and the like.
The image preprocessing mainly comprises preprocessing functions such as Gaussian filtering, binaryzation, corrosion expansion, edge extraction and the like, and the image preprocessing method comprises the steps of firstly performing Gaussian filtering on an image, selecting a certain threshold value T1, enabling the value of a pixel with the gray scale larger than T1 in the image to be 255 and the value of a pixel with the gray scale smaller than T1 in the image to be 0, and performing binaryzation on the image after the Gaussian filtering to obtain a binary image; processing the binary image by adopting a corrosion expansion algorithm to eliminate isolated bright spots in the binary image, so that the gray value of only the target area in the binary image is 255, and the gray values of other positions are 0; and (4) performing edge extraction on the binary image after corrosion expansion, and calculating according to each edge to obtain each target position.
The detection of the space target refers to the calculation of the position and the size of the target in the image. And calculating to obtain a target area through an edge outer frame according to the image and each edge obtained in the processing, wherein the geometric center of the area is the position of the target in the image, and the size of the target area is the size of the target in the image.
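As an illustration of this preprocessing and detection chain, a minimal OpenCV sketch follows; the threshold T1 value and the kernel size are assumptions of the sketch, not values fixed by the patent.

```python
# Hedged OpenCV sketch of the preprocessing and detection chain above.
import cv2
import numpy as np

def preprocess(gray: np.ndarray, t1: int = 128):
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # Gaussian filtering
    # Binarize: pixels with gray level above T1 become 255, the rest 0.
    _, binary = cv2.threshold(blurred, t1, 255, cv2.THRESH_BINARY)
    # Erosion then dilation removes isolated bright spots, so only the
    # target area keeps gray value 255.
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.dilate(cv2.erode(binary, kernel), kernel)
    # Edge extraction on the cleaned binary image.
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours

def detect(contours):
    """Bounding box of each edge: its center is the target position in the
    image, its size the target size."""
    boxes = [cv2.boundingRect(c) for c in contours]
    return [((x + w / 2.0, y + h / 2.0), (w, h)) for x, y, w, h in boxes]
```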
The tracking of the spatial target is realized by a method integrating Camshift and Kalman filtering, as shown in FIG. 3; the implementation steps are as follows:
Step 1, Kalman prediction: the position of the space target in the image is predicted with the Kalman method. The state transition equation of space target imaging is expressed as
X̂_{k+1} = F·X_k
where X_k is the position of the space target in the current frame image, X̂_{k+1} is the predicted position of the space target in the next frame image, and F represents the state transition matrix. The prediction-error covariance P̂_{k+1} is obtained from the covariance value P_k at time k, i.e.
P̂_{k+1} = F·P_k·Fᵀ + Q_k
where Q_k is the covariance of the random error caused by the uncertainty of the state transition. Taking the predicted position X̂_{k+1} of the space target as the target center and the size of the target region in the current frame image as the target size, the target region S_{k+1} in the next frame image is calculated.
Step 2, Camshift tracking: this mainly comprises 3 steps. 1) Using the histogram of the target template, calculate the back-projection image of the target region predicted by the Kalman step. 2) From the back-projection image of the target region, calculate the center of gravity (u_x, u_y) of the back projection through the first-order and zeroth-order moments, i.e.
u_x = m_10/m_00,  u_y = m_01/m_00
with m_00 = Σ_{(x,y)∈Ω} P(x,y), m_10 = Σ_{(x,y)∈Ω} x·P(x,y) and m_01 = Σ_{(x,y)∈Ω} y·P(x,y), where Ω denotes the set of coordinates of the target region, P(x, y) the back-projection value at (x, y), m_10 and m_01 the first-order moments of the back projection of the target region, and m_00 its zeroth-order moment. 3) If the offset of the center of gravity (u_x, u_y) relative to the center of the target region is less than a certain threshold, end the tracking; otherwise update the target region R_{k+1} with the center-of-gravity position as its center and repeat steps 1) to 3). This finally yields the accurate target region R_{k+1} and the target position Z_{k+1}.
Step 3, Kalman filtering update: using the target position Z_{k+1} obtained in the second step, the target position and its error covariance matrix are corrected by the Kalman filtering method. The update gain K_k of the Kalman filter indicates the correction weight of the new position information of the target relative to the known position information; H_k denotes the observation matrix of the target and R_k the covariance matrix of the observation random noise. Then:
K_k = P̂_k·H_kᵀ·(H_k·P̂_k·H_kᵀ + R_k)⁻¹
and the posterior estimates of the target position X_k and the covariance matrix P_k are obtained as:
X_k = X̂_k + K_k·(Z_k - H_k·X̂_k),  P_k = (I - K_k·H_k)·P̂_k
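As a hedged illustration of steps 1 to 3, the sketch below pairs OpenCV's KalmanFilter with CamShift: the Kalman prediction seeds the Camshift search window, and the Camshift result feeds the Kalman correction. The constant-velocity state model, the HSV hue histogram and the noise covariances are assumptions of the sketch; the patent does not fix these details.

```python
# Hedged sketch of the fused Kalman + Camshift tracker (steps 1-3 above).
import cv2
import numpy as np

class FusedTracker:
    def __init__(self, frame_bgr, window):            # window = (x, y, w, h)
        self.window = window
        x, y, w, h = window
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        self.hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None,
                                 [180], [0, 180])     # target template histogram
        cv2.normalize(self.hist, self.hist, 0, 255, cv2.NORM_MINMAX)
        self.kf = cv2.KalmanFilter(4, 2)              # state [x, y, vx, vy]
        self.kf.transitionMatrix = np.array(
            [[1, 0, 1, 0], [0, 1, 0, 1],
             [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)                   # F
        self.kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)      # H_k
        self.kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)    # Q_k
        self.kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)  # R_k
        self.kf.statePost = np.array(
            [[x + w / 2], [y + h / 2], [0], [0]], np.float32)

    def track(self, frame_bgr):
        # Step 1: Kalman prediction gives the next search-window center.
        px, py = self.kf.predict()[:2].ravel()
        _, _, w, h = self.window
        self.window = (max(0, int(px - w / 2)), max(0, int(py - h / 2)), w, h)
        # Step 2: Camshift refines the region on the back-projection image.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], self.hist, [0, 180], 1)
        crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        _, self.window = cv2.CamShift(backproj, self.window, crit)
        x, y, w, h = self.window
        # Step 3: Kalman update with the measured center Z_{k+1}.
        self.kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
        return self.window
```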
based on the target tracking algorithm, a target area can be obtained, and a target tracking position is generally obtained according to a central point of the target area, but under the conditions of illumination flicker and the like in an image, the calculated target tracking position is unstable and has a large error, so that a more stable and accurate target tracking position can be obtained by applying a gray-scale weighted geometric center coordinate calculation method to pixel points in the target area, and the obtaining mode is introduced as follows:
setting a certain threshold T0For all gray values in the target area greater than or equal to T0The gray weighted average operation is carried out on the pixel points according to the positions of the pixel points in the Gaussian filtered image to obtain the position of a gray weighted center, and the calculation formula is as follows:
Figure BDA0001132380900000123
wherein xcX coordinate of the geometric center of gray-scale weighting, ycY coordinates of the geometric center of the gray-scale weighting, x coordinates of pixel points in the image, y coordinates of pixel points in the image, I (x, y) gray-scale values at the coordinates (x, y), and R represents the target area.
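A direct NumPy rendering of this formula, assuming I is the Gaussian-filtered grayscale image and mask is a boolean image of the target region R:

```python
# Gray-weighted geometric center of the target region (formula above).
import numpy as np

def gray_weighted_center(I: np.ndarray, mask: np.ndarray, t0: int):
    ys, xs = np.nonzero(mask & (I >= t0))   # pixels of R with gray value >= T0
    w = I[ys, xs].astype(np.float64)        # gray values act as weights
    xc = (xs * w).sum() / w.sum()
    yc = (ys * w).sum() / w.sum()
    return xc, yc
```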
The pose calculation of the space target means that the pose of the target in space corresponding to each frame of image is calculated through a measurement model minimizing the offset error between the target projection position and the target tracking position. The target projection position refers to the projection coordinates of the target's light-emitting lamps in the camera image plane, which can be calculated by perspective projection transformation when the coordinates of the target and of the camera's optical center in the world coordinate system are known. The measurement model minimizing the offset error between the target projection position and the target tracking position works as follows: roughly estimate the three-dimensional coordinate position of the target in the world coordinate system; from this three-dimensional position, calculate the projection coordinates of the target in the image with the perspective projection model to obtain the target projection position; compare the target projection position with the tracked target position and compute the offset error; and correct the position of the target in the world coordinate system with the least-squares method according to the offset error. This is repeated several times until convergence, i.e. until the offset error is smaller than a predetermined threshold. The formulation is as follows:
Suppose p̄ and θ respectively represent the position and the attitude of the target in three-dimensional space, N is the number of target light-emitting lamps, û_i(p̄, θ) represents the projected position of the i-th lamp, and u_i represents its tracked position. The offset-error minimization between the target projection position and the target tracking position can then be expressed as:
min over (p̄, θ):  E(p̄, θ) = Σ_{i=1..N} ‖û_i(p̄, θ) - u_i‖²
Here û_i(p̄, θ), the projection position of the target as a function of the position and attitude of the target in three-dimensional space, is the perspective projection transformation function. This function is nonlinear and can be linearly expanded by a Taylor series as:
û_i(p̄ + Δp̄, θ + Δθ) ≈ û_i(p̄, θ) + J_i·[Δp̄; Δθ]
where J_i represents the Jacobian matrix of û_i with respect to (p̄, θ). Letting e_i = u_i - û_i(p̄, θ), the increment [Δp̄; Δθ] can be solved according to the linear least-squares method, obtaining:
[Δp̄; Δθ] = (Σ_{i=1..N} J_iᵀ·J_i)⁻¹·(Σ_{i=1..N} J_iᵀ·e_i)
Through the iterative updates p̄ ← p̄ + Δp̄ and θ ← θ + Δθ, p̄ and θ are refined until E(p̄, θ) is less than a specified threshold.
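A compact numerical sketch of this iteration is given below, assuming a pinhole camera with known intrinsics K and known lamp coordinates lamps_t in the target frame; the Jacobian is approximated by finite differences for brevity, where an analytical Jacobian would normally be derived.

```python
# Hedged sketch of the iterative pose refinement described above.
import numpy as np

def project(params, lamps_t, K):
    """Perspective projection of the lamps; params = [px, py, pz, ax, ay, az]."""
    px, py, pz, ax, ay, az = params
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    pts = lamps_t @ (Rz @ Ry @ Rx).T + np.array([px, py, pz])
    uv = pts @ K.T
    return (uv[:, :2] / uv[:, 2:3]).ravel()      # stacked (u, v) projections

def refine_pose(params, lamps_t, K, u_tracked, tol=1e-6, iters=20):
    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        base = project(params, lamps_t, K)
        e = u_tracked.ravel() - base             # offset errors e_i
        J = np.empty((e.size, 6))
        for j in range(6):                       # finite-difference Jacobian
            d = np.zeros(6); d[j] = 1e-6
            J[:, j] = (project(params + d, lamps_t, K) - base) / 1e-6
        delta = np.linalg.lstsq(J, e, rcond=None)[0]   # linear least squares
        params = params + delta                  # update position and attitude
        if e @ e < tol:                          # converged below the threshold
            break
    return params
```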
4. Description of the implementation of the operation planning Module
The operation planning module comprises a space mechanical arm tail end motion track planning unit and a mechanical arm joint cooperative motion planning unit, and can plan the motion of each joint of the space mechanical arm according to the position of the space target and the requirement of the capture task, so that the mechanical arm can complete asymptotic capture of the space target.
The space manipulator tail end motion trail planning unit is used for planning a moving path and a track of a target caught by the manipulator tail end according to the target pose obtained by the target tracking and positioning module and the pose of the manipulator tail end calculated by the manipulator configuration; and the space manipulator joint collaborative motion planning unit is responsible for solving the motion angle and the angular velocity of each joint of the manipulator corresponding to each discrete track point through an inverse kinematics algorithm according to the discrete track points of the tail end moving path obtained through planning.
The method for calculating the end pose of the mechanical arm from the mechanical arm configuration is expressed as follows: let the joint angles of the mechanical arm be θ = [θ_0 θ_1 θ_2 … θ_n]. With the link length d_i, the axial distance a_i and the joint axis angle α_i of the arm being constant structural parameters, the pose transformation relation between the i-th joint and the (i+1)-th joint is expressed as M_i = R(θ_i)·T(a_i, d_i)·R(α_i), and the pose of the end of the mechanical arm can then be calculated as:
M = M_1·M_2·…·M_n
where the rotation part R(θ_1, θ_2, …, θ_n) ∈ R^{3×3} and the translation part T(θ_1…θ_n, α_1…α_n, d_1…d_n, a_1…a_n) ∈ R^{3×1} of M respectively represent the rotation and translation transformation relationship between the end of the mechanical arm and the center of mass of the base.
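Interpreted as a standard Denavit-Hartenberg chain, the product above is straightforward to compute; the sketch below assumes the usual DH roles of (θ_i, d_i, a_i, α_i), which is an assumption of the sketch rather than a detail fixed by the patent.

```python
# Hedged sketch: end pose as a product of per-joint DH transforms.
import numpy as np

def dh(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def end_pose(thetas, ds, a_s, alphas):
    """M = M_1 M_2 ... M_n; returns the rotation R and translation T of the end."""
    M = np.eye(4)
    for th, d, a, al in zip(thetas, ds, a_s, alphas):
        M = M @ dh(th, d, a, al)
    return M[:3, :3], M[:3, 3]
```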
The motion trail of the tail end of the space manipulator is related to the cooperative motion relation of all joints of the space manipulator and is closely related to the posture of a base carrier (a space station) of the space manipulator. In practical application, the function of planning the motion trail of the tail end of the space manipulator is generally simplified into the calculation of the shortest path from the tail end of the manipulator to a target, namely a straight path; and when the collaborative motion relation of all joint angles of the mechanical arm is solved, the tail end track of the space mechanical arm is adjusted according to the constraint of minimum attitude disturbance on the base carrier (space station). The process of planning the joint angle collaborative motion relationship of the space manipulator, that is, the process of solving the motion angle and angular velocity of each joint of the space manipulator corresponding to each discrete locus point through an inverse kinematics algorithm, is described as follows:
assuming that the space manipulator system has n degrees of freedom, the differential kinematic equation of each joint is as follows:
[v_e; ω_e] = J_s·[v_0; ω_0] + J_m·θ̇   (4-1)
where v_e, ω_e ∈ R³ are the linear velocity and angular velocity of the end of the space manipulator, v_0, ω_0 ∈ R³ are the linear velocity and angular velocity at the centroid of the space station base, θ ∈ Rⁿ is the vector of joint angles (n is the number of degrees of freedom of the manipulator), J_s is the Jacobian matrix associated with the motion of the space station base, and J_m is the Jacobian matrix associated with the motion of the manipulator.
When the space manipulator is in free-floating mode, the position and attitude of the base are both uncontrolled, and the whole system composed of the space station and the manipulator conserves linear momentum and angular momentum, i.e.:
Σ_i m_i·v_i = 0,  Σ_i (I_i·ω_i + r_i × m_i·v_i) = 0   (4-2)
where ω_i is the angular velocity of body B_i. Reducing the above equation to matrix form gives:
H_s·[v_0; ω_0] + H_m·θ̇ = 0   (4-3)
Solving this for the base velocities (and in particular ω_0) gives:
[v_0; ω_0] = -H_s⁻¹·H_m·θ̇   (4-4)
The kinematic equation of the free-floating space manipulator is obtained by substituting (4-4) into (4-1) as follows:
[v_e; ω_e] = (J_m - J_s·H_s⁻¹·H_m)·θ̇ = J_g·θ̇   (4-5)
where J_g is the generalized Jacobian matrix of the space manipulator, a function of the attitude Ψ_0 of the space station base, the joint angles θ of the manipulator, and the mass m_i and inertia I_i of each joint and of the space station; J_gv and J_gω denote the blocks of the generalized Jacobian associated with linear velocity and angular velocity, respectively.
The spatial attitude angle is generally described by the Euler angles E = (α, β, γ), and the relationship between the Cartesian attitude angular velocity and the Euler angular velocity is:
ω = J_E·Ė   (4-6)
If the matrix J_E is non-singular, the relation between the change of the attitude angle of the space station and the joint angles of the manipulator is obtained from (4-4) and (4-6) as:
Ψ̇_0 = -J_E⁻¹·(H_s⁻¹·H_m)_ω·θ̇
where (·)_ω denotes the rows associated with the angular velocity ω_0. The path planning with the least impact on the attitude of the space station can then be solved through the following optimization problem:
min over θ(t): ∫ ‖Ψ̇_0(t)‖² dt  subject to  Ψ_l ≤ Ψ(t) ≤ Ψ_u   (4-7)
where Ψ_l = [α_l, β_l, γ_l]ᵀ, Ψ_u = [α_u, β_u, γ_u]ᵀ and Ψ = [α, β, γ]ᵀ: the attitude angle of the space station in any state of the manipulator, from the initial state to the target state, must remain within the range in which the space station works normally. Discretizing formula (4-7) according to an equal-interval sampling method and solving the discretized optimization problem yields the motion angle θ_j of each joint of the space manipulator at each moment and the corresponding angular velocity θ̇_j.
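As a sketch of how the discretized problem might be stepped at runtime, the following resolved-rate loop tracks the discrete end-path points through the generalized Jacobian. The damped pseudo-inverse and the helpers jacobian_g(theta) and fk(theta) (returning a 6D end pose) are assumptions of the sketch, standing in for the full constrained optimization described above.

```python
# Hedged resolved-rate sketch: joint angles and angular velocities along the
# discretized end path, via the generalized Jacobian J_g of equation (4-5).
import numpy as np

def plan_joints(theta0, x_path, jacobian_g, fk, dt=0.1, damping=1e-4):
    """theta0: initial joint angles; x_path: list of discrete 6D end poses."""
    thetas, rates = [np.asarray(theta0, dtype=float)], []
    for x_des in x_path:
        theta = thetas[-1]
        J = jacobian_g(theta)                    # 6 x n generalized Jacobian
        err = x_des - fk(theta)                  # end-pose error at this point
        # Damped least squares: rate = J^T (J J^T + lambda I)^-1 (err / dt)
        JJt = J @ J.T + damping * np.eye(6)
        rate = J.T @ np.linalg.solve(JJt, err / dt)
        thetas.append(theta + rate * dt)         # joint angle theta_j
        rates.append(rate)                       # joint angular velocity
    return thetas, rates
```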
5. Description of the implementation of the motion simulation Module
The motion simulation module is virtual simulation software based on a space manipulator and a space target digital three-dimensional model, and comprises the space manipulator three-dimensional model and operation behaviors thereof, and the space target three-dimensional model and a motion relation thereof relative to the tail end of the manipulator or a visual camera.
The shapes of the three-dimensional model of the space manipulator and the three-dimensional model of the space target in the virtual simulation software are completely or approximately consistent with the physical space manipulator and space target, and the sizes of the two three-dimensional models are set at a 1:1 scale to the two physical entities. The spatial relationship between the three-dimensional model of the space manipulator and the three-dimensional model of the space target, and the steps of their motion control, are shown in fig. 4.
The operation behavior of the three-dimensional model of the space manipulator mainly refers to the joint motion of the space manipulator and the cooperative motion of a base carrier (space station) and the space manipulator, and the implementation mode comprises two steps:
firstly, setting the motion of each joint of the space manipulator according to the angular motion angle and the angular velocity sequence of each joint obtained by planning of the space manipulator operation planning module;
and secondly, for the cooperative motion of the base carrier (space station) and the space manipulator, calculating the corresponding change of the position and attitude of the base carrier from the property that the center of gravity of the combination of the base carrier and the space manipulator does not change, and setting the position and attitude of the base carrier for each step of motion of the space manipulator according to this change relation; a minimal sketch of this compensation follows.
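The sketch below illustrates the second step under the assumption of point-mass links; link_coms(theta), returning each link's center of mass in a common frame, is an assumed helper, and only the translation component of the base is covered (the base attitude is handled analogously in the full simulation).

```python
# Hedged sketch of the center-of-mass compensation in the second step.
import numpy as np

def base_shift(theta_before, theta_after, masses, m_base, link_coms):
    """Translation to apply to the base so the combined CoM stays fixed."""
    m_arm = float(np.sum(masses))
    com_b = np.average(link_coms(theta_before), axis=0, weights=masses)
    com_a = np.average(link_coms(theta_after), axis=0, weights=masses)
    # The combination's center of gravity is invariant, so the base moves
    # opposite to the arm CoM displacement, scaled by the mass ratio.
    return -(m_arm / (m_arm + m_base)) * (com_a - com_b)
```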
The motion of the three-dimensional model of the space target relative to the end of the manipulator or the vision camera refers to the change of the target's pose relative to the end of the manipulator or the vision camera, and two motions need to be considered. First, the space station (the space station and the space manipulator move as one combination) flies in space and gradually approaches the space target; assuming the station's flight speed exceeds the target's by a speed difference Δv, the position of the space target relative to the end of the space manipulator and the vision camera changes by Δv·t. Second, the motion of the joints of the space manipulator changes the position and attitude of its end, so the pose of the space target relative to the end of the space manipulator and the vision camera changes accordingly. Superimposing these two pose changes gives the absolute pose change of the space target relative to the end of the manipulator, from which the pose of the three-dimensional model of the space target relative to the end of the manipulator is set in the simulation.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A space manipulator capturing target teleoperation method based on simulation is characterized by comprising the following steps:
step 1, a data source acquisition module acquires image data of a target from data sources of the corresponding types through multiple types of plug-ins, uniformly converts the image data into the same format and sends it to a middleware module;
step 2, the middleware module receives and stores the converted image data and sends the image data to the target tracking and positioning module for processing, and the difference value between the speed of the data source acquisition module for acquiring the image data and the speed of the target tracking and positioning module for processing the image data is kept within a preset range;
step 3, the target tracking and positioning module processes the converted image data to obtain the pose of the target;
step 4, an operation planning module plans the motion of each joint of the space manipulator according to the pose of the target and the acquired pose of the tail end of the manipulator;
step 5, the motion simulation module establishes a three-dimensional model of the space manipulator and a three-dimensional model of the target, and changes the pose of the three-dimensional model of the space manipulator and the pose of the three-dimensional model of the target in real time according to the motion of each joint of the manipulator planned by the operation planning module;
step 6, repeating the steps until the mechanical arm captures the target in the motion simulation module;
the step 2 specifically comprises the following steps:
step 2.1, the plug-in management unit manages and allocates various types of plug-ins, receives the converted image data and stores the image data in the self-adaptive buffer unit;
2.2, the self-adaptive buffer unit enables the difference value between the data acquisition speed of the data source acquisition module and the processing speed of the target tracking and positioning module to be kept within a preset range;
step 2.3, the data output interface unit sends the image data in the self-adaptive buffer area to a target tracking and positioning module;
the step 2.2 specifically comprises the following steps:
Let the maximum frame difference allowed by the buffer be F_BMAX, the upper limit of the maximum frame difference allowed by the buffer be F_MAX, the discarded-frame flag count be F_Tf, and the frame difference between image acquisition and image processing be F_P-C.
Step 2.2.1, if F_P-C > F_BMAX, execute F_Tf = F_Tf + 1 and jump to step 2.2.2;
if F_P-C < F_BMAX/2, jump to step 2.2.3;
if F_BMAX/2 ≤ F_P-C ≤ F_BMAX, jump to step 2.2.4;
Step 2.2.2, if F_Tf ≥ F_BMAX/2, execute F_BMAX = 2*F_BMAX and jump to step 2.2.6; if F_Tf < F_BMAX/2, jump directly to step 2.2.6;
Step 2.2.3, if F_P-C ≤ 1, let F_Tf = 0; otherwise execute F_Tf = (F_Tf + 1)/2 repeatedly until 2*F_Tf ≤ F_P-C; after the calculation, jump to step 2.2.5;
Step 2.2.4, if F_Tf > 0, execute F_Tf = F_Tf - 1 repeatedly until 2*F_Tf ≤ F_P-C;
Step 2.2.5, if F_BMAX > F_MAX, let F_BMAX = F_BMAX/2;
Step 2.2.6, if F_Tf > 0, discard the first 2*F_Tf - 1 frames of image data stored in the buffer and push the 2*F_Tf-th frame of image data to the target tracking and positioning module, i.e. F_P-C = F_P-C - 2*F_Tf.
2. The method for teleoperation of capturing a target by a space manipulator based on simulation as claimed in claim 1, wherein the step 3 specifically comprises the following steps:
step 3.1, receiving the converted image data and preprocessing the image data to obtain each edge of a target in the current image frame;
step 3.2, calculating through an edge outer frame to obtain a target area in the current image frame, wherein the geometric center of the target area is the position of the target, and the size of the target area is the size of the target;
3.3, processing the target area in the current image frame through a target tracking algorithm to obtain a target area in the next image frame;
step 3.4, applying a gray-weighted geometric center coordinate calculation method to pixel points in the target area in the next image frame to obtain a target tracking position;
and 3.5, calculating the projection position of the target and obtaining the pose of the target in each frame of image through a measurement model with minimized offset error between the projection position of the target and the tracking position of the target.
3. The method for teleoperation of the simulation-based space manipulator capturing target according to claim 2, wherein the step 3.1 specifically comprises the following steps:
step 3.1.1, Gaussian filtering is performed on the image, and a threshold T1 is selected such that pixels whose gray level in the image is greater than T1 take the value 255 and pixels whose gray level is less than T1 take the value 0; the Gaussian-filtered image is binarized in this way to obtain a binarized image;
3.1.2, processing the binary image by adopting a corrosion expansion algorithm to eliminate isolated bright spots in the binary image, so that the gray value of only the target area in the binary image is 255, and the gray values of other positions are 0;
and 3.1.3, performing edge extraction on the binary image after corrosion expansion.
4. The space manipulator target capturing teleoperation method based on simulation according to claim 2, wherein the target tracking algorithm in the step 3.3 is a tracking algorithm fusing Camshift and Kalman filtering.
5. The teleoperation method for catching the target by the space manipulator based on the simulation as claimed in claim 2, wherein in the step 3.4 a threshold T0 is set, and for all pixels in the target area whose gray value is greater than or equal to T0, a gray-weighted average operation is performed according to the positions of the pixels in the Gaussian-filtered image to obtain the gray-weighted geometric center coordinates, calculated as:
x_c = Σ x·I(x,y) / Σ I(x,y),  y_c = Σ y·I(x,y) / Σ I(x,y)
with both sums taken over the pixels (x, y) of the target area R whose gray value I(x, y) is greater than or equal to T0, where x_c represents the x coordinate of the gray-weighted geometric center, y_c the y coordinate of the gray-weighted geometric center, x and y the coordinates of a pixel in the image, and I(x, y) the gray value at coordinate (x, y).
6. The method for teleoperation of capturing a target by a space manipulator based on simulation of claim 2, wherein the step 3.5 specifically comprises the following steps:
step 3.5.1, estimating the three-dimensional coordinate position of the target in a world coordinate system;
3.5.2, calculating the projection coordinates of the target in the image by using a perspective projection model according to the three-dimensional coordinate position to obtain a target projection position;
step 3.5.3, comparing the target projection position with the target tracking position, and calculating an offset error;
step 3.5.4, correcting the position of the target in the world coordinate system by using a least square method according to the offset error;
and 3.5.5, repeating the steps for multiple times until convergence, namely the offset error is smaller than a preset threshold value.
7. The method for teleoperation of capturing a target by a space manipulator based on simulation of claim 1, wherein the step 4 specifically comprises the following steps:
4.1, planning a moving path of the tail end of the mechanical arm by a mechanical arm tail end motion trail planning unit according to the pose of the tail end of the mechanical arm calculated by the mechanical arm configuration and the pose of the target obtained by the target tracking and positioning module;
and 4.2, the collaborative motion planning unit of each joint of the mechanical arm solves the motion angle and the angular velocity of each joint of the mechanical arm corresponding to each discrete track point through an inverse kinematics algorithm according to the discrete track point of the movement path of the tail end of the mechanical arm obtained through planning.
8. A space manipulator capture target teleoperation system based on simulation is characterized by comprising:
the data source acquisition module is used for acquiring image data of a target from data sources of corresponding types through various types of plug-ins, uniformly converting the image data into the same format and then sending the image data to the middleware module;
the middleware module is used for receiving and storing the converted image data, sending the image data to the target tracking and positioning module for processing, and keeping the difference value between the speed of the data source acquisition module for acquiring the image data and the speed of the target tracking and positioning module for processing the image data within a preset range;
the target tracking and positioning module is used for processing the converted image data to obtain the pose of the target;
the operation planning module is used for planning the motion of each joint of the space manipulator according to the pose of the target and the acquired pose of the tail end of the manipulator;
the motion simulation module is used for establishing a space manipulator three-dimensional model and a target three-dimensional model and changing the pose of the space manipulator three-dimensional model and the pose of the target three-dimensional model in real time according to the motion of each joint of the manipulator planned by the operation planning module;
the middleware module comprises a plug-in management unit, a self-adaptive buffer unit and a unified data output interface unit;
the plug-in management unit is used for loading, managing and allocating various types of plug-ins, receiving the converted image data and storing the converted image data in the self-adaptive buffer unit;
the adaptive buffer unit is configured to keep a difference between a data acquisition speed of the data source acquisition module and a processing speed of the target tracking and positioning module within a preset range, and specifically includes the following steps:
setting the maximum frame difference allowed by a buffer area as FBMAXThe upper limit of the maximum frame difference allowed by the buffer is FMAXThe number of discarded frame flags is FTfFrame difference between image acquisition and image processing is FP-C
Step 2.2.1, if FP-C>FBMAXExecute FTf=FTf+1, skipping to step 2.2.2;
if FP-C<FBMAXAnd 2, jumping to the step 2.2.3;
if FP-C≥FBMAXAnd/2, and FP-C≤FBMAXThen jump to step 2.2.4;
step 2.2.2, if FTf≥FBMAX/2, performing FBMAX=2*FBMAXSkipping to step 2.2.6; if FTf<FBMAX2, directly jumping to the step 2.2.6;
step 2.2.3, if FP-CIf the ratio is less than or equal to 1, let FTfOtherwise, F is performedTf=(FTf+1)/2, up to 2 × FTf≤FP-CAfter the calculation is executed, the step 2.2.5 is skipped;
step 2.2.4, if FTf>0, then F is executedTf=FTf-1, up to 2 x FTf≤FP-C
Step 2.2.5, if FBMAX>FMAXThen order FBMAX=FBMAX/2;
Step 2.2.6, if FTf>0, discard 2 x F stored in bufferTf-1 frame of image data, then 2 x FTfFrame image data is pushed to a target tracking and positioning module, i.e. FP-C=FP-C-2*FTf
and the unified data output interface unit is used for sending the image data in the adaptive buffer unit to the target tracking and positioning module.
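A compact Python sketch of the adaptive buffering policy in steps 2.2.1-2.2.6 above; the class structure, the deque-backed buffer, and the integer arithmetic are illustrative assumptions about one possible realization, not the patented implementation.

```python
from collections import deque

class AdaptiveBuffer:
    """One possible realization of the adaptive buffer unit (steps 2.2.1-2.2.6)."""

    def __init__(self, f_bmax=8, f_max=64):
        self.frames = deque()   # frames acquired but not yet processed
        self.f_bmax = f_bmax    # F_BMAX: current maximum allowed frame difference
        self.f_max = f_max      # F_MAX: upper limit on F_BMAX
        self.f_tf = 0           # F_Tf: discarded-frame flag count

    def push(self, frame):
        self.frames.append(frame)

    def regulate(self):
        """Run one regulation pass; return the frame to hand to the
        target tracking and positioning module, or None."""
        f_pc = len(self.frames)                    # F_P-C: acquisition/processing gap
        if f_pc > self.f_bmax:                     # step 2.2.1 -> step 2.2.2
            self.f_tf += 1
            if self.f_tf >= self.f_bmax / 2:       # step 2.2.2: widen the bound
                self.f_bmax *= 2
        else:
            if f_pc < self.f_bmax / 2:             # step 2.2.3
                if f_pc <= 1:
                    self.f_tf = 0
                else:
                    while 2 * self.f_tf > f_pc:
                        self.f_tf = (self.f_tf + 1) // 2
            else:                                  # step 2.2.4
                while self.f_tf > 0 and 2 * self.f_tf > f_pc:
                    self.f_tf -= 1
            if self.f_bmax > self.f_max:           # step 2.2.5: shrink the bound
                self.f_bmax //= 2
        if self.f_tf > 0 and len(self.frames) >= 2 * self.f_tf:   # step 2.2.6
            for _ in range(2 * self.f_tf - 1):     # drop stale frames
                self.frames.popleft()
            return self.frames.popleft()           # push the (2*F_Tf)-th frame
        return None
```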
9. The simulation-based space manipulator target capturing teleoperation system of claim 8, wherein the target tracking and positioning module comprises an image preprocessing unit, a target detection unit, a target tracking unit, a position resolving unit and a pose resolving unit;
the image preprocessing unit is used for receiving the converted image data and preprocessing the image data to obtain each edge of a target in the current image frame;
the target detection unit is used for obtaining a target area in the current image frame through edge outer frame calculation, wherein the geometric center of the target area is the position of a target, and the size of the target area is the size of the target;
the target tracking unit is used for processing a target area in the current image frame through a target tracking algorithm to obtain a target area in the next image frame;
the position calculating unit is used for applying a gray-weighted geometric center coordinate calculation method to pixel points in a target area in the next image frame to obtain a target tracking position;
and the pose resolving unit is used for obtaining the pose of the target in each frame of image through a measurement model with minimized offset error between the target projection position and the target tracking position.
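The claim leaves the concrete detection and tracking algorithms open; the following OpenCV 4 sketch is one plausible stand-in, using Canny edges plus a bounding rectangle for the target area and normalized cross-correlation for frame-to-frame tracking. All function choices here are assumptions, not the patented method.

```python
import cv2

def detect_target(frame_gray):
    """Edge preprocessing plus outer-frame computation: returns the
    target area (x, y, w, h); its geometric center is the target position."""
    edges = cv2.Canny(frame_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def track_target(prev_frame, next_frame, region):
    """Propagate the target area to the next image frame by normalized
    cross-correlation (a simple stand-in tracking algorithm)."""
    x, y, w, h = region
    template = prev_frame[y:y + h, x:x + w]
    scores = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)
    return (best[0], best[1], w, h)
```

The gray-weighted geometric center of the returned region can then be computed as in the earlier centroid sketch, and the pose resolving step applies the projection-error minimization illustrated under claim 6.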
10. The simulation-based space manipulator target capturing teleoperation system of claim 8, wherein the operation planning module comprises a mechanical arm tail end motion trajectory planning unit and a mechanical arm joint collaborative motion planning unit;
the mechanical arm tail end motion trajectory planning unit is used for planning a moving path of the mechanical arm tail end according to the mechanical arm tail end pose and the target pose obtained by the target tracking and positioning module;
and the mechanical arm joint collaborative motion planning unit is used for solving, through an inverse kinematics algorithm, the motion angle and angular velocity of each mechanical arm joint corresponding to each discrete trajectory point of the planned moving path of the mechanical arm tail end.
CN201610903204.5A 2016-10-17 2016-10-17 Space manipulator target capturing teleoperation method and system based on simulation Active CN106651949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610903204.5A CN106651949B (en) 2016-10-17 2016-10-17 Space manipulator target capturing teleoperation method and system based on simulation

Publications (2)

Publication Number Publication Date
CN106651949A CN106651949A (en) 2017-05-10
CN106651949B true CN106651949B (en) 2020-05-15

Family

ID=58855840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610903204.5A Active CN106651949B (en) 2016-10-17 2016-10-17 Space manipulator target capturing teleoperation method and system based on simulation

Country Status (1)

Country Link
CN (1) CN106651949B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220601B (en) * 2017-05-18 2020-06-26 西北工业大学 Target capture point prediction method based on online confidence degree discrimination
CN107097231A * 2017-07-06 2017-08-29 哈尔滨工业大学深圳研究生院 A concentric tube robot precise motion control method based on visual servoing
CN107696033B (en) * 2017-09-18 2020-04-10 北京控制工程研究所 Space manipulator trajectory rolling planning method based on visual measurement
CN109583260A (en) * 2017-09-28 2019-04-05 北京猎户星空科技有限公司 A kind of collecting method, apparatus and system
CN107571260B (en) * 2017-10-25 2021-02-26 南京阿凡达机器人科技有限公司 Method and device for controlling robot to grab object
CN107932502A (en) * 2017-11-07 2018-04-20 陕西科技大学 A kind of SCARA method for planning track of robot based on binocular stereo vision
CN108828935B (en) * 2018-05-07 2022-05-13 中国科学院力学研究所 Intelligent auxiliary operation method and system for remote operation
CN108656109B (en) * 2018-05-07 2021-03-02 中国科学院力学研究所 Remote operation training method and system
CN108748149B (en) * 2018-06-04 2021-05-28 上海理工大学 Non-calibration mechanical arm grabbing method based on deep learning in complex environment
CN108748162B (en) * 2018-07-09 2021-05-25 五邑大学 Mechanical arm control method based on least square method for robot experiment teaching
CN109093376B (en) * 2018-08-17 2020-04-03 清华大学 Multi-axis hole automatic alignment method based on laser tracker
CN109531566B (en) * 2018-11-16 2022-08-19 国网江苏省电力有限公司盐城供电分公司 Robot live-line work control method based on virtual reality system
CN110216698A (en) * 2019-03-11 2019-09-10 浙江工业大学 A kind of mechanical arm remote control system based on ROS
JP7326911B2 (en) * 2019-06-20 2023-08-16 オムロン株式会社 Control system and control method
CN110806197B (en) * 2019-09-28 2022-04-19 上海翊视皓瞳信息科技有限公司 Gesture detecting system based on intelligent vision equipment
CN110815215B (en) * 2019-10-24 2021-07-30 上海航天控制技术研究所 Multi-mode fused rotating target approaching and stopping capture ground test system and method
CN111109417A * 2019-12-23 2020-05-08 重庆大学 Path self-planning sugar-painting robot based on image information
CN111823225A (en) * 2020-06-04 2020-10-27 江汉大学 Visual servo three-dimensional simulation method and device
CN111890365B (en) * 2020-07-31 2022-07-12 平安科技(深圳)有限公司 Target tracking method and device, computer equipment and storage medium
CN114078158A (en) * 2020-08-14 2022-02-22 边辕视觉科技(上海)有限公司 Method for automatically acquiring characteristic point parameters of target object
CN112148000B * 2020-08-28 2022-10-21 上海宇航系统工程研究所 In-cabin simulation platform for simulating operation scene of space maintenance robot and implementation method
CN112621789A (en) * 2020-12-08 2021-04-09 广东联航智能科技有限公司 Control system of robot for double-arm man-machine cooperative operation
CN112847334B (en) * 2020-12-16 2022-09-23 北京无线电测量研究所 Mechanical arm target tracking method based on visual servo
CN112763253B (en) * 2020-12-28 2024-03-29 深圳市人工智能与机器人研究院 Sampling control method and device for mechanical arm and sampling system
CN113103230A (en) * 2021-03-30 2021-07-13 山东大学 Human-computer interaction system and method based on remote operation of treatment robot
CN113352327B (en) * 2021-06-28 2022-09-23 深圳亿嘉和科技研发有限公司 Five-degree-of-freedom mechanical arm joint variable determination method
CN113479442B (en) * 2021-07-16 2022-10-21 上海交通大学烟台信息技术研究院 Device and method for realizing intelligent labeling of unstructured objects on assembly line
CN114102610A (en) * 2021-12-30 2022-03-01 浙江博采传媒有限公司 Mechanical arm simulation control method and device and storage medium
CN114770513B (en) * 2022-05-09 2024-07-12 重庆大学 Industrial four-axis robot moving target tracking and grabbing method
CN116214549B (en) * 2023-01-19 2024-03-01 中国科学院微小卫星创新研究院 Teleoperation system and teleoperation method for space robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101726296A (en) * 2009-12-22 2010-06-09 哈尔滨工业大学 Vision measurement, path planning and GNC integrated simulation system for space robot
CN101733746A (en) * 2009-12-22 2010-06-16 哈尔滨工业大学 Autonomously identifying and capturing method of non-cooperative target of space robot
CN103926845A (en) * 2014-04-17 2014-07-16 哈尔滨工业大学 Ground-based simulation system for space robot visual servo to capture moving target and simulation method
CN105635648A (en) * 2014-10-28 2016-06-01 江苏绿扬电子仪器集团有限公司 Video real-time edge detection system
CN106003104A (en) * 2015-07-03 2016-10-12 中国运载火箭技术研究院 Mechanical arm planning method suitable for visual information guiding under multi-constrained condition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Improved Camshift/Kalman Filter Combined Algorithm for Multi-Target Tracking" (《多目标跟踪的改进Camshift/卡尔曼滤波组合算法》); Sun Kai et al.; Information and Control (《信息与控制》); Feb. 28, 2009; pp. 9-14 *

Also Published As

Publication number Publication date
CN106651949A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106651949B (en) Space manipulator target capturing teleoperation method and system based on simulation
JP7260269B2 (en) Positioning system for aeronautical non-destructive inspection
US10732647B2 (en) Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft micro-aerial vehicle (MAV)
Skaar et al. Camera-space manipulation
Sampedro et al. Image-based visual servoing controller for multirotor aerial robots using deep reinforcement learning
Lee et al. Visual-inertial telepresence for aerial manipulation
CN109102525B (en) Mobile robot following control method based on self-adaptive posture estimation
Gans et al. A hardware in the loop simulation platform for vision-based control of unmanned air vehicles
CN111325768B (en) Free floating target capture method based on 3D vision and simulation learning
CN109782810B (en) Video satellite moving target tracking imaging method and device based on image guidance
CN112102403B (en) High-precision positioning method and system for autonomous inspection unmanned aerial vehicle in power transmission tower scene
Zhao et al. Vision-based tracking control of quadrotor with backstepping sliding mode control
Su et al. Catching a flying ball with a vision-based quadrotor
Baldini et al. Learning pose estimation for UAV autonomous navigation and landing using visual-inertial sensor data
CN114488848A (en) Unmanned aerial vehicle autonomous flight system and simulation experiment platform for indoor building space
Garcia et al. Real-time navigation for drogue-type autonomous aerial refueling using vision-based deep learning detection
Miranda-Moya et al. Ibvs based on adaptive sliding mode control for a quadrotor target tracking under perturbations
Hao et al. Intelligent spacecraft visual GNC architecture with the state-of-the-art AI components for on-orbit manipulation
Mian et al. Autonomous spacecraft inspection with free-flying drones
CN113421470A (en) Teleoperation simulation training system and teleoperation simulation training method for space manipulator
CN109062220B (en) Method and device for controlling terminal movement
Kaiser et al. Localization and control of an aerial vehicle through chained, vision-based pose reconstruction
CN116149371A (en) Multi-moving body three-dimensional tracking and controlling platform based on visual sensor network
Domnik et al. Dense 3d-reconstruction from monocular image sequences for computationally constrained uas
CN115618749A (en) Error compensation method for real-time positioning of large unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant