CN114715447A - Cell spacecraft module docking device and visual alignment method - Google Patents
- Publication number: CN114715447A
- Application number: CN202210408384.5A
- Authority
- CN
- China
- Prior art keywords: image, platform, carrying device, active end, passive
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B64G7/00 — Simulating cosmonautic conditions, e.g. for conditioning crews
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T5/70 — Image enhancement or restoration; denoising, smoothing
- G06T5/73 — Image enhancement or restoration; deblurring, sharpening
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30204, G06T2207/30208 — Marker; marker matrix
Abstract
The invention provides a cell spacecraft module docking device and a visual alignment method. The device comprises an active-end motion platform and a passive-end platform. The active-end motion platform is provided with at least an active-end interface carrying device, an image acquisition device and a processing device; the passive-end platform is provided with at least a passive-end interface carrying device. Image tags are arranged on the passive-end interface carrying device and/or the passive-end platform, and the image acquisition device is configured to acquire an image of the image tag. The processing device is configured to calculate the relative position and attitude between the active-end and passive-end interface carrying devices from the image of the image tag, and to adjust the moving direction and speed of the active-end motion platform relative to the passive-end platform accordingly, so that the two interface carrying devices approach each other to a position where the interfaces can be locked.
Description
Technical Field
The invention relates to the field of aerospace technology, and in particular to a cell spacecraft module docking device and a visual alignment method.
Background
The modular and standardized concept of space infrastructure has been studied for decades and is now becoming a reality. One such attempt, the cellular spacecraft, constructs a space system from cellular modules with standardized shapes and interfaces. Compared with a traditional spacecraft, such a system offers higher flexibility. Furthermore, a modular system helps reduce space debris and increases the life and reusability of the system, its subsystems and its components.
Module docking is a key step in constructing a cellular spacecraft: on-orbit assembly and component replacement are completed through docking guidance. There is therefore a need for a cell-spacecraft module docking device and a vision alignment method.
Disclosure of Invention
In order to solve the problems, the invention provides a cell spacecraft module docking device and a vision alignment method.
According to one aspect of the invention, a cell spacecraft module docking device is provided. The device includes: an active end motion platform and a passive end platform; the active end motion platform is at least provided with an active end interface carrying device, an image acquisition device and a processing device; the passive end platform is at least provided with a passive end interface carrying device; image labels are arranged on the passive end interface carrying device and/or the passive end platform; the image acquisition device is configured to acquire an image of the image tag; the processing device is configured to calculate the relative position and posture between the active end interface carrying device and the passive end interface carrying device according to the image of the image label; the processing device is further configured to adjust the moving direction and speed of the active end moving platform relative to the passive end platform according to the relative position and the posture, so that the active end interface carrying device and the passive end interface carrying device are close to each other to a position for interface locking.
According to an example embodiment of the present invention, the active-end motion platform comprises a smart cart; the image acquisition device comprises a camera; the passive-end platform comprises a bull's eye wheel base and a carrying platform resting on the bull's eye wheel base; the carrying platform comprises an upper carrying platform and a lower carrying platform connected and supported by copper columns; the passive-end interface carrying device is fixedly connected to the upper carrying platform; and the front end of each of the active-end and passive-end interface carrying devices is fixedly connected to a standardized interface.
According to an exemplary embodiment of the present invention, the bracket bottom plate of the interface carrying device is perforated to avoid interference when installing a standardized interface, and the bracket bottom plate of the passive-end interface carrying device is designed in a bucket shape for installing a standardized interface.
According to another aspect of the invention, a cell spacecraft module vision alignment method using the above docking device is provided. The method comprises the following steps. Calibration: calibrate the image acquisition device. Image preprocessing: shoot the image tag with the calibrated image acquisition device, then filter the obtained image to eliminate noise. Establishing coordinate-system conversion relations: construct the coordinate conversion between the image tag coordinate system and the image acquisition device coordinate system, and between the image acquisition device coordinate system and the odometer coordinate system. Target identification: search the obtained image for possible image tags; for a found image tag, read the information stored in it and resolve the position and attitude of its center point. Position and attitude conversion: convert the position and attitude of the image tag into the odometer coordinate system, and calculate the position and attitude of the active-end interface carrying device in that system. Active-end motion: control the motion of the active-end motion platform towards the image tag according to the relative position and attitude of the active-end interface carrying device and the image tag in the odometer coordinate system. Interface locking and checking: once the relative position and attitude between the active-end and passive-end interface carrying devices meet the preset requirements, activate the standardized interface to complete locking.
According to an exemplary embodiment of the present invention, the calibration step comprises: shooting a 5 × 7 standard checkerboard image from multiple angles with the image acquisition device, then calibrating on the checkerboard images with a python-opencv tool to obtain the internal reference matrix and eliminate camera distortion. The active-end motion step comprises path planning: according to the relative distance between the active-end interface carrying device and the image tag, from far to near, different strategies are applied in a fast-stepping stage, a precise-alignment stage and an image-enhancement stage. In the fast-stepping stage, an improved A* algorithm and an improved TEB algorithm perform global and local path planning respectively; in the precise-alignment stage, a position-and-attitude synchronized coordination alignment strategy measures the target pose in real time and corrects the current trajectory; in the image-enhancement stage, a deep-learning-based image deblurring algorithm processes the image of the image tag to resolve out-of-focus blur, and the active-end motion platform then moves according to the deblurred tag image.
According to the embodiments of the invention, a visual alignment method for cell spacecraft module docking in the two-dimensional case is provided, together with a module docking device simulating the space on-orbit microgravity environment in two dimensions, which supplies an environment for feasibility verification of the visual alignment method. The method has the advantages of high precision and high efficiency.
Drawings
FIG. 1 is a schematic structural diagram of an active end interface carrying device;
FIG. 2 is a schematic structural diagram of a passive-end interface carrying device;
FIG. 3 is a schematic structural view of a bull's eye wheel experimental platform;
FIG. 4 is a coordinate transformation relationship between the AR Tag coordinate system and the active-side camera coordinate system;
FIG. 5 is a diagram of the recognition effect of AR Tag;
FIG. 6 is a target pose output;
FIG. 7 is a diagram of the active end entity;
FIG. 8 is a diagram of a passive end entity;
FIG. 9 is a pictorial view of the overall structure of the present invention;
FIG. 10 is a flow chart of an embodiment of the present invention.
FIG. 11 is a schematic illustration of one of the approximate rectangles of the target region.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a cell spacecraft module docking device comprising a passive-end platform (realized by a bull's eye wheel experiment platform), an active-end motion platform, an active-end interface carrying device and a passive-end interface carrying device.
Fig. 1 is a schematic structural diagram of an active terminal interface carrying device.
As shown in fig. 1, the active-end interface carrying device has an L-shaped structure as a whole, with a dockable interface provided on the vertical portion of the L. A pentagonal hole is formed in the bottom plate of the L-shaped structure so as not to interfere with normal operation of the active-end radar. The active-end and passive-end interface carrying devices are connected to the standardized interface by parallel threaded holes in a 120° circumferential array, ensuring a stable and reliable connection.
Fig. 2 is a schematic structural view of a passive-end interface mounting device. As shown in fig. 2, the passive-end interface mounting device also has an L-shaped configuration. And an interface which can be butted is arranged on the vertical part of the L-shaped structure. The bottom plate of the passive end interface carrying device is designed into a bucket shape and used for bearing a balancing weight, adjusting the gravity center of the passive end and keeping the whole stability.
Whether the alignment between the two interfaces on the active-end and passive-end interface carrying devices meets the requirement, i.e. whether the alignment error is within the required range, is judged by whether the standardized interfaces couple and lock.
FIG. 3 is a schematic structural diagram of a bull's eye wheel experimental platform.
As shown in FIG. 3, the bull's eye wheel experiment platform comprises a bull's eye wheel base and a double-layer carrying platform resting on it, whose upper and lower layers are connected and supported by copper columns. The passive-end interface carrying device is fixedly connected to the upper carrying platform. The active-end interface carrying device is fixedly connected to an active-end power source; the power source here is a smart cart used to simulate a two-dimensional environment, i.e. the active-end motion platform may include the smart cart. The front end of each interface carrying device is fixedly connected to the standardized interface.
The bull's eye wheel experiment platform carries the passive-end interface carrying device and the passive-end interface. The bull's eye wheel base simulates a microgravity environment, realistically reproducing the motion state of an on-orbit space module after an impact. The platform height is adjusted by changing the copper columns between the two layers.
In order to realize the butt joint process, the active end motion platform is at least provided with an active end interface carrying device, an image acquisition device and a processing device. And the passive end interface carrying device and/or the passive end platform are/is provided with image labels. The image capture device is configured to capture an image of the image tag. The processing device is configured to calculate the relative position and posture between the active end interface carrying device and the passive end interface carrying device according to the image of the image label. The processing device is further configured to adjust the moving direction and speed of the active end moving platform relative to the passive end platform according to the relative position and the posture, so that the active end interface carrying device and the passive end interface carrying device are close to each other to a position for interface locking.
Fig. 4 is a coordinate transformation relationship between the AR Tag coordinate system and the active-side camera coordinate system.
As shown in fig. 4, the coordinates (which may represent the position and the posture) of the passive-end interface-mounted device in the image tag coordinate system may be converted into the image capture device coordinate system (which has a predetermined correspondence with the odometer coordinate system), or may be converted into the world coordinate system (i.e., a coordinate system common to the active end and the passive end).
Fig. 5 is a graph showing the effect of AR Tag recognition.
As shown in fig. 5, an artificial tag (AR Tag) may be used as the image tag described above; it encodes information in the arrangement of its block pattern, similar to a two-dimensional (QR) code.
Fig. 6 is target pose output.
As shown in FIG. 6, the pose is the pose of the AR Tag in the camera coordinate system, where position is a three-dimensional coordinate and orientation is the attitude of the AR Tag relative to the camera coordinate system expressed as a quaternion. The quaternion may be used to represent and solve for the attitude angle, with any available solution known to those skilled in the art.
FIG. 7 is a diagram of the active end entity.
As shown in fig. 7, the active end motion platform is an intelligent car, and is provided with an active end interface carrying device and an image acquisition device (such as a camera/a video camera).
FIG. 8 is a diagram of a passive end entity.
As shown in fig. 8, the passive end interface carrying device and the bull's eye wheel experiment platform are arranged in this order from top to bottom. In addition, an AR TAG is also provided.
Fig. 9 is a diagram of the overall structure of the present invention.
As shown in fig. 9, the active end motion platform and the passive end platform have moved to a close position.
FIG. 10 is a flow chart of an embodiment of the present invention.
The various modules shown in the figures may be integrated into the active end motion platform or provided separately. For example, the processor described above may implement data processing, motion control, alignment determination, activation of a lock interface, and the like.
The pattern of the artificial beacon may be acquired by the object detection module and then processed by the data processing module. The data processing module can also acquire the information of the position sensor to judge the relative position and posture information so as to instruct the motion control module to control the chassis to move. The information can be displayed on the information display platform.
When the alignment judging module judges that the alignment is not aligned, the target detection module continues to detect the target. Upon determining alignment, the locking interface may be activated to complete the alignment.
The following process can be accomplished by the present invention as shown in the flow chart of fig. 10 in conjunction with the apparatus of fig. 1-9.
The active-end high-definition camera is calibrated in advance: only a 5 × 7 standard checkerboard image needs to be shot with the camera, and the program calibrates on it to obtain the camera's internal reference matrix and eliminate camera distortion. The program implementing the functions described above is then started by instruction. After startup, filtering is applied to the acquired picture to remove possible noise. The active-end power source is then automatically controlled to rotate and search for the AR Tag beacon. Edge detection on the beacon yields the relative pose between the beacon and the active-end module, following the principle described above. A path is then planned from this pose to complete the alignment. Finally, it is judged whether the alignment meets the required error, and the standardized interface is activated to complete locking.
Specifically, the visual alignment method for cell spacecraft module docking comprises the following steps:
(1) Camera calibration: the active-end high-definition camera shoots a 5 × 7 standard checkerboard image from multiple angles, and python-opencv calibration on the checkerboard images yields the internal reference matrix M1 and eliminates camera distortion. In standard form,

M1 = [ f/dx   0      u0 ]
     [ 0      f/dy   v0 ]
     [ 0      0      1  ]

where f is the focal length of the camera, dx and dy are the pixel sizes along the x and y axes, and u0 and v0 are two parameters introduced by mounting deviation and lens distortion.
(2) Image preprocessing: the AR Tag is photographed and the acquired image is then filtered (e.g. Kalman filtering) to remove noise that may be present.
(3) Establishing a tf tree: establish the coordinate-system relations, i.e. the coordinate conversion between the AR Tag coordinate system (the image tag coordinate system) and the active-end camera coordinate system (the image acquisition device coordinate system), and between the camera coordinate system and the odometer coordinate system.
The coordinate conversion relationship between the AR Tag coordinate system and the active-end camera coordinate system is as follows (FIG. 4 shows the correspondence):

Zc · [u, v, 1]^T = M1 · M2 · [Xw, Yw, Zw, 1]^T, with M2 = [ R  T ; 0  1 ]

where Zc is the Z-axis coordinate, in the camera coordinate system, of the origin of the AR Tag coordinate system of the target point; u and v are the coordinates of the target point in the image coordinate system, the two-dimensional coordinate system in the image plane acquired by the camera at the current moment, whose origin is the intersection of the optical axis and the image plane, generally the image center (the center of the plane photographed by the camera, not the AR Tag center; this is an intermediate coordinate system introduced to compute the transformation between the AR Tag and camera coordinate systems); f is the focal length of the camera; dx and dy are the pixel sizes along the x and y axes; u0 and v0 are two parameters introduced by mounting deviation and lens distortion; R is a predetermined orthogonal matrix; T is a predetermined translation vector; 0 is the row vector [0 0 0]; (Xw, Yw, Zw) are the coordinates of the target point in the world coordinate system; M1 is the camera internal reference matrix and M2 the camera external reference matrix.

Further, Xc = Zc × u and Yc = Zc × v, where Xc and Yc are the X-axis and Y-axis coordinates, in the camera coordinate system, of the origin of the AR Tag coordinate system of the target point.
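A minimal numerical check of the projection relation above, with assumed illustrative values for f, dx, dy, u0, v0, R and T:

```python
import numpy as np

f, dx, dy, u0, v0 = 0.004, 2e-6, 2e-6, 320.0, 240.0   # assumed camera values
M1 = np.array([[f / dx, 0,      u0],
               [0,      f / dy, v0],
               [0,      0,      1.0]])   # internal reference matrix

R = np.eye(3)                            # tag frame aligned with camera frame
T = np.array([[0.1], [0.05], [0.8]])     # tag origin 0.8 m in front of camera
M2 = np.hstack([R, T])                   # 3x4 external reference matrix

Pw = np.array([0.0, 0.0, 0.0, 1.0])      # target point in tag/world coordinates
uvw = M1 @ M2 @ Pw                       # equals Zc * (u, v, 1)
Zc = uvw[2]
u, v = uvw[0] / Zc, uvw[1] / Zc
print(u, v, Zc)                          # 570.0 365.0 0.8
```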
Because the camera is fixedly connected to the active-end power source, the two are treated as equivalent in the coordinate conversion; and because the project is designed for the two-dimensional case, the relationship between the camera coordinate system and the odometer coordinate system reduces to a planar rigid transformation, where (XD, YD, ZD)^T are the coordinates of the target point in the odometer coordinate system.
(4) Target identification: search for possible AR Tag beacons, locating beacon corner points with sub-pixel precision by edge detection. Meanwhile, the information stored in the beacon is read to distinguish the target beacon from other beacons. The position and attitude of the target beacon's center point are then resolved. The effect is shown in FIGS. 5 and 8.
First, the calculation of the distance between the active and passive ends.

From the coordinate transformation, in a plane perpendicular to the optical axis, the pixel coordinates of a point are related to its world coordinates through the distance d between the active and passive ends, which is the unknown quantity to be solved.

By a differential method, the target region can be divided along the Xw direction into N equal parts, each of which is approximately a rectangle.

FIG. 11 is a schematic illustration of one of the approximate rectangles of the target region.

As shown in FIG. 11, let the four vertices of the i-th rectangle be labeled P1, P2, P3, P4, with X_P and Y_P denoting the Xw- and Yw-direction coordinates of a vertex P. Summing the areas of the N rectangles gives the area S of the entire target region.

Projecting each vertex into the image coordinate system (u- and v-axis coordinates) with the formula above and collecting the two expressions yields

S1 = (ax · ay / d²) · S

where S1 is the pixel area of the target region in the image coordinate system, and ax = f/dx and ay = f/dy are given by the internal reference matrix obtained from camera calibration; hence

d = sqrt(ax · ay · S / S1).

The coordinates of the center of the two-dimensional code then follow from the corner coordinates, where (xi, yi, zi) (i = 1, 2, 3, 4) are the coordinates of beacon (AR Tag) corner point i (the points at the four corners) in the camera coordinate system.

The expression for d gives the distance from an AR Tag corner point to the camera along the normal direction, i.e. the z-axis coordinate, and can therefore also be written as zi; the zi of each corner point described below are obtained from this formula.
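The area-based distance recovery d = sqrt(ax · ay · S / S1) can be sketched as below. The intrinsic values are assumed, the tag size follows the 3.3 cm × 3.3 cm AR Tag used in the experiments, and the pixel area S1 is synthesized from an assumed true distance rather than measured from an image:

```python
import numpy as np

ax = ay = 2000.0          # f/dx, f/dy from the internal reference matrix (assumed)
S = 0.033 ** 2            # physical area of the 3.3 cm x 3.3 cm AR Tag, in m^2

def distance_from_area(S1):
    """Recover the camera-to-tag normal distance d from the pixel area S1."""
    return np.sqrt(ax * ay * S / S1)

# Sanity check: a fronto-parallel tag at 0.5 m occupies ax*ay*S/d^2 pixels.
d_true = 0.5
S1 = ax * ay * S / d_true ** 2
print(distance_from_area(S1))   # 0.5
```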
Second, the calculation of the relative attitude of the active and passive ends.

The AR Tag in the form of a two-dimensional code is a plane in space, so its attitude can be determined from only two points: the yaw angle follows from the coordinates of two corner points (their zi from the distance formula above together with their lateral coordinates).

Finally, the Euler angle is converted into a quaternion, which avoids the singularity of Euler angles and allows the machine to compute the attitude quickly. The pose output is shown in FIG. 6.
(5) Pose conversion: convert the acquired target pose into the odometer coordinate system according to the tf tree, and calculate the pose at which the active end is aligned with the passive end. In the two-dimensional case, an offset of π rad (180°) is added to the yaw angle of the passive end to obtain the aligned position and attitude of the active end.
(6) Driving the cart: the resolved pose is passed to the chassis node (for chassis motion), and the cart is driven to the target point using PD control (e.g., P = 200, D = 150).
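A minimal sketch of the PD drive step, using the P = 200, D = 150 gains from the text; the one-dimensional unit-mass plant model and the time step are assumptions for illustration only, not the chassis node's actual dynamics:

```python
def drive_to_target(x0, target, kp=200.0, kd=150.0, dt=0.01, steps=2000):
    """Drive a 1-D unit-mass cart to `target` with PD control on position error."""
    x, v = x0, 0.0
    prev_err = target - x
    for _ in range(steps):
        err = target - x
        u = kp * err + kd * (err - prev_err) / dt   # PD control effort
        prev_err = err
        v += u * dt                                  # unit-mass acceleration
        x += v * dt
    return x

# Drive from the start of the 0.8 m stroke to the passive end.
print(drive_to_target(0.0, 0.8))   # converges to ~0.8
```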
(7) Path planning: paths are planned with different strategies in three stages according to transition points, which are adjusted according to the program parameter settings. Because the project targets on-orbit module docking for a cellular spacecraft, the stroke is generally within 0.8 m. Given the selected 3.3 × 3.3 cm AR Tag and the performance of the high-definition camera used, the two transition points are set at 0.2 m from the passive end (precise alignment to image enhancement) and 0.5 m from the passive end (fast stepping to precise alignment). That is, the active-end motion platform first approaches the passive-end platform at higher speed, then adjusts position and attitude precisely and simultaneously, and finally approaches at close range after image enhancement.
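The stage switching by transition point can be expressed as a small selector using the 0.5 m and 0.2 m thresholds from the text (the function name is illustrative):

```python
def planning_stage(distance_m):
    """Select the path-planning strategy from the active-to-passive distance."""
    if distance_m > 0.5:
        return "fast-stepping"       # improved A* (global) + improved TEB (local)
    if distance_m > 0.2:
        return "precise-alignment"   # synchronized pose coordination
    return "image-enhancement"       # deblurred final approach

print(planning_stage(0.7), planning_stage(0.3), planning_stage(0.1))
```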
(8) Rapid stepping stage: global and local path planning are performed with an improved A* algorithm and an improved TEB algorithm respectively, rapidly closing the distance to the passive end while maintaining a certain accuracy.
(9) Precise alignment stage: a pose-synchronized coordinated alignment strategy is adopted to measure the target pose in real time and correct the current trajectory.
At this stage, the attitude-deviation control function is Δroll = max(0.1, (2/3)·x − 1/30); that is, the control condition sets Δroll to the larger of 0.1 and (2/3)·x − 1/30.
Likewise, the y-axis deviation control function is Δy = max(0.005, 0.1·x − 0.01); that is, the control condition sets Δy to the larger of 0.005 and 0.1·x − 0.01.
Here Δroll is the relative attitude of the active and passive ends, Δy is the relative deviation of the centers of the active and passive ends along the y-axis, and x is the straight-line distance between the centers of the active and passive ends.
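The two control functions above can be transcribed directly into code:

```python
def roll_tolerance(x):
    """Attitude-deviation bound as a function of the centre-to-centre
    distance x (m): max(0.1, (2/3)x - 1/30)."""
    return max(0.1, (2.0 / 3.0) * x - 1.0 / 30.0)

def y_tolerance(x):
    """y-axis deviation bound (m) as a function of distance x (m):
    max(0.005, 0.1x - 0.01)."""
    return max(0.005, 0.1 * x - 0.01)
```

Both bounds tighten linearly as the platforms close in, then clamp at fixed floors (0.1 and 0.005) so the controller is never asked for unachievable precision at range.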
(10) Image enhancement stage: in the third stage, a deep-learning-based image-deblurring algorithm is adopted to overcome the out-of-focus blur caused by the short distance between the camera and the target object, so that the images acquired by the camera in the final stage are sharper and a higher alignment precision is reached. For the camera-alignment scenario, a generative adversarial network (GAN) is selected for the deblurring processing. A GAN largely preserves the detail of the original picture while processing the image and restores the true image well, which makes it very suitable for the alignment scenario.
The network contains two strided convolution blocks with stride 1/2, nine residual units, and two transposed-convolution units. Each ResBlock consists of a convolution layer, an instance-normalization layer, and a ReLU activation. To achieve a good training result, the network is trained on the GoPro dataset, which contains 2103 blurred/sharp 720p image pairs taken in different scenes. Tests of the trained network show a marked improvement on out-of-focus blurred target images. In contrast to the conventional approach of computing on a GPU under Windows, the algorithm is configured to run on the CPU under Linux and is finally deployed on the Raspberry Pi (the processor in this invention); usable performance is preserved, and the image-enhancement processing completes in about 3 s.
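Although the full model is a GAN generator, the ResBlock structure described above (convolution → instance normalization → ReLU, with a skip connection) can be sketched framework-free in NumPy; the convolution layer is stubbed as any shape-preserving callable, so this is an illustration of the block's data flow, not the trained network:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization over the spatial axes of a (C, H, W)
    feature map: each channel is normalized independently."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def res_block(x, conv):
    """Residual unit sketch: conv -> instance norm -> ReLU, then add
    the skip connection. `conv` stands in for the convolution layer."""
    out = np.maximum(instance_norm(conv(x)), 0.0)  # ReLU
    return x + out
```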
Using different strategies in different stages guarantees both the speed and the precision requirements of alignment, and at the same time specifically addresses the defocus problem that arises at close range during visual alignment. It should be understood that the invention may also employ image-enhancement means other than deep-learning algorithms.
(11) Interface locking and checking: once the attitude error between the active and passive ends meets the requirements, the standardized interface is actuated to complete the locking of the active and passive ends.
The invention therefore has the following beneficial effects: the relationship between the beacon coordinate system and the camera coordinate system, and between the camera coordinate system and the odometer coordinate system, can be established automatically by means of the vision system; a three-stage path-planning strategy that improves both efficiency and precision can be realized; the alignment of the active and passive ends can be judged automatically and adjusted automatically within a controllable range; and cell-spacecraft module docking under the two-dimensional condition of the space on-orbit background can be realized.
Claims (5)
1. A cell spacecraft module docking apparatus comprising: an active end motion platform and a passive end platform;
the active end motion platform is at least provided with an active end interface carrying device, an image acquisition device and a processing device;
the passive end platform is at least provided with a passive end interface carrying device; an image label is arranged on the passive end interface carrying device and/or the passive end platform;
wherein the image capture device is configured to capture an image of the image tag;
wherein the processing device is configured to calculate the relative position and posture between the active end interface carrying device and the passive end interface carrying device according to the image of the image label;
the processing device is further configured to adjust the moving direction and speed of the active end moving platform relative to the passive end platform according to the relative position and the posture, so that the active end interface carrying device and the passive end interface carrying device approach to each other to a position where interface locking is performed.
2. A cellular spacecraft module docking device according to claim 1,
the active end motion platform comprises an intelligent trolley; the image acquisition device comprises a camera;
the passive end platform comprises a bull's-eye wheel base and a carrying platform independently arranged on the bull's-eye wheel base;
the carrying platform comprises an upper layer carrying platform and a lower layer carrying platform which are connected and supported by copper columns;
the passive end interface carrying device is fixedly connected with the upper-layer carrying platform; and
the front ends of the active end interface carrying device and the passive end interface carrying device are fixedly connected with the standardized interface.
3. The cell spacecraft module docking device of claim 1, wherein a bracket bottom plate of the interface carrying device is perforated to avoid interference when the standardized interface is installed; and the bracket bottom plate of the passive end interface carrying device is bucket-shaped for mounting the standardized interface.
4. A cell-spacecraft module visual alignment method using the cell-spacecraft module docking device of any one of claims 1 to 3; the method comprises the following steps:
calibrating an image acquisition device;
image preprocessing: shooting an image tag by using a calibrated image acquisition device, and then carrying out Kalman filtering processing on the obtained image to eliminate noise;
establishing a coordinate system conversion relation: constructing a coordinate conversion relation between an image label coordinate system and an image acquisition device coordinate system and a coordinate conversion relation between the image acquisition device coordinate system and a milemeter coordinate system;
target identification: searching for possible image tags in the obtained image; for the searched image tag, acquiring information stored in the image tag, and resolving the position and the posture of the central point of the image tag;
position and attitude conversion: converting the position and the posture of the image tag into the coordinate system of the odometer, and calculating the position and the posture of the interface carrying device of the active end in the coordinate system of the odometer;
the motion of the active end motion platform: controlling the motion of the active end motion platform to move towards the image label according to the relative positions and postures of the active end interface carrying device and the image label under the odometer coordinate system;
interface locking and checking: after the relative position and posture between the active end interface carrying device and the passive end interface carrying device meet the preset requirements, the standardized interface is started to complete locking.
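The Kalman-filtering preprocessing step of the method above can be illustrated with a minimal scalar filter; the noise parameters q and r, the initial state, and the reduction to a one-dimensional measurement stream are all simplifying assumptions for illustration:

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter: smooth a noisy measurement stream.
    q = process noise, r = measurement noise (assumed values)."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                # predict: covariance grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out
```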
5. The cellular spacecraft module visual alignment method of claim 4, wherein:
the calibration step of the image acquisition device comprises: capturing a 5 x 7 standard checkerboard image from multiple angles with the image acquisition device, and calibrating against the checkerboard images with a python-opencv tool to obtain the intrinsic matrix and eliminate camera distortion;
the step of moving the active end moving platform comprises the following steps:
path planning: according to the relative positions of the active end interface carrying device and the image label from far to near, path planning of different strategies is carried out according to a rapid stepping stage, a precise alignment stage and an image enhancement stage respectively;
in the fast stepping stage, respectively carrying out global path planning and local path planning by adopting an improved A-x algorithm and an improved TEB algorithm;
in the accurate alignment stage, a position and posture synchronous coordination alignment strategy is adopted to measure and calculate the target pose in real time and correct the current track;
in the image enhancement stage, processing the image of the image label by adopting an image deblurring algorithm based on deep learning so as to solve the problem of defocus blur; and then, according to the deblurred image of the image label, the motion of the active end motion platform is carried out.
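The calibration step of claim 5 can be sketched as follows; the 25 mm square size and the example focal lengths and principal point are illustrative assumptions (in practice `cv2.findChessboardCorners` and `cv2.calibrateCamera` consume these object points together with the detected image corners):

```python
import numpy as np

def checkerboard_object_points(cols=5, rows=7, square=0.025):
    """Planar object points (z = 0) for a 5 x 7 inner-corner checkerboard,
    as used in the calibration step; the square size is an assumption."""
    grid = np.zeros((cols * rows, 3), np.float32)
    grid[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square
    return grid

def intrinsic_matrix(fx, fy, cx, cy):
    """Pinhole intrinsic matrix K of the kind calibration produces."""
    return np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
```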
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210408384.5A CN114715447A (en) | 2022-04-19 | 2022-04-19 | Cell spacecraft module docking device and visual alignment method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114715447A true CN114715447A (en) | 2022-07-08 |
Family
ID=82244260
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2491101A1 (en) * | 2003-12-30 | 2005-06-30 | Canadian Space Agency | Zero-g emulating testbed for spacecraft control system |
CN101419055A (en) * | 2008-10-30 | 2009-04-29 | 北京航空航天大学 | Space target position and pose measuring device and method based on vision |
CN108562274A (en) * | 2018-04-20 | 2018-09-21 | 南京邮电大学 | A kind of noncooperative target pose measuring method based on marker |
CN108945536A (en) * | 2018-07-24 | 2018-12-07 | 浙江大学 | A kind of spacecrafts rendezvous experiment porch based on rotor craft |
CN110032201A (en) * | 2019-04-19 | 2019-07-19 | 成都飞机工业(集团)有限责任公司 | A method of the airborne visual gesture fusion of IMU based on Kalman filtering |
CN110062205A (en) * | 2019-03-15 | 2019-07-26 | 四川汇源光通信有限公司 | Motion estimate, tracking device and method |
CN111197984A (en) * | 2020-01-15 | 2020-05-26 | 重庆邮电大学 | Vision-inertial motion estimation method based on environmental constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20220708 |