CN107481281B - Relative pose calculation method and device and spacecraft rendezvous and docking system - Google Patents
- Legal status: Expired - Fee Related
Classifications
- G06N3/08 – Physics; Computing; Computing arrangements based on biological models; Neural networks; Learning methods
- G06F17/16 – Physics; Computing; Electric digital data processing; Complex mathematical operations; Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T7/20 – Physics; Computing; Image data processing or generation; Image analysis; Analysis of motion
- G06T7/60 – Physics; Computing; Image data processing or generation; Image analysis; Analysis of geometric attributes
Abstract
The invention provides a relative pose calculation method and device and a spacecraft rendezvous and docking system. The relative pose calculation method comprises the following steps: acquiring a target image and calculating the circle center coordinates of the target; inputting the circle center coordinates of the target into an optimal neural network and calculating an initial value of the relative pose; and performing iterative operation on the initial value of the relative pose to calculate the final value of the relative pose. With the relative pose calculation method and device and the spacecraft rendezvous and docking system provided by the invention, the neural network is simple to apply, highly portable and easy to implement in hardware, the processing time is short, and the output frame rate of the whole system is high.
Description
Technical Field
The invention relates to the technical field of aerospace, and in particular to a relative pose calculation method and device and a spacecraft rendezvous and docking system.
Background
Relative pose measurement is generally required when spacecraft rendezvous in space. Large spacecraft typically use multiple measurement means, such as visible light and laser, to ensure completion of the space mission; such schemes are not suitable for small spacecraft. Small spacecraft usually adopt a vision camera combined with a cooperative target, where the cooperative target typically uses a specific ordering of LED lamps as a reference; the pose is solved with a commonly used PnP method, whose calculation process is complex and unfavorable for hardware porting.
At present, conventional methods for acquiring the pose initial value include Kalman filtering, iterative nonlinear least squares and the like. These methods require an initial value themselves and are not guaranteed to converge to the global minimum.
Disclosure of Invention
The invention provides a relative pose calculation method and device and a spacecraft rendezvous and docking system, aiming to solve the prior-art problem that the pose calculation process during spacecraft rendezvous and docking is complex.
The relative pose calculation method provided by the invention comprises the following steps:
acquiring a target image, and calculating to obtain a target circle center coordinate;
inputting the circle center coordinates of the target into an optimal neural network, and calculating to obtain a relative pose initial value;
and carrying out iterative operation on the initial value of the relative pose, and calculating to obtain a final value of the relative pose.
Further, before inputting the coordinates of the circle center of the target into the optimal neural network, the method for calculating the relative pose further comprises:
and training to obtain the optimal neural network.
Further, the method for calculating the relative pose of the present invention, wherein the step of training to obtain the optimal neural network specifically comprises:
acquiring a relative pose data set and a camera coordinate data set;
obtaining an input sample set according to the relative pose data set, and calculating according to the camera coordinate data set to obtain an output ideal sample set;
performing normalization operation on the input sample set to obtain a neural network input set, and performing normalization operation on the output ideal sample set to obtain a neural network output set;
performing training operation on a preset neural network according to the neural network input set and the output ideal sample set;
and selecting to obtain the optimal neural network.
Further, in the relative pose calculation method according to the present invention, the relative pose dataset is constructed using the following formula (1):
Φ_ij = [L_ij, x_ij, y_ij, ψ_ij, ξ_ij, θ_ij]^T, i = 1, ..., m, j = 1, ..., n ……(1);
wherein Φ_ij represents an element of the relative pose dataset, L_ij represents the projection of the distance between the target and the imaging point in the z-axis direction in the camera coordinate system, x_ij represents the projection of that distance in the x-axis direction in the camera coordinate system, y_ij represents the projection of that distance in the y-axis direction in the camera coordinate system, ψ_ij, ξ_ij, θ_ij respectively represent the three Euler angles corresponding to the rotation matrix of the target relative to the camera coordinate system, m represents the number of discs used for indication in the target, and n represents the number of samples;
the camera coordinate data set is constructed using the following equation (2):
Θij=RijΩ0+tij………………………………………………(2);
wherein, thetaijRepresenting a camera coordinate data set, omega0Representing the coordinates of the centre of a circle, R, of each disc in the target for indicationijRotation matrix, t, representing the camera coordinate system and the target coordinate systemijTranslation vector representing camera coordinate system and target coordinate system;
The rotation matrix R of the camera coordinate system and the target coordinate systemijExpressed by the following equation (3):
translation vector t of the camera coordinate system and the target coordinate systemijExpressed by the following equation (4):
tij=[xij,yij,Lij]T…………………………………………………………(4)。
Further, in the relative pose calculation method according to the present invention, the input sample set is represented by the following formula (5):
Φ̃ = {Φ_11, Φ_12, ..., Φ_ij, ..., Φ_mn} ……(5);
wherein Φ̃ represents the input sample set and Φ_ij represents an element of the relative pose dataset;
the output ideal sample set is constructed using the following formula (6):
C̃ = {C̃_11, C̃_12, ..., C̃_ij, ..., C̃_mn} ……(6);
wherein C̃ represents the output ideal sample set and C̃_ij represents the coordinates of the target in the camera; the coordinates C̃_ij of the target in the camera are specifically calculated using the following formula (7):
C̃_ij = [a_ij, b_ij]^T, a_ij = f·x_ij/(μ·L_ij), b_ij = f·y_ij/(μ·L_ij) ……(7);
wherein C̃_ij represents the coordinates of the target in the camera, a_ij represents the projection of the target in the x direction of the image detector surface in the camera coordinate system, b_ij represents the projection of the target in the y direction of the image detector surface in the camera coordinate system, μ represents the camera pixel size, f represents the camera focal length, x_ij represents the projection of the distance between the target and the imaging point in the x-axis direction in the camera coordinate system, y_ij represents the projection of that distance in the y-axis direction in the camera coordinate system, and L_ij represents the projection of that distance in the z-axis direction in the camera coordinate system;
the neural network input set is represented by the following formula (8):
Ω = {Ω_11, Ω_12, ..., Ω_ij, ..., Ω_mn} ……(8);
wherein Ω represents the neural network input set, and Ω_ij is obtained by performing the normalization operation on the relative pose data Φ_ij;
the neural network output set is represented by the following formula (9):
C = {C_11, C_12, ..., C_ij, ..., C_mn} ……(9);
wherein C represents the neural network output set, and C_ij is obtained by performing the normalization operation on the coordinates C̃_ij of the target in the camera.
Further, in the relative pose calculation method of the present invention, the preset neural network includes the following parameter settings: the number of nodes of an input layer and the number of nodes of an output layer;
the node transfer functions of the input layer and the hidden layer adopt tansig functions, and the node transfer function of the output layer adopts purelin functions.
Further, the relative pose calculation method of the present invention specifically includes the following steps of performing training operation on a preset neural network:
in each iteration, taking the neural network input set as an input value of the preset neural network, and calculating to obtain a neural network output vector;
in each iteration, comparing the output vector of the neural network with elements in the output ideal sample set, and calculating to obtain a comparison error;
adjusting the connection weight and the threshold value of the preset neural network by using the comparison error, and performing cyclic training operation;
and when the preset iteration times are reached or the comparison error is within a preset error value, terminating the circular training operation.
Further, according to the relative pose calculation method of the present invention, the comparison error is calculated using the following formula (10):
e = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} ‖O_ij − C̃_ij‖² ……(10);
wherein e represents the comparison error, O_ij represents the neural network output vector, and C̃_ij represents the coordinates of the target in the camera.
Further, in the method for calculating the relative pose according to the present invention, the step of selecting the optimal neural network includes:
acquiring the neural network subjected to the training operation under each hidden layer configuration;
and selecting, among these, the neural network with the minimum error as the optimal neural network.
Further, the method for calculating the relative pose according to the present invention is characterized in that the step of calculating to obtain the initial value of the relative pose includes:
carrying out normalization operation on the coordinates of the circle center of the target;
inputting the target circle center coordinates after normalization operation into the optimal neural network;
and performing inverse normalization operation on the output result of the optimal neural network, and calculating to obtain the initial value of the relative pose.
The invention provides a relative pose calculation device, comprising:
the image processing module is used for acquiring a target image and calculating the circle center coordinates of the target;
the neural network module is used for storing the trained optimal neural network;
the initial value calculation module is used for inputting the circle center coordinates of the target into the optimal neural network and calculating the initial value of the relative pose;
and the final value calculation module is used for performing iterative operation on the initial value of the relative pose and calculating the final value of the relative pose.
The invention provides a rendezvous and docking system for aerospace vehicles, which comprises: a cooperative target and the relative pose calculation device of the invention;
the relative pose calculation device scans the cooperative target to acquire the center coordinates of the target circle;
wherein the cooperative target comprises: four discs arranged in an array, the areas of the discs increasing successively.
The relative pose calculation method and device and the spacecraft rendezvous and docking system provided by the invention have the advantages of simple application of the neural network, high mobility, easiness in realization on hardware, short processing time and high output frame frequency of the whole system.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
fig. 1 is a schematic flow chart of a relative pose calculation method according to a first embodiment of the present invention;
fig. 2 is a schematic flow chart of a relative pose calculation method according to a second embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating a training operation performed on a preset neural network according to a second embodiment of the present invention;
FIG. 4 is a schematic flow chart of selecting an optimal neural network according to the second embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating calculation of initial values of relative poses according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a relative pose calculation apparatus according to a third embodiment of the present invention;
fig. 7 is a schematic front view of a cooperative target according to a fourth embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
Example one
Fig. 1 is a schematic flow chart of a relative pose calculation method according to a first embodiment of the present invention, and as shown in fig. 1, the relative pose calculation method according to the first embodiment of the present invention includes:
and S101, acquiring a target image, and calculating to obtain the center coordinates of the target circle.
The cooperative target is affixed to the target spacecraft and comprises four target circles with successively increasing areas; for example, the radii of the four target circles are 15 mm, 20 mm, 25 mm and 30 mm in sequence, and the circle center distance between adjacent target circles is 65 mm. The docking spacecraft is provided with a vision camera and an image processing system, and the working distance between the docking spacecraft and the cooperative target is 0.1–3 m. Within the working range, the vision camera acquires a target image of the cooperative target; the image processing system performs edge detection, identifies the four target circles, and calculates the circle center coordinates of the target circles to obtain the corresponding coordinates of the target in the camera coordinate system.
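As a minimal sketch of this circle-center extraction step, the snippet below renders one synthetic binary target circle and recovers its center as the centroid of the foreground pixels; a real implementation would first run edge detection on the camera image as described above. The image size, center and radius are illustrative values, not from the patent.

```python
import numpy as np

def synthetic_circle(shape, center, radius):
    """Render a filled circle into a binary image (stand-in for a detected target circle)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return ((xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2).astype(np.uint8)

def circle_center(binary):
    """Centroid of the foreground pixels; for a full, unoccluded circle this is its center."""
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()

img = synthetic_circle((240, 320), center=(100.0, 80.0), radius=30)
cx, cy = circle_center(img)  # close to (100, 80)
```

In practice the four detected centers, ordered by circle area, form the 8-dimensional input fed to the neural network below.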
And S102, inputting the center coordinates of the target circle into an optimal neural network, and calculating to obtain an initial value of the relative pose.
The optimal neural network is obtained through ground training; the target circle center coordinates from step S101 are input into the optimal neural network, and a relative pose is calculated as the initial value for the subsequent solution.
And S103, carrying out iterative operation on the initial value of the relative pose, and calculating to obtain a final value of the relative pose.
The initial value of the relative pose obtained in step S102 is substituted into the Haralick iterative algorithm to further solve the relative pose; after repeated iterations, high-precision relative pose information is obtained as the final value of the relative pose.
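The Haralick algorithm itself is not reproduced in the patent text. As a hedged stand-in, the sketch below illustrates the same refinement principle: Gauss-Newton iteration on the reprojection residual, here recovering only the depth component L from an approximate initial value. The camera parameters and target geometry are invented for illustration.

```python
import numpy as np

f, mu = 0.016, 5.5e-6                # assumed focal length [m] and pixel size [m]
x = np.array([0.065, 0.130, 0.195])  # assumed x-offsets of target points [m]
L_true = 2.0                         # true depth [m]
a_obs = f * x / (mu * L_true)        # "observed" image x-coordinates [pixels]

L = 2.3                              # initial value (e.g. from the neural network)
for _ in range(20):
    r = f * x / (mu * L) - a_obs     # reprojection residual
    J = -f * x / (mu * L ** 2)       # Jacobian dr/dL
    L += -(J @ r) / (J @ J)          # Gauss-Newton update
# L converges quadratically toward L_true
```

The full algorithm iterates over all six pose parameters in the same way, which is why a good initial value from the network matters.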
Example two
Fig. 2 is a schematic flow chart of a relative pose calculation method according to a second embodiment of the present invention, and as shown in fig. 2, the relative pose calculation method according to the second embodiment of the present invention includes:
step S201, training to obtain the optimal neural network.
The step S201 of training the neural network to obtain the optimal neural network includes the following steps S2011 to S2015:
step S2011, a relative pose data set and a camera coordinate data set are acquired.
Relative pose data of the target are randomly acquired according to task requirements and input into the dataset as training samples. The relative pose dataset is constructed using the following formula (1):
Φ_ij = [L_ij, x_ij, y_ij, ψ_ij, ξ_ij, θ_ij]^T, i = 1, ..., m, j = 1, ..., n ……(1);
wherein Φ_ij represents an element of the relative pose dataset, L_ij represents the projection of the distance between the target and the imaging point in the z-axis direction in the camera coordinate system, x_ij represents the projection of that distance in the x-axis direction in the camera coordinate system, y_ij represents the projection of that distance in the y-axis direction in the camera coordinate system, ψ_ij, ξ_ij, θ_ij respectively represent the three Euler angles corresponding to the rotation matrix of the target relative to the camera coordinate system, m represents the number of discs used for indication in the target, and n represents the number of samples. m and n are natural numbers.
The camera coordinate dataset is constructed using the following formula (2):
Θ_ij = R_ij Ω_0 + t_ij ……(2);
wherein Θ_ij represents an element of the camera coordinate dataset, Ω_0 represents the circle center coordinates of each disc used for indication in the target, R_ij represents the rotation matrix between the camera coordinate system and the target coordinate system, and t_ij represents the translation vector between the camera coordinate system and the target coordinate system.
The rotation matrix R_ij between the camera coordinate system and the target coordinate system is expressed by the following formula (3):
R_ij = R_z(ψ_ij) R_y(ξ_ij) R_x(θ_ij) ……(3);
the translation vector t_ij between the camera coordinate system and the target coordinate system is expressed by the following formula (4):
t_ij = [x_ij, y_ij, L_ij]^T ……(4);
wherein ψ_ij, ξ_ij, θ_ij respectively represent the three Euler angles corresponding to the rotation matrix of the target relative to the camera coordinate system, L_ij represents the projection of the distance between the target and the imaging point in the z-axis direction in the camera coordinate system, and x_ij, y_ij respectively represent the projections of that distance in the x-axis and y-axis directions in the camera coordinate system.
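A brief sketch of how one sample of the camera coordinate dataset in formulas (2)–(4) can be generated. The Z-Y-X Euler order and the disc layout (65 mm spacing, taken from the embodiment) are assumptions for illustration; the patent does not fix the rotation convention.

```python
import numpy as np

def euler_to_R(psi, xi, theta):
    """Rotation matrix from three Euler angles; a Z-Y-X order is assumed here."""
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(xi), np.sin(xi)
    cx, sx = np.cos(theta), np.sin(theta)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

# circle centers of the four discs in the target frame (65 mm spacing, as in the embodiment)
omega0 = np.array([[0.0, 0, 0], [0.065, 0, 0], [0.130, 0, 0], [0.195, 0, 0]])

def target_in_camera(pose, omega0):
    """Formula (2): Theta = R * Omega0 + t for one relative-pose sample."""
    L, x, y, psi, xi, theta = pose
    R = euler_to_R(psi, xi, theta)
    t = np.array([x, y, L])            # formula (4)
    return omega0 @ R.T + t

theta_cam = target_in_camera((2.0, 0.1, -0.05, 0.02, -0.01, 0.03), omega0)
```

Sampling random poses and applying this mapping produces the Θ_ij training data described above.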
And S2012, obtaining an input sample set according to the relative pose data set, and calculating according to the camera coordinate data set to obtain an output ideal sample set.
The input sample set is represented by the following formula (5):
Φ̃ = {Φ_11, Φ_12, ..., Φ_ij, ..., Φ_mn} ……(5);
wherein Φ̃ represents the input sample set, which is obtained by random extraction from the relative pose dataset.
The output ideal sample set is constructed using the following formula (6):
C̃ = {C̃_11, C̃_12, ..., C̃_ij, ..., C̃_mn} ……(6);
wherein C̃ represents the output ideal sample set, and any element C̃_ij of the output ideal sample set represents the coordinates of the target in the camera.
The coordinates C̃_ij of the target in the camera are specifically calculated using the following formula (7):
C̃_ij = [a_ij, b_ij]^T, a_ij = f·x_ij/(μ·L_ij), b_ij = f·y_ij/(μ·L_ij) ……(7);
wherein C̃_ij represents the coordinates of the target in the camera, a_ij represents the projection of the target in the x direction of the image detector surface in the camera coordinate system, b_ij represents the projection of the target in the y direction of the image detector surface in the camera coordinate system, μ represents the camera pixel size, f represents the camera focal length, and x_ij, y_ij, L_ij respectively represent the projections of the distance between the target and the imaging point in the x-axis, y-axis and z-axis directions in the camera coordinate system. x_ij, y_ij and L_ij are taken from the camera coordinate dataset Θ_ij.
Step S2013, the input sample set is normalized to obtain the neural network input set, and the output ideal sample set is normalized to obtain the neural network output set. The normalization operation adopts the maximum–minimum method.
Specifically, the neural network input set is obtained by applying the maximum–minimum normalization to each dimension of the sample coordinate vectors in the input sample set, and is represented by the following formula (8):
Ω = {Ω_11, Ω_12, ..., Ω_ij, ..., Ω_mn} ……(8);
wherein Ω represents the neural network input set, and Ω_ij is obtained by normalizing the per-dimension data L_ij, x_ij, y_ij, ψ_ij, ξ_ij, θ_ij of the relative pose data Φ_ij in the input sample set Φ̃.
Specifically, the neural network output set is obtained by applying the maximum–minimum normalization to each dimension of the sample coordinate vectors in the output ideal sample set, and is represented by the following formula (9):
C = {C_11, C_12, ..., C_ij, ..., C_mn} ……(9);
wherein C represents the neural network output set, and C_ij is obtained by normalizing the coordinate vector C̃_ij in the output ideal sample set, i.e. the coordinates of the target in the camera.
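The projection in formula (7) is the standard pinhole model expressed in pixels. A short sketch, with the focal length and pixel size as illustrative assumptions:

```python
import numpy as np

def project(theta_cam, f=0.016, mu=5.5e-6):
    """Formula (7): pixel coordinates a = f*x/(mu*L), b = f*y/(mu*L)."""
    X, Y, L = theta_cam[:, 0], theta_cam[:, 1], theta_cam[:, 2]
    return np.stack([f * X / (mu * L), f * Y / (mu * L)], axis=1)

pts = project(np.array([[0.10, 0.05, 2.0]]))  # one 3D point in the camera frame
```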
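The maximum–minimum normalization and its inverse (needed later in step S2033) can be sketched as follows; mapping each dimension to [-1, 1] is an assumption chosen to match the tansig activation range.

```python
import numpy as np

def minmax_fit(data):
    """Per-dimension minima and maxima of a (samples, dims) array."""
    return data.min(axis=0), data.max(axis=0)

def normalize(data, lo, hi):
    """Map each dimension linearly to [-1, 1] (maximum-minimum method)."""
    return 2.0 * (data - lo) / (hi - lo) - 1.0

def denormalize(z, lo, hi):
    """Inverse mapping, used to recover pose values from network outputs."""
    return (z + 1.0) * (hi - lo) / 2.0 + lo

data = np.array([[2.0, 0.1], [3.0, -0.2], [2.5, 0.4]])
lo, hi = minmax_fit(data)
z = normalize(data, lo, hi)
back = denormalize(z, lo, hi)  # recovers the original data
```

The same (lo, hi) pair fitted on the ground training data must be reused at run time, both to normalize the measured circle centers and to denormalize the predicted pose.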
Step S2014, training and operating a preset neural network according to the neural network input set and the output ideal sample set;
first, a topology structure of a preset neural network is determined, for example, a forward BP neural network is adopted, and the forward BP neural network is set by parameters, for example, the number p of input layer nodes is 8 and the number q of output layer nodes is 6. The node transfer functions of the input layer and the hidden layer adopt tansig functions, and the node transfer function of the output layer adopts purelin functions. the tansig function and purelin function were taken from the math tool of MATLAB software.
Fig. 3 is a schematic flowchart of a process of performing a training operation on a preset neural network according to a second embodiment of the present invention, and as shown in fig. 3, step S2014 of performing a training operation on a preset neural network specifically includes steps S20141 to S20144:
Step S20141, in each round of iterative operation, the neural network input set is used as the input value of the preset neural network, and the neural network output vector O_ij is obtained through calculation;
Step S20142, in each round of iterative operation, comparing the neural network output vector with elements in the output ideal sample set, and calculating to obtain a comparison error;
the comparison error is calculated using the following formula (10):
e = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} ‖O_ij − C_ij‖² ……(10);
wherein e represents the comparison error, O_ij represents the neural network output vector, i.e. the actual output vector of the neural network, and C_ij represents the corresponding element in the set C, i.e. the normalized coordinates of the target in the camera.
Step S20143, adjusting the connection weight and the threshold of the preset neural network by using the comparison error, and performing cyclic training operation;
Step S20144, terminating the loop training operation when a preset number of iterations is reached or the comparison error is within a preset error value. That is, a stopping condition of the loop training operation is set; the stopping condition can be the number of iterations or the mean square error, and when it is met, the loop is terminated and the neural network is saved.
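Steps S20141–S20144 can be sketched as a plain NumPy backpropagation loop, with tanh standing in for tansig and a linear output layer for purelin. The network sizes, toy data, learning rate and iteration count are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))              # neural network input set (normalized)
M = rng.standard_normal((8, 6)) * 0.3
Y = X @ M                                      # output ideal sample set (toy target)

h = 10                                         # hidden layer nodes
W1, b1 = 0.1 * rng.standard_normal((8, h)), np.zeros(h)
W2, b2 = 0.1 * rng.standard_normal((h, 6)), np.zeros(6)

lr, losses = 0.05, []
for epoch in range(500):                       # S20144: stop after a preset iteration count
    H = np.tanh(X @ W1 + b1)                   # tansig hidden layer
    O = H @ W2 + b2                            # purelin output layer (S20141)
    e = np.mean((O - Y) ** 2)                  # comparison error (S20142)
    losses.append(e)
    dO = 2.0 * (O - Y) / len(X)
    dH = (dO @ W2.T) * (1.0 - H ** 2)          # backpropagate through tanh
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)  # S20143: adjust weights and thresholds
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
```

A mean-square-error threshold can replace the fixed epoch count as the stopping condition, as the step above allows.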
Step S2015, selecting to obtain an optimal neural network, where fig. 4 is a schematic flowchart of the process of selecting to obtain an optimal neural network according to the second embodiment of the present invention, and as shown in fig. 4, the step S2015 specifically includes steps S20151 to S20152:
step S20151, obtaining the neural networks subjected to the training operation under each hidden layer.
Step S20152, selecting a neural network with the minimum error from each hidden layer as the optimal neural network.
In step S2015, the neural network structures and their model parameters under different hidden layer configurations are obtained, their errors are compared, and the optimal neural network and its model parameters are saved as the final model.
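A compact sketch of the selection in steps S20151–S20152. For brevity this stand-in fixes random hidden weights and solves the output layer by least squares instead of running full BP training per candidate; the candidate hidden-layer sizes and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
Xtr, Xval = rng.standard_normal((200, 8)), rng.standard_normal((50, 8))
M = rng.standard_normal((8, 6)) * 0.3
Ytr, Yval = Xtr @ M, Xval @ M                  # toy targets

def train_and_error(h):
    """Random tanh hidden layer + least-squares output layer (stand-in for BP training)."""
    W1 = rng.standard_normal((8, h))
    Htr, Hval = np.tanh(Xtr @ W1), np.tanh(Xval @ W1)
    W2, *_ = np.linalg.lstsq(Htr, Ytr, rcond=None)
    return np.mean((Hval @ W2 - Yval) ** 2)

errors = {h: train_and_error(h) for h in (4, 8, 16, 32)}
best_h = min(errors, key=errors.get)           # S20152: keep the minimum-error network
```

In the patent's scheme, each candidate would instead be trained with the full loop of step S2014 before the error comparison.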
Step S202, a target image is obtained, and the circle center coordinates of the target are obtained through calculation. This step is the same as step S101 in embodiment one, and details are not repeated.
And S203, inputting the center coordinates of the target circle into an optimal neural network, and calculating to obtain an initial value of the relative pose. Fig. 5 is a schematic flow chart of calculating initial values of relative poses according to a second embodiment of the present invention, and as shown in fig. 5, step S203 specifically includes steps S2031 to S2033:
step S2031, carrying out normalization operation on the coordinates of the circle center of the target;
step S2032, inputting the target circle center coordinates after normalization operation into the optimal neural network;
and S2033, performing inverse normalization operation on the output result of the optimal neural network, and calculating to obtain the initial value of the relative pose.
In step S203, the optimal neural network obtained in step S2015 is used. The target circle center coordinates obtained in step S202 serve as the input sample set; the normalization algorithm of step S2013 is applied to obtain the neural network input set Ω, which is input into the optimal neural network; the output data of the optimal neural network are then processed with the inverse normalization algorithm to obtain the relative pose data as the initial value of the relative pose.
Step S204, performing iterative operation on the initial value of the relative pose, and calculating the final value of the relative pose. The relative pose data obtained in step S203 are used as the initial value of the Haralick iterative algorithm to obtain the final pose data.
The Haralick algorithm involved in the invention is sensitive to the choice of the pose initial value; if the initial value is chosen improperly, the whole algorithm may fail to converge globally and fall into a local-minimum error region. Therefore, the acquisition of the pose initial value is particularly important. By obtaining the optimal neural network, a good pose initial value can be obtained.
EXAMPLE III
Fig. 6 is a schematic structural diagram of a relative pose calculation apparatus according to a third embodiment of the present invention, and as shown in fig. 6, the relative pose calculation apparatus according to the third embodiment of the present invention includes:
the image processing module 61 is used for acquiring a target image and calculating to obtain a center coordinate of the target;
a neural network module 62, configured to store the trained optimal neural network;
the initial value calculation module 63 is configured to input the coordinates of the circle center of the target into an optimal neural network, and calculate to obtain an initial value of the relative pose;
and a final value calculation module 64, configured to perform iterative operation on the initial value of the relative pose, and calculate a final value of the relative pose.
The technical solution of the apparatus of the third embodiment of the present invention is the same as that of the method of the second embodiment, and is not described herein again.
Example four
The fourth embodiment of the present invention provides a rendezvous and docking system for aerospace vehicles, including: a cooperative target and a relative pose calculation means shown in fig. 6;
and the relative pose calculation device scans the cooperative target to acquire the center coordinates of the target circle.
Fig. 7 is a schematic front view of a cooperative target according to a fourth embodiment of the present invention. As shown in fig. 7, the cooperative target 71 includes: four discs 72 arranged in an array; the areas of the discs increase successively.
Due to the adoption of the technical scheme, compared with the existing product, the invention has the following advantages:
(1) the target consists of four circles with orderly, successively increasing sizes, which facilitates image recognition and sorting; no active light source is needed, the structure is simple, and the universality is high;
(2) the neural network performs the initial calculation, and the Haralick iterative algorithm then does not diverge, so the whole algorithm structure has strong robustness and high calculation precision;
(3) the neural network is simple to apply, highly portable and easy to implement in hardware, the processing time is short, and the output frame rate of the whole system is high.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, part of the present invention may be embodied as a computer program product, such as computer program instructions, which when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Program instructions which invoke the methods of the present invention may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present invention comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions according to the embodiments of the invention described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Claims (5)
1. A relative pose calculation method, characterized by comprising the following steps:
acquiring a target image, and calculating to obtain a target circle center coordinate;
inputting the circle center coordinates of the target into an optimal neural network, and calculating to obtain a relative pose initial value;
the step of calculating to obtain the initial value of the relative pose comprises the following steps:
carrying out normalization operation on the coordinates of the circle center of the target;
inputting the target circle center coordinates after normalization operation into the optimal neural network;
performing inverse normalization operation on the output result of the optimal neural network, and calculating to obtain the initial value of the relative pose;
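A minimal sketch of the normalize, network, de-normalize pipeline recited in the steps above. The mapping into [-1, 1] and the per-component ranges are assumptions (the patent does not specify them), and `net` stands in for the trained optimal neural network:

```python
def normalize(v, lo, hi):
    # Map each component of v from [lo_i, hi_i] into [-1, 1] (a common
    # choice for tansig networks; the actual ranges are assumptions).
    return [2.0 * (x - l) / (h - l) - 1.0 for x, l, h in zip(v, lo, hi)]

def denormalize(v, lo, hi):
    # Inverse mapping of `normalize`: [-1, 1] back to physical units.
    return [(x + 1.0) / 2.0 * (h - l) + l for x, l, h in zip(v, lo, hi)]

def initial_pose(centers, net, in_lo, in_hi, out_lo, out_hi):
    """Normalize the target circle center coordinates, feed them to the
    trained network `net`, and de-normalize the output into a pose."""
    return denormalize(net(normalize(centers, in_lo, in_hi)), out_lo, out_hi)

# Round-trip check with an identity "network":
pose = initial_pose([5.0, 2.0], lambda v: v, [0.0, 0.0], [10.0, 4.0],
                    [0.0, 0.0], [10.0, 4.0])
```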
the step of training to obtain the optimal neural network specifically comprises:
acquiring a relative pose data set and a camera coordinate data set;
obtaining an input sample set according to the relative pose data set, and calculating according to the camera coordinate data set to obtain an output ideal sample set;
performing normalization operation on the input sample set to obtain a neural network input set, and performing normalization operation on the output ideal sample set to obtain a neural network output set;
performing training operation on a preset neural network according to the neural network input set and the neural network output set;
selecting to obtain an optimal neural network;
the relative pose dataset is constructed using the following equation (1):
wherein the left-hand side of equation (1) denotes the relative pose data set, Lij represents the projection of the distance between the target and the imaging point in the z-axis direction in the camera coordinate system, xij represents the projection of the distance between the target and the imaging point in the x-axis direction in the camera coordinate system, yij represents the projection of the distance between the target and the imaging point in the y-axis direction in the camera coordinate system, ψij, ξij, θij respectively represent the three Euler angles corresponding to the rotation matrix of the target relative to the camera coordinate system, m represents the number of discs used for indication in the target, and n represents the number of samples;
the camera coordinate data set is constructed using the following equation (2):
Θij=RijΩ0+tij......................................................(2);
wherein Θij represents the camera coordinate data set, Ω0 represents the circle center coordinates of each disc used for indication in the target, Rij represents the rotation matrix between the camera coordinate system and the target coordinate system, and tij represents the translation vector between the camera coordinate system and the target coordinate system;
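By way of illustration, equation (2) maps every target-frame circle center into the camera frame. A pure-Python sketch; the 3×m column layout of Ω0 and the nested-list matrix arithmetic are assumptions made for the example, not part of the claim:

```python
def transform_centers(R, omega0, t):
    """Theta_ij = R_ij * Omega_0 + t_ij, equation (2): map the target-frame
    circle centers (a 3 x m matrix, one column per disc) into the camera
    frame. Plain nested lists stand in for a matrix library."""
    m = len(omega0[0])
    return [[sum(R[r][k] * omega0[k][c] for k in range(3)) + t[r]
             for c in range(m)] for r in range(3)]

# Identity rotation and a pure translation move every center by t:
theta = transform_centers([[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                          [[0.0, 0.1], [0.0, 0.1], [0.0, 0.0]],
                          [1.0, 2.0, 3.0])
```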
the rotation matrix R of the camera coordinate system and the target coordinate systemijExpressed by the following equation (3):
translation vector t of the camera coordinate system and the target coordinate systemijExpressed by the following equation (4):
tij=[xij,yij,Lij]T..................................................(4);
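The image of equation (3) is not reproduced in this text, so the exact Euler angle convention is unknown; the sketch below assumes a Z-Y-X factorization purely for illustration, together with the translation vector of equation (4):

```python
import math

def rotation_from_euler(psi, xi, theta):
    """Build a rotation matrix from three Euler angles. The patent's
    equation (3) is not reproduced here, so a Z-Y-X convention is
    assumed purely for illustration."""
    cps, sps = math.cos(psi), math.sin(psi)
    cxi, sxi = math.cos(xi), math.sin(xi)
    cth, sth = math.cos(theta), math.sin(theta)
    Rz = [[cps, -sps, 0.0], [sps, cps, 0.0], [0.0, 0.0, 1.0]]
    Ry = [[cxi, 0.0, sxi], [0.0, 1.0, 0.0], [-sxi, 0.0, cxi]]
    Rx = [[1.0, 0.0, 0.0], [0.0, cth, -sth], [0.0, sth, cth]]
    # 3x3 matrix product helper.
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(Rz, mul(Ry, Rx))

def translation(x, y, L):
    # Equation (4): t_ij = [x_ij, y_ij, L_ij]^T.
    return [x, y, L]
```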
and performing an iterative operation on the initial value of the relative pose to calculate a final value of the relative pose.
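The final step above seeds an iterative refinement with the network's initial value (the description names the Haralick iterative algorithm). As a stand-in illustration only, a numerical gradient descent on a caller-supplied reprojection residual; this is not the patent's algorithm:

```python
def refine_pose(pose0, residual, step=1e-2, iters=100):
    """Iteratively refine an initial pose estimate. `residual` is a
    hypothetical scalar reprojection-error function; a finite-difference
    gradient step stands in for the Haralick iteration."""
    pose = list(pose0)
    for _ in range(iters):
        r = residual(pose)
        for i in range(len(pose)):
            p = list(pose)
            p[i] += 1e-6
            g = (residual(p) - r) / 1e-6   # numerical partial derivative
            pose[i] -= step * g
    return pose
```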
2. The relative pose calculation method according to claim 1,
the input sample set is represented by the following formula (5);
wherein the left-hand side of formula (5) represents the input sample set, which is obtained from the relative pose data set;
the output ideal sample set is constructed using equation (6) as follows:
wherein the left-hand side of equation (6) represents the output ideal sample set, whose elements are the coordinates of the target in the camera; the coordinates of the target in the camera are specifically calculated by the following formula (7):
wherein the left-hand side of formula (7) represents the coordinates of the target in the camera, aij represents the projection of the target in the x direction of the image detector surface in the camera coordinate system, bij represents the projection of the target in the y direction of the image detector surface in the camera coordinate system, μ represents the camera pixel size, f represents the camera focal length, xij represents the projection of the distance between the target and the imaging point in the x-axis direction in the camera coordinate system, yij represents the projection of that distance in the y-axis direction, and Lij represents the projection of that distance in the z-axis direction;
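The image of formula (7) is likewise missing, but the variable definitions (focal length f, pixel size μ, z-distance Lij) suggest the standard pinhole relation; the sketch below assumes aij = f·xij/(μ·Lij) and bij = f·yij/(μ·Lij), which is an inference, not the reproduced formula:

```python
def project(x, y, L, f, mu):
    """Project a camera-frame point onto the detector in pixel units,
    under an assumed pinhole model: a = f*x/(mu*L), b = f*y/(mu*L)."""
    return f * x / (mu * L), f * y / (mu * L)

# A point 1 m away, f = 50 mm, 5 um pixels:
a, b = project(0.01, 0.02, 1.0, 0.05, 5e-6)
```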
the neural network input set is represented by the following equation (8):
Ω={Ω11,Ω12,...,Ωij,...,Ωmn}.................................(8);
where Ω represents the neural network input set, and Ωij is obtained by performing a normalization operation on the relative pose data set;
the neural network output set is represented by the following equation (9):
C={C11,C12,...,Cij,...,Cmn}.................................(9);
3. The relative pose calculation method according to claim 2, wherein the preset neural network includes the following parameter settings: the number of nodes of an input layer and the number of nodes of an output layer;
the node transfer functions of the input layer and the hidden layer adopt tansig functions, and the node transfer function of the output layer adopts purelin functions.
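The claimed layout, a tansig (tanh) hidden layer feeding a purelin (identity) output layer, can be sketched as follows; the weight initialization scheme is an assumption made for the example:

```python
import math
import random

def make_mlp(n_in, n_hidden, n_out, seed=0):
    """A minimal multilayer perceptron with a tansig (tanh) hidden
    layer and a purelin (identity) output layer, matching the claimed
    transfer functions. Uniform weight initialization is an assumption."""
    rnd = random.Random(seed)
    W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    b2 = [0.0] * n_out

    def forward(x):
        # tansig hidden layer
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        # purelin output layer
        return [sum(w * hi for w, hi in zip(row, h)) + b
                for row, b in zip(W2, b2)]
    return forward
```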
4. The relative pose calculation method according to claim 3, wherein the step of performing the training operation on the preset neural network specifically comprises:
in each iteration, taking the neural network input set as an input value of the preset neural network, and calculating to obtain a neural network output vector;
in each iteration, comparing the neural network output vector with elements in the neural network output set, and calculating to obtain a comparison error;
adjusting the connection weight and the threshold value of the preset neural network by using the comparison error, and performing cyclic training operation;
and when the preset iteration times are reached or the comparison error is within a preset error value, terminating the circular training operation.
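The training loop recited above (forward pass, error comparison, weight adjustment, and a stop at either a preset iteration count or a preset error value) can be illustrated with a single linear neuron trained by the delta rule; this stands in for the full network's backpropagation update:

```python
def train(net_w, samples, lr=0.1, max_iter=1000, tol=1e-6):
    """Illustrative training loop: compute the output, compare it with
    the ideal output, adjust the weights by the comparison error, and
    terminate at a preset iteration count or preset error value."""
    err = float("inf")
    for _ in range(max_iter):
        err = 0.0
        for x, y in samples:
            pred = sum(w * xi for w, xi in zip(net_w, x))
            e = y - pred
            err += e * e
            for i, xi in enumerate(x):
                net_w[i] += lr * e * xi      # delta-rule weight update
        if err < tol:                        # preset error value reached
            break
    return net_w, err
```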
5. The relative pose calculation method according to claim 4, wherein the step of selecting an optimal neural network comprises:
acquiring the neural network subjected to the training operation for each candidate number of hidden layer nodes;
and selecting, from these, the neural network with the minimum error as the optimal neural network.
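The selection step of claim 5 can be sketched as a search over candidate hidden-layer sizes; `train_and_eval` is a hypothetical callable that trains a network with the given hidden size and returns it together with its error:

```python
def select_best(train_and_eval, hidden_sizes):
    """Train one network per candidate hidden-layer size and keep the
    one with the smallest error, as in the claimed selection step."""
    best_net, best_err = None, float("inf")
    for h in hidden_sizes:
        net, err = train_and_eval(h)
        if err < best_err:
            best_net, best_err = net, err
    return best_net, best_err

# Toy stand-in: "error" is smallest for a hidden size of 7.
best = select_best(lambda h: ("net%d" % h, abs(h - 7)), [5, 6, 7, 8])
```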
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710728275.0A CN107481281B (en) | 2017-08-23 | 2017-08-23 | Relative pose calculation method and device and spacecraft rendezvous and docking system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107481281A CN107481281A (en) | 2017-12-15 |
CN107481281B true CN107481281B (en) | 2020-11-27 |
Family
ID=60601275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710728275.0A Expired - Fee Related CN107481281B (en) | 2017-08-23 | 2017-08-23 | Relative pose calculation method and device and spacecraft rendezvous and docking system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107481281B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10529089B2 (en) * | 2018-02-23 | 2020-01-07 | GM Global Technology Operations LLC | Crowd-sensed point cloud map |
CN113916254A (en) * | 2021-07-22 | 2022-01-11 | 北京控制工程研究所 | Docking type capture spacecraft autonomous rendezvous and docking test method |
CN114396872A (en) * | 2021-12-29 | 2022-04-26 | 哈尔滨工业大学 | Conversion measuring device and conversion measuring method for butt joint hidden characteristics of aircraft cabin |
CN114882110B (en) * | 2022-05-10 | 2024-04-12 | 中国人民解放军63921部队 | Relative pose measurement and target design method suitable for micro-nano satellite self-assembly |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103245335A (en) * | 2013-05-21 | 2013-08-14 | 北京理工大学 | Ultrashort-distance visual position posture measurement method for autonomous on-orbit servicing spacecraft |
CN105966644A (en) * | 2016-06-07 | 2016-09-28 | 中国人民解放军国防科学技术大学 | Simulation service star used for on-orbit service technical identification |
CN106780608A (en) * | 2016-11-23 | 2017-05-31 | 北京地平线机器人技术研发有限公司 | Posture information method of estimation, device and movable equipment |
Non-Patent Citations (3)
Title |
---|
"基于目标特征的单目视觉位置姿态测量技术研究";赵连军;《中国博士学位论文全文数据库(信息科技辑)》;20141015;I138-59 * |
"相对位姿测量解算的FPGA实现";金虎;《中国优秀硕士学位论文全文数据库(信息科技辑)》;20120115;I140-232 * |
"空间非合作目标的近距离相对位姿测量技术研究";魏许;《中国优秀硕士学位论文全文数据库(信息科技辑)》;20140615;I138-689 * |
Also Published As
Publication number | Publication date |
---|---|
CN107481281A (en) | 2017-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107481281B (en) | Relative pose calculation method and device and spacecraft rendezvous and docking system | |
CN111161179A (en) | Point cloud smoothing filtering method based on normal vector | |
Kolomenkin et al. | Geometric voting algorithm for star trackers | |
CN111160298B (en) | Robot and pose estimation method and device thereof | |
CN106780459A (en) | A kind of three dimensional point cloud autoegistration method | |
CN108492333B (en) | Spacecraft attitude estimation method based on satellite-rocket docking ring image information | |
CN111429494B (en) | Biological vision-based point cloud high-precision automatic registration method | |
CN104778688A (en) | Method and device for registering point cloud data | |
CN107481273B (en) | Rapid image matching method for autonomous navigation of spacecraft | |
CN105021124A (en) | Planar component three-dimensional position and normal vector calculation method based on depth map | |
CN111829532B (en) | Aircraft repositioning system and method | |
CN110147837B (en) | Method, system and equipment for detecting dense target in any direction based on feature focusing | |
CN100376883C (en) | Pixel frequency based star sensor high accuracy calibration method | |
KR101941878B1 (en) | System for unmanned aircraft image auto geometric correction | |
Marin et al. | Design and simulation of a high-speed star tracker for direct optical feedback control in ADCS | |
US20140092217A1 (en) | System for correcting rpc camera model pointing errors using 2 sets of stereo image pairs and probabilistic 3-dimensional models | |
CN113012084A (en) | Unmanned aerial vehicle image real-time splicing method and device and terminal equipment | |
JP2006195790A (en) | Lens distortion estimation apparatus, lens distortion estimation method, and lens distortion estimation program | |
CN116704029A (en) | Dense object semantic map construction method and device, storage medium and electronic equipment | |
CN107976176B (en) | Unmanned aerial vehicle data processing method and device | |
CN115752760A (en) | Phase recovery algorithm suitable for micro-vibration environment | |
JP2007034964A (en) | Method and device for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter, and program for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter | |
CN111833395A (en) | Direction-finding system single target positioning method and device based on neural network model | |
CN114219706A (en) | Image fast splicing method based on reduction of grid partition characteristic points | |
CN111833281A (en) | Multi-vision sensor data fusion method for recycling reusable rockets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20201127 Termination date: 20210823 |