CN107481281A - Relative pose computational methods and device and aerospace craft rendezvous and docking system - Google Patents
- Publication number: CN107481281A
- Application number: CN201710728275.0A
- Authority
- CN
- China
- Prior art keywords
- target
- relative pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06N3/08 — Physics; Computing; Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Learning methods
- G06F17/16 — Physics; Computing; Electric digital data processing; Complex mathematical operations; Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T7/20 — Physics; Computing; Image data processing or generation; Image analysis; Analysis of motion
- G06T7/60 — Physics; Computing; Image data processing or generation; Image analysis; Analysis of geometric attributes
Abstract
The present invention provides a relative pose computation method and device, and a spacecraft rendezvous and docking system. The relative pose computation method includes: acquiring a target image and computing the target circle-center coordinates; feeding the target circle-center coordinates into an optimal neural network to compute a relative pose initial value; and iterating on the relative pose initial value to compute the final relative pose. In the method, device, and rendezvous and docking system provided by the invention, the neural network is simple to apply, highly portable, easy to implement in hardware, and fast, so the output frame rate of the whole system is high.
Description
Technical field
The present invention relates to the field of aerospace technology, and in particular to a relative pose computation method and device and a spacecraft rendezvous and docking system.
Background art
Relative pose measurement is widely used in spacecraft rendezvous. Large spacecraft typically rely on several measurement means, such as visible-light and laser sensors, to guarantee mission success; such suites are unsuitable for small spacecraft. Small spacecraft instead usually pair a vision camera with a cooperative target, with LEDs in a particular arrangement commonly serving as the reference, and compute the pose with a PnP solver. The PnP computation is complex and difficult to port to hardware.
At present, the common ways to obtain a pose initial value include Kalman filtering and iterative nonlinear least squares. These methods themselves require an initial value, and they are not guaranteed to converge to the global minimum.
Summary of the invention
The present invention provides a relative pose computation method and device and a spacecraft rendezvous and docking system, to address the computational complexity of existing pose calculation methods in spacecraft rendezvous and docking.
The relative pose computation method provided by the invention includes:
acquiring a target image and computing the target circle-center coordinates;
feeding the target circle-center coordinates into an optimal neural network to compute a relative pose initial value;
iterating on the relative pose initial value to compute the final relative pose.
Further, in the relative pose computation method of the present invention, before the target circle-center coordinates are fed into the optimal neural network, the method also includes:
training to obtain the optimal neural network.
Further, in the relative pose computation method of the present invention, the step of training the optimal neural network specifically includes:
obtaining a relative pose data set and a camera coordinate data set;
deriving an input sample set from the relative pose data set, and computing an output ideal sample set from the camera coordinate data set;
normalizing the input sample set to obtain the neural network input set, and normalizing the output ideal sample set to obtain the neural network output set;
training a preset neural network with the neural network input set and the output ideal sample set;
selecting the optimal neural network.
Further, in the relative pose computation method of the present invention, the relative pose data set is built with formula (1):
Ω̄^ij = [L_ij, x_ij, y_ij, ψ_ij, ξ_ij, θ_ij] ………… (1);
where Ω̄^ij denotes the relative pose data set; L_ij, x_ij, and y_ij are the projections of the target-to-image-point distance on the z-, x-, and y-axes of the camera frame; ψ_ij, ξ_ij, θ_ij are the three Euler angles of the rotation matrix of the target relative to the camera frame; m is the number of indicator discs on the target and n the number of samples.
The camera coordinate data set is built with formula (2):
Θ_ij = R_ij Ω_0 + t_ij ………… (2);
where Θ_ij denotes the camera coordinate data set, Ω_0 the circle-center coordinates of each indicator disc on the target, R_ij the rotation matrix between the camera frame and the target frame, and t_ij the translation vector between the camera frame and the target frame.
The rotation matrix R_ij between the camera frame and the target frame is given by formula (3);
the translation vector t_ij between the camera frame and the target frame is given by formula (4):
t_ij = [x_ij, y_ij, L_ij]^T ………… (4).
Further, in the relative pose computation method of the present invention, the input sample set is expressed by formula (5), its elements drawn from the relative pose data set.
The output ideal sample set is built with formula (6); each of its elements is a coordinate of the target in the camera. The coordinate of the target in the camera is computed with formula (7):
where a_ij denotes the projection of the target, in the camera frame, onto the x direction of the image detector surface; b_ij its projection onto the y direction of the detector surface; μ the camera pixel size; f the camera focal length; and x_ij, y_ij, L_ij the projections of the target-to-image-point distance on the x-, y-, and z-axes of the camera frame.
The neural network input set is expressed by formula (8):
Ω = {Ω11, Ω12, ..., Ωij, ..., Ωmn} ………… (8);
where Ω denotes the neural network input set, and Ωij is obtained by normalizing the relative pose data set.
The neural network output set is expressed by formula (9):
C = {C11, C12, ..., Cij, ..., Cmn} ………… (9);
where C denotes the neural network output set, and Cij is obtained by normalizing the coordinates of the target in the camera.
Further, in the relative pose computation method of the present invention, the preset neural network includes the following parameter settings: the number of input-layer nodes and the number of output-layer nodes;
the node transfer function of the input and hidden layers is the tansig function, and the node transfer function of the output layer is the purelin function.
Further, in the relative pose computation method of the present invention, the step of training the preset neural network specifically includes:
in each iteration round, taking the neural network input set as the input of the preset neural network and computing the neural network output vector;
in each iteration round, comparing the neural network output vector with the elements of the output ideal sample set to compute the relative error;
adjusting the connection weights and thresholds of the preset neural network with the relative error, in a training loop;
terminating the training loop when the preset number of iterations is reached or the relative error falls within the preset error bound.
Further, in the relative pose computation method of the present invention, the relative error is computed with formula (10):
where e denotes the relative error, O_ij the neural network output vector, and the remaining term the coordinate of the target in the camera.
Further, in the relative pose computation method of the present invention, the step of selecting the optimal neural network includes:
obtaining the trained neural network under each hidden-layer size;
selecting, among the networks for all hidden-layer sizes, the one with the smallest error as the optimal neural network.
Further, in the relative pose computation method of the present invention, the step of computing the relative pose initial value includes:
normalizing the target circle-center coordinates;
feeding the normalized target circle-center coordinates into the optimal neural network;
denormalizing the output of the optimal neural network to obtain the relative pose initial value.
The relative pose computation device provided by the invention includes:
an image processing module, for acquiring a target image and computing the target circle-center coordinates;
a neural network module, for storing the trained optimal neural network;
an initial value computing module, for feeding the target circle-center coordinates into the optimal neural network and computing the relative pose initial value;
a final value computing module, for iterating on the relative pose initial value and computing the final relative pose.
The spacecraft rendezvous and docking system provided by the invention includes a cooperative target and the relative pose computation device of the present invention;
the relative pose computation device scans the cooperative target to obtain the target circle-center coordinates;
the cooperative target includes four discs arranged in an array, the area of each disc increasing in turn.
In the relative pose computation method and device and the spacecraft rendezvous and docking system provided by the invention, the neural network is simple to apply, highly portable, easy to implement in hardware, and fast, so the output frame rate of the whole system is high.
Brief description of the drawings
Other features, objects, and advantages of the invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is a flow diagram of the relative pose computation method of embodiment one of the present invention;
Fig. 2 is a flow diagram of the relative pose computation method of embodiment two;
Fig. 3 is a flow diagram of training the preset neural network in embodiment two;
Fig. 4 is a flow diagram of selecting the optimal neural network in embodiment two;
Fig. 5 is a flow diagram of computing the relative pose initial value in embodiment two;
Fig. 6 is a structural diagram of the relative pose computation device of embodiment three;
Fig. 7 is a schematic front view of the cooperative target of embodiment four.
In the drawings, the same or similar reference signs denote the same or similar parts.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
Embodiment one
Fig. 1 is a flow diagram of the relative pose computation method of embodiment one of the present invention. As shown in Fig. 1, the method includes:
Step S101: acquire a target image and compute the target circle-center coordinates.
A cooperative target is affixed to the target spacecraft. The cooperative target consists of four target circles of successively increasing size; for example, radii of 15 mm, 20 mm, 25 mm, and 30 mm, with a center-to-center spacing of 65 mm between adjacent circles. The docking spacecraft carries a vision camera and an image processing system, and operates at a distance of 0.1 m to 3 m from the cooperative target. Within this working range the vision camera captures an image of the cooperative target; the image processing system performs edge detection, identifies the four target circles, and computes their circle-center coordinates, yielding the coordinates of the target in the camera frame.
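As a toy illustration of the center-extraction step (the patent's system performs edge detection and fits four circles; neither the camera model nor the detector is specified here), the sketch below synthesizes a single filled disc and recovers its center as an intensity centroid:

```python
import numpy as np

def disc_center(img, threshold=0.5):
    """Toy stand-in for the edge-detection/center-fitting step: threshold
    the image and return the intensity centroid of the lit pixels. A real
    implementation would detect edges and fit each of the four circles."""
    ys, xs = np.nonzero(img > threshold)
    return xs.mean(), ys.mean()

def draw_disc(shape, cx, cy, r):
    """Synthesize a filled disc for testing."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2).astype(float)

img = draw_disc((240, 320), cx=160.0, cy=120.0, r=30)
cx_est, cy_est = disc_center(img)   # centroid recovers the center
```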
Step S102: feed the target circle-center coordinates into the optimal neural network and compute the relative pose initial value.
The optimal neural network is trained on the ground. The target circle-center coordinates from step S101 are fed into it, and the computed relative pose serves as the initial value for the subsequent solution.
Step S103: iterate on the relative pose initial value and compute the final relative pose.
The relative pose initial value obtained in step S102 is substituted into the Haralick iterative algorithm, which further refines the relative pose; after repeated iteration a high-precision relative pose is obtained as the final value.
Embodiment two
Fig. 2 is a flow diagram of the relative pose computation method of embodiment two of the present invention. As shown in Fig. 2, the method includes:
Step S201: train the optimal neural network.
The neural network must be trained before use. Step S201 specifically comprises the following steps S2011 to S2015:
Step S2011: obtain the relative pose data set and the camera coordinate data set.
According to the mission requirements, relative pose data of the target are generated at random as the training input data set. The relative pose data set is built with formula (1):
Ω̄^ij = [L_ij, x_ij, y_ij, ψ_ij, ξ_ij, θ_ij] ………… (1);
where Ω̄^ij denotes the relative pose data set; L_ij, x_ij, and y_ij are the projections of the target-to-image-point distance on the z-, x-, and y-axes of the camera frame; ψ_ij, ξ_ij, θ_ij are the three Euler angles of the rotation matrix of the target relative to the camera frame; m is the number of indicator discs on the target and n the number of samples, both natural numbers.
The camera coordinate data set is built with formula (2):
Θ_ij = R_ij Ω_0 + t_ij ………… (2);
where Θ_ij denotes the camera coordinate data set, Ω_0 the circle-center coordinates of each indicator disc on the target, R_ij the rotation matrix between the camera frame and the target frame, and t_ij the translation vector between the two frames.
The rotation matrix R_ij is given by formula (3); the translation vector t_ij is given by formula (4):
t_ij = [x_ij, y_ij, L_ij]^T ………… (4);
where ψ_ij, ξ_ij, θ_ij are the three Euler angles of the rotation matrix of the target relative to the camera frame, and L_ij, x_ij, y_ij are the projections of the target-to-image-point distance on the z-, x-, and y-axes of the camera frame.
Step S2012: derive the input sample set from the relative pose data set, and compute the output ideal sample set from the camera coordinate data set.
The input sample set is expressed by formula (5); its elements are drawn at random from the relative pose data set.
The output ideal sample set is built with formula (6); each of its elements is a coordinate of the target in the camera.
The coordinate of the target in the camera is computed with formula (7):
where a_ij denotes the projection of the target, in the camera frame, onto the x direction of the image detector surface; b_ij its projection onto the y direction; μ the camera pixel size; f the camera focal length; and x_ij, y_ij, L_ij the projections of the target-to-image-point distance on the x-, y-, and z-axes of the camera frame. The quantities x_ij, y_ij, L_ij come from the camera coordinate data set Θ_ij.
Step S2013: normalize the input sample set to obtain the neural network input set, and normalize the output ideal sample set to obtain the neural network output set. The normalization uses the min-max method.
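The min-max method of step S2013 is ordinary per-dimension min-max normalization. A sketch, assuming the target range is [-1, 1] (the default range of MATLAB's mapminmax; the patent does not specify it):

```python
import numpy as np

def minmax_normalize(data):
    """Column-wise min-max normalization of a sample set to [-1, 1]
    (each dimension L, x, y, psi, xi, theta scaled independently).
    Returns the scaled data plus the (lo, hi) bounds needed to invert."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return 2 * (data - lo) / (hi - lo) - 1, (lo, hi)

def minmax_denormalize(scaled, lo, hi):
    """Inverse mapping, used for the denormalization step at inference."""
    return (scaled + 1) * (hi - lo) / 2 + lo
```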
Specifically, each dimension of the sample coordinate vectors in the input sample set is min-max normalized, giving the neural network input set, expressed by formula (8):
Ω = {Ω11, Ω12, ..., Ωij, ..., Ωmn} ………… (8);
where Ω denotes the neural network input set, and Ωij is obtained by normalizing each dimension L_ij, x_ij, y_ij, ψ_ij, ξ_ij, θ_ij of the input sample set.
Likewise, each dimension of the sample coordinate vectors in the output ideal sample set is min-max normalized, giving the neural network output set, expressed by formula (9):
C = {C11, C12, ..., Cij, ..., Cmn} ………… (9);
where C denotes the neural network output set, and Cij is obtained by normalizing the coordinate vectors of the output ideal sample set, i.e. the coordinates of the target in the camera.
Step S2014: train the preset neural network with the neural network input set and the output ideal sample set.
The topology of the preset neural network must be determined first. A feedforward BP neural network is used, configured, for example, with p = 8 input-layer nodes and q = 6 output-layer nodes. The node transfer function of the input and hidden layers is the tansig function, and that of the output layer is the purelin function; both come from the MATLAB toolbox.
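The MATLAB transfer functions named here have simple closed forms: tansig(n) = 2/(1 + e^(-2n)) - 1, which equals tanh(n), and purelin is the identity. A minimal forward pass for the p = 8 input / q = 6 output network, with an assumed hidden-layer width:

```python
import numpy as np

def tansig(n):
    # MATLAB's tansig: 2/(1 + exp(-2n)) - 1, numerically identical to tanh(n)
    return np.tanh(n)

def purelin(n):
    # MATLAB's purelin: the identity (linear) transfer function
    return n

def forward(x, W1, b1, W2, b2):
    """One hidden layer: tansig at the hidden nodes, purelin at the output,
    matching the transfer functions named in the patent."""
    h = tansig(W1 @ x + b1)
    return purelin(W2 @ h + b2)
```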
Fig. 3 is a flow diagram of training the preset neural network in embodiment two of the present invention. As shown in Fig. 3, step S2014 specifically includes steps S20141 to S20144:
Step S20141: in each iteration round, take the neural network input set as the input of the preset neural network and compute the neural network output vector O_ij.
Step S20142: in each iteration round, compare the neural network output vector with the elements of the output ideal sample set and compute the relative error.
The relative error is computed with formula (10):
where e denotes the relative error, O_ij the neural network output vector (the actual output of the network), and the remaining term the element of the output ideal sample set C, i.e. the coordinate of the target in the camera.
Step S20143: adjust the connection weights and thresholds of the preset neural network with the relative error, in a training loop.
Step S20144: terminate the training loop when the preset number of iterations is reached or the relative error falls within the preset error bound. A stop condition is set for the loop, either an iteration count or a squared-error bound; when it is met, the loop ends and the neural network is saved.
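Steps S20141 to S20144 amount to a standard BP loop: forward pass, error against the ideal outputs, weight update, stop on iteration count or error bound. A minimal self-contained sketch (tanh hidden layer, linear output, mean-squared error standing in for the patent's formula (10); the learning rate and other hyperparameters are illustrative):

```python
import numpy as np

def train_bp(X, Y, hidden, lr=0.05, max_iter=5000, tol=1e-4, seed=0):
    """Minimal BP training loop for one hidden layer (tanh hidden, linear
    output). Stops when the error falls below `tol` or after `max_iter`
    rounds, mirroring steps S20141-S20144."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.standard_normal((n_in, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, n_out)) * 0.5
    b2 = np.zeros(n_out)
    e = np.inf
    for _ in range(max_iter):
        H = np.tanh(X @ W1 + b1)           # hidden activations (tansig)
        O = H @ W2 + b2                    # network output vector (purelin)
        err = O - Y
        e = float(np.mean(err ** 2))       # training error
        if e < tol:                        # error stop condition
            break
        dO = 2 * err / len(X)
        dH = (dO @ W2.T) * (1 - H ** 2)    # tanh derivative
        W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return (W1, b1, W2, b2), e
```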
Step S2015: select the optimal neural network. Fig. 4 is a flow diagram of selecting the optimal neural network in embodiment two of the present invention. As shown in Fig. 4, step S2015 specifically includes steps S20151 and S20152:
Step S20151: obtain the trained neural network under each hidden-layer size.
Step S20152: among the networks for all hidden-layer sizes, select the one with the smallest error as the optimal neural network.
In step S2015 the network structures, model parameters, and relative errors under the different hidden-layer sizes are obtained, and the optimal network and its parameters are saved as the final distortion correction model.
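Step S2015 can be sketched as a loop over candidate hidden-layer sizes that keeps the network with the smallest training error. The compact trainer below is illustrative, not the patent's exact procedure:

```python
import numpy as np

def train_once(X, Y, hidden, lr=0.05, iters=2000, seed=0):
    """Compact BP trainer (tanh hidden, linear output); returns the final
    mean-squared error for one hidden-layer size."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, Y.shape[1])) * 0.5
    b2 = np.zeros(Y.shape[1])
    for _ in range(iters):
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + b2 - Y
        dO = 2 * err / len(X)
        dH = (dO @ W2.T) * (1 - H ** 2)
        W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return float(np.mean(err ** 2))

def select_optimal(X, Y, hidden_sizes):
    """Step S2015: train one network per candidate hidden-layer size and
    keep the size with the smallest error."""
    errors = {h: train_once(X, Y, h) for h in hidden_sizes}
    best = min(errors, key=errors.get)
    return best, errors
```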
Step S202: acquire a target image and compute the target circle-center coordinates. Refer to step S101 of embodiment one; the details are not repeated.
Step S203: feed the target circle-center coordinates into the optimal neural network and compute the relative pose initial value. Fig. 5 is a flow diagram of computing the relative pose initial value in embodiment two of the present invention. As shown in Fig. 5, step S203 specifically includes steps S2031 to S2033:
Step S2031: normalize the target circle-center coordinates;
Step S2032: feed the normalized target circle-center coordinates into the optimal neural network;
Step S2033: denormalize the output of the optimal neural network to obtain the relative pose initial value.
Step S203 uses the distortion correction model obtained in step S2015, i.e. the optimal neural network, to perform the correction. The target circle-center coordinates form the input sample set; the normalization of step S2013 turns them into the neural network input set Ω, which is fed into the optimal neural network; the network output is denormalized to yield the corrected relative pose data, i.e. the neural network output set C, which serves as the relative pose initial value.
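Steps S2031 to S2033 form a normalize / forward / denormalize pipeline. A sketch, with min-max scaling to an assumed [-1, 1] range and a placeholder callable standing in for the stored optimal network:

```python
import numpy as np

def minmax_scale(v, lo, hi):
    """Min-max normalization to [-1, 1]. The patent only names the min-max
    method, so the exact target range is an assumption."""
    return 2 * (v - lo) / (hi - lo) - 1

def minmax_unscale(v, lo, hi):
    return (v + 1) * (hi - lo) / 2 + lo

def predict_pose(centers, net, in_lo, in_hi, out_lo, out_hi):
    """Steps S2031-S2033: normalize the circle-center vector, run the
    trained network, denormalize the output into a pose initial value.
    `net` is any callable standing in for the stored optimal network."""
    x = minmax_scale(centers, in_lo, in_hi)
    y = net(x)
    return minmax_unscale(y, out_lo, out_hi)
```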
Step S204: iterate on the relative pose initial value and compute the final relative pose. The relative pose data C serve as the initial value of the Haralick iterative algorithm, which yields the final pose data.
The Haralick algorithm involved in the present invention is sensitive to the choice of the pose initial value: a poorly chosen initial value can, to some extent, prevent global convergence and trap the algorithm in a local minimum. Obtaining a good pose initial value therefore matters all the more; the optimal neural network provides one.
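Haralick's pose iteration is not reproduced in the patent text; structurally it is a Gauss-Newton-style loop — linearize the residual, solve for a step, update — whose convergence depends on the initial value, as noted above. The sketch below shows that generic pattern on a toy residual, not the patent's reprojection model:

```python
import numpy as np

def gauss_newton(residual, jac, x0, iters=50):
    """Generic Gauss-Newton refinement loop: linearize the residual,
    solve the least-squares subproblem for a step, update. Haralick's
    pose iteration follows this pattern with a reprojection residual."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jac(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-12:
            break
    return x

# Toy problem: recover (a, b) from noise-free observations y = a * exp(b * t).
t = np.linspace(0, 1, 8)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jacf = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
p = gauss_newton(res, jacf, x0=[1.0, 0.0])
```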
Embodiment three
Fig. 6 is a structural diagram of the relative pose computation device of embodiment three of the present invention. As shown in Fig. 6, the device provided by embodiment three includes:
an image processing module 61, for acquiring a target image and computing the target circle-center coordinates;
a neural network module 62, for storing the trained optimal neural network;
an initial value computing module 63, for feeding the target circle-center coordinates into the optimal neural network and computing the relative pose initial value;
a final value computing module 64, for iterating on the relative pose initial value and computing the final relative pose.
The technical scheme of the device of embodiment three is the same as that of the method of embodiment two and is not repeated here.
Embodiment four
Embodiment four of the present invention provides a spacecraft rendezvous and docking system, including a cooperative target and the relative pose computation device shown in Fig. 6;
the relative pose computation device scans the cooperative target to obtain the target circle-center coordinates.
Fig. 7 is a schematic front view of the cooperative target of embodiment four. As shown in Fig. 7, the cooperative target 71 includes four discs 72 arranged in an array, the area of each disc increasing in turn.
Thanks to the above technical scheme, the present invention has the following advantages over existing products:
(1) the target is four ordered circles of successively increasing size, easy to recognize and order in the image; it needs no active light source, is simple in construction, and is widely applicable;
(2) the neural network solution followed by the Haralick iteration does not diverge; the overall algorithm is robust and accurate;
(3) the neural network is simple to apply, highly portable, and easy to implement in hardware; processing time is short, and the output frame rate of the whole system is high.
It should be noted that the present invention can be implemented in software and/or in a combination of software and hardware, for example with an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the invention is executed by a processor to realize the steps or functions described above. Likewise, the software program of the invention (including related data structures) can be stored in a computer-readable recording medium, for example RAM, a magnetic or optical drive, a floppy disk, or similar devices. Some steps or functions of the invention may also be implemented in hardware, for example as circuits that cooperate with a processor to perform the individual steps or functions.
Part of the invention may also be embodied as a computer program product, such as computer program instructions which, when executed by a computer, invoke or provide the method and/or technical scheme according to the invention. The program instructions invoking the method of the invention may be stored in fixed or removable recording media, transmitted in a data stream over broadcast or other signal-bearing media, and/or stored in the working memory of a computer device running according to the instructions. One embodiment of the invention accordingly includes a device comprising a memory for storing computer program instructions and a processor for executing them, wherein, when the instructions are executed by the processor, the device is triggered to run the methods and/or technical schemes of the foregoing embodiments.
It is evident to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments and can be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments are therefore to be regarded in every respect as illustrative and not restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalency of the claims are intended to be embraced therein. No reference sign in a claim should be construed as limiting the claim concerned. Moreover, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Several units or devices recited in a device claim may also be realized by a single unit or device through software or hardware. Words such as "first" and "second" denote names and do not indicate any particular order.
Claims (12)
1. A relative pose calculation method, characterized by comprising:
obtaining a target image and calculating the target circle-center coordinates;
inputting the target circle-center coordinates into an optimal neural network to calculate an initial relative pose value;
performing iterative computation on the initial relative pose value to calculate a final relative pose value.
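The three claimed steps can be sketched as a minimal pipeline (all stage functions — `find_centers`, `network`, `refine` — are hypothetical placeholders supplied by the caller, not the patented implementation):

```python
def compute_relative_pose(image, network, refine, find_centers):
    """Claim-1 pipeline: image -> circle centres -> NN initial pose
    -> iteratively refined final pose. Each stage is a placeholder."""
    centers = find_centers(image)   # step 1: target circle-centre coordinates
    pose0 = network(centers)        # step 2: initial pose from the neural network
    return refine(pose0, centers)   # step 3: iterative refinement to the final pose
```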
2. The relative pose calculation method according to claim 1, characterized in that, before inputting the target circle-center coordinates into the optimal neural network, the method further comprises:
training to obtain the optimal neural network.
3. The relative pose calculation method according to claim 2, characterized in that the step of training to obtain the optimal neural network specifically comprises:
obtaining a relative pose data set and a camera coordinate data set;
obtaining an input sample set from the relative pose data set, and calculating an ideal output sample set from the camera coordinate data set;
normalizing the input sample set to obtain a neural network input set, and normalizing the ideal output sample set to obtain a neural network output set;
training a preset neural network on the neural network input set and the ideal output sample set;
selecting the optimal neural network.
4. The relative pose calculation method according to claim 3, characterized in that the relative pose data set is constructed using the following formula (1):

$\bar{\Omega}^{ij} = [L_{ij}, x_{ij}, y_{ij}, \psi_{ij}, \xi_{ij}, \theta_{ij}], \quad (i = 1, 2, \dots, m;\ j = 1, 2, \dots, n) \quad \dots (1)$

wherein $\bar{\Omega}^{ij}$ represents the relative pose data set; $L_{ij}$, $x_{ij}$ and $y_{ij}$ represent the projections, on the z-, x- and y-axes of the camera coordinate system respectively, of the distance between the target and the imaging point; $\psi_{ij}$, $\xi_{ij}$ and $\theta_{ij}$ represent the three Euler angles corresponding to the rotation matrix of the target relative to the camera coordinate system; $m$ represents the number of indicator disks on the target; and $n$ represents the number of samples;
the camera coordinate data set is constructed using the following formula (2):

$\Theta^{ij} = R^{ij}\Omega_0 + t^{ij} \quad \dots (2)$

wherein $\Theta^{ij}$ represents the camera coordinate data set; $\Omega_0$ represents the circle-center coordinates of each indicator disk on the target; $R^{ij}$ represents the rotation matrix between the camera coordinate system and the target coordinate system; and $t^{ij}$ represents the translation vector between the camera coordinate system and the target coordinate system;

the rotation matrix $R^{ij}$ between the camera coordinate system and the target coordinate system is expressed by the following formula (3):

$R^{ij} = \begin{pmatrix} \cos\theta_{ij} & -\sin\theta_{ij} & 0 \\ \sin\theta_{ij} & \cos\theta_{ij} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\xi_{ij} & 0 & \sin\xi_{ij} \\ 0 & 1 & 0 \\ -\sin\xi_{ij} & 0 & \cos\xi_{ij} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\psi_{ij} & -\sin\psi_{ij} \\ 0 & \sin\psi_{ij} & \cos\psi_{ij} \end{pmatrix} \quad \dots (3)$

the translation vector $t^{ij}$ between the camera coordinate system and the target coordinate system is expressed by the following formula (4):

$t^{ij} = [x_{ij}, y_{ij}, L_{ij}]^T \quad \dots (4)$.
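Formulas (2)–(4) can be illustrated with a short NumPy sketch; the function names and the Z-Y-X composition order follow formula (3), but they are illustrative only, not the patented implementation:

```python
import numpy as np

def rotation_matrix(theta, xi, psi):
    """Rotation matrix of formula (3): product of rotations about
    the z-axis (theta), y-axis (xi), and x-axis (psi)."""
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(xi), 0.0, np.sin(xi)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(xi), 0.0, np.cos(xi)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(psi), -np.sin(psi)],
                   [0.0, np.sin(psi),  np.cos(psi)]])
    return Rz @ Ry @ Rx

def camera_coords(omega0, theta, xi, psi, x, y, L):
    """Formula (2): map the target-frame disk centres omega0 (k x 3 array)
    into the camera frame using R (formula 3) and t (formula 4)."""
    R = rotation_matrix(theta, xi, psi)
    t = np.array([x, y, L])
    return (R @ omega0.T).T + t
```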
5. The relative pose calculation method according to claim 4, characterized in that:
the input sample set is expressed by the following formula (5):

$\bar{\Omega} = \{\bar{\Omega}^{11}, \bar{\Omega}^{12}, \dots, \bar{\Omega}^{ij}, \dots, \bar{\Omega}^{mn}\} \quad \dots (5)$

wherein $\bar{\Omega}$ represents the input sample set and $\bar{\Omega}^{ij}$ represents the relative pose data set;
the ideal output sample set is constructed using the following formula (6):

$\bar{C} = \{\bar{C}_{11}, \bar{C}_{12}, \dots, \bar{C}_{ij}, \dots, \bar{C}_{mn}\} \quad \dots (6)$

wherein $\bar{C}$ represents the ideal output sample set and $\bar{C}_{ij}$ represents the coordinates of the target in the camera; the coordinates of the target in the camera are specifically calculated using the following formula (7):

$\begin{cases} \bar{C}_{ij} = \left( \dfrac{a_{ij}}{\mu}, \dfrac{b_{ij}}{\mu} \right) \\[4pt] a_{ij} = \dfrac{f}{L_{ij}} \cdot x_{ij} \\[4pt] b_{ij} = \dfrac{f}{L_{ij}} \cdot y_{ij} \end{cases} \quad \dots (7)$

wherein $\bar{C}_{ij}$ represents the coordinates of the target in the camera; $a_{ij}$ and $b_{ij}$ represent the projections of the target, in the camera coordinate system, onto the x and y directions of the image detector surface; $\mu$ represents the camera pixel size; $f$ represents the camera focal length; and $x_{ij}$, $y_{ij}$ and $L_{ij}$ represent the projections, on the x-, y- and z-axes of the camera coordinate system, of the distance between the target and the imaging point;
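Formula (7) is a standard pinhole projection followed by conversion to pixel units; a minimal sketch (names and numeric values are illustrative only):

```python
def project_to_pixels(x, y, L, f, mu):
    """Formula (7): project a target point at depth L onto the detector
    plane (focal length f), then convert to pixel units via pixel size mu."""
    a = (f / L) * x   # detector-plane projection in the x direction
    b = (f / L) * y   # detector-plane projection in the y direction
    return (a / mu, b / mu)
```

For instance, a point 0.2 m off-axis at 10 m depth, seen through a 50 mm lens with 10 µm pixels, lands 100 pixels from the principal point.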
the neural network input set is expressed by the following formula (8):

$\Omega = \{\Omega^{11}, \Omega^{12}, \dots, \Omega^{ij}, \dots, \Omega^{mn}\} \quad \dots (8)$

wherein $\Omega$ represents the neural network input set, and $\Omega^{ij}$ is obtained by normalizing the relative pose data set $\bar{\Omega}^{ij}$;

the neural network output set is expressed by the following formula (9):

$C = \{C_{11}, C_{12}, \dots, C_{ij}, \dots, C_{mn}\} \quad \dots (9)$

wherein $C$ represents the neural network output set, and $C_{ij}$ is obtained by normalizing the coordinates $\bar{C}_{ij}$ of the target in the camera.
6. The relative pose calculation method according to claim 5, characterized in that the preset neural network includes the following parameter settings: the number of input-layer nodes and the number of output-layer nodes;
wherein the node transfer function of the input layer and the hidden layer uses the tansig function, and the node transfer function of the output layer uses the purelin function.
7. The relative pose calculation method according to claim 6, characterized in that the step of training the preset neural network specifically comprises:
in each iteration round, taking the neural network input set as the input of the preset neural network and calculating the neural network output vector;
in each iteration round, comparing the neural network output vector with the elements of the ideal output sample set to calculate a relative error;
adjusting the connection weights and thresholds of the preset neural network using the relative error, and performing cyclic training;
terminating the cyclic training when a preset number of iterations is reached or the relative error is within a preset error value.
8. The relative pose calculation method according to claim 7, characterized in that the relative error is calculated using the following formula (10):

$e = \sum_{i=1}^{m} \sum_{j=1}^{n} \left| O^{ij} - \bar{C}^{ij} \right| \quad \dots (10)$

wherein $e$ represents the relative error, $O^{ij}$ represents the neural network output vector, and $\bar{C}^{ij}$ represents the coordinates of the target in the camera.
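The training loop of claims 7–8 can be sketched as follows. The patent specifies tansig/purelin layers and the formula-(10) stopping error but not the exact weight-update rule, so the plain gradient-descent update on a smooth squared-error surrogate below is an assumption, not the patented method:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, Y, hidden=8, lr=0.05, max_epochs=2000, tol=1e-3):
    """Claim-7 loop: tanh ('tansig') hidden layer, linear ('purelin')
    output; weights and biases adjusted each round until the summed
    absolute error of formula (10) is below tol or max_epochs is hit."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(max_epochs):
        H = np.tanh(X @ W1 + b1)      # tansig hidden layer
        O = H @ W2 + b2               # purelin output layer
        err = np.abs(O - Y).sum()     # formula (10) stopping error
        if err < tol:
            break
        # backprop on 0.5*sum((O-Y)^2)/N, a smooth stand-in for the L1 error
        dO = (O - Y) / len(X)
        dW2 = H.T @ dO; db2 = dO.sum(axis=0)
        dH = (dO @ W2.T) * (1 - H ** 2)
        dW1 = X.T @ dH; db1 = dH.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), err
```

Per claim 9, this training would be repeated over several hidden-layer sizes, keeping the network with the smallest final error.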
9. The relative pose calculation method according to claim 8, characterized in that the step of selecting the optimal neural network comprises:
obtaining the trained neural network under each hidden-layer node count;
selecting, from the networks under each hidden-layer node count, the network with the minimum error as the optimal neural network.
10. The relative pose calculation method according to any one of claims 1 to 9, characterized in that the step of calculating the initial relative pose value comprises:
normalizing the target circle-center coordinates;
inputting the normalized target circle-center coordinates into the optimal neural network;
denormalizing the output of the optimal neural network to calculate the initial relative pose value.
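Claim 10's normalize, infer, denormalize pipeline as a sketch. The patent does not fix the normalization scheme, so min–max scaling to [-1, 1] is an assumption, and `forward` is a placeholder for the trained optimal neural network:

```python
import numpy as np

def initial_pose(centers, forward, in_lo, in_hi, out_lo, out_hi):
    """Claim-10 pipeline: scale detected circle-centre coordinates into
    the network's input range, run inference, rescale the output back
    into physical pose units. (in_lo, in_hi) and (out_lo, out_hi) are
    the ranges recorded when the training sets were normalized."""
    x = 2 * (centers - in_lo) / (in_hi - in_lo) - 1   # normalize to [-1, 1]
    y = forward(x)                                    # network inference
    return (y + 1) / 2 * (out_hi - out_lo) + out_lo   # denormalize
```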
11. A relative pose calculation device, characterized by comprising:
an image processing module for obtaining a target image and calculating the target circle-center coordinates;
a neural network module for storing the trained optimal neural network;
an initial value calculation module for inputting the target circle-center coordinates into the optimal neural network to calculate the initial relative pose value;
a final value calculation module for performing iterative computation on the initial relative pose value to calculate the final relative pose value.
12. A spacecraft rendezvous and docking system, characterized by comprising: a cooperative target and the relative pose calculation device according to claim 11;
the relative pose calculation device scans the cooperative target to obtain the target circle-center coordinates;
wherein the cooperative target comprises four disks arranged in an array, the area of each disk increasing successively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710728275.0A CN107481281B (en) | 2017-08-23 | 2017-08-23 | Relative pose calculation method and device and spacecraft rendezvous and docking system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107481281A true CN107481281A (en) | 2017-12-15 |
CN107481281B CN107481281B (en) | 2020-11-27 |
Family
ID=60601275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710728275.0A Expired - Fee Related CN107481281B (en) | 2017-08-23 | 2017-08-23 | Relative pose calculation method and device and spacecraft rendezvous and docking system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107481281B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103245335A (en) * | 2013-05-21 | 2013-08-14 | 北京理工大学 | Ultrashort-distance visual position posture measurement method for autonomous on-orbit servicing spacecraft |
CN105966644A (en) * | 2016-06-07 | 2016-09-28 | 中国人民解放军国防科学技术大学 | Simulation service star used for on-orbit service technical identification |
CN106780608A (en) * | 2016-11-23 | 2017-05-31 | 北京地平线机器人技术研发有限公司 | Posture information method of estimation, device and movable equipment |
Non-Patent Citations (4)
Title |
---|
Hu Shouren: "Neural Network Application Technology", 31 December 1993 *
Zhao Lianjun: "Research on Monocular Vision Position and Attitude Measurement Technology Based on Target Features", China Doctoral Dissertations Full-text Database (Information Science and Technology) *
Jin Hu: "FPGA Implementation of Relative Pose Measurement Solving", China Masters' Theses Full-text Database (Information Science and Technology) *
Wei Xu: "Research on Close-range Relative Pose Measurement Technology for Non-cooperative Space Targets", China Masters' Theses Full-text Database (Information Science and Technology) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110186467A (en) * | 2018-02-23 | 2019-08-30 | 通用汽车环球科技运作有限责任公司 | Group's sensing points cloud map |
CN113916254A (en) * | 2021-07-22 | 2022-01-11 | 北京控制工程研究所 | Docking type capture spacecraft autonomous rendezvous and docking test method |
CN114396872A (en) * | 2021-12-29 | 2022-04-26 | 哈尔滨工业大学 | Conversion measuring device and conversion measuring method for butt joint hidden characteristics of aircraft cabin |
CN114882110A (en) * | 2022-05-10 | 2022-08-09 | 中国人民解放军63921部队 | Relative pose measurement and target design method suitable for micro-nano satellite self-assembly |
CN114882110B (en) * | 2022-05-10 | 2024-04-12 | 中国人民解放军63921部队 | Relative pose measurement and target design method suitable for micro-nano satellite self-assembly |
Also Published As
Publication number | Publication date |
---|---|
CN107481281B (en) | 2020-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107481281A (en) | Relative pose computational methods and device and aerospace craft rendezvous and docking system | |
CN106553195B (en) | Object 6DOF localization method and system during industrial robot crawl | |
CN102254154B (en) | Method for authenticating human-face identity based on three-dimensional model reconstruction | |
CN104748750B (en) | A kind of model constrained under the Attitude estimation of Three dimensional Targets in-orbit method and system | |
US9153030B2 (en) | Position and orientation estimation method and apparatus therefor | |
CN106780459A (en) | A kind of three dimensional point cloud autoegistration method | |
Giuliani et al. | Height fluctuations in interacting dimers | |
CN106503671A (en) | The method and apparatus for determining human face posture | |
CN102750704B (en) | Step-by-step video camera self-calibration method | |
CN101377812B (en) | Method for recognizing position and attitude of space plane object | |
CN106774309A (en) | A kind of mobile robot is while visual servo and self adaptation depth discrimination method | |
CN103700135B (en) | A kind of three-dimensional model local spherical mediation feature extracting method | |
CN108804846A (en) | A kind of data-driven attitude controller design method of noncooperative target assembly spacecraft | |
CN102810204B (en) | Based on the monocular vision single image localization method of parallelogram | |
Yang et al. | Robust and efficient star identification algorithm based on 1-D convolutional neural network | |
Yu et al. | Task coupling based layered cooperative guidance: Theories and applications | |
Tseng et al. | Autonomous driving for natural paths using an improved deep reinforcement learning algorithm | |
CN104976991B (en) | A kind of acquisition methods for the three-line imagery image space deviation for considering attitude of satellite change | |
CN102116633A (en) | Simulation checking method for deep-space optical navigation image processing algorithm | |
Gabern et al. | Binary asteroid observation orbits from a global dynamical perspective | |
CN111931336B (en) | Complex welding part unit dividing method and device and readable storage medium | |
CN103954287B (en) | A kind of roadmap planning method of survey of deep space independent navigation | |
Vithani et al. | Estimation of object kinematics from point data | |
CN111833395A (en) | Direction-finding system single target positioning method and device based on neural network model | |
Putra et al. | Fuzzy Lightweight CNN for Point Cloud Object Classification based on Voxel |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201127 Termination date: 20210823 |