CN113074725B - Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion - Google Patents
- Publication number
- CN113074725B (application CN202110512081.3A)
- Authority
- CN
- China
- Prior art keywords
- robot
- positioning
- state
- equation
- vertical distance
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
Abstract
A small underwater multi-robot cooperative positioning method and system based on multi-source information fusion belongs to the technical field of multi-robot cooperative positioning. It addresses the problem that a small underwater robot, owing to its small size and limited energy supply, cannot be positioned with a fiber-optic gyroscope, a Doppler velocity log (DVL), or an underwater acoustic positioning system. The invention fuses the vertical distance between two robots, derived from pressure sensors, with the three-dimensional position of the robot obtained from a panoramic stereo perception device (i.e., binocular vision positioning) to produce an accurate spatial position for the underwater robot. In the special underwater environment, no high-power, heavy positioning equipment is required, which solves the above problem and effectively improves the precision and robustness of relative cooperative positioning for small underwater multi-robot systems. The invention also provides a theoretical basis for cooperative formation control of small amphibious robots.
Description
Technical Field
The invention relates to the technical field of multi-robot cooperative positioning, in particular to a small underwater multi-robot cooperative positioning method and system based on multi-source information fusion.
Background
In recent years, inspired by fish schooling and bird flocking in nature, researchers have proposed relative cooperative positioning techniques. Each robot has a limited external perception and communication range; within the effective measurement range, a robot perceives or positions its neighbors through sensors such as vision and infrared, and relays position and attitude information through a multi-hop communication mechanism, so that relative cooperative positioning of small multi-robot systems can be realized.
In 2014, inspired by bird flock formation flight, researchers at ETH Zurich in Switzerland used ARToolKit markers to identify unmanned aerial vehicles (UAVs) and performed relative pose estimation among multiple UAVs with onboard visual perception sensors and communication equipment combined with a Kalman filtering algorithm, completing fully distributed leader-follower formation flight control and conducting indoor and outdoor formation flight tests. In 2015 and 2016, targeting environments where UAV GPS positioning is limited, researchers at the University of Pennsylvania proposed a UAV swarm system based on relative positioning that requires no additional global positioning system: each UAV localizes neighboring marker-tagged UAVs through onboard monocular vision. In real environments they completed leader-follower formation stability experiments, swarm stability tests, and a deployment experiment in a surveillance scenario.
To realize an autonomous small-AUV close-range formation tracking system, researchers at the University of Girona in Spain proposed, in 2015, a multi-AUV close-range relative cooperative positioning system based on vision and active beacon lights: a panoramic vision camera is mounted below the leader AUV, and four active beacon lights are fixed on top of the follower AUV body, with the positions of the lights on the robot fixed and known. The leader AUV detects the follower's beacon lights visually and estimates the follower's position and attitude. However, this beacon-based positioning method must detect all of the lights; if any single light is misdetected or occluded, the follower AUV cannot be positioned.
Due to the particularity of the underwater environment, electromagnetic waves attenuate rapidly in water, the robot cannot receive GPS signals while submerged, and satellite positioning and navigation systems are therefore of limited use. The underwater acoustic communication equipment, inertial navigation equipment, DVL (Doppler velocity log), and sonar equipment on which absolute positioning methods rely (underwater acoustic positioning, inertial/dead reckoning, seabed terrain matching, and the like) demand high power and are heavy, so these methods cannot be applied to small amphibious multi-robot systems.
Disclosure of Invention
In view of the above problems, the present invention provides a small underwater multi-robot cooperative positioning method and system based on multi-source information fusion, to solve the problem that a small underwater robot, due to its small size and limited energy supply, cannot be positioned with a fiber-optic gyroscope, a Doppler velocity log (DVL), or an underwater acoustic positioning system.
According to one aspect of the invention, a small underwater multi-robot cooperative positioning method based on multi-source information fusion is provided, and the method comprises the following steps:
step one, acquiring sensor data, wherein the sensor data comprises the underwater pressure sensor values of the positioning robot and the positioned robot, and an image sequence containing the positioned robot;
step two, calculating the vertical distance between the positioning robot and the positioned robot from the underwater pressure sensor values of the two robots;
step three, calculating and obtaining three-dimensional space position coordinates of the positioned robot according to the image sequence containing the positioned robot;
and step four, fusing the vertical distance between the positioning robot and the positioned robot with the three-dimensional position coordinates of the positioned robot to obtain the final three-dimensional position of the positioned robot.
Further, the specific process of the second step comprises:
Step 2.1: establish a linear equation relating the pressure difference between the positioning robot and the positioned robot to their vertical distance;
The relation between the pressure difference and the vertical distance is:

Z_p = k_p · P_12

where Z_p denotes the vertical distance; P_12 = P_1 − P_2 denotes the pressure difference, with P_1 the pressure sensor value of the positioning robot and P_2 that of the positioned robot; and k_p is the proportional coefficient between pressure difference and vertical distance;
Step 2.2: determine the state equation and observation equation of the first system from the linear equation;
the state equation and observation equation for the first system are:
the state vector and observation vector of the first system are:
wherein k represents time; v. ofpRepresenting the vertical direction speed of the positioned robot; a represents a state transition matrix; c denotes an observation matrix which is, is gaussian process noise;(ii) observing noise for gaussians;
Step 2.3: filter the vertical distance with the Kalman algorithm according to the determined state equation and observation equation of the first system, obtaining the filtered vertical distance between the positioning robot and the positioned robot.
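The first-system filter above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the constant-velocity matrices A = [[1, T], [0, 1]] and C = [1, 0], the noise covariances, and the synthetic data are all assumptions.

```python
import numpy as np

def kalman_vertical_distance(z_meas, dt=0.1, q=1e-3, r=0.05):
    """Filter noisy vertical-distance measurements Z_p with a
    constant-velocity Kalman filter over the state [Z_p, v_p]."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # assumed state transition matrix
    C = np.array([[1.0, 0.0]])              # observe Z_p only
    Q = q * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[r]])                     # observation noise covariance (assumed)
    x = np.array([z_meas[0], 0.0])
    P = np.eye(2)
    out = []
    for z in z_meas:
        x = A @ x                           # one-step-ahead prediction
        P = A @ P @ A.T + Q
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # filter gain
        x = x + K @ (np.array([z]) - C @ x)            # measurement update
        P = (np.eye(2) - K @ C) @ P
        out.append(x[0])
    return np.array(out)

# synthetic pressure-derived measurements around a true 1.5 m vertical gap
rng = np.random.default_rng(0)
meas = 1.5 + 0.2 * rng.standard_normal(200)
filtered = kalman_vertical_distance(meas)
```

The filtered trace settles near the true vertical distance with substantially less scatter than the raw measurements.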
Further, the specific process of the third step comprises:
Step 3.1: obtain the pixel coordinates of the positioned robot through a visual target recognition algorithm, from the image sequence containing the positioned robot;
Step 3.2: establish the visual positioning model equation of the positioned robot;
The visual positioning model equation of the positioned robot relates the observed pixel coordinates to the robot's position, where i denotes the binocular camera index; l and r denote the left and right camera coordinate systems of the binocular camera; u and v denote pixel coordinates; and (x_b, y_b, z_b) denotes the coordinates of the positioned robot in the robot body coordinate system;
Step 3.3: determine the state equation and observation equation of the second system from the visual positioning model equation;
The state equation and observation equation of the second system take the nonlinear form:

x_k = f(x_{k−1}) + w_{k−1}
y_k = h(x_k) + v_k

where x_k is the state vector at time k; y_k is the observation vector at time k; w_k is Gaussian process noise; and v_k is Gaussian observation noise;

The state vector and observation vector of the second system are:

x_k = [x_b(k), y_b(k), z_b(k)]^T,  y_k = [u_l(k), v_l(k), u_r(k), v_r(k)]^T
Step 3.4: filter the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm, according to the determined state equation and observation equation of the second system, obtaining the filtered three-dimensional position coordinates (x_b, y_b, z_b) of the positioned robot.
Further, the specific process of the fourth step includes:
Step 4.1: acquire the attitude angle of the positioned robot;
Step 4.2: using the attitude angle, convert the three-dimensional position coordinates (x_b, y_b, z_b) of the positioned robot in the robot body coordinate system into the three-dimensional position coordinates (x_w, y_w, z_w) in the world coordinate system;
Step 4.3: fuse the Z-axis coordinate z_w with the vertical distance Z_p obtained in step two to obtain the global estimate Ẑ_g. The global state estimate and its covariance follow the inverse-variance fusion rule:

σ_g² = (1/σ_p² + 1/σ_v²)^{−1}
Ẑ_g = σ_g² (Z_p/σ_p² + z_w/σ_v²)

where σ_p² is the covariance of the vertical distance and σ_v² is the covariance of the Z-axis coordinate z_w;
Step 4.4: combine the obtained Ẑ_g with the world-frame position coordinates (x_w, y_w) of the positioned robot to obtain its final three-dimensional position.
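Step 4.3's fusion of the pressure-derived Z_p and the vision-derived z_w can be illustrated with inverse-variance weighting. This is a sketch under the assumption that the main filter combines the two scalar estimates by their covariances; the numeric values are made up.

```python
def fuse_vertical(z_p, var_p, z_v, var_v):
    """Inverse-variance fusion of the pressure-derived vertical distance
    Z_p and the vision-derived world-frame Z coordinate."""
    var_g = 1.0 / (1.0 / var_p + 1.0 / var_v)   # fused covariance
    z_g = var_g * (z_p / var_p + z_v / var_v)   # fused estimate
    return z_g, var_g

# made-up numbers: pressure says 1.48 m (tight), vision says 1.60 m (looser)
z_g, var_g = fuse_vertical(z_p=1.48, var_p=0.01, z_v=1.60, var_v=0.04)
```

The fused estimate lands closer to the lower-variance (pressure) source, and the fused covariance is smaller than either input, which is the motivation for combining the two sensors.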
Further, the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm in Step 3.4 comprises the following steps:
Step 3.4.1: given an initial state value, obtain the sigma point set {χ_{i,k−1}}, i = 0, 1, …, 2n, of the state estimate through the UT (unscented transform);
Step 3.4.2: time update, i.e., one-step-ahead prediction — compute the predicted state and predicted covariance.

Propagate the sigma points at time k−1 through the state equation of the second system:

χ_{i,k|k−1} = f(χ_{i,k−1})

The weighted combination of the vectors χ_{i,k|k−1} gives the one-step-ahead state estimate at time k; accounting for the process noise then yields the predicted covariance;
Step 3.4.3: measurement update, i.e., correct the one-step prediction with the measurement — substitute the predicted sigma points into the observation equation of the second system to obtain the predicted measurements. Their weighted combination gives the measurement prediction at time k and its covariance; the cross-covariance between the state prediction and the measurement prediction is then computed;
Step 3.4.4: compute the filter gain and update the state estimate and covariance;
Step 3.4.5: iterate Steps 3.4.2 to 3.4.4 to obtain the estimate of the state vector.
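Steps 3.4.1 to 3.4.5 can be sketched as a generic scaled-UT UKF iteration. This is a hedged illustration: the state model f, observation model h, noise levels, and the range/bearing toy example below are stand-ins, not the patent's binocular vision model.

```python
import numpy as np

def sigma_points(x, P, alpha=0.9, beta=2.0, kappa=0.0):
    """Scaled-UT sigma points and mean/covariance weights (Step 3.4.1)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)
    pts = [x] + [x + L[:, i] for i in range(n)] + [x - L[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + 1.0 - alpha**2 + beta
    return np.array(pts), Wm, Wc

def ukf_step(x, P, z, f, h, Q, R):
    """One predict+update iteration (Steps 3.4.2 to 3.4.4)."""
    X, Wm, Wc = sigma_points(x, P)
    Xp = np.array([f(s) for s in X])     # propagate through the state equation
    xp = Wm @ Xp                         # one-step-ahead state estimate
    Pp = Q + sum(Wc[i] * np.outer(Xp[i] - xp, Xp[i] - xp) for i in range(len(Xp)))
    Zs = np.array([h(s) for s in Xp])    # propagate through the observation equation
    zp = Wm @ Zs                         # measurement prediction
    Pzz = R + sum(Wc[i] * np.outer(Zs[i] - zp, Zs[i] - zp) for i in range(len(Zs)))
    Pxz = sum(Wc[i] * np.outer(Xp[i] - xp, Zs[i] - zp) for i in range(len(Zs)))
    K = Pxz @ np.linalg.inv(Pzz)         # filter gain
    return xp + K @ (z - zp), Pp - K @ Pzz @ K.T

# toy stand-in: estimate a static 2-D position from a range/bearing observation
true = np.array([3.0, 4.0])
h = lambda s: np.array([np.hypot(s[0], s[1]), np.arctan2(s[1], s[0])])
f = lambda s: s                          # static state model (assumption)
z = h(true)
x, P = np.array([2.5, 3.5]), np.eye(2)
Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(2)
for _ in range(30):
    x, P = ukf_step(x, P, z, f, h, Q, R)
```

Iterating the step drives the estimate toward the position consistent with the nonlinear measurement, which is exactly the role sub-filter II plays for the binocular observation model.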
According to another aspect of the invention, a small underwater multi-robot cooperative positioning system based on multi-source information fusion is provided. The system comprises a sensor layer and a data fusion layer, wherein:
the sensor layer comprises a panoramic stereo perception device, pressure sensors and an inertial sensor; the panoramic stereo perception device comprises several groups of binocular cameras and is used for acquiring an image sequence containing the positioned robot; the pressure sensors are used for acquiring the underwater pressure sensor values of the positioning robot and of the positioned robot; the inertial sensor is used for acquiring the attitude angle of the positioned robot;
the data fusion layer comprises a sub-filter I, a sub-filter II and a main filter; the sub-filter I is used for calculating and obtaining the vertical distance between the positioning robot and the positioned robot according to the value of the underwater pressure sensor of the positioning robot and the value of the underwater pressure sensor of the positioned robot; the sub-filter II is used for calculating and obtaining the three-dimensional space position coordinates of the positioned robot according to the image sequence containing the positioned robot; the main filter is used for carrying out information fusion on the vertical distance between the positioning robot and the positioned robot and the three-dimensional space position coordinate of the positioned robot to obtain the final three-dimensional space position of the positioned robot;
the sensor layer and the data fusion layer communicate wirelessly.
Further, the specific process of obtaining the vertical distance between the positioning robot and the positioned robot in sub-filter I includes: first, establishing a linear equation of the pressure difference and the vertical distance between the positioning robot and the positioned robot. The relation between the pressure difference and the vertical distance is:

Z_p = k_p · P_12

where Z_p denotes the vertical distance; P_12 = P_1 − P_2 denotes the pressure difference, with P_1 the pressure sensor value of the positioning robot and P_2 that of the positioned robot; and k_p is the proportional coefficient between pressure difference and vertical distance;
then, determining the state equation and observation equation of the first system from the linear equation. They take the linear form:

x_k = A x_{k−1} + w_{k−1}
y_k = C x_k + v_k

The state vector and observation vector of the first system are:

x_k = [Z_p(k), v_p(k)]^T,  y_k = Z_p(k)

where k denotes the time step; v_p denotes the vertical velocity of the positioned robot; A denotes the state transition matrix; C denotes the observation matrix; w_k is Gaussian process noise; and v_k is Gaussian observation noise;
and finally, filtering the vertical distance by adopting a Kalman algorithm according to the determined state equation and the observation equation of the first system to obtain the vertical distance between the positioning robot and the positioned robot after filtering.
Further, the specific process of obtaining the three-dimensional spatial position coordinates of the positioned robot in the sub-filter II includes:
First, the pixel coordinates of the positioned robot are obtained through a visual target recognition algorithm from the image sequence containing the positioned robot; then the visual positioning model equation of the positioned robot is established. It relates the observed pixel coordinates to the robot's position, where i denotes the binocular camera index; l and r denote the left and right camera coordinate systems of the binocular camera; u and v denote pixel coordinates; and (x_b, y_b, z_b) denotes the coordinates of the positioned robot in the robot body coordinate system;
then, the state equation and observation equation of the second system are determined from the visual positioning model equation. They take the nonlinear form:

x_k = f(x_{k−1}) + w_{k−1}
y_k = h(x_k) + v_k

where x_k is the state vector at time k; y_k is the observation vector at time k; w_k is Gaussian process noise; and v_k is Gaussian observation noise;

The state vector and observation vector of the second system are:

x_k = [x_b(k), y_b(k), z_b(k)]^T,  y_k = [u_l(k), v_l(k), u_r(k), v_r(k)]^T
finally, according to the determined state equation and observation equation of the second system, the coordinates of the positioned robot in the robot body coordinate system are filtered with the unscented Kalman filter algorithm, yielding the filtered three-dimensional position coordinates (x_b, y_b, z_b) of the positioned robot.
Further, the specific process of obtaining the final three-dimensional spatial position of the positioned robot in the main filter includes:
First, the attitude angle of the positioned robot is acquired. Then, using the attitude angle, the three-dimensional position coordinates (x_b, y_b, z_b) of the positioned robot in the robot body coordinate system are converted into the coordinates (x_w, y_w, z_w) in the world coordinate system. Next, the Z-axis coordinate z_w is fused with the vertical distance Z_p obtained in sub-filter I to obtain Ẑ_g. The global state estimate and its covariance follow the inverse-variance fusion rule:

σ_g² = (1/σ_p² + 1/σ_v²)^{−1}
Ẑ_g = σ_g² (Z_p/σ_p² + z_w/σ_v²)

where σ_p² is the covariance of the vertical distance and σ_v² is the covariance of the Z-axis coordinate z_w;
finally, the obtained Ẑ_g is combined with the world-frame position coordinates (x_w, y_w) of the positioned robot to obtain its final three-dimensional position.
Further, the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm in sub-filter II includes: first, given an initial state value, the sigma point set {χ_{i,k−1}}, i = 0, 1, …, 2n, of the state estimate is obtained through the UT; then the time update (one-step-ahead prediction) computes the predicted state and predicted covariance by propagating the sigma points at time k−1 through the state equation of the second system:

χ_{i,k|k−1} = f(χ_{i,k−1})

The weighted combination of the vectors χ_{i,k|k−1} gives the one-step-ahead state estimate at time k; accounting for the process noise then yields the predicted covariance;
then the measurement update corrects the one-step prediction with the measurement: the predicted sigma points are substituted into the observation equation of the second system to obtain the predicted measurements; their weighted combination gives the measurement prediction at time k and its covariance, and the cross-covariance between the state prediction and the measurement prediction is computed;
then the filter gain is computed and the state estimate and covariance are updated; iterating this process yields the estimate of the state vector.
The beneficial technical effects of the invention are as follows:
The invention fuses the vertical distance between two robots, derived from pressure sensors, with the three-dimensional position of the robot obtained from the panoramic stereo perception device (i.e., binocular vision positioning) to produce an accurate spatial position for the underwater robot. In the special underwater environment, no high-power, heavy positioning equipment is required, which solves the problem that a small underwater robot, due to its small size and limited energy supply, cannot be positioned with a fiber-optic gyroscope, a Doppler velocity log (DVL), or an underwater acoustic positioning system, and effectively improves the precision and robustness of relative cooperative positioning for small underwater multi-robot systems. The invention provides a theoretical basis for cooperative formation control of small amphibious robots.
Drawings
The invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like or similar parts throughout the figures. The accompanying drawings, which are incorporated in and form a part of this specification, illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further illustrate the principles and advantages of the invention.
FIG. 1 is a system block diagram of the present invention;
FIG. 2 is a schematic view of a Kalman filtering process of the sub-filter I according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the positioning model of the panoramic stereo perception system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the internal structure of the panoramic stereo perception system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an amphibious robot and a target object in a visual positioning experiment according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an example of a visual positioning experiment;
FIG. 7 is a diagram illustrating the positioning result of the sub-filter II in the vision positioning experiment according to the embodiment of the present invention;
FIG. 8 is a schematic diagram of the positioning experimental robot position arrangement in the embodiment of the present invention;
FIG. 9 is a schematic diagram of the distribution of the three-dimensional spatial positions of the robots in the embodiment of the present invention;
FIG. 10 is a schematic diagram of the distribution of multiple robots in an XY plane according to an embodiment of the present invention;
fig. 11 is a graph of the positioning coordinates of the robot 2 in the embodiment of the present invention;
fig. 12 is a graph showing the positioning coordinates of the robot 3 in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details not so related to the present invention are omitted.
In order to realize relative cooperative positioning among multiple amphibious robots on shore and underwater, a multi-source information fusion cooperative positioning method based on vision, depth, and IMU (inertial measurement unit) is provided. As shown in fig. 1, the cooperative positioning framework adopts a layered design divided into a sensor layer and a data fusion layer: the sensor layer comprises the panoramic stereo perception system (panoramic stereo perception device), a pressure sensor, an IMU, and other small-sized sensors; the data fusion layer comprises sub-filter I, sub-filter II, and the main filter.
The relative depth model based on the pressure sensor is linear, so a Kalman filter serves as sub-filter I. The binocular vision positioning model is nonlinear; to improve accuracy while limiting computation, an unscented Kalman filter performs the position estimation in parallel as sub-filter II. To avoid the influence of roll and pitch jitter of the robot attitude on positioning, the estimated position p_b is transformed into the world frame; in the Z_w direction, the main filter fuses the vision-derived vertical coordinate with the depth difference Z_p, yielding a three-dimensional position estimate based on vision, depth, and IMU. The method of the present invention is described in detail below.
1) Sub-filter I is designed to calculate the vertical distance between the two robots (i.e., the positioning robot and the positioned robot) based on the pressure sensors.
First, the pressure sensors in the sensor layer acquire the underwater pressure values, i.e., depth values, of the two robots; the distance between the robots in the vertical direction is then estimated from the difference of the depth values. The relation between the pressure difference of the two robots' pressure sensors and their vertical distance is:
Z_p = k_p · P_12 (1)

where the pressure difference is P_12 = P_1 − P_2, with P_1 the pressure sensor value of the positioning robot and P_2 that of the positioned robot, and k_p is the proportional coefficient between pressure difference and vertical distance.
The state equation and observation equation of the sub-filter I system (i.e., the first system) take the simplified linear form:

x_k = A x_{k−1} + w_{k−1},  y_k = C x_k + v_k (2)

The state vector and observation vector are:

x_k = [Z_p(k), v_p(k)]^T,  y_k = Z_p(k)

where v_p is the vertical velocity, i.e., the distance moved in the vertical direction per unit time; for sampling period T, the state transition matrix is A = [[1, T], [0, 1]] and the observation matrix is C = [1, 0]; w_k ~ N(0, Q) is Gaussian process noise and v_k ~ N(0, R) is Gaussian observation noise.
After the state equation and observation equation of the sub-filter I system are determined, the depth difference is filtered with the Kalman algorithm. The flow of the Kalman filtering algorithm is shown in FIG. 2: first, initialize the state vector and covariance; then perform the one-step-ahead prediction; then compute the filter gain; then update the measurement estimate, i.e., estimate the state vector from the observation vector; finally, compute the covariance to obtain the vertical distance between the two robots. These steps repeat in sequence.
2) Sub-filter II is designed to calculate the three-dimensional position of the robot based on the panoramic stereo perception system (panoramic stereo perception device).
As shown in fig. 3 and 4, the panoramic stereo perception system of the invention carries four groups of binocular cameras SC_i (i = 1, 2, 3, 4). Taking one group, SC_1, as an example, O_l and O_r denote the left and right camera coordinate systems of the binocular pair, O_b denotes the robot body coordinate system, and O_w denotes the world coordinate system. An image sequence containing the positioned robot is obtained through the panoramic stereo perception system; assume the positioned robot has coordinates (x_b, y_b, z_b) in the robot body frame. Through a visual target recognition algorithm, the pixel coordinates of the positioned robot in binocular camera SC_1 are obtained as (u_l, v_l) and (u_r, v_r). The recognition algorithm may use deep learning for target detection; once the positioned robot is detected, it is tracked. Specifically, each image frame first passes through a detector to obtain the robot's bounding box and hence its center position; this position is passed to a tracker, which learns and outputs a predicted position. When the next frame arrives and its detection yields the target center position, the tracker gives its predicted position, the distances between predicted and detected positions are matched iteratively with the Hungarian algorithm, and the final target position is output.
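The detector-tracker association described above can be illustrated with a minimal assignment sketch. Brute-force minimum-total-distance matching stands in for the Hungarian algorithm here (equivalent for the handful of robots in a formation; a true Hungarian solver scales better), and all coordinates are made up.

```python
from itertools import permutations
import math

def match_detections(predicted, detected, max_dist=50.0):
    """Associate tracker-predicted robot centers with detector outputs by
    minimising total pixel distance (brute force stands in for Hungarian)."""
    def dist(p, d):
        return math.hypot(p[0] - d[0], p[1] - d[1])
    best, best_cost = None, math.inf
    for perm in permutations(range(len(detected)), len(predicted)):
        cost = sum(dist(predicted[i], detected[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    # gate out pairs farther apart than max_dist pixels
    return [(i, j) for i, j in enumerate(best)
            if dist(predicted[i], detected[j]) <= max_dist]

preds = [(100.0, 120.0), (300.0, 80.0)]   # tracker predictions (made up)
dets = [(305.0, 78.0), (102.0, 119.0)]    # detector outputs, different order
pairs = match_detections(preds, dets)
```

Each tracked robot is paired with the detection nearest its predicted position, so identities persist across frames even when the detector reports targets in a different order.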
Then, by the pinhole imaging principle, equations (3) and (4) are obtained for the left and right cameras, where i = 1 corresponds to the first binocular group SC_1; a is the optical-center distance within one binocular group; b is the optical-center distance between two opposing binocular groups, e.g., between SC_2 and SC_4, or between SC_1 and SC_3; and d is the vertical distance from the camera optical center to the origin of the robot body coordinate system.
Expanding and simplifying equations (3) and (4) yields the intermediate relations, and the binocular vision positioning model equation can be further simplified, where i denotes the binocular camera index; l and r denote the left and right camera coordinate systems of the binocular camera; u and v denote pixel coordinates; and (x_b, y_b, z_b) denotes the coordinates of the positioned robot in the robot body coordinate system.
Other binocular camera positioning models and binocular camera SC1Similarly, the only difference is the relationship of the robot coordinate system and the binocular coordinate system. The state equation and observation equation for the sub-filter II system (i.e., the second system) are simplified in form:
wherein, the first and the second end of the pipe are connected with each other,is a k time system vector;an observation vector at the k moment;is gaussian process noise;the noise is observed as gaussian. According to equation (8), a system state vector and an observation vector are defined:
wherein the state components are the position of the positioned robot, and the observation components are the left-camera pixel coordinates and the right-camera pixel coordinates at time k.
After the system equation and the measurement equation are defined, according to the UT (unscented transformation) in matrix form, the UKF-based position estimation comprises the following steps:
First, an initial value of the state is given:
wherein the constant α determines the spread of the Sigma points around the center point; the influence of higher-order terms can be reduced by adjusting α, and usually 0 ≤ α ≤ 1. λ is a secondary scale parameter used to characterize the range of the sampling points around the mean point, usually set to 0 or 3 − n. β is a distribution parameter of the state x; for a Gaussian distribution, β = 2 is the optimal value. The parameter κ is a scaling parameter that controls the distance of each point from the state mean. One set of weights corresponds to the mean of the sampling points and the other to the variance. Therefore, the accuracy of the estimated mean can be improved by properly adjusting α and λ, and adjusting β can improve the accuracy of the variance. These parameters are set as: α = 0.9, β = 2, κ = 0.
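A minimal sketch of the Sigma-point generation this step relies on, using the stated parameters α = 0.9, β = 2, κ = 0 (variable names are illustrative, not the patent's notation):

```python
import numpy as np

def sigma_points(x, P, alpha=0.9, beta=2.0, kappa=0.0):
    """Generate the 2n+1 Sigma points and UT weights for mean x, covariance P."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n          # composite scaling parameter
    S = np.linalg.cholesky((n + lam) * P)     # matrix square root
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))  # mean weights
    wc = wm.copy()                            # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc
```

The weighted points reproduce the original mean and covariance exactly, which is what makes the unscented transform exact to second order.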
The Sigma point set of the state estimate, {χ_{i,k−1}}, i = 1, 2, …, 2n, is obtained from the UT transform.
Then, the time update is performed, i.e., the one-step-ahead prediction: the predicted state and the predicted covariance are calculated.
The Sigma points at time k−1 are substituted into the state equation through the UT transform:
χ_{i,k|k−1} = f(χ_{i,k−1})    (9)
The vectors χ_{i,k|k−1} are merged to obtain the one-step-ahead state estimate at time k:
meanwhile, considering process noise, the estimated covariance of the previous prediction is solved:
Then the measurement update is performed, i.e., the predicted state is corrected using the measurement.
and substituting the updated Sigma point into a measurement equation to obtain a measurement predicted value:
at this time, the covariance of the measurement prediction is
Further, the covariance of the state predicted value and the measurement predicted value is calculated:
then, a filter gain is calculated:
finally, the state estimate and variance are updated:
The above process is iterated repeatedly to obtain the estimation result of the state vector.
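The predict–update iteration above can be sketched end to end as a generic UKF cycle. This is a textbook form under the stated UT weights, not the patent's exact filter; the models `f`, `h` and the noise matrices `Q`, `R` are supplied by the caller:

```python
import numpy as np

def ukf_step(x, P, z, f, h, Q, R, alpha=0.9, beta=2.0, kappa=0.0):
    """One UKF predict+update cycle for state mean x, covariance P,
    measurement z, process model f, measurement model h."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    def sigmas(m, C):
        S = np.linalg.cholesky((n + lam) * C)
        return np.vstack([m, m + S.T, m - S.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam)); wc = wm.copy()
    wm[0] = lam / (n + lam); wc[0] = wm[0] + 1 - alpha**2 + beta

    # time update: propagate the Sigma points through the state equation
    X = np.array([f(s) for s in sigmas(x, P)])
    x_pred = wm @ X
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(wc, X - x_pred))

    # measurement update: propagate through the observation equation
    Z = np.array([h(s) for s in X])
    z_pred = wm @ Z
    Pzz = R + sum(w * np.outer(d, d) for w, d in zip(wc, Z - z_pred))
    Pxz = sum(w * np.outer(dx, dz) for w, dx, dz in zip(wc, X - x_pred, Z - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)            # filter gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```

For simplicity this sketch reuses the propagated points in the measurement update rather than re-sampling after prediction, a common simplification.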
During underwater movement, the robot is prone to pitching or rolling owing to water-flow disturbance. In this case the three-dimensional position of the robot obtained by the look-around stereo perception system according to the above steps is expressed in the robot coordinate system, whereas the depth difference of the two robots, estimated from the pressure difference of the two robots' pressure sensors (i.e., by sub-filter I), is expressed in the world coordinate system. Considering the consistency of the coordinate systems of the pressure-sensor and vision-sensor data, the visual positioning information obtained in the robot coordinate system is therefore converted into the world coordinate system.
Given the attitude angles of the positioned robot, i.e., its roll, yaw and pitch angles, the conversion from the robot coordinate system to the world coordinate system is:
wherein the two terms denote the rotation matrix and the translation vector, respectively; t_1, t_2 and t_3 are the translation amounts in the X, Y and Z directions, respectively.
Rewriting the state vector of the second system in the world coordinate system, the coordinates of the positioned robot become:
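A sketch of this robot-frame-to-world-frame conversion, assuming a Z–Y–X (yaw–pitch–roll) Euler convention; the convention and variable names are assumptions, since the patent's rotation matrix appears only in its figures:

```python
import numpy as np

def robot_to_world(p_r, roll, pitch, yaw, t):
    """Transform a point p_r from the robot body frame to the world frame.
    Angles in radians; t is the world-frame translation (t1, t2, t3)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    R = Rz @ Ry @ Rx
    return R @ np.asarray(p_r, dtype=float) + np.asarray(t, dtype=float)
```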
3) Designing the main filter: the Z-axis coordinate of the robot's three-dimensional position p_w obtained from the look-around stereo perception system is fused with the depth difference (vertical distance) Z_p; combining the result with the remaining coordinates of p_w, a three-dimensional position estimate based on the look-around stereo perception system, the pressure sensors and the IMU is obtained.
The role of the main filter is to fuse the visually estimated distance along the Z_w axis with the depth difference Z_p of the two robots. In the data fusion layer, the sub-filters complete the optimal estimation of the local states, and the main filter weighs the filtering accuracy according to the covariance matrices of the sub-filters. The global state estimate and its covariance matrix are then:
wherein the two quantities are the covariance of the depth-difference estimate and the covariance of the visually measured distance in the Z_w-axis direction, respectively.
Further combining the visually estimated position information in the remaining directions, the position estimate of the positioned robot based on vision and the pressure sensors is established.
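The main-filter fusion of the visual Z estimate with the pressure-derived depth difference can be sketched as inverse-variance weighting, a common form of the covariance-weighted global estimate described above (variable names are illustrative):

```python
def fuse_depth(z_vis, var_vis, z_press, var_press):
    """Fuse the vision Z estimate and the pressure-derived depth difference,
    weighting each by the inverse of its variance."""
    w_vis = 1.0 / var_vis
    w_press = 1.0 / var_press
    z = (w_vis * z_vis + w_press * z_press) / (w_vis + w_press)
    var = 1.0 / (w_vis + w_press)     # the fused variance is always smaller
    return z, var
```

The estimate with the smaller covariance dominates, so a precise pressure reading automatically corrects a noisy visual depth, and vice versa.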
Detailed description of the preferred embodiment
The following experiment tests the binocular positioning performance of the all-round stereo perception system (all-round stereo perception device), i.e., a vision-based positioning experiment. As shown in fig. 5, the amphibious robot is placed in a laboratory pool and fixed, and the circle around the positioning amphibious robot is divided into 12 equal parts of 30° each by a circular angle calibration plate, as shown in fig. 6. With the center of the positioning robot as the origin, circles of radius 0.8 m and 1.5 m are drawn, each with 12 positioning points; a target object is placed on each positioning point for the positioning experiment, each point is measured 5 times, and the average is taken as the experimental result. To display the positioning data more intuitively, the positioning results are plotted in three-dimensional space, as shown in fig. 7: the center "●" is the positioning robot, the robot body coordinate system X, Y, Z axes are shown in the figure, and the remaining two markers (including "□") denote the actual and measured positions of the target object, respectively. The average errors of positioning on the 80 cm and 150 cm circles are (3.4 cm, 3.0 cm, 2.3 cm) and (6.9 cm, 4.9 cm, 4.5 cm), respectively; clearly, the error increases with distance. Similarly, the root mean square error is used to measure the relationship between the binocular camera's vision-measured distance and the actual distance, that is:
wherein d_i and d_i' denote the actual distance and the vision-measured distance, respectively. By analysis, for positioning on the circle of radius 80 cm, the root mean square errors of the positioning error in the x, y and z directions are 3.67 cm, 3.17 cm and 2.36 cm, respectively; on the circle of radius 150 cm they are 7.07 cm, 5.25 cm and 4.88 cm, which, relative to the 30 cm diameter of the amphibious spherical robot, amount to 23.6%, 17.5% and 16.3%, respectively.
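The root-mean-square-error measure used here is the standard one; a direct restatement in code:

```python
import math

def rmse(actual, measured):
    """Root mean square error between actual distances d_i and
    vision-measured distances d_i'."""
    assert len(actual) == len(measured) and actual
    return math.sqrt(sum((a - m) ** 2 for a, m in zip(actual, measured))
                     / len(actual))
```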
Detailed description of the preferred embodiment
To verify the effectiveness of the small underwater multi-robot cooperative positioning method and system based on multi-source information fusion, an experiment is carried out with three robots bearing different identifiers. As shown in fig. 8, robot 1 is equipped with the look-around stereo perception system, while the other two robots carry ordinary binocular cameras. The placement positions and attitudes of the three robots are shown in fig. 8: they form an equilateral triangle with 2 m sides, and the coordinates of robot 1 are given in the figure. By the method of the invention, the positioning results of robots 2 and 3 are unified into the coordinate system of robot 1. The three-dimensional positioning results are shown in fig. 9, and fig. 10 is the projection onto the XY plane; the direct positioning results are relatively dispersed, whereas the results of the proposed method converge visibly better. Figs. 11 and 12 show the results of the amphibious robot 1 positioning robot 2 and robot 3, respectively; the positioning error of the method of the invention is significantly smaller. The longer the positioning distance (the larger the distance between the two robots), the larger the error; the positioning errors of robot 2 and robot 3 are largest in the X direction, at 18.1 cm and 17.5 cm, respectively. After positioning by the method of the invention, the maximum positioning errors of the two robots are 10.9 cm and 10.5 cm, respectively, i.e., the positioning accuracy is improved by 39.8% and 40%, which meets the requirements of small amphibious robots for underwater cooperative motion.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
Claims (8)
1. A small underwater multi-robot cooperative positioning method based on multi-source information fusion is characterized by comprising the following steps:
step one, acquiring sensor data; wherein the sensor data comprises a positioning robot underwater pressure sensor value, a positioned robot underwater pressure sensor value, an image sequence containing a positioned robot;
step two, calculating the vertical distance between the positioning robot and the positioned robot according to the underwater pressure sensor value of the positioning robot and the underwater pressure sensor value of the positioned robot; the specific process comprises the following steps:
step two-one, establishing a linear equation between the pressure difference and the vertical distance of the positioning robot and the positioned robot;
the relation between the pressure difference between the positioning robot and the positioned robot and the vertical distance is as follows:
Z_p = k_p · P_12
wherein Z_p denotes the vertical distance; P_12 denotes the pressure difference, P_12 = P_1 − P_2, where P_1 is the positioning robot's pressure sensor value and P_2 is the positioned robot's pressure sensor value; k_p is the proportionality parameter between the pressure difference and the vertical distance;
step two-two, determining the first system state equation and observation equation according to the linear equation;
the state equation and observation equation for the first system are:
the state vector and observation vector of the first system are:
wherein k denotes the time instant; v_p denotes the vertical velocity of the positioned robot; A denotes the state transition matrix; C denotes the observation matrix; the remaining terms are Gaussian process noise and Gaussian observation noise, respectively;
step two-three, filtering the vertical distance by the Kalman algorithm according to the determined state equation and observation equation of the first system, to obtain the filtered vertical distance between the positioning robot and the positioned robot;
step three, calculating and obtaining three-dimensional space position coordinates of the positioned robot according to the image sequence containing the positioned robot;
and step four, performing information fusion on the vertical distance between the positioning robot and the positioned robot and the three-dimensional space position coordinate of the positioned robot to obtain the final three-dimensional space position of the positioned robot.
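The sub-filter I processing of claim 1, step two (a linear pressure-to-depth model filtered by a Kalman algorithm) can be sketched with a constant-velocity state [Z_p, v_p]; the time step and noise levels below are illustrative assumptions, not the patent's values:

```python
import numpy as np

def kalman_depth_step(x, P, z_meas, dt=0.1, q=1e-3, r=1e-2):
    """One Kalman cycle for state x = [Z_p, v_p] (vertical distance and
    vertical velocity), where z_meas = k_p * P_12 comes from the
    pressure difference of the two robots."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix
    C = np.array([[1.0, 0.0]])              # observe the vertical distance only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # observation noise covariance
    # predict
    x = A @ x
    P = A @ P @ A.T + Q
    # update
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ (np.atleast_1d(z_meas) - C @ x)).ravel()
    P = (np.eye(2) - K @ C) @ P
    return x, P
```

Fed a steady pressure-derived distance, the estimate converges to that distance while the velocity component settles to zero.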
2. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as claimed in claim 1, wherein the specific process of step three comprises:
step three-one, obtaining the pixel coordinates of the positioned robot through a visual target recognition algorithm according to the image sequence containing the positioned robot;
step three-two, establishing a visual positioning model equation of the positioned robot;
the visual positioning model equation of the positioned robot is as follows:
wherein i denotes the binocular camera serial number; l and r denote the left and right camera coordinate systems of the binocular camera, respectively; u and v denote pixel coordinates; the remaining vector denotes the coordinates of the positioned robot in the robot body coordinate system;
step three-three, determining the second system state equation and observation equation according to the visual positioning model equation;
the state equation and observation equation for the second system are:
wherein the terms are, in order: the state vector at time k; the observation vector at time k; Gaussian process noise; and Gaussian observation noise;
the state vector and observation vector of the second system are:
step three-four, according to the determined state equation and observation equation of the second system, filtering the coordinates of the positioned robot in the robot body coordinate system by the unscented Kalman filtering algorithm, to obtain the filtered three-dimensional spatial position coordinates of the positioned robot.
3. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as claimed in claim 2, wherein the specific process of step four comprises:
step four-one, acquiring the attitude angle of the positioned robot;
step four-two, according to the attitude angle, converting the three-dimensional spatial position coordinates of the positioned robot from the robot coordinate system to the world coordinate system;
step four-three, fusing the Z-axis-direction coordinate with the vertical distance Z_p obtained in step two; the global state estimate and covariance matrix are:
wherein the two quantities are the covariance of the vertical distance and the covariance of the Z-axis-direction coordinate, respectively;
4. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as claimed in claim 3, wherein the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system by the unscented Kalman filtering algorithm in step three-four comprises:
step three-four-one, setting the initial value of the state, and obtaining the Sigma point set of the state estimate {χ_{i,k−1}}, i = 1, 2, …, 2n, by UT transform;
step three-four-two, time update, i.e., one-step-ahead prediction, calculating the predicted state and the predicted covariance:
and substituting the Sigma point at the moment k-1 into a state equation of a second system through UT conversion:
χ_{i,k|k−1} = f(χ_{i,k−1})
merging the vectors χ_{i,k|k−1} to obtain the one-step-ahead state estimate at time k; meanwhile, taking the process noise into account, obtaining the covariance of the one-step-ahead prediction;
step three-four-three, measurement update, i.e., correcting the predicted state using the measurement: substituting the updated Sigma points into the observation equation of the second system to obtain the measurement prediction:
merging the vectors to obtain the measurement prediction at time k and its covariance; further calculating the cross-covariance of the state prediction and the measurement prediction;
step three-four-four, calculating the filter gain and updating the state estimate and variance;
step three-four-five, repeatedly iterating steps three-four-two to three-four-four to obtain the estimation result of the state vector.
5. A small underwater multi-robot cooperative positioning system based on multi-source information fusion, characterized by comprising a sensor layer and a data fusion layer; wherein,
the sensor layer comprises a look-around stereoscopic sensing device, a pressure sensor and an inertial sensor; the look-around stereoscopic sensing device comprises a plurality of groups of binocular cameras and is used for acquiring an image sequence containing the positioned robot; the pressure sensor is used for acquiring the underwater pressure sensor value of the positioning robot and the underwater pressure sensor value of the positioned robot; the inertial sensor is used for acquiring the attitude angle of the positioned robot;
the data fusion layer comprises a sub-filter I, a sub-filter II and a main filter; the sub-filter I is used for calculating and obtaining the vertical distance between the positioning robot and the positioned robot according to the value of the underwater pressure sensor of the positioning robot and the value of the underwater pressure sensor of the positioned robot, and the specific process comprises the following steps: firstly, establishing a linear equation of the pressure difference and the vertical distance between the positioning robot and the positioned robot; the relation between the pressure difference between the positioning robot and the positioned robot and the vertical distance is as follows:
Z_p = k_p · P_12
wherein Z_p denotes the vertical distance; P_12 denotes the pressure difference, P_12 = P_1 − P_2, where P_1 is the positioning robot's pressure sensor value and P_2 is the positioned robot's pressure sensor value; k_p is the proportionality parameter between the pressure difference and the vertical distance;
then, determining a first system state equation and an observation equation according to the linear equation; the state equation and observation equation for the first system are:
the state vector and observation vector of the first system are:
wherein k denotes the time instant; v_p denotes the vertical velocity of the positioned robot; A denotes the state transition matrix; C denotes the observation matrix; the remaining terms are Gaussian process noise and Gaussian observation noise, respectively;
finally, filtering the vertical distance by adopting a Kalman algorithm according to the determined state equation and the observation equation of the first system to obtain the vertical distance between the positioning robot and the positioned robot after filtering;
the sub-filter II is used for calculating and obtaining the three-dimensional space position coordinates of the positioned robot according to the image sequence containing the positioned robot;
the main filter is used for carrying out information fusion on the vertical distance between the positioning robot and the positioned robot and the three-dimensional space position coordinate of the positioned robot to obtain the final three-dimensional space position of the positioned robot;
the sensor layer and the data fusion layer communicate wirelessly.
6. The system of claim 5, wherein the specific process of obtaining the three-dimensional spatial position coordinates of the positioned robot in the sub-filter II comprises:
firstly, obtaining the pixel coordinates of a positioned robot through a visual target recognition algorithm according to an image sequence containing the positioned robot; then, establishing a visual positioning model equation of the positioned robot; the vision positioning model equation of the positioned robot is as follows:
wherein i denotes the binocular camera serial number; l and r denote the left and right camera coordinate systems of the binocular camera, respectively; u and v denote pixel coordinates; the remaining vector denotes the coordinates of the positioned robot in the robot body coordinate system;
then, determining a second system state equation and an observation equation according to the visual positioning model equation; the state equation and observation equation of the second system are:
wherein the terms are, in order: the state vector at time k; the observation vector at time k; Gaussian process noise; and Gaussian observation noise;
the state vector and observation vector of the second system are:
finally, according to the determined state equation and observation equation of the second system, filtering the coordinates of the positioned robot under the body coordinate system of the robot by adopting an unscented Kalman filtering algorithm to obtain the three-dimensional space position coordinates of the positioned robot after filtering
7. The system of claim 6, wherein the specific process of obtaining the final three-dimensional spatial position of the positioned robot in the main filter comprises:
firstly, acquiring the attitude angle of the positioned robot; then, according to the attitude angle, converting the three-dimensional spatial position coordinates of the positioned robot from the robot coordinate system to the world coordinate system; then fusing the Z-axis-direction coordinate with the vertical distance Z_p obtained in sub-filter I; the global state estimate and covariance matrix are:
wherein the two quantities are the covariance of the vertical distance and the covariance of the Z-axis-direction coordinate, respectively;
8. The small underwater multi-robot cooperative positioning system based on multi-source information fusion according to claim 7, wherein the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system by the unscented Kalman filtering algorithm in sub-filter II comprises: first, given the initial value of the state, obtaining the Sigma point set of the state estimate {χ_{i,k−1}}, i = 1, 2, …, 2n, by UT transform; then, performing the time update, i.e., the one-step-ahead prediction, calculating the predicted state and the predicted covariance: the Sigma points at time k−1 are substituted into the state equation of the second system through the UT transform:
χ_{i,k|k−1} = f(χ_{i,k−1})
merging the vectors χ_{i,k|k−1} to obtain the one-step-ahead state estimate at time k; meanwhile, taking the process noise into account, obtaining the covariance of the one-step-ahead prediction;
then, a prediction update is performed, i.e. the previous prediction state is updated with measurements: and substituting the updated Sigma point into an observation equation of a second system to obtain a measurement predicted value:
merging the vectors to obtain the measurement prediction at time k and its covariance; further calculating the cross-covariance of the state prediction and the measurement prediction;
then, calculating a filtering gain and updating the state estimation and the variance;
and repeatedly iterating the processes to obtain an estimation result of the state vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110512081.3A CN113074725B (en) | 2021-05-11 | 2021-05-11 | Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113074725A CN113074725A (en) | 2021-07-06 |
CN113074725B true CN113074725B (en) | 2022-07-22 |
Family
ID=76616465
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110512081.3A Active CN113074725B (en) | 2021-05-11 | 2021-05-11 | Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113074725B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114018236B (en) * | 2021-09-30 | 2023-11-03 | 哈尔滨工程大学 | Laser vision strong coupling SLAM method based on self-adaptive factor graph |
CN115031726A (en) * | 2022-03-29 | 2022-09-09 | 哈尔滨工程大学 | Data fusion navigation positioning method |
CN115218804A (en) * | 2022-07-13 | 2022-10-21 | 长春理工大学中山研究院 | Fusion measurement method for multi-source system of large-scale component |
CN116592896B (en) * | 2023-07-17 | 2023-09-29 | 山东水发黄水东调工程有限公司 | Underwater robot navigation positioning method based on Kalman filtering and infrared thermal imaging |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101661098A (en) * | 2009-09-10 | 2010-03-03 | 上海交通大学 | Multi-robot automatic locating system for robot restaurant |
CN102052924A (en) * | 2010-11-25 | 2011-05-11 | 哈尔滨工程大学 | Combined navigation and positioning method of small underwater robot |
CN102980579A (en) * | 2012-11-15 | 2013-03-20 | 哈尔滨工程大学 | Autonomous underwater vehicle autonomous navigation locating method |
CN104280025A (en) * | 2013-07-08 | 2015-01-14 | 中国科学院沈阳自动化研究所 | Adaptive unscented Kalman filter-based deepwater robot short-baseline combined navigation method |
CN204228171U (en) * | 2014-11-19 | 2015-03-25 | 山东华盾科技股份有限公司 | A kind of underwater robot guider |
CN105775082A (en) * | 2016-03-04 | 2016-07-20 | 中国科学院自动化研究所 | Bionic robotic dolphin for water quality monitoring |
CN107585280A (en) * | 2017-10-12 | 2018-01-16 | 上海遨拓深水装备技术开发有限公司 | A kind of quick dynamic positioning systems of ROV for being adapted to vertical oscillation current |
CN108303094A (en) * | 2018-01-31 | 2018-07-20 | 深圳市拓灵者科技有限公司 | The Position Fixing Navigation System and its positioning navigation method of array are merged based on multiple vision sensor |
CN108444478A (en) * | 2018-03-13 | 2018-08-24 | 西北工业大学 | A kind of mobile target visual position and orientation estimation method for submarine navigation device |
CN108594834A (en) * | 2018-03-23 | 2018-09-28 | 哈尔滨工程大学 | One kind is towards more AUV adaptive targets search and barrier-avoiding method under circumstances not known |
CN110764533A (en) * | 2019-10-15 | 2020-02-07 | 哈尔滨工程大学 | Multi-underwater robot cooperative target searching method |
GB202007680D0 (en) * | 2020-05-22 | 2020-07-08 | Equinor Energy As | Shuttle loading system |
CN111595348A (en) * | 2020-06-23 | 2020-08-28 | 南京信息工程大学 | Master-slave mode cooperative positioning method of autonomous underwater vehicle combined navigation system |
CN111638523A (en) * | 2020-05-08 | 2020-09-08 | 哈尔滨工程大学 | System and method for searching and positioning lost person by underwater robot |
CN112432644A (en) * | 2020-11-11 | 2021-03-02 | 杭州电子科技大学 | Unmanned ship integrated navigation method based on robust adaptive unscented Kalman filtering |
CN112698273A (en) * | 2020-12-15 | 2021-04-23 | 哈尔滨工程大学 | Multi-AUV single-standard distance measurement cooperative operation method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107677272B (en) * | 2017-09-08 | 2020-11-10 | 哈尔滨工程大学 | AUV (autonomous Underwater vehicle) collaborative navigation method based on nonlinear information filtering |
US11072405B2 (en) * | 2017-11-01 | 2021-07-27 | Tampa Deep-Sea X-Plorers Llc | Autonomous underwater survey apparatus and system |
CN111542020B (en) * | 2020-05-06 | 2023-06-13 | 河海大学常州校区 | Multi-AUV cooperative data collection method based on region division in underwater acoustic sensor network |
CN112613640A (en) * | 2020-12-07 | 2021-04-06 | 清华大学 | Heterogeneous AUV (autonomous Underwater vehicle) cooperative underwater information acquisition system and energy optimization method |
Non-Patent Citations (6)
Title |
---|
A Multi-Binocular Camera-based Localization Method for Amphibious Spherical Robots;Mugen Zhou等;《2020 IEEE International Conference on Mechatronics and Automation (ICMA)》;20201026;第797-802页 * |
Novel algorithms for coordination of underwater swarm robotics;Feng, WX等;《2006 International Conference on Mechatronics and Automation》;20061211;全文 * |
Pseudo-3D Vision-Inertia Based Underwater Self-Localization for AUVs;Yangyang Wang等;《IEEE Transactions on Vehicular Technology 》;20200511;全文 * |
Research on the underwater positioning *** of amphibious spherical robots; Tang Kun; China Master's Theses Full-text Database, Information Science and Technology; 2020-02-15 (No. 02); full text *
Research on an SVM-based attitude prediction method for a dish-shaped underwater robot; Wang Tian et al.; Sensors and Microsystems (《传感器与微***》); 2012-04-20 (No. 04); full text *
Research on multi-AUV cooperative navigation based on range information; Sun Xin; China Master's Theses Full-text Database, Engineering Science and Technology II; 2020-03-15 (No. 03); pp. 16, 19, 23 *
Li et al. | Geodetic coordinate calculation based on monocular vision on UAV platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||