CN112068168B - Geological disaster unknown environment integrated navigation method based on visual error compensation - Google Patents

Geological disaster unknown environment integrated navigation method based on visual error compensation

Info

Publication number
CN112068168B
CN112068168B
Authority
CN
China
Prior art keywords
matrix
positioning
beidou navigation
image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010933698.8A
Other languages
Chinese (zh)
Other versions
CN112068168A (en)
Inventor
张子腾
盛传贞
张京奎
惠沈盈
魏海涛
蔚保国
王垚
易卿武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN202010933698.8A priority Critical patent/CN112068168B/en
Publication of CN112068168A publication Critical patent/CN112068168A/en
Application granted granted Critical
Publication of CN112068168B publication Critical patent/CN112068168B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/40Correcting position, velocity or attitude
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/03Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers
    • G01S19/07Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers providing data for correcting measured positioning data, e.g. DGPS [differential GPS] or ionosphere corrections
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/03Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers
    • G01S19/10Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers providing dedicated supplementary positioning signals
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/48Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a geological disaster unknown environment integrated navigation method based on visual error compensation, belonging to the technical field of navigation positioning and information fusion applications. In the invention, key frame information of a rescue scene is acquired by monocular vision and the pose transformation matrix is solved in real time, providing a high-precision relative positioning result for the rescue platform and maintaining positioning when satellite navigation positioning fails. At the same time, real-time error compensation is performed for satellite navigation using the visual pose transformation under a first-order Markov model, finally providing a high-precision, trusted positioning result. The method is simple and feasible, and provides a high-precision, trusted positioning technique for rescue platforms in geological disaster environments.

Description

Geological disaster unknown environment integrated navigation method based on visual error compensation
Technical Field
The invention relates to a geological disaster unknown environment combined navigation method based on visual error compensation, and belongs to the technical field of navigation positioning and information fusion application.
Background
Currently, demand for high-precision positioning services in unknown, complex environments is increasing. In zones where geological disasters are frequent, such as mountain areas and valleys, vehicles such as engineering machinery face satellite navigation positioning failure caused by satellite signal shielding, multipath and the like, so their driving and operating safety is seriously threatened. Traditional positioning techniques based on a single sensor are limited by their applicable environments; their reliability and precision cannot meet the requirements of high-precision positioning in complex environments, so high-precision, trusted positioning techniques based on the fusion and complementarity of multi-source sensing need to be introduced.
Navigation and positioning technology develops together with its sensing information sources. Satellite navigation positioning is a radio navigation technique that depends on navigation satellites and can provide all-weather, real-time, continuous absolute position and time information for platforms such as vehicles. However, because it relies on radio signals, signal shielding, environmental interference and the like will directly cause a receiver to lose positioning capability. Monocular visual positioning constructs a key frame odometer through fast, accurate matching of continuous image sequences, obtains the relative pose, and can provide high-precision relative pose transformations for a motion platform. With the falling cost of sensors and the development of multi-source fusion technology, high-precision positioning based on multi-source information fusion can realize the complementary advantages and mutual error correction of different sensors. However, there is no such application in the prior art.
Disclosure of Invention
In view of the above, the invention provides a geological disaster unknown environment integrated navigation method based on visual error compensation, which breaks through the environmental applicability limits of a single sensor and realizes high-precision, reliable and stable positioning.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
The method uses monocular vision to solve the relative motion pose of a rescue platform, fuses the relative positioning information to compensate satellite navigation positioning errors, and finally obtains a high-precision, trusted positioning result; the method specifically comprises the following steps:
(1) Acquiring candidate key frames based on continuous sequence image feature tracking, and calculating a camera attitude transformation matrix by using the candidate key frames;
(2) Constructing a first-order Markov model, and performing vision-assisted Beidou error compensation based on the first-order Markov model;
(3) Constructing a trusted interval for the satellite navigation signal observation information, and taking the observation weights based on a trust function.
Further, the specific mode of the step (1) is as follows:
(101) Acquiring image sequence feature matching by using a motion statistics method based on gradients, sequentially counting the sequence image feature matching tracking number, namely the number of continuous matching of a certain feature point in the view in the sequence image, recording the maximum tracking feature number in the view, and selecting the view containing the maximum feature tracking number in the continuous sequence as a candidate key frame;
(102) The internal parameters of the camera are {f_x, f_y, s, c_x, c_y}, where s represents the distortion of the camera, f_x, f_y represent the focal length of the camera, and c_x, c_y represent the image principal point; the external parameters of the camera are {R, t}, where R is the rotation matrix and t is the translation vector;

(103) The pinhole model is selected for camera pose transformation and solving; it is represented by a 3×4 projection matrix of limited scale. The 3×4 projection matrix P is decomposed into the product of a 3×3 upper triangular matrix K and a 3×4 transfer matrix [R|t]:

P = K[R|t]

where r_11, r_12, r_13, r_21, r_22, r_23, r_31, r_32, r_33 are the elements of the rotation matrix R and t_x, t_y, t_z are the elements of the translation matrix t;

(104) Assume X_1 = [x_1, y_1, 1]^T and X_2 = [x_2, y_2, 1]^T are the normalized coordinates of the matching point pairs corresponding to two key frames, and define the basic matrix F describing the image pose transformation. The basic matrix F contains the internal and external parameter information of the camera; recombining the elements of F into column-vector form gives:

f = [f_11, f_12, f_13, f_21, f_22, f_23, f_31, f_32, f_33]^T

For a stably existing matching point pair (X_1, X_2) constrained by the epipolar geometry X_1^T F X_2 = 0, there is:

[x_1 x_2, x_1 y_2, x_1, y_1 x_2, y_1 y_2, y_1, x_2, y_2, 1] · f = 0

(105) 8 pairs of matching points are selected, and a linear equation system is formed through algebraic transformation, satisfying:

A f = 0

where A is the 8×9 matrix whose rows are the coefficient vectors above, one per matching point pair;

(106) According to the uniqueness of the matrix, all solution vectors f carry a scale factor; to determine a standard solution, the constraint ||f|| = 1 is added, and the f satisfying this condition is the eigenvector corresponding to the minimum eigenvalue of A^T A. The singular value decomposition of A is A = U D V^T, where:

V = [v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9]

corresponding to f = v_9, thereby finding the basic matrix F;

(107) Define the essential matrix E:

E = K^T F K

and solve the external parameters of the second image by singular value decomposition:

E = U D V^T

where U and V are 3×3 orthogonal matrices and D is a 3×3 diagonal matrix of the form D = diag(1, 1, 0). For a given essential matrix E = U diag(1, 1, 0) V^T, the projection matrix P_2 of the second image is derived from the camera projection matrix P_1 = K[I|0] of the first image; thus the camera pose transformation between key frames is acquired.
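To make steps (104)–(107) concrete, here is a minimal Python sketch (numpy only). The function names are ours, not the patent's, and the rank-2 enforcement of F is a standard extra step that the patent leaves implicit:

```python
import numpy as np

def estimate_basic_matrix_8pt(x1, x2):
    """Steps (104)-(106): build the 8x9 matrix A from 8 matching point
    pairs, solve A f = 0 under ||f|| = 1 via SVD, and reshape f into F.
    x1, x2: arrays of shape (8, 2), normalized coordinates of the pairs."""
    A = np.zeros((8, 9))
    for i, ((u1, v1), (u2, v2)) in enumerate(zip(x1, x2)):
        # epipolar row [x1x2, x1y2, x1, y1x2, y1y2, y1, x2, y2, 1]
        A[i] = [u1*u2, u1*v2, u1, v1*u2, v1*v2, v1, u2, v2, 1.0]
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)       # f = v9: eigenvector of the min eigenvalue of A^T A
    U, S, Vt = np.linalg.svd(F)    # enforce rank 2 (implicit in the patent)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

def essential_from_basic(F, K):
    """Step (107): E = K^T F K, then SVD E = U D V^T with D ~ diag(1,1,0)."""
    E = K.T @ F @ K
    U, S, Vt = np.linalg.svd(E)
    return E, U, S, Vt
```

Since f is recovered only up to the scale factor noted in step (106), F and E are likewise defined up to scale, which is why the translation obtained downstream carries no absolute magnitude.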
Further, the specific mode of the step (2) is as follows:
(201) Continuous positioning observables are given by the Beidou navigation system; the longitude, latitude and elevation of the motion platform are acquired and converted into a planar map coordinate system with the map origin as the starting point. Define the position information given by the Beidou navigation system at continuous time t as P_t^B = (x_t^B, y_t^B, z_t^B), where x_t^B, y_t^B and z_t^B represent the X-, Y- and Z-axis coordinates given by the Beidou navigation signal at time t in the planar map coordinate system. Define the Beidou navigation observable key frame K_i^B, which is required to be time-synchronized with the acquired image key frame K_i, i = 1, 2, 3, ...; key frames among the image key frames for which no valid pose is acquired are not used for subsequent correction processing;

(202) For two adjacent key frames I_1 and I_2 whose poses are estimated corresponding to times t_1 and t_2, the camera pose transformation matrix from image frame I_1 to image frame I_2 is T_12 = [R_12 | t_12], where R_12 is the rotation matrix from image frame I_1 to image frame I_2 and t_12 is the translation matrix from image frame I_1 to image frame I_2; the position information given by the Beidou navigation system at the times corresponding to the two adjacent key frames I_1 and I_2 is P_1^B and P_2^B respectively, and there is an estimation model as follows:

P̂_2^B = T_12 · P_1^B

where P̂_2^B is the estimate of the position at time t_2 obtained by multiplying the position information P_1^B given by the Beidou navigation system at time t_1 by the transfer matrix;

(203) Assuming the existence of a discrete time sequence T_n = {t_1, t_2, t_3, ..., t_n}, a first-order Markov estimation model exists for the position result estimation at time t_n:

P̂_{i+1}^B = T_{i,i+1} · P_i^B, i = 1, 2, ..., n−1

For the direct positioning result P_n^B given by the Beidou navigation positioning system at time t_n with confidence probability P_c″: if P_c″ < th″, where th″ is the anomaly probability threshold, the positioning result P_n^B is regarded as an outlier and filtered out. For a Beidou signal-missing period, the positioning estimates are given by a multi-step first-order Markov estimation model initialized with the valid positioning result before the signal loss. After filtering and interpolation are completed, vision-aided correction is performed on the Beidou positioning result: a Markov-model state transition matrix A and a weighting matrix W with a limiting window size of 5 are defined, and the final positioning correction result P is given by:

P = W A
W = (w_1, w_2, w_3, w_4, w_5)

where the window size is 5, i.e. the positioning discrete time sequence is T_5 = {t_1, t_2, t_3, t_4, t_5}; P_1^B, ..., P_5^B are the direct Beidou navigation positioning observations at the corresponding times, and T_12, T_23, T_34, T_45 are the camera pose transformation matrices between the key frames at the corresponding times.
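A minimal numerical sketch of the window-5 correction in (202)–(203), treating positions as 3-vectors in the planar map frame so that T_12 = [R_12 | t_12] acts on them affinely; because the patent does not print the matrix A explicitly, the assumption that A stacks the window's Beidou-derived position estimates is ours:

```python
import numpy as np

def propagate(T, P):
    """One first-order Markov step, P_hat(t2) = T12 * P(t1).
    T: 3x4 pose transform [R|t]; P: 3-vector position in the map frame."""
    return T[:, :3] @ P + T[:, 3]

def window_correct(P_obs, T_list, w):
    """Window-5 correction P = W A.
    P_obs : five Beidou positions P1..P5 (3-vectors)
    T_list: four keyframe pose transforms T12, T23, T34, T45
    w     : five weights w1..w5, e.g. from the trust intervals of step (3)
    Assumption: row 1 of A is the observation P1 and rows 2..5 are the
    one-step propagations of P1..P4 through the visual pose transforms."""
    rows = [np.asarray(P_obs[0])]
    for P, T in zip(P_obs[:4], T_list):
        rows.append(propagate(T, np.asarray(P)))
    A = np.array(rows)                 # 5 x 3
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                    # assumes at least one trusted epoch
    return w @ A                       # corrected 3-vector position
```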
Further, the specific mode of the step (3) is as follows:
(301) Define the weight matrix W containing n elements as the recognition framework, and define each weight w_i as the set of its possible value ranges. The Markov-model state transition matrices A carrying the corresponding weights are required to be mutually incompatible, and the defined recognition framework matrix W is required to satisfy the basic probability assignment conditions

m(∅) = 0, Σ_{B⊆W} m(B) = 1

where each weight w_i is an element of the weight matrix W and ∅ represents that the weight matrix is selected to be null;

(302) Assume there exists a direct observation result B of the Beidou navigation signal, and define the trust function

BEL(B) = Σ_{C⊆B} m(C)

where B is a subset of W; the trust function represents the sum of the likelihood measures of all subsets of B, i.e. the total trust in the set B;

(303) Define the plausibility function

PL(B) = 1 − BEL(B̄) = Σ_{C∩B≠∅} m(C)

representing the degree of confidence that B is not negated, i.e. the sum of the probability assignments of all sets intersecting B, where B̄ denotes "not B";

(304) Define the trust interval [BEL(B), PL(B)], representing the trust interval of B. When the Beidou navigation signal is visible, the specific meanings of the trust interval are as follows:

[1, 1] indicates that the direct Beidou navigation observation signal is stable and its signal value B is true;

[0, 0] indicates that the direct Beidou navigation observation signal is lost and its value B is not trusted;

[0, 1] indicates that the direct Beidou navigation observation signal is partially shielded and its value B cannot be completely trusted;

(305) Define the window size as 5 and acquire the Beidou navigation signal observation results P_1^B, ..., P_5^B. If a result cannot be judged directly from the Beidou navigation signal observations, define the trust interval [BEL(B_i), PL(B_i)] and obtain the confidence probabilities, i = 1, 2, 3, 4. From this, the trust interval of each weight element of the weighting matrix W corresponding to the transfer matrix A of the direct Beidou navigation signal observations is determined, and each weight element takes the mean of the trust function and the plausibility function:

w_i = (BEL(B_i) + PL(B_i)) / 2
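The trust and plausibility functions of steps (302)–(303) are those of Dempster–Shafer evidence theory. A toy Python sketch on a two-element recognition framework, with an invented mass assignment m, shows how a trust interval [BEL(B), PL(B)] arises:

```python
def bel(B, m):
    """Trust function BEL(B): sum of the masses of all subsets of B."""
    return sum(v for C, v in m.items() if C <= B)

def pl(B, m):
    """Plausibility PL(B) = 1 - BEL(not B): mass of all sets meeting B."""
    return sum(v for C, v in m.items() if C & B)

# Invented basic probability assignment on the framework {a, b};
# m(empty set) = 0 and the masses sum to 1, per step (301).
m = {frozenset(): 0.0,
     frozenset({'a'}): 0.5,
     frozenset({'b'}): 0.2,
     frozenset({'a', 'b'}): 0.3}
B = frozenset({'a'})
print(bel(B, m), pl(B, m))   # 0.5 0.8 -> trust interval [0.5, 0.8]
```

The gap PL(B) − BEL(B) is the mass left on the ambiguous set {a, b}: the same uncertainty that the [0, 1] "partially shielded" interval of step (304) expresses in the extreme.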
Compared with the prior art, the invention has the following beneficial effects:
1. the invention uses high-precision visual positioning information to correct the errors of satellite navigation, and improves the accuracy and the robustness of the existing single satellite positioning technology.
2. The observation weight method based on the trust function increases the credibility measurement of satellite navigation positioning observation information and improves the credibility of the actual positioning result.
3. The method is simple and feasible, and can provide a high-precision credible positioning technical means for the rescue platform in a geological disaster environment.
Drawings
Fig. 1 is a schematic diagram of a combined navigation method according to an embodiment of the invention.
Detailed Description
For better illustrating the objects and advantages of the present invention, the following description of the technical solution of the present invention refers to the accompanying drawings.
The invention provides a geological disaster unknown environment integrated navigation method based on visual error compensation. The equipment required by the method comprises an optical camera and a Beidou receiver. Specifically, as shown in fig. 1, a sequence of images is acquired by the optical camera and satellite observables are acquired by the Beidou receiver; key frame information of the rescue scene is extracted from the image sequence and the pose transformation matrix is solved in real time, providing a high-precision relative positioning result for the rescue platform; at the same time, real-time error compensation is performed for satellite navigation from the visual pose transformation using a first-order Markov model, and finally a positioning result is provided. The method comprises the following steps:
(1) Firstly, obtaining candidate key frames based on continuous sequence image feature tracking, and then calculating a camera pose transformation matrix by using the candidate key frames;
(2) The visual auxiliary Beidou error compensation method based on the first-order Markov model comprises the steps of firstly constructing the first-order Markov model, and then performing visual auxiliary Beidou error compensation based on the first-order Markov model;
(3) Firstly, a trusted interval for the satellite navigation signal observation information is constructed, and then the observation weights are taken based on the trust function.
Further, the specific mode of the step (1) is as follows:
selecting candidate key frames; the selection of the key frame is critical to the camera pose transformation, if the reconstruction of the camera key frame falls into a wrong local minimum, the subsequent optimization is difficult, so the camera key frame must be carefully selected, in order to meet the camera pose transformation requirement, the key frame should first have enough matching points, and should also meet a larger baseline requirement, and here, it can be understood that there is enough calculation distance between the camera centers or a certain reasonable angle between the two view observation directions, so that the pose transformation of the two frames can be estimated robustly. And acquiring image sequence feature matching by using the existing GMS (Grad-based Motion Statistic) algorithm, sequentially counting the sequence image feature matching tracking number, namely the number of continuous matching of a certain feature point in the view in the sequence image, recording the maximum tracking feature number in the view, and selecting the view containing the maximum feature tracking number in the continuous sequence as a candidate key frame.
Solving the camera external parameters using the adjacent candidate key frames to calculate the camera pose transformation matrix. Assume there are matching point pairs X_1 and X_2 corresponding to adjacent candidate reference key frames. The intrinsic matrix K of the camera can be given, with internal parameters {f_x, f_y, s, c_x, c_y}: s represents the distortion of the camera, f_x, f_y represent the focal length of the industrial camera, and c_x, c_y represent the image principal point, typically the center point of the image; the intrinsic parameters of the camera are fixed and typically given by the camera manufacturer. For the movement of the camera in space, a point X in space is a spatial point in the world coordinate system; mathematically, the motion model of the camera is determined by the rotation matrix R and the translation vector t, and this pose is also called the external parameters of the camera, {R, t}. The pinhole model is selected for solving the camera pose transformation; it is represented by a 3×4 projection matrix of limited scale. Any 3×4 projection matrix P can be decomposed into the product of a 3×3 upper triangular matrix K and a 3×4 transfer matrix [R|t]:

P = K[R|t]

where r_11, ..., r_33 are the elements of the rotation matrix R and t_x, t_y, t_z are the elements of the translation matrix t. Suppose X_1 = [x_1, y_1, 1]^T and X_2 = [x_2, y_2, 1]^T are the normalized coordinates of the matching point pair corresponding to two key frames; define the basic matrix F describing the image pose transformation, which contains the internal and external parameter information of the camera; recombine the elements of F into the column vector f = [f_11, f_12, f_13, f_21, f_22, f_23, f_31, f_32, f_33]^T. For a stably existing matching point pair (X_1, X_2) constrained by the epipolar geometry X_1^T F X_2 = 0, there is:

[x_1 x_2, x_1 y_2, x_1, y_1 x_2, y_1 y_2, y_1, x_2, y_2, 1] · f = 0

Select 8 pairs of matching points; algebraic transformation forms a linear equation system as above, and the homogeneous equation is solved, up to a constant factor, according to the uniqueness principle to obtain the F matrix. The linear equation system consists of 8 such equations, satisfying:

A f = 0

where A is an 8×9 matrix. According to the uniqueness of the matrix, all solution vectors f carry a scale factor. To determine a standard solution, the constraint ||f|| = 1 is added; the f satisfying this condition is the eigenvector corresponding to the minimum eigenvalue of A^T A. The singular value decomposition of A is A = U D V^T, where V = [v_1, ..., v_9]; the solution corresponds to f = v_9, thereby finding F.

The basic matrix F contains the internal and external parameter information; solving for the pose requires the essential matrix E, which is a special form of the basic matrix F and can be obtained from F by algebraic transformation, after which the pose transformation of the key frame images is solved. The essential matrix is defined as:

E = K^T F K

The external parameters of the second image are solved by singular value decomposition:

E = U D V^T

where U and V are 3×3 orthogonal matrices and D is a 3×3 diagonal matrix of the form D = diag(1, 1, 0). For a given essential matrix E = U diag(1, 1, 0) V^T, the projection matrix P_2 of the second image is derived from the camera projection matrix P_1 = K[I|0] of the first image. Thus the camera pose transformation between key frames can be obtained.
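The derivation of P_2 from E = U diag(1, 1, 0) V^T admits four (R, t) candidates, which the patent does not enumerate; the sketch below adds the standard cheirality test (a triangulated point must lie in front of both cameras) to pick one, and should be read as a plausible completion rather than the patent's own procedure:

```python
import numpy as np

def decompose_essential(E):
    """Factor E into the four candidate poses (R1,t), (R1,-t), (R2,t), (R2,-t)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:  U = -U      # keep proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2, t = U @ W @ Vt, U @ W.T @ Vt, U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point pair in pixel coordinates."""
    A = np.vstack([x1[0]*P1[2] - P1[0], x1[1]*P1[2] - P1[1],
                   x2[0]*P2[2] - P2[0], x2[1]*P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X / X[3]

def second_projection(E, K, x1, x2):
    """Pick the candidate P2 = K[R|t] that puts a test point in front of
    both cameras, with P1 = K[I|0] as in the patent."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    for R, t in decompose_essential(E):
        P2 = K @ np.hstack([R, t.reshape(3, 1)])
        X = triangulate(P1, P2, x1, x2)
        if X[2] > 0 and (R @ X[:3] + t)[2] > 0:   # positive depth in both views
            return R, t, P2
    return None
```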
Further, the specific mode of the step (2) is as follows:
giving continuous positioning observables through a Beidou navigation system, acquiring three-dimensional position information of longitude, latitude and elevation of a motion platform, converting the three-dimensional position information into a planar map coordinate system with a map origin as a starting point, and defining continuous timeThe position information given by the Beidou navigation system at moment is +.>Wherein->,/>Representation->The Beidou navigation signal at moment is given out in a planar map coordinate systemIs->Axis coordinates->Representation->The moment Beidou navigation signal is given in a planar map coordinate system>The axis of the rotation is set to be at the same position,representation->The moment Beidou navigation signal is given in a planar map coordinate system>And (5) axis coordinates. Define big Dipper navigation observation quantity key frame->I.e. require a key frame +.>And (3) time synchronization, wherein key frames which do not acquire effective gestures in the image key frames are not used for subsequent correction processing.
Constructing a first-order Markov estimation model; the visual key frame of claim 1 having acquired a corresponding camera pose forTime and->Two adjacent key frames +.>And->Image frame->To image frame->Is +.>Wherein->For image frame->To image frame->Rotation matrix of>For image frame->To image frame->Is provided for the translation matrix of (a). Two adjacent keyframes->And->The position information given by the Beidou navigation system respectively corresponding to the corresponding time is +.>And->The following estimation model should be used: />,/>Is->Position information given by Beidou navigation system at moment +.>Multiplying the transfer matrix estimate>Position information of time. For ideal conditions, the->Should be equal to->Due to the fact that the actual applicability and noise errors of the Beidou receiver and the optical camera are unavoidable, +.>Is generally not equal toEven with large deviations. The continuous discrete observation information given by the Beidou navigation system is not obviously restrained, namely the positioning result at the current moment is not influenced by the previous moment and the positioning result at the subsequent moment is not influenced, so that the estimation model->Conforming to a first order Markov estimation model. Assuming the existence of discrete time sequencesFor->The position result estimation of the moment exists a first order Markov estimation model:wherein->
Visual auxiliary Beidou error compensation of a first-order Markov model; considering the efficiency of the estimation model and eliminating the influence of time accumulated errors as far as possible, adopting a limited time period sliding window to control the estimation of the positioning result, and specifically comprising the following steps: firstly, filtering the abnormal observed quantity positioning result, forDirect positioning result given by Beidou navigation and positioning system at moment +.>Confidence probability->If->,/>For the abnormality probability threshold, locate the result +.>The abnormal value is regarded as the abnormal value, and filtered. And (3) interpolating the positioning result of the Beidou signal missing section, and giving a positioning estimation result of the signal missing time section by taking the effective positioning result before the Beidou signal is lost as an initial value by means of a multi-step first-order Markov estimation model. After filtering and interpolation are completed, vision auxiliary correction can be carried out on the Beidou positioning result, and a Markov model state transition matrix with a limiting window size of 5 is defined>And weighting matrix->Giving final positioning correction result +.>The formula:
wherein the window size is 5, i.e. locating discrete time sequences,/>Direct observation result of Beidou navigation positioning at corresponding moment, < >>The matrix is transformed for the camera pose between key frames at the corresponding moment.
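A sketch of the filtering-and-interpolation stage, with a hypothetical per-epoch confidence input standing in for P_c″ and the convention (our assumption) that T_list[i−1] transforms epoch i−1 to epoch i:

```python
import numpy as np

def filter_and_interpolate(P_obs, conf, T_list, th=0.5):
    """P_obs : Beidou positions (3-vectors), or None where the signal is lost
    conf  : hypothetical per-epoch confidence probabilities P_c''
    T_list: camera pose transforms between consecutive keyframes
    th    : anomaly probability threshold th''
    Returns a gap-free sequence: outliers and gaps are replaced by
    multi-step first-order Markov propagation from the last valid fix."""
    out, last = [], None
    for i, (P, c) in enumerate(zip(P_obs, conf)):
        if P is None or c < th:        # signal loss or outlier
            if last is None:
                out.append(None)       # nothing to propagate from yet
                continue
            T = T_list[i - 1]          # one more Markov propagation step
            P = T[:, :3] @ last + T[:, 3]
        out.append(np.asarray(P))
        last = out[-1]
    return out
```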
Further, the specific mode of the step (3) is as follows:
constraining weight matrices using trust functionsSize, definition comprising->Weight matrix of individual elements->For a recognition frame, define the weight +.>For a set of possible value ranges, a state transition matrix of the Markov model with corresponding weights is required +.>Are mutually incompatible, require a defined recognition frame matrix +.>The method meets the following conditions:
weighting ofIs a weight matrix->Element(s) of->Indicating that the weight matrix is selected to be null.
Assume that there is a direct observation of Beidou navigation signalDefining a trust function: />Wherein->Is->Is represented by a trust function->The sum of the likelihood measures of all subsets of (i) representing the pair set +.>Is to be used in the future).
Defining a plausibility function:indicating no negative->Is the confidence level of all and +>Sum of intersecting set probability assignments, wherein +.>Is not->
Defining trust intervalsRepresenting->When the Beidou navigation signal is visible, the specific meaning of the confidence interval is as follows:
indicating that the direct observation signal of Beidou navigation is stable and the signal value is +.>Is true;
indicating the direct observation signal loss of Beidou navigation, the value of which is +.>Not trusted;
indicating that the Beidou navigation direct observation signal has partial shielding, and the value of the partial shielding is +.>Incompletely trusted;
defining the size of the window as 5, and obtaining the BeidouThe navigation signal observation result isIf the result of Beidou navigation signal observation cannot be directly judged, a trust interval can be defined>Obtaining the confidence probability->,/>Wherein->The method comprises the steps of carrying out a first treatment on the surface of the Can be taken out,/>. At this time, a transfer matrix of direct observation results of Beidou navigation signals can be determined +.>Corresponding weighting matrix->The weight element can take the mean value of the trust function and the plausibility function.
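Finally, since each weight element takes the mean of the trust function and the plausibility function, the three visible-signal trust intervals map directly to window weights; a short sketch with hypothetical per-epoch signal statuses:

```python
def weight_from_interval(bel_b, pl_b):
    """Weight element = mean of trust and plausibility."""
    return 0.5 * (bel_b + pl_b)

# Trust intervals per the three cases above:
# stable -> [1,1], lost -> [0,0], partially shielded -> [0,1]
INTERVALS = {"stable": (1.0, 1.0), "lost": (0.0, 0.0), "shielded": (0.0, 1.0)}

# Hypothetical statuses of the five observations in one window:
statuses = ["stable", "stable", "shielded", "lost", "stable"]
W = [weight_from_interval(*INTERVALS[s]) for s in statuses]
print(W)   # [1.0, 1.0, 0.5, 0.0, 1.0]
```

A shielded epoch is thus half-weighted and a lost epoch contributes nothing to the corrected result P = W A.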
The invention provides a geological disaster unknown environment integrated navigation method based on visual error compensation. In a geological disaster environment, the positioning accuracy of a moving rescue platform such as an engineering machinery vehicle directly affects the safety of its driving and operation. In such an environment, signal shielding and multipath readily cause satellite navigation positioning to fail, and positioning reliability is low, so a technical means that can provide high-precision, trusted positioning for rescue platforms in geological disaster environments is needed.
In the invention, key frame information of the rescue scene is acquired by monocular vision and the pose transformation matrix is solved in real time, providing a high-precision relative positioning result for the rescue platform and maintaining positioning when satellite navigation positioning fails; real-time error compensation is then performed for satellite navigation from the visual pose transformation using a first-order Markov model, finally providing a high-precision, trusted positioning result.

Claims (1)

1. A geological disaster unknown environment integrated navigation method based on visual error compensation, characterized in that monocular vision is used to solve the relative motion pose of a rescue platform, relative positioning information is fused to perform satellite navigation positioning error compensation, and finally a high-precision, trusted positioning result is obtained; the method specifically comprises the following steps:
(1) Acquiring candidate key frames based on continuous sequence image feature tracking, and calculating a camera attitude transformation matrix by using the candidate key frames; the specific mode is as follows:
(101) Acquiring image sequence feature matching by using a motion statistics method based on gradients, sequentially counting the sequence image feature matching tracking number, namely the number of continuous matching of a certain feature point in the view in the sequence image, recording the maximum tracking feature number in the view, and selecting the view containing the maximum feature tracking number in the continuous sequence as a candidate key frame;
(102) The internal parameters of the camera are {f_x, f_y, s, c_x, c_y}, where s denotes the distortion of the camera, f_x, f_y denote the focal length of the camera, and c_x, c_y denote the image principal point; the external parameters of the camera are {R, t}, wherein R is a rotation matrix and t is a translation vector;
(103) Selecting a pinhole model to solve the camera pose transformation, represented by a 3×4 projection matrix of limited scale; the 3×4 projection matrix P is decomposed into the product of a 3×3 upper triangular matrix K and a 3×4 transfer matrix [R|t]:

P = K[R|t]

wherein r_11, r_12, r_13, r_21, r_22, r_23, r_31, r_32, r_33 are the elements of the rotation matrix R, and t_x, t_y, t_z are the elements of the translation matrix t;
(104) Let X_1 = [x_1, y_1, 1]^T, X_2 = [x_2, y_2, 1]^T be the normalized coordinates of the matching point pairs corresponding to two key frames, and define the basic matrix F describing the image pose transformation; the basic matrix F contains the internal and external parameter information of the camera, and the elements of F are recombined and transformed into column vector form, namely:

f = [f_11, f_12, f_13, f_21, f_22, f_23, f_31, f_32, f_33]^T

for matching point pairs (X_1, X_2) constrained by the epipolar geometry X_1^T F X_2 = 0, there is:

[x_1 x_2, x_1 y_2, x_1, y_1 x_2, y_1 y_2, y_1, x_2, y_2, 1] · f = 0
(105) 8 pairs of matching points are selected, and a linear equation set is formed through algebraic transformation, so that the following conditions are satisfied:
Af=0
wherein A is the 8×9 matrix formed by stacking the above coefficient vectors of the 8 matching point pairs;
(106) According to the uniqueness of the matrix, a scale factor exists in all solution vectors f; to determine the standard solution, the constraint ||f|| = 1 is added, and the f satisfying this condition is the eigenvector corresponding to the minimum eigenvalue of A^T A; the singular value decomposition of A is A = U D V^T, wherein:

V = [v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9]

corresponding to f = v_9, thereby obtaining the basic matrix F;
(107) Defining an essential matrix E:
E = K^T F K
solving the second image external parameters by singular value decomposition:
E = U D V^T
wherein U, V are respectively 3×3 orthogonal matrices, and D is a 3×3 diagonal matrix of the form:

D = diag(1, 1, 0)

for a given essential matrix E = U diag(1, 1, 0) V^T, the projection matrix P_2 of the second image is derived from the camera projection matrix P_1 = K[I|0] of the first image; thus the camera pose transformation between key frames is acquired;
(2) Constructing a first-order Markov model, and performing vision-assisted Beidou error compensation based on the first-order Markov model; the specific mode is as follows:
(201) The Beidou navigation system gives continuous positioning observables; the longitude, latitude and elevation of the motion platform are acquired and converted into a planar map coordinate system with the map origin as the starting point, and the position information given by the Beidou navigation system at continuous time t is defined as P_t^B = (x_t^B, y_t^B, z_t^B), wherein x_t^B represents the X-axis coordinate, y_t^B the Y-axis coordinate, and z_t^B the Z-axis coordinate of the Beidou navigation signal at time t in the planar map coordinate system; a Beidou navigation observable key frame K_i^B is defined, i.e. required to be time-synchronized with the acquired image key frame K_i, i = 1, 2, 3, ...; key frames for which no valid pose is acquired among the image key frames are not used for subsequent correction processing;
(202) For two adjacent key frames I_1 and I_2 whose poses are estimated corresponding to times t_1 and t_2, the camera pose transformation matrix from image frame I_1 to image frame I_2 is T_12 = [R_12 | t_12], wherein R_12 is the rotation matrix from image frame I_1 to image frame I_2 and t_12 is the translation matrix from image frame I_1 to image frame I_2; the position information given by the Beidou navigation system at the corresponding times of the two adjacent key frames I_1 and I_2 is P_1^B and P_2^B; then there is an estimation model as follows:

P̂_2^B = T_12 · P_1^B

wherein P̂_2^B is the estimate of the position information at time t_2 obtained by multiplying the position information P_1^B given by the Beidou navigation system at time t_1 by the transfer matrix;
(203) Assuming the presence of a discrete time sequence T_n = {t_1, t_2, t_3, ..., t_n}, a first-order Markov estimation model exists for the position result estimation at time t_n:

P̂_{i+1}^B = T_{i,i+1} · P_i^B, wherein i = 1, 2, ..., n−1;
For the direct positioning result P_n^B given by the Beidou navigation positioning system at time t_n with confidence probability P_c″: if P_c″ < th″, th″ being the anomaly probability threshold, the positioning result P_n^B is regarded as an outlier and filtered out; with the valid positioning result before the Beidou signal loss as the initial value, the positioning estimation results of the signal-missing period are given by means of a multi-step first-order Markov estimation model; after filtering and interpolation are completed, vision-aided correction is performed on the Beidou positioning result, a Markov-model state transition matrix A and a weighting matrix W with a limiting window size of 5 are defined, and the final positioning correction result P is given by the formula:
P=WA
W = (w_1, w_2, w_3, w_4, w_5)
wherein the window size is 5, i.e. the positioning discrete time sequence is T_5 = {t_1, t_2, t_3, t_4, t_5}; P_1^B, ..., P_5^B are the direct Beidou navigation positioning observations at the corresponding times, and T_12, T_23, T_34, T_45 are the camera pose transformation matrices between the key frames at the corresponding times;
(3) The method comprises the steps of constructing a trusted interval of satellite navigation signal observation information, and taking the value of the observation weight based on a trust function, wherein the specific mode is as follows:
(301) Defining a weight matrix W containing n elements as the recognition framework, and defining each weight w_i as the set of its possible value ranges; the Markov-model state transition matrices A with corresponding weights are required to be mutually incompatible, and the defined recognition framework matrix W is required to satisfy the basic probability assignment conditions:

m(∅) = 0, Σ_{B⊆W} m(B) = 1

wherein the weight w_i is an element of the weight matrix W, and ∅ represents that the weight matrix is selected to be null;
(302) Assuming that a direct observation result B of the Beidou navigation signal exists, defining a trust function:

BEL(B) = Σ_{C⊆B} m(C)

where B is a subset of W; the trust function represents the sum of the likelihood measures of all subsets of B, i.e., the total trust for the set B;
(303) Defining a plausibility function:

PL(B) = 1 − BEL(B̄) = Σ_{C∩B≠∅} m(C)

representing the confidence that B is not negated, i.e. the sum of the probability assignments of all sets intersecting B, where B̄ is "not B";
(304) Defining a trust interval [BEL(B), PL(B)], representing the trust interval of B; when the Beidou navigation signal is visible, the specific meanings of the trust interval are:
[1,1] shows that the Beidou navigation direct observation signal is stable, and the signal value B is true;
[0,0] represents that the Beidou navigation direct observation signal is lost, and the value B is not credible;
[0,1] shows that the Beidou navigation direct observation signal has partial shielding, and the value B of the Beidou navigation direct observation signal can not be completely trusted;
(305) Defining the window size as 5, and acquiring the Beidou navigation signal observation results P_1^B, ..., P_5^B; if the result cannot be judged directly from the Beidou navigation signal observations, defining the trust interval [BEL(B_i), PL(B_i)] and obtaining the confidence probabilities, wherein i = 1, 2, 3, 4; thereby, the trust interval of each weight element in the weighting matrix W corresponding to the transfer matrix A of the direct Beidou navigation signal observations is determined, and each weight element takes the mean value of the trust function and the plausibility function, w_i = (BEL(B_i) + PL(B_i)) / 2.
CN202010933698.8A 2020-09-08 2020-09-08 Geological disaster unknown environment integrated navigation method based on visual error compensation Active CN112068168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010933698.8A CN112068168B (en) 2020-09-08 2020-09-08 Geological disaster unknown environment integrated navigation method based on visual error compensation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010933698.8A CN112068168B (en) 2020-09-08 2020-09-08 Geological disaster unknown environment integrated navigation method based on visual error compensation

Publications (2)

Publication Number Publication Date
CN112068168A CN112068168A (en) 2020-12-11
CN112068168B true CN112068168B (en) 2024-03-15

Family

ID=73664195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010933698.8A Active CN112068168B (en) 2020-09-08 2020-09-08 Geological disaster unknown environment integrated navigation method based on visual error compensation

Country Status (1)

Country Link
CN (1) CN112068168B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114659526B (en) * 2022-02-11 2024-07-23 北京空间飞行器总体设计部 Spacecraft autonomous navigation robust filtering algorithm based on sequence image state expression
CN115128655B (en) * 2022-08-31 2022-12-02 智道网联科技(北京)有限公司 Positioning method and device for automatic driving vehicle, electronic equipment and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8174568B2 (en) * 2006-12-01 2012-05-08 Sri International Unified framework for precise vision-aided navigation
US8447519B2 (en) * 2010-11-10 2013-05-21 GM Global Technology Operations LLC Method of augmenting GPS or GPS/sensor vehicle positioning using additional in-vehicle vision sensors
US20150219767A1 (en) * 2014-02-03 2015-08-06 Board Of Regents, The University Of Texas System System and method for using global navigation satellite system (gnss) navigation and visual navigation to recover absolute position and attitude without any prior association of visual features with known coordinates
CN107833249B (en) * 2017-09-29 2020-07-07 南京航空航天大学 Method for estimating attitude of shipboard aircraft in landing process based on visual guidance

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104729506A (en) * 2015-03-27 2015-06-24 北京航空航天大学 Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information
FR3046006A1 (en) * 2015-12-18 2017-06-23 Inst Mines-Telecom METHOD OF ESTIMATING TRAJECTORIES USING MOBILE DATA
CN106780699A (en) * 2017-01-09 2017-05-31 东南大学 A kind of vision SLAM methods aided in based on SINS/GPS and odometer
CN107229063A (en) * 2017-06-26 2017-10-03 奇瑞汽车股份有限公司 A kind of pilotless automobile navigation and positioning accuracy antidote merged based on GNSS and visual odometry
CN111324126A (en) * 2020-03-12 2020-06-23 集美大学 Visual unmanned ship and visual navigation method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on high-precision integrated navigation and positioning technology aided by visual sensors; Zhou Yanglin; China Doctoral Dissertations Full-text Database, Basic Sciences; A008-39 *

Also Published As

Publication number Publication date
CN112068168A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN112268559B (en) Mobile measurement method for fusing SLAM technology in complex environment
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN113252033B (en) Positioning method, positioning system and robot based on multi-sensor fusion
CN112068168B (en) Geological disaster unknown environment integrated navigation method based on visual error compensation
CN110187375A (en) A kind of method and device improving positioning accuracy based on SLAM positioning result
CN113739795B (en) Underwater synchronous positioning and mapping method based on polarized light/inertia/vision integrated navigation
CN109507706B (en) GPS signal loss prediction positioning method
CN113763548B (en) Vision-laser radar coupling-based lean texture tunnel modeling method and system
Dumble et al. Efficient terrain-aided visual horizon based attitude estimation and localization
Dawood et al. Harris, SIFT and SURF features comparison for vehicle localization based on virtual 3D model and camera
CN112346104A (en) Unmanned aerial vehicle information fusion positioning method
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN113819904B (en) polarization/VIO three-dimensional attitude determination method based on zenith vector
CN116908777A (en) Multi-robot random networking collaborative navigation method based on explicit communication with tag Bernoulli
CN115453599A (en) Multi-sensor-cooperated pipeline robot accurate positioning method
CN114690229A (en) GPS-fused mobile robot visual inertial navigation method
CN114897942B (en) Point cloud map generation method and device and related storage medium
CN114459474B (en) Inertial/polarization/radar/optical-fluidic combined navigation method based on factor graph
CN114705223A (en) Inertial navigation error compensation method and system for multiple mobile intelligent bodies in target tracking
CN115930948A (en) Orchard robot fusion positioning method
CN114025320A (en) Indoor positioning method based on 5G signal
Choi et al. Image-based Monte-Carlo localization with information allocation logic to mitigate shadow effect
CN114723920A (en) Point cloud map-based visual positioning method
Mirisola et al. Trajectory recovery and 3d mapping from rotation-compensated imagery for an airship
Luo et al. An imu/visual odometry integrated navigation method based on measurement model optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant