CN113759982B - Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only


Info

Publication number
CN113759982B
CN113759982B (application CN202111217659.9A; published as CN113759982A)
Authority
CN
China
Prior art keywords
unmanned aerial vehicle, fixed wing, sight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111217659.9A
Other languages
Chinese (zh)
Other versions
CN113759982A (en)
Inventor
苏文山 (Su Wenshan)
陈磊 (Chen Lei)
白显宗 (Bai Xianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical National Defense Technology Innovation Institute PLA Academy of Military Science
Priority to CN202111217659.9A priority Critical patent/CN113759982B/en
Publication of CN113759982A publication Critical patent/CN113759982A/en
Application granted granted Critical
Publication of CN113759982B publication Critical patent/CN113759982B/en
Legal status: Active


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 — Simultaneous control of position or course in three dimensions
    • G05D1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104 — Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the technical field of unmanned aerial vehicle cluster formation and provides a relative state estimation method for fixed-wing unmanned aerial vehicle formations based only on line-of-sight measurement information. First, the line-of-sight elevation angle q_ε and azimuth angle q_β of the adjacent unmanned aerial vehicle are measured. The target fixed-wing unmanned aerial vehicle is searched for and locked, the observer fixed-wing unmanned aerial vehicle's flight state and pod frame angles are measured, and the line-of-sight angles are calculated from the resulting data. The observer fixed-wing unmanned aerial vehicle then performs an active maneuver, from which the initial state of the target fixed-wing unmanned aerial vehicle is estimated. The relative state of the unmanned aerial vehicles is refreshed in real time using the observer fixed-wing unmanned aerial vehicle's line-of-sight measurements. The observer fixed-wing unmanned aerial vehicle maneuvers actively again to correct the altitude estimate of the target fixed-wing unmanned aerial vehicle. If the unmanned cluster formation completes its preset task or resumes normal communication, the algorithm terminates. The invention greatly reduces the demands that relative state estimation places on visual measurement information and improves the practicality of vision as an important supplementary means for clusters to acquire cooperative information.

Description

Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle cluster formation, and particularly relates to a relative state estimation method for fixed-wing unmanned aerial vehicle formations based only on visual line-of-sight measurement information.
Background
The perception and interaction of information is the basis of unmanned-cluster formation cooperation. Current research on cluster formation cooperation focuses mainly on coordinating the flight time, spatial position and task functions of unmanned aerial vehicles (UAVs): time coordination concerns the interaction of the UAVs' flight times, spatial coordination concerns the interaction of states such as position and velocity, and functional coordination concerns the complementing and reinforcing of the UAVs' payload characteristics. Communication is the main means of exchanging the information that formation cooperation requires. Formation cooperation under communication-topology constraints and limited communication bandwidth, subject to formation stability, has been studied in depth, and the effectiveness of communication-based cluster formation cooperation has been widely verified in experiments and applications. In practice, however, incidents such as commercial drone-swarm crashes have exposed the weakness that communication is easily disturbed in electromagnetic-interference environments, which lowers the reliability of information interaction and thus seriously degrades cluster cooperation.
Visual perception is an important supplementary means for cluster formations to acquire the information needed for spatial cooperation. Compared with communication, visually perceived information is immune to electromagnetic interference, which can markedly improve a cluster formation's adaptability to complex environments. Moreover, communication is an active means of information sensing and interaction, so a communicating cluster executing tasks in a sensitive or disputed area is easily exposed, whereas visual perception is highly covert and can sense cooperative and non-cooperative targets simultaneously, supporting tasks such as formation cooperation and target search and reconnaissance. At present vision is widely applied to the navigation of UAV systems but rarely to formation cooperation. The main limitation is that current visual image-processing algorithms are aimed at target recognition, while formation spatial cooperation depends on estimating relative states: to estimate a neighboring UAV's state from images, the relative state can be resolved accurately only after not just the target but also the positions of its appearance features have been recognized, combined with prior information such as the target's appearance and the visual sensor parameters. To guarantee the estimation accuracy of the relative state, both the target's visual imaging quality and the credibility of the target prior information must meet demanding requirements, and the target UAV's motion can occlude the relevant airframe features, which reduces the usability of such methods.
Disclosure of Invention
The invention provides a vision-only method for estimating a target's relative state information by exploiting the motion characteristics of fixed-wing UAVs. Unlike the existing approach of resolving part of the relative motion state from features identified in the visual image, the method only needs to measure the line-of-sight elevation and azimuth angles of the adjacent UAV, which greatly reduces the demands that relative state estimation places on visual measurements and improves the practicality of vision as an important supplementary means for clusters to acquire cooperative information.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
An unmanned aerial vehicle formation relative state estimation method based only on sight measurement information comprises the following steps:
the first step: configuring a vision sensor and accurately measuring the line-of-sight elevation angle q_ε and azimuth angle q_β of the adjacent unmanned aerial vehicle;
the second step: searching for and locking the target fixed-wing unmanned aerial vehicle, and ensuring that it stays within the visible-light pod image of the observer fixed-wing unmanned aerial vehicle;
the third step: measuring the flight state and pod frame angles of the observer fixed-wing unmanned aerial vehicle;
the fourth step: calculating the line-of-sight angle from the data obtained in the third step;
the fifth step: the observer fixed-wing unmanned aerial vehicle performs an active maneuver, and the initial state of the target fixed-wing unmanned aerial vehicle is estimated;
When a fixed-wing UAV executes a task, its flight speed, heading-rate and altitude are regarded as constant over a short period. Based on this, the observer fixed-wing UAV maneuvers at a constant climb rate while holding its current cruise horizontal speed; during the maneuver, the line of sight between the observer and target fixed-wing UAVs is computed continuously and recorded in sequence as e_LOS,1, e_LOS,2, …, e_LOS,n;
Based on the assumption that the target fixed-wing UAV's flight speed, heading-rate and altitude are constant over a short period, its motion characteristics are represented by (adopting the convention that ψ_v,l is the heading angle measured in the horizontal plane of the ENU frame):

$$\dot{x}_{g,l}=v_l\cos\psi_{v,l},\qquad \dot{y}_{g,l}=v_l\sin\psi_{v,l},\qquad \dot{z}_{g,l}=0,\qquad \dot{v}_l=0,\qquad \dot{\psi}_{v,l}=\omega_l$$

where x_g,l, y_g,l and z_g,l denote the target fixed-wing UAV's position in the east-north-up (ENU) inertial frame, v_l and ψ_v,l its speed and heading angle, and ω_l its heading-rate. To avoid the ambiguity caused by zero heading-rate when solving for the target fixed-wing UAV's position, the analytic expression for the target's position at each time must be discussed case by case;
(1) Working condition 1: target fixed wing unmanned aerial vehicle rectilinear motion
If the intersection points of the observer fixed-wing UAV's lines of sight with the horizontal plane at all observation times are collinear, the target fixed-wing UAV is in straight-line motion. Taking the initial observation time as 0, the target fixed-wing UAV's position at time t_i is:

$$x_{g,l,i}=x_{g,l,0}+v_{l,0}t_i\cos\psi_{v,l,0},\qquad y_{g,l,i}=y_{g,l,0}+v_{l,0}t_i\sin\psi_{v,l,0},\qquad z_{g,l,i}=z_{g,l,0}$$
The line-of-sight direction toward the target fixed-wing UAV at each time is obtained by combining the observer fixed-wing UAV's position at that time, namely:

$$\hat{e}_{LOS,i}=\frac{p_{l,i}-p_{f,i}}{\left\|p_{l,i}-p_{f,i}\right\|},\qquad p_{f,i}=[x_{g,f},\;y_{g,f},\;z_{g,f}]^T,\qquad p_{l,i}=[x_{g,l,i},\;y_{g,l,i},\;z_{g,l,i}]^T$$

where x_g,f, y_g,f and z_g,f are the observer UAV's own position components. The line-of-sight information estimated in this way at each time is denoted ê_LOS,i. Once x_g,l,0, y_g,l,0, z_g,l,0, v_l,0 and ψ_v,l,0 at the initial time are determined, every ê_LOS,i is determined, so the line-of-sight information at each time is regarded as a function of the variable X_0:
$$X_0=[x_{g,l,0},\;y_{g,l,0},\;z_{g,l,0},\;v_{l,0},\;\psi_{v,l,0}]^T$$
Accounting for the measurement error of the line-of-sight unit direction vector at each time, X_0 is determined by minimizing J, the weighted sum of squared differences between the estimated and measured line-of-sight unit direction vectors over all of the observer fixed-wing UAV's observation times, namely:

$$J=\sum_{i=1}^{n}\left(\hat{e}_{LOS,i}-e_{LOS,i}\right)^TW_i\left(\hat{e}_{LOS,i}-e_{LOS,i}\right)$$

where
W=diag(W1,W2,...,Wn)
and W_i is built from R_e, the measurement covariance of the line-of-sight unit direction vector. With line-of-sight elevation and azimuth measurement deviations σ_ε and σ_β respectively,

$$R_e=J_q\,\mathrm{diag}\!\left(\sigma_\varepsilon^2,\,\sigma_\beta^2\right)J_q^T,\qquad J_q=\frac{\partial e_{LOS}}{\partial q}=\begin{bmatrix}-\sin q_\varepsilon\cos q_\beta & -\cos q_\varepsilon\sin q_\beta\\ -\sin q_\varepsilon\sin q_\beta & \cos q_\varepsilon\cos q_\beta\\ \cos q_\varepsilon & 0\end{bmatrix},\qquad q=[q_\varepsilon,\;q_\beta]^T$$

and W_i = R_e,i^+ (a pseudo-inverse, since R_e has rank 2);
in actual calculation, the weights W_1, W_2, …, W_n are obtained from the line-of-sight angles at the corresponding times using the formula above;
Denote X̂_0 the current optimal estimate of X_0 and ΔX = X_0 − X̂_0. The line-of-sight unit vector at each time is approximated to first order by:

$$\hat{e}_{LOS,i}(X_0)\approx\hat{e}_{LOS,i}(\hat{X}_0)+h_i\,\Delta X,\qquad h_i=\left.\frac{\partial\hat{e}_{LOS,i}}{\partial X_0}\right|_{\hat{X}_0}$$

With the residuals Δe_LOS,i = e_LOS,i − ê_LOS,i(X̂_0) stacked into Δe_LOS, J is correspondingly expressed as
J=(ΔeLOS-HΔX)TW(ΔeLOS-HΔX)
where Δe_LOS stacks the residuals and H stacks the Jacobian blocks row-block-wise:

$$\Delta e_{LOS}=\left[\Delta e_{LOS,1}^T\;\;\Delta e_{LOS,2}^T\;\cdots\;\Delta e_{LOS,n}^T\right]^T,\qquad H=\left[h_1^T\;\;h_2^T\;\cdots\;h_n^T\right]^T$$
For X_0 such that J attains its minimum, the gradient with respect to ΔX must vanish, ∂J/∂ΔX = 0, which gives:

$$\Delta X=\left(H^TWH\right)^{-1}H^TW\,\Delta e_{LOS}$$
where each block h_i is evaluated by the chain rule through the target position p_l,i:

$$h_i=\frac{\partial\hat{e}_{LOS,i}}{\partial p_{l,i}}\frac{\partial p_{l,i}}{\partial X_0},\qquad \frac{\partial\hat{e}_{LOS,i}}{\partial p_{l,i}}=\frac{I_3-\hat{e}_{LOS,i}\hat{e}_{LOS,i}^T}{\left\|p_{l,i}-p_{f,i}\right\|}$$

and ∂p_l,i/∂X_0 follows directly from the straight-line position expression above;
after ΔX is determined from the above equation, X_0 is first updated as:

X_0 = X_0 + ΔX

ΔX is then recomputed with the updated X_0 and the update applied again to obtain a new X_0, iterating until ‖ΔX‖ < ζ, where ζ is a small positive threshold near 0;
(2) Working condition 2: target fixed-wing UAV in curvilinear motion

If the intersection points of the observer fixed-wing UAV's lines of sight with the horizontal plane at the observation times are not collinear, the target fixed-wing UAV is turning at a constant heading-rate. Taking the initial observation time as 0, the target fixed-wing UAV's position at time t_i is:

$$x_{g,l,i}=x_{g,l,0}+\frac{v_{l,0}}{\omega_{l,0}}\left[\sin(\psi_{v,l,0}+\omega_{l,0}t_i)-\sin\psi_{v,l,0}\right]$$
$$y_{g,l,i}=y_{g,l,0}+\frac{v_{l,0}}{\omega_{l,0}}\left[\cos\psi_{v,l,0}-\cos(\psi_{v,l,0}+\omega_{l,0}t_i)\right]$$
$$z_{g,l,i}=z_{g,l,0}$$

where x_g,l,0, y_g,l,0 and z_g,l,0 represent the target fixed-wing UAV's initial position in the ENU inertial frame, v_l,0 and ψ_v,l,0 the initial speed and heading angle, and ω_l,0 the initial heading-rate;
J is minimized by the same method as in working condition 1, which gives:

$$\Delta X=\left(H^TWH\right)^{-1}H^TW\,\Delta e_{LOS}$$
where, as in working condition 1, H stacks the Jacobian blocks row-block-wise, H = [h_1^T h_2^T ⋯ h_n^T]^T, and

$$X_0=[x_{g,l,0}\;\;y_{g,l,0}\;\;z_{g,l,0}\;\;v_{l,0}\;\;\psi_{v,l,0}\;\;\omega_{l,0}]^T$$
after ΔX is determined from the above equation, X_0 is first updated as:

X_0 = X_0 + ΔX

ΔX is then recomputed with the updated X_0 to obtain a new X_0, iterating until ‖ΔX‖ < ζ, where ζ is a small positive threshold near 0;
Sixth step: the relative state of the unmanned aerial vehicles is refreshed in real time using the observer fixed-wing UAV's line-of-sight measurements;
the altitude of the target fixed-wing UAV estimated in the fifth step is combined with the line-of-sight angle information measured in real time to estimate the relative state of the lead aircraft; no active maneuver of the observer fixed-wing UAV is needed at this point;
A relative position measurement of the two UAVs is obtained from the observer fixed-wing UAV's measurements and the target fixed-wing UAV's altitude estimate ẑ_g,l:

$$\tilde{r}=\frac{\hat{z}_{g,l}-z_{g,f}}{\sin q_\varepsilon},\qquad \Delta\tilde{x}=\tilde{r}\cos q_\varepsilon\cos q_\beta,\qquad \Delta\tilde{y}=\tilde{r}\cos q_\varepsilon\sin q_\beta$$

Combining with the observer fixed-wing UAV's position measurements x_g,f and y_g,f gives:

$$\tilde{x}_{g,l}=x_{g,f}+\Delta\tilde{x},\qquad \tilde{y}_{g,l}=y_{g,f}+\Delta\tilde{y}$$
The target filter state at time k is X_k = [x_g,l,k, y_g,l,k, v_g,l,k, ψ_g,l,k, ω_g,l,k]^T. If |ω_g,l,k| < ζ, with ζ a small positive threshold near 0, the state mean recurrence value at time k+1 and the state covariance recurrence value P_k+1|k are calculated with the straight-line model over the filter period T:

$$x_{g,l,k+1|k}=x_{g,l,k}+v_{g,l,k}T\cos\psi_{g,l,k},\qquad y_{g,l,k+1|k}=y_{g,l,k}+v_{g,l,k}T\sin\psi_{g,l,k}$$
$$v_{g,l,k+1|k}=v_{g,l,k},\qquad \psi_{g,l,k+1|k}=\psi_{g,l,k},\qquad \omega_{g,l,k+1|k}=\omega_{g,l,k},\qquad P_{k+1|k}=J_PP_kJ_P^T+Q_k$$
where σ_a and σ_ω are the standard deviations of the speed-rate and heading-angular-acceleration processes, which are approximated as white noise in the process noise Q_k;
If |ω_g,l,k| ≥ ζ, the state mean recurrence value at time k+1 and the state covariance recurrence value P_k+1|k are calculated with the constant-heading-rate (coordinated-turn) counterpart of the above model; its Jacobian J_P differs from the straight-line case only in the following entries:
JP(:,1)=[1,0,0,0,0,0]T
JP(:,2)=[0,1,0,0,0,0]T
JP(3,5)=0,JP(4,5)=T,JP(5,5)=1
The state mean and covariance are updated with the measured position values in the x and y directions:

$$K_k=P_{k+1|k}H^T\left(HP_{k+1|k}H^T+R_{r\_xy}\right)^{-1}$$
$$\hat{X}_{k+1}=X_{k+1|k}+K_k\left(\tilde{z}_{xy}-HX_{k+1|k}\right),\qquad P_{k+1}=\left(I_5-K_kH\right)P_{k+1|k}$$

where z̃_xy = [x̃_g,l, ỹ_g,l]^T is the measurement constructed above, H = [I_2 0_2×3] selects the x- and y-position components, and R_r_xy is the covariance of that measurement; the updated state yields the relative position and velocity estimates at time k+1;
Seventh step: the observer fixed-wing UAV performs an active maneuver, and the altitude estimate of the target fixed-wing UAV is corrected;
To avoid relative-state estimation bias caused by the target fixed-wing UAV changing altitude after the fifth step has run, the observer performs another active maneuver after a correction time interval, re-corrects the altitude estimate, and then resumes the sixth step. Whenever the target fixed-wing UAV's speed estimate again changes significantly, the fifth step is re-executed, and this cycle is maintained. The correction time interval is defined as the time from the end of the last active maneuver to the detection of a significant change in the target fixed-wing UAV's speed estimate; if the speed estimate stays stable around a fixed value, no active maneuver is needed for altitude correction.
Eighth step, the task ends
And according to the task setting, if the unmanned cluster formation completes a preset task or resumes normal communication, the algorithm is terminated.
The beneficial effects of the invention are as follows:
The invention provides a relative state estimation method for fixed-wing UAV cluster formations based only on line-of-sight angle measurements, which helps reduce the dependence of in-cluster relative-state coordination on communication and improves the cluster's adaptability to complex electromagnetic environments. Visual perception also offers strong concealment, low device power consumption and strong platform adaptability compared with communication, and the method is simple to operate, so it has notable application potential in both military and civilian fields.
Drawings
FIG. 1 is a schematic view of the line of sight in the east-north-up (ENU) inertial coordinate system;
FIG. 2 is a schematic diagram of the geometric transformation from the body coordinate system to the pod coordinate system;
fig. 3 is a flowchart of an implementation of the unmanned aerial vehicle formation relative state estimation method of the present invention.
Detailed Description
The implementation of the present invention is explained and illustrated in detail below with reference to the attached drawings.
The first step: configuring the measuring device
The relative distances over which UAV states must be estimated are set by the scale of the cluster formation, and the vision sensor model is chosen so that its operating range covers that scale. With the normal formation scale thus ensured, each UAV can accurately measure the line-of-sight elevation angle q_ε and azimuth angle q_β of its adjacent UAV with the vision sensor. Define the east-north-up coordinate system O_g X_g Y_g Z_g as the reference inertial frame: the X_g axis points due east, the Y_g axis due north, and the Z_g axis vertically upward from the local horizontal plane, with the origin O_g at the UAV's launch point. The line-of-sight elevation and azimuth angles in this frame are shown in FIG. 1.
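For reference, with the observer at p_f = [x_g,f, y_g,f, z_g,f]^T and the target at p_l = [x_g,l, y_g,l, z_g,l]^T, the two angles of FIG. 1 can be written as follows (a reconstruction consistent with the ENU convention above; the atan2 quadrant convention is an assumption):

$$q_\varepsilon=\arcsin\frac{z_{g,l}-z_{g,f}}{\left\|p_l-p_f\right\|},\qquad q_\beta=\operatorname{atan2}\!\left(y_{g,l}-y_{g,f},\;x_{g,l}-x_{g,f}\right)$$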
The second step: searching for and locking the target UAV, and ensuring that it stays within the observer UAV's visible-light pod image.
Keeping the target UAV within the observer UAV's visible-light pod image is the basic premise for estimating the relative state of the two UAVs. The visible-light pod is a standard visual sensor carried by current fixed-wing UAVs; when the target UAV is within its operating range, the pod can search for and lock onto it either under operator control or autonomously. Once stably locked, the pod automatically adjusts its frame angles so that the target stays at the center of its field of view, i.e., at the center of the pod image.
The third step: measuring the observer UAV's flight state and pod frame angles
The observer UAV's flight state is usually measured by its inertial navigation system or GPS. The specific measurements comprise the observer UAV's attitude, namely the pitch angle ϑ_b, yaw angle ψ_b and roll angle γ_b; the flight velocity components v_x,g,f, v_y,g,f and v_z,g,f along the three axes of the inertial reference frame; and the position components x_g,f, y_g,f and z_g,f along the same axes. The attitude angles mainly describe the transformation between the body frame o_b x_b y_b z_b and the inertial reference frame. The body-frame origin o_b is defined at the UAV's center of mass, x_b points from the center of mass toward the nose, y_b is perpendicular to x_b within the longitudinal plane of symmetry of the fuselage, and z_b completes the right-handed set with x_b and y_b.
The pod frame angles mainly describe the orientation of the pod optical axis relative to the body frame; they comprise the azimuth frame angle β_c and the elevation frame angle ε_c and can be measured by sensors of the pod's own servo system. When the pod is mounted, it is aligned so that its reference plane is parallel to the o_b x_b y_b plane of the body frame and its optical axis is parallel to the body x_b axis. When the frame angles are then adjusted so that the target sits at the center of the field of view, the line-of-sight direction from the observer UAV to the target UAV is parallel to the pod optical axis.
Fourth step: calculating the angle of sight
When the pod stably locks the target UAV, the target sits at the center of its field of view, so the line of sight between the observer UAV and the target UAV is parallel to the pod optical axis. Accordingly, the line-of-sight direction between the observer and target aircraft can be calculated by combining the observer's attitude angles with the pod frame angles, specifically:

$$e_{LOS,g}=\left(M_{b2c}M_{g2b}\right)^Te_{LOS,c}$$

where e_LOS,c = [1,0,0]^T, M_g2b is the direction-cosine matrix from the ENU inertial frame to the body frame, determined by the pitch angle ϑ_b, yaw angle ψ_b and roll angle γ_b, and M_b2c is the direction-cosine matrix from the body frame to the pod frame, determined by the elevation frame angle ε_c and the azimuth frame angle β_c.
Conversely, e_LOS,g can be represented by the line-of-sight elevation q_ε and azimuth q_β:

$$e_{LOS,g}=[\cos q_\varepsilon\cos q_\beta\;\;\cos q_\varepsilon\sin q_\beta\;\;\sin q_\varepsilon]^T$$

so the angles can be recovered from e_LOS,g as:

$$q_\varepsilon=\arcsin\left(e_{LOS,g}(3)\right),\qquad q_\beta=\operatorname{atan2}\!\left(e_{LOS,g}(2),\,e_{LOS,g}(1)\right)$$

To avoid ambiguity in solving for the angles, e_LOS,g can also be used directly as an equivalent description of the line-of-sight information.
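As an illustrative sketch of the fourth step (not part of the claimed method), the computation can be transcribed as below. The elementary-rotation orders and sign conventions inside M_g2b and M_b2c are assumptions and must be matched to the actual airframe and pod definitions:

```python
import numpy as np

def rot_z(a):
    # Frame rotation about z by angle a (sign convention assumed)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def los_direction_inertial(pitch, yaw, roll, eps_c, beta_c):
    """Unit line-of-sight vector e_LOS,g in the ENU frame from the observer
    attitude (pitch, yaw, roll) and the pod frame angles (eps_c, beta_c)."""
    M_g2b = rot_x(roll) @ rot_y(pitch) @ rot_z(yaw)  # inertial -> body (assumed order)
    M_b2c = rot_y(eps_c) @ rot_z(beta_c)             # body -> pod (assumed order)
    e_los_c = np.array([1.0, 0.0, 0.0])              # pod optical axis in pod frame
    return (M_b2c @ M_g2b).T @ e_los_c               # rotate back to the inertial frame

def los_angles(e_los_g):
    """Recover elevation q_eps and azimuth q_beta from e_LOS,g."""
    q_eps = np.arcsin(np.clip(e_los_g[2], -1.0, 1.0))
    q_beta = np.arctan2(e_los_g[1], e_los_g[0])
    return q_eps, q_beta
```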
Fifth step: observing active maneuver of unmanned aerial vehicle, and estimating initial state of target unmanned aerial vehicle
When a fixed-wing UAV executes a task, its flight speed, heading-rate and altitude can be regarded as constant over a short period. Based on this, the observer fixed-wing UAV can maneuver at a constant climb rate while holding its current cruise horizontal speed. During the maneuver, the line of sight to the target UAV is computed continuously and recorded in sequence as e_LOS,1, e_LOS,2, …, e_LOS,n.
Based on the assumption that the target fixed-wing UAV's flight speed, heading-rate and altitude can be regarded as constant over a short period, its motion characteristics can be represented by (adopting the convention that ψ_v,l is the heading angle measured in the horizontal plane of the ENU frame):

$$\dot{x}_{g,l}=v_l\cos\psi_{v,l},\qquad \dot{y}_{g,l}=v_l\sin\psi_{v,l},\qquad \dot{z}_{g,l}=0,\qquad \dot{v}_l=0,\qquad \dot{\psi}_{v,l}=\omega_l$$

where x_g,l, y_g,l and z_g,l denote the target UAV's position in the ENU inertial frame, v_l and ψ_v,l its speed and heading angle, and ω_l its heading-rate. To avoid the ambiguity caused by zero heading-rate when solving for the target fixed-wing UAV's position, the analytic expression for the target UAV's position at each time must be discussed case by case.
(1) Working condition 1: target unmanned aerial vehicle rectilinear motion
If the intersection points of the observer UAV's lines of sight with the horizontal plane at all observation times are collinear, the target UAV is in straight-line motion. Taking the initial observation time as 0, the target UAV's position at time t_i is:

$$x_{g,l,i}=x_{g,l,0}+v_{l,0}t_i\cos\psi_{v,l,0},\qquad y_{g,l,i}=y_{g,l,0}+v_{l,0}t_i\sin\psi_{v,l,0},\qquad z_{g,l,i}=z_{g,l,0}$$
The line-of-sight direction toward the target UAV at each time can be calculated by combining the observer UAV's position at that time, namely:

$$\hat{e}_{LOS,i}=\frac{p_{l,i}-p_{f,i}}{\left\|p_{l,i}-p_{f,i}\right\|},\qquad p_{f,i}=[x_{g,f},\;y_{g,f},\;z_{g,f}]^T,\qquad p_{l,i}=[x_{g,l,i},\;y_{g,l,i},\;z_{g,l,i}]^T$$

where x_g,f, y_g,f and z_g,f are the observer UAV's own position components. The line-of-sight information estimated in this way at each time is denoted ê_LOS,i. It can be seen that once x_g,l,0, y_g,l,0, z_g,l,0, v_l,0 and ψ_v,l,0 at the initial time are determined, every ê_LOS,i is determined, so the line-of-sight information at each time can be regarded as a function of the variable X_0.
$$X_0=[x_{g,l,0},\;y_{g,l,0},\;z_{g,l,0},\;v_{l,0},\;\psi_{v,l,0}]^T$$
Accounting for the measurement error of the line-of-sight unit direction vector at each time, X_0 is determined by minimizing J, the weighted sum of squared differences between the estimated and measured line-of-sight unit direction vectors over all of the observer UAV's observation times, namely:

$$J=\sum_{i=1}^{n}\left(\hat{e}_{LOS,i}-e_{LOS,i}\right)^TW_i\left(\hat{e}_{LOS,i}-e_{LOS,i}\right)$$

where
W=diag(W1,W2,...,Wn)
and W_i is built from R_e, the measurement covariance of the line-of-sight unit direction vector. With line-of-sight elevation and azimuth measurement deviations σ_ε and σ_β respectively,

$$R_e=J_q\,\mathrm{diag}\!\left(\sigma_\varepsilon^2,\,\sigma_\beta^2\right)J_q^T,\qquad J_q=\frac{\partial e_{LOS}}{\partial q}=\begin{bmatrix}-\sin q_\varepsilon\cos q_\beta & -\cos q_\varepsilon\sin q_\beta\\ -\sin q_\varepsilon\sin q_\beta & \cos q_\varepsilon\cos q_\beta\\ \cos q_\varepsilon & 0\end{bmatrix},\qquad q=[q_\varepsilon,\;q_\beta]^T$$

and W_i = R_e,i^+ (a pseudo-inverse, since R_e has rank 2).
In actual calculation, the weights W_1, W_2, …, W_n at each time can be obtained from the line-of-sight angles at the corresponding times using the formula above.
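As an illustration, the per-sample weight can be computed as below; since R_e is a rank-2 3×3 matrix, the Moore-Penrose pseudo-inverse stands in for its inverse (an implementation choice, not prescribed by the text):

```python
import numpy as np

def los_weight(q_eps, q_beta, sigma_eps, sigma_beta):
    """Weight W_i for one line-of-sight sample: propagate the angle
    covariance diag(sigma_eps^2, sigma_beta^2) through e(q) and invert."""
    ce, se = np.cos(q_eps), np.sin(q_eps)
    cb, sb = np.cos(q_beta), np.sin(q_beta)
    J_q = np.array([[-se * cb, -ce * sb],
                    [-se * sb,  ce * cb],
                    [ ce,       0.0    ]])      # d e_LOS / d [q_eps, q_beta]
    R_e = J_q @ np.diag([sigma_eps**2, sigma_beta**2]) @ J_q.T
    return np.linalg.pinv(R_e)                  # pseudo-inverse: R_e has rank 2
```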
Denote X̂_0 the current optimal estimate of X_0 and ΔX = X_0 − X̂_0. The line-of-sight unit vector at each time can be approximated to first order by:

$$\hat{e}_{LOS,i}(X_0)\approx\hat{e}_{LOS,i}(\hat{X}_0)+h_i\,\Delta X,\qquad h_i=\left.\frac{\partial\hat{e}_{LOS,i}}{\partial X_0}\right|_{\hat{X}_0}$$

With the residuals Δe_LOS,i = e_LOS,i − ê_LOS,i(X̂_0) stacked into Δe_LOS, J can correspondingly be expressed as
J=(ΔeLOS-HΔX)TW(ΔeLOS-HΔX)
where Δe_LOS stacks the residuals and H stacks the Jacobian blocks row-block-wise:

$$\Delta e_{LOS}=\left[\Delta e_{LOS,1}^T\;\;\Delta e_{LOS,2}^T\;\cdots\;\Delta e_{LOS,n}^T\right]^T,\qquad H=\left[h_1^T\;\;h_2^T\;\cdots\;h_n^T\right]^T$$
For X_0 such that J attains its minimum, the gradient with respect to ΔX must vanish, ∂J/∂ΔX = 0, which gives:

$$\Delta X=\left(H^TWH\right)^{-1}H^TW\,\Delta e_{LOS}$$
where each block h_i is evaluated by the chain rule through the target position p_l,i:

$$h_i=\frac{\partial\hat{e}_{LOS,i}}{\partial p_{l,i}}\frac{\partial p_{l,i}}{\partial X_0},\qquad \frac{\partial\hat{e}_{LOS,i}}{\partial p_{l,i}}=\frac{I_3-\hat{e}_{LOS,i}\hat{e}_{LOS,i}^T}{\left\|p_{l,i}-p_{f,i}\right\|}$$

and ∂p_l,i/∂X_0 follows directly from the straight-line position expression above.
After ΔX is determined from the above equation, X_0 is first updated as:

X_0 = X_0 + ΔX

ΔX is then recomputed with the updated X_0 and the update applied again to obtain a new X_0. The iteration continues until ‖ΔX‖ < ζ, where ζ is a small positive threshold near 0.
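A minimal Gauss-Newton sketch of working condition 1 follows; it uses numerical Jacobians in place of the analytic chain rule above, and the ENU/heading conventions and the forward-difference step are assumptions:

```python
import numpy as np

def estimate_initial_state(e_meas, times, p_obs, W, X_init,
                           tol=1e-8, max_iter=50, fd_step=1e-6):
    """Iterative weighted least squares for X0 = [x0, y0, z0, v0, psi0]
    (working condition 1).  e_meas: (n, 3) measured LOS unit vectors;
    times: (n,) sample times; p_obs: (n, 3) observer positions;
    W: (3n, 3n) weight matrix built from the per-sample W_i."""
    X = np.asarray(X_init, dtype=float)

    def predict_los(X):
        x0, y0, z0, v0, psi0 = X
        e = np.zeros((len(times), 3))
        for i, t in enumerate(times):
            p_l = np.array([x0 + v0 * t * np.cos(psi0),
                            y0 + v0 * t * np.sin(psi0), z0])
            d = p_l - p_obs[i]
            e[i] = d / np.linalg.norm(d)
        return e.ravel()

    for _ in range(max_iter):
        base = predict_los(X)
        r = np.asarray(e_meas).ravel() - base     # stacked residual Delta e_LOS
        H = np.zeros((r.size, X.size))            # stacked Jacobian blocks h_i
        for j in range(X.size):
            pert = np.zeros_like(X)
            pert[j] = fd_step
            H[:, j] = (predict_los(X + pert) - base) / fd_step
        dX = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)  # (H^T W H)^-1 H^T W De
        X = X + dX                                # X0 <- X0 + Delta X
        if np.linalg.norm(dX) < tol:
            break
    return X
```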
(2) Working condition 2: target UAV in curvilinear motion

If the intersection points of the observer UAV's lines of sight with the horizontal plane at the observation times are not collinear, the target UAV is turning at a constant heading-rate. Taking the initial observation time as 0, the target UAV's position at time t_i is:

$$x_{g,l,i}=x_{g,l,0}+\frac{v_{l,0}}{\omega_{l,0}}\left[\sin(\psi_{v,l,0}+\omega_{l,0}t_i)-\sin\psi_{v,l,0}\right]$$
$$y_{g,l,i}=y_{g,l,0}+\frac{v_{l,0}}{\omega_{l,0}}\left[\cos\psi_{v,l,0}-\cos(\psi_{v,l,0}+\omega_{l,0}t_i)\right]$$
$$z_{g,l,i}=z_{g,l,0}$$

where x_g,l,0, y_g,l,0 and z_g,l,0 represent the target UAV's initial position in the ENU inertial frame, v_l,0 and ψ_v,l,0 the initial speed and heading angle, and ω_l,0 the initial heading-rate.
J is minimized by the same method as in working condition 1, which gives:

$$\Delta X=\left(H^TWH\right)^{-1}H^TW\,\Delta e_{LOS}$$
where, as in working condition 1, H stacks the Jacobian blocks row-block-wise, H = [h_1^T h_2^T ⋯ h_n^T]^T, and

$$X_0=[x_{g,l,0}\;\;y_{g,l,0}\;\;z_{g,l,0}\;\;v_{l,0}\;\;\psi_{v,l,0}\;\;\omega_{l,0}]^T$$
After ΔX is determined from the above equation, X_0 is first updated as:

X_0 = X_0 + ΔX

ΔX is then recomputed with the updated X_0 to obtain a new X_0. The iteration continues until ‖ΔX‖ < ζ, where ζ is a small positive threshold near 0.
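The only change for working condition 2 is the analytic position model and the six-element X_0; a sketch of the coordinated-turn position used inside predict_los (same assumed conventions as above):

```python
import numpy as np

def predict_position_turn(X, t):
    """Analytic position under constant speed, heading-rate and altitude,
    X = [x0, y0, z0, v0, psi0, omega0] with omega0 != 0."""
    x0, y0, z0, v0, psi0, w0 = X
    psi = psi0 + w0 * t                             # heading at time t
    return np.array([x0 + v0 / w0 * (np.sin(psi) - np.sin(psi0)),
                     y0 + v0 / w0 * (np.cos(psi0) - np.cos(psi)),
                     z0])
```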
Sixth step: refreshing the relative state in real time using the observer UAV's line-of-sight measurements
Considering that the target UAV's altitude changes very little during normal task execution, the target altitude estimated in the fifth step can be combined with the line-of-sight angle information measured in real time to estimate the relative state of the lead aircraft. No active maneuver of the observer UAV is needed at this point.
A relative position measurement of the two UAVs can be obtained from the observer UAV's measurements and the target UAV's altitude estimate ẑ_g,l:

$$\tilde{r}=\frac{\hat{z}_{g,l}-z_{g,f}}{\sin q_\varepsilon},\qquad \Delta\tilde{x}=\tilde{r}\cos q_\varepsilon\cos q_\beta,\qquad \Delta\tilde{y}=\tilde{r}\cos q_\varepsilon\sin q_\beta$$

Combining with the observer UAV's position measurements x_g,f and y_g,f gives:

$$\tilde{x}_{g,l}=x_{g,f}+\Delta\tilde{x},\qquad \tilde{y}_{g,l}=y_{g,f}+\Delta\tilde{y}$$
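A direct transcription of this pseudo-measurement (assuming a nonzero elevation angle, i.e., an altitude gap between the two aircraft):

```python
import numpy as np

def relative_position_from_los(q_eps, q_beta, z_obs, z_target_est):
    """Horizontal relative position from one LOS measurement and the
    previously estimated target altitude."""
    r = (z_target_est - z_obs) / np.sin(q_eps)   # slant range from altitude gap
    dx = r * np.cos(q_eps) * np.cos(q_beta)      # east component
    dy = r * np.cos(q_eps) * np.sin(q_beta)      # north component
    return dx, dy
```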
The target filter state at time k is X_k = [x_g,l,k, y_g,l,k, v_g,l,k, ψ_g,l,k, ω_g,l,k]^T. If |ω_g,l,k| < ζ, with ζ a small positive threshold near 0, the state mean recurrence value at time k+1 and the state covariance recurrence value P_k+1|k are calculated with the straight-line model over the filter period T:

$$x_{g,l,k+1|k}=x_{g,l,k}+v_{g,l,k}T\cos\psi_{g,l,k},\qquad y_{g,l,k+1|k}=y_{g,l,k}+v_{g,l,k}T\sin\psi_{g,l,k}$$
$$v_{g,l,k+1|k}=v_{g,l,k},\qquad \psi_{g,l,k+1|k}=\psi_{g,l,k},\qquad \omega_{g,l,k+1|k}=\omega_{g,l,k},\qquad P_{k+1|k}=J_PP_kJ_P^T+Q_k$$
where σ_a and σ_ω are the standard deviations of the speed-rate and heading-angular-acceleration processes, which are approximated as white noise in the process noise Q_k.
If |ω_g,l,k| ≥ ζ, the state mean recurrence value at time k+1 and the state covariance recurrence value P_k+1|k are calculated with the constant-heading-rate (coordinated-turn) counterpart of the above model; its Jacobian J_P differs from the straight-line case only in the following entries:
JP(:,1)=[1,0,0,0,0,0]T
JP(:,2)=[0,1,0,0,0,0]T
JP(3,5)=0,JP(4,5)=T,JP(5,5)=1
The state mean and covariance are updated with the measured position values in the x and y directions:

$$K_k=P_{k+1|k}H^T\left(HP_{k+1|k}H^T+R_{r\_xy}\right)^{-1}$$
$$\hat{X}_{k+1}=X_{k+1|k}+K_k\left(\tilde{z}_{xy}-HX_{k+1|k}\right),\qquad P_{k+1}=\left(I_5-K_kH\right)P_{k+1|k}$$

where z̃_xy = [x̃_g,l, ỹ_g,l]^T is the measurement constructed above, H = [I_2 0_2×3] selects the x- and y-position components, and R_r_xy is the covariance of that measurement. The updated state yields the relative position and velocity estimates at time k+1.
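One predict/update cycle of this filter might look as follows; the process-noise structure Q and the numerical Jacobian are assumptions standing in for the analytic J_P entries above:

```python
import numpy as np

def ekf_step(X, P, z_xy, T, R_xy, sigma_a, sigma_w, zeta=1e-6):
    """One predict/update cycle, X = [x, y, v, psi, omega],
    z_xy = position measurement in the x and y directions."""
    X = np.asarray(X, dtype=float)

    def f(s):                                    # state recurrence over one period T
        x, y, v, psi, w = s
        if abs(w) < zeta:                        # straight-line branch
            return np.array([x + v * T * np.cos(psi),
                             y + v * T * np.sin(psi), v, psi, w])
        return np.array([x + v / w * (np.sin(psi + w * T) - np.sin(psi)),
                         y + v / w * (np.cos(psi) - np.cos(psi + w * T)),
                         v, psi + w * T, w])     # coordinated-turn branch

    Xp = f(X)
    J = np.zeros((5, 5))                         # numerical stand-in for J_P
    for j in range(5):
        d = np.zeros(5)
        d[j] = 1e-6
        J[:, j] = (f(X + d) - Xp) / 1e-6
    Q = np.diag([0.0, 0.0, (sigma_a * T) ** 2, 0.0, (sigma_w * T) ** 2])
    Pp = J @ P @ J.T + Q                         # covariance recurrence P_{k+1|k}
    H = np.hstack([np.eye(2), np.zeros((2, 3))]) # observe x and y only
    K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R_xy)
    Xn = Xp + K @ (np.asarray(z_xy, dtype=float) - H @ Xp)
    Pn = (np.eye(5) - K @ H) @ Pp
    return Xn, Pn
```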
Seventh step: the observer UAV performs an active maneuver, and the target UAV's altitude estimate is corrected
To avoid relative-state estimation bias caused by the target UAV changing altitude after the fifth step has run, the procedure should return to the fifth step after a certain interval to re-correct the altitude estimate and then resume the sixth step. After another such interval the fifth step is executed again, and the cycle is maintained.
Eighth step, the task ends
And according to the task setting, if the unmanned cluster formation completes a preset task or resumes normal communication, the algorithm is terminated.

Claims (4)

1. The unmanned aerial vehicle formation relative state estimation method based on the sight measurement information only is characterized by comprising the following steps of:
The first step: determining the model of the visual sensor according to the cluster formation scale;
The second step: searching for and locking the target fixed-wing unmanned aerial vehicle, and ensuring that it stays within the visible-light pod image of the observer fixed-wing unmanned aerial vehicle;
The third step: measuring the observer fixed-wing unmanned aerial vehicle's flight state, line-of-sight angle and pod frame angles;
The fourth step: calculating the line-of-sight angles between the observer and target fixed-wing unmanned aerial vehicles according to the third step;
The fifth step: the observer fixed-wing unmanned aerial vehicle performs an active maneuver, and the initial state of the target fixed-wing unmanned aerial vehicle is estimated;
When a fixed-wing unmanned aerial vehicle executes a task, its flight speed, heading-rate and altitude are regarded as constant over a short period; based on this, the observer fixed-wing unmanned aerial vehicle maneuvers at a constant climb rate while holding its current cruise horizontal speed; during the maneuver, the line of sight between the observer and target fixed-wing unmanned aerial vehicles is computed continuously and recorded in sequence as e_LOS,1, e_LOS,2, …, e_LOS,n;
Based on the assumption that the target fixed-wing unmanned aerial vehicle's flight speed, heading-rate and altitude are constant over a short period, its motion characteristics are represented by:

$$\dot{x}_{g,l}=v_l\cos\psi_{v,l},\qquad \dot{y}_{g,l}=v_l\sin\psi_{v,l},\qquad \dot{z}_{g,l}=0,\qquad \dot{v}_l=0,\qquad \dot{\psi}_{v,l}=\omega_l$$

where x_g,l, y_g,l and z_g,l denote the target fixed-wing unmanned aerial vehicle's position in the east-north-up (ENU) inertial frame, v_l and ψ_v,l its speed and heading angle, and ω_l its heading-rate; to avoid the ambiguity caused by zero heading-rate when solving for the target fixed-wing unmanned aerial vehicle's position, the analytic expression for the target's position at each time must be discussed case by case;
(1) Working condition 1: target fixed wing unmanned aerial vehicle rectilinear motion
If the intersection points of the observer fixed-wing unmanned aerial vehicle's lines of sight with the horizontal plane at all observation times are collinear, the target fixed-wing unmanned aerial vehicle is in straight-line motion; taking the initial observation time as 0, the target's position at time t_i is:

$$x_{g,l,i}=x_{g,l,0}+v_{l,0}t_i\cos\psi_{v,l,0},\qquad y_{g,l,i}=y_{g,l,0}+v_{l,0}t_i\sin\psi_{v,l,0},\qquad z_{g,l,i}=z_{g,l,0}$$
The line-of-sight direction toward the target fixed-wing unmanned aerial vehicle at each time is obtained by combining the observer fixed-wing unmanned aerial vehicle's position at that time, namely:

$$\hat{e}_{LOS,i}=\frac{p_{l,i}-p_{f,i}}{\left\|p_{l,i}-p_{f,i}\right\|},\qquad p_{f,i}=[x_{g,f},\;y_{g,f},\;z_{g,f}]^T,\qquad p_{l,i}=[x_{g,l,i},\;y_{g,l,i},\;z_{g,l,i}]^T$$

where x_g,f, y_g,f and z_g,f are the observer unmanned aerial vehicle's own position components; the line-of-sight information estimated in this way at each time is denoted ê_LOS,i; once x_g,l,0, y_g,l,0, z_g,l,0, v_l,0 and ψ_v,l,0 at the initial time are determined, every ê_LOS,i is determined, so the line-of-sight information at each time is regarded as a function of the variable X_0:
$$X_0=[x_{g,l,0},\;y_{g,l,0},\;z_{g,l,0},\;v_{l,0},\;\psi_{v,l,0}]^T$$
Accounting for the measurement error of the line-of-sight unit direction vector at each time, X_0 is determined by minimizing J, the weighted sum of squared differences between the estimated and measured line-of-sight unit direction vectors over all of the observer fixed-wing unmanned aerial vehicle's observation times, namely:

$$J=\sum_{i=1}^{n}\left(\hat{e}_{LOS,i}-e_{LOS,i}\right)^TW_i\left(\hat{e}_{LOS,i}-e_{LOS,i}\right)$$

where
W=diag(W1,W2,...,Wn)
and W_i is built from R_e, the measurement covariance of the line-of-sight unit direction vector; with line-of-sight elevation and azimuth measurement deviations σ_ε and σ_β respectively, and the line-of-sight angle vector

q = [q_ε, q_β]^T

$$R_e=J_q\,\mathrm{diag}\!\left(\sigma_\varepsilon^2,\,\sigma_\beta^2\right)J_q^T,\qquad J_q=\frac{\partial e_{LOS}}{\partial q}=\begin{bmatrix}-\sin q_\varepsilon\cos q_\beta & -\cos q_\varepsilon\sin q_\beta\\ -\sin q_\varepsilon\sin q_\beta & \cos q_\varepsilon\cos q_\beta\\ \cos q_\varepsilon & 0\end{bmatrix}$$

and W_i = R_e,i^+ (a pseudo-inverse, since R_e has rank 2);
in actual calculation, the weights W_1, W_2, …, W_n are obtained from the line-of-sight angles at the corresponding times using the formula above;
Denote X̂_0 the current optimal estimate of X_0 and ΔX = X_0 − X̂_0; the line-of-sight unit vector at each time is approximated to first order by:

$$\hat{e}_{LOS,i}(X_0)\approx\hat{e}_{LOS,i}(\hat{X}_0)+h_i\,\Delta X,\qquad h_i=\left.\frac{\partial\hat{e}_{LOS,i}}{\partial X_0}\right|_{\hat{X}_0}$$

with the residuals Δe_LOS,i = e_LOS,i − ê_LOS,i(X̂_0) stacked into Δe_LOS, J is correspondingly expressed as
J=(ΔeLOS-HΔX)TW(ΔeLOS-HΔX)
where Δe_LOS stacks the residuals and H stacks the Jacobian blocks row-block-wise:

$$\Delta e_{LOS}=\left[\Delta e_{LOS,1}^T\;\;\Delta e_{LOS,2}^T\;\cdots\;\Delta e_{LOS,n}^T\right]^T,\qquad H=\left[h_1^T\;\;h_2^T\;\cdots\;h_n^T\right]^T$$
For X_0 such that J attains its minimum, the gradient with respect to ΔX must vanish, ∂J/∂ΔX = 0, which gives:

$$\Delta X=\left(H^TWH\right)^{-1}H^TW\,\Delta e_{LOS}$$
where each block h_i is evaluated by the chain rule through the target position p_l,i:

$$h_i=\frac{\partial\hat{e}_{LOS,i}}{\partial p_{l,i}}\frac{\partial p_{l,i}}{\partial X_0},\qquad \frac{\partial\hat{e}_{LOS,i}}{\partial p_{l,i}}=\frac{I_3-\hat{e}_{LOS,i}\hat{e}_{LOS,i}^T}{\left\|p_{l,i}-p_{f,i}\right\|}$$

and ∂p_l,i/∂X_0 follows directly from the straight-line position expression above;
after ΔX is determined from the above equation, X_0 is first updated as:

X_0 = X_0 + ΔX

ΔX is then recomputed with the updated X_0 and the update applied again to obtain a new X_0, iterating until ‖ΔX‖ < ζ, where ζ is a small positive threshold near 0;
(2) Working condition 2: target fixed-wing unmanned aerial vehicle in curvilinear motion

If the intersection points of the observer fixed-wing unmanned aerial vehicle's lines of sight with the horizontal plane at the observation times are not collinear, the target fixed-wing unmanned aerial vehicle is turning at a constant heading-rate; taking the initial observation time as 0, the target's position at time t_i is:

$$x_{g,l,i}=x_{g,l,0}+\frac{v_{l,0}}{\omega_{l,0}}\left[\sin(\psi_{v,l,0}+\omega_{l,0}t_i)-\sin\psi_{v,l,0}\right]$$
$$y_{g,l,i}=y_{g,l,0}+\frac{v_{l,0}}{\omega_{l,0}}\left[\cos\psi_{v,l,0}-\cos(\psi_{v,l,0}+\omega_{l,0}t_i)\right]$$
$$z_{g,l,i}=z_{g,l,0}$$

where x_g,l,0, y_g,l,0 and z_g,l,0 represent the target fixed-wing unmanned aerial vehicle's initial position in the ENU inertial frame, v_l,0 and ψ_v,l,0 the initial speed and heading angle, and ω_l,0 the initial heading-rate;
J is minimized by the same method as in working condition 1, which gives:

$$\Delta X=\left(H^TWH\right)^{-1}H^TW\,\Delta e_{LOS}$$
where, as in working condition 1, H stacks the Jacobian blocks row-block-wise, H = [h_1^T h_2^T ⋯ h_n^T]^T, and

$$X_0=[x_{g,l,0}\;\;y_{g,l,0}\;\;z_{g,l,0}\;\;v_{l,0}\;\;\psi_{v,l,0}\;\;\omega_{l,0}]^T$$
after ΔX is determined from the above equation, X_0 is first updated as:

X_0 = X_0 + ΔX

ΔX is then recomputed with the updated X_0 to obtain a new X_0, iterating until ‖ΔX‖ < ζ, where ζ is a small positive threshold near 0;
The sixth step: filtering with the observer fixed-wing unmanned aerial vehicle's line-of-sight measurement information to estimate the relative state of the observer fixed-wing unmanned aerial vehicle and the target unmanned aerial vehicle;
the altitude of the target fixed-wing unmanned aerial vehicle estimated in the fifth step is combined with the line-of-sight angle information measured in real time to estimate the relative state of the lead aircraft; no active maneuver of the observer fixed-wing unmanned aerial vehicle is needed at this point;
A relative position measurement of the two unmanned aerial vehicles is obtained from the observer fixed-wing unmanned aerial vehicle's measurements and the target fixed-wing unmanned aerial vehicle's altitude estimate ẑ_g,l:

$$\tilde{r}=\frac{\hat{z}_{g,l}-z_{g,f}}{\sin q_\varepsilon},\qquad \Delta\tilde{x}=\tilde{r}\cos q_\varepsilon\cos q_\beta,\qquad \Delta\tilde{y}=\tilde{r}\cos q_\varepsilon\sin q_\beta$$

Combining with the observer fixed-wing unmanned aerial vehicle's position measurements x_g,f and y_g,f gives:

$$\tilde{x}_{g,l}=x_{g,f}+\Delta\tilde{x},\qquad \tilde{y}_{g,l}=y_{g,f}+\Delta\tilde{y}$$
The target filter state at time k is X_k = [x_g,l,k, y_g,l,k, v_g,l,k, ψ_g,l,k, ω_g,l,k]^T; if |ω_g,l,k| < ζ, with ζ a small positive threshold near 0, the state mean recurrence value at time k+1 and the state covariance recurrence value P_k+1|k are calculated with the straight-line model over the filter period T:

$$x_{g,l,k+1|k}=x_{g,l,k}+v_{g,l,k}T\cos\psi_{g,l,k},\qquad y_{g,l,k+1|k}=y_{g,l,k}+v_{g,l,k}T\sin\psi_{g,l,k}$$
$$v_{g,l,k+1|k}=v_{g,l,k},\qquad \psi_{g,l,k+1|k}=\psi_{g,l,k},\qquad \omega_{g,l,k+1|k}=\omega_{g,l,k},\qquad P_{k+1|k}=J_PP_kJ_P^T+Q_k$$
where σ_a and σ_ω are the standard deviations of the speed-rate and heading-angular-acceleration processes, which are approximated as white noise in the process noise Q_k;
If |ω_g,l,k| ≥ ζ, the state mean recurrence value at time k+1 and the state covariance recurrence value P_k+1|k are calculated with the constant-heading-rate (coordinated-turn) counterpart of the above model; its Jacobian J_P differs from the straight-line case only in the following entries:
JP(:,1)=[1,0,0,0,0,0]T
JP(:,2)=[0,1,0,0,0,0]T
JP(3,5)=0,JP(4,5)=T,JP(5,5)=1
The state mean and covariance are updated with the measured position values in the x and y directions:

$$K_k=P_{k+1|k}H^T\left(HP_{k+1|k}H^T+R_{r\_xy}\right)^{-1}$$
$$\hat{X}_{k+1}=X_{k+1|k}+K_k\left(\tilde{z}_{xy}-HX_{k+1|k}\right),\qquad P_{k+1}=\left(I_5-K_kH\right)P_{k+1|k}$$

where z̃_xy = [x̃_g,l, ỹ_g,l]^T is the measurement constructed above, H = [I_2 0_2×3] selects the x- and y-position components, and R_r_xy is the covariance of that measurement; the updated state yields the relative position and velocity estimates at time k+1;
The seventh step: the observer fixed-wing unmanned aerial vehicle performs an active maneuver, and the altitude estimate of the target fixed-wing unmanned aerial vehicle is corrected
To avoid relative-state estimation bias caused by the target fixed-wing unmanned aerial vehicle changing altitude after the fifth step has run, the procedure returns to the fifth step after a certain correction time interval to re-correct the altitude estimate, then resumes the sixth step; whenever the speed estimate of the target fixed-wing unmanned aerial vehicle again changes significantly, the fifth step is re-executed, and the cycle is maintained; the correction time interval is defined as the time from the end of the last active maneuver to the detection of a significant change in the target fixed-wing unmanned aerial vehicle's speed estimate; if the speed estimate stays stable around a fixed value, no active maneuver is needed for altitude correction;
Eighth step, the task ends
And according to the task setting, if the unmanned cluster formation completes a preset task or resumes normal communication, the algorithm is terminated.
2. The method for estimating the relative state of an unmanned aerial vehicle formation based only on sight measurement information according to claim 1, wherein the flight state of the observer fixed-wing unmanned aerial vehicle in the third step is measured by its inertial navigation system or GPS, and the pod frame angles of the fixed-wing unmanned aerial vehicle are measured by sensors of the pod's own servo system.
3. The method for estimating the relative state of an unmanned aerial vehicle formation based only on sight measurement information according to claim 1, wherein the line-of-sight angle in the fourth step is calculated as follows:
the line-of-sight direction between the observer and target fixed-wing unmanned aerial vehicles is calculated by combining the observer's attitude angles with the pod frame angles, specifically:

$$e_{LOS,g}=\left(M_{b2c}M_{g2b}\right)^Te_{LOS,c}$$

where e_LOS,c = [1,0,0]^T, M_g2b is the direction-cosine matrix from the ENU inertial frame to the body frame, determined by the pitch angle ϑ_b, yaw angle ψ_b and roll angle γ_b, and M_b2c is the direction-cosine matrix from the body frame to the pod frame, determined by the elevation frame angle ε_c and the azimuth frame angle β_c;
likewise e_LOS,g is represented by the line-of-sight elevation q_ε and azimuth q_β:

$$e_{LOS,g}=[\cos q_\varepsilon\cos q_\beta\;\;\cos q_\varepsilon\sin q_\beta\;\;\sin q_\varepsilon]^T$$

and the corresponding angles are solved from e_LOS,g as:

$$q_\varepsilon=\arcsin\left(e_{LOS,g}(3)\right),\qquad q_\beta=\operatorname{atan2}\!\left(e_{LOS,g}(2),\,e_{LOS,g}(1)\right)$$
4. The method for estimating the relative state of an unmanned aerial vehicle formation based only on sight measurement information according to claim 3, wherein in the fourth step, to avoid ambiguity in solving for the line-of-sight angles, the line-of-sight information is described directly by the equivalent vector e_LOS,g.
CN202111217659.9A 2021-10-19 2021-10-19 Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only Active CN113759982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111217659.9A CN113759982B (en) 2021-10-19 2021-10-19 Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111217659.9A CN113759982B (en) 2021-10-19 2021-10-19 Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only

Publications (2)

Publication Number Publication Date
CN113759982A (en) 2021-12-07
CN113759982B (en) 2024-05-28

Family

ID=78784147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111217659.9A Active CN113759982B (en) 2021-10-19 2021-10-19 Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only

Country Status (1)

Country Link
CN (1) CN113759982B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162094A (en) * 2019-06-13 2019-08-23 中国人民解放军军事科学院国防科技创新研究院 A kind of close/intra control method of view-based access control model metrical information
CN110703798A (en) * 2019-10-23 2020-01-17 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle formation flight control method based on vision
CN110737283A (en) * 2019-11-04 2020-01-31 中国人民解放军军事科学院国防科技创新研究院 visual cluster-oriented formation decoupling control method
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN111650963A (en) * 2020-06-03 2020-09-11 中国人民解放军军事科学院国防科技创新研究院 Visual cluster formation control method for vertical take-off and landing fixed wing unmanned aerial vehicle
CN112363528A (en) * 2020-10-15 2021-02-12 北京理工大学 Unmanned aerial vehicle anti-interference cluster formation control method based on airborne vision

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN110162094A (en) * 2019-06-13 2019-08-23 中国人民解放军军事科学院国防科技创新研究院 A kind of close/intra control method of view-based access control model metrical information
CN110703798A (en) * 2019-10-23 2020-01-17 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle formation flight control method based on vision
CN110737283A (en) * 2019-11-04 2020-01-31 中国人民解放军军事科学院国防科技创新研究院 visual cluster-oriented formation decoupling control method
CN111650963A (en) * 2020-06-03 2020-09-11 中国人民解放军军事科学院国防科技创新研究院 Visual cluster formation control method for vertical take-off and landing fixed wing unmanned aerial vehicle
CN112363528A (en) * 2020-10-15 2021-02-12 北京理工大学 Unmanned aerial vehicle anti-interference cluster formation control method based on airborne vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Capture region of realistic true proportional navigation against arbitrarily maneuvering targets (现实真比例导引拦截任意机动目标捕获区域); Bai Zhihui, Li Kebo, Su Wenshan, Chen Lei; Acta Aeronautica et Astronautica Sinica (航空学报), Vol. 41, No. 8, 332-342 *
Relative attitude determination method for two-UAV formations considering geometric constraints (考虑几何约束的无人机双机编队相对姿态确定方法); Zhang Xu, Cui Naigang, Wang Xiaogang, Cui Hutao, Qin Wutao; Tactical Missile Technology (战术导弹技术), No. 1, 17-21, 39 *

Also Published As

Publication number Publication date
CN113759982A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
US10657832B2 (en) Method and apparatus for target relative guidance
Redding et al. Vision-based target localization from a fixed-wing miniature air vehicle
CN109911188B (en) Bridge detection unmanned aerial vehicle system in non-satellite navigation and positioning environment
CN101270993B (en) Remote high-precision independent combined navigation locating method
CN109709537B (en) Non-cooperative target position and speed tracking method based on satellite formation
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN105929836B (en) Control method for quadrotor
CN111238469B (en) Unmanned aerial vehicle formation relative navigation method based on inertia/data chain
CN112229405A (en) Unmanned aerial vehicle target motion estimation method based on image tracking and laser ranging
CN110793515A (en) Unmanned aerial vehicle attitude estimation method based on single-antenna GPS and IMU under large-mobility condition
CN109143303B (en) Flight positioning method and device and fixed-wing unmanned aerial vehicle
CN111024091A (en) Three-dimensional attitude algorithm for indoor flight of vision-assisted micro unmanned aerial vehicle
CN111504323A (en) Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion
Wang et al. Monocular vision and IMU based navigation for a small unmanned helicopter
CN109186614B (en) Close-range autonomous relative navigation method between spacecrafts
CN108981691A (en) A kind of sky polarised light integrated navigation filters online and smoothing method
Miller et al. Optical Flow as a navigation means for UAV
CN113670301A (en) Airborne SAR motion compensation method based on inertial navigation system parameters
CN113759982B (en) Unmanned aerial vehicle formation relative state estimation method based on sight measurement information only
CN113129377A (en) Three-dimensional laser radar rapid robust SLAM method and device
Gonçalves et al. Vision-based automatic approach and landing of fixed-wing aircraft using a dense visual tracking
Emran et al. A cascaded approach for quadrotor's attitude estimation
KR101862065B1 (en) Vision-based wind estimation apparatus and method using flight vehicle
Li et al. Small UAV autonomous localization based on multiple sensors fusion
Liu et al. Motion estimation using optical flow sensors and rate gyros

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant