CN115511955B - Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof - Google Patents


Publication number
CN115511955B
Authority
CN
China
Prior art keywords
coordinate system
target object
node
axis
sensing node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211458585.2A
Other languages
Chinese (zh)
Other versions
CN115511955A (en)
Inventor
郑灿伦
沈佳豪
陈华奔
赵世钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Westlake University
Original Assignee
Westlake University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Westlake University filed Critical Westlake University
Priority to CN202211458585.2A priority Critical patent/CN115511955B/en
Publication of CN115511955A publication Critical patent/CN115511955A/en
Application granted granted Critical
Publication of CN115511955B publication Critical patent/CN115511955B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Remote Sensing (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Optimization (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Computing Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and an application thereof. The method comprises the following steps: acquiring panoramic camera image data on the current observation sensing node and identifying the position of a target object in the image; defining an image coordinate system, a camera coordinate system, a node coordinate system and a world coordinate system; obtaining the direction of the target object in the camera coordinate system from the pixel position of the target object in the image coordinate system and the camera internal parameters; projecting the currently detected target direction into the node coordinate system and converting the direction of the current target object into the world coordinate system through a rotation matrix; performing state estimation on the target object once any sensing node detects the target object and obtains information from at least one adjacent sensing node; and performing state estimation on the target object through a space-time collaborative state estimation algorithm to obtain an estimated target state of the target object. The target object is estimated and positioned through the multiple sensing nodes, so it can be better observed and tracked.

Description

Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof
Technical Field
The application relates to the technical field of unmanned aerial vehicle detection, in particular to a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof.
Background
Unmanned aerial vehicles are an emerging research field; they exhibit powerful task execution capabilities and are applied in many fields. However, unmanned aerial vehicles are small, fly fast and are cheap to manufacture, and once exploited by lawless persons they can pose a great threat to public safety.
At present, counter-drone technology is developing rapidly. The primary purpose of the countermeasures is to track and estimate the state of the target in real time. However, because the unmanned aerial vehicle is small, flies fast and may appear with no prior information, a real-time state estimation algorithm is particularly important.
The existing methods mainly comprise radar detection and camera detection combined with a laser range finder. In most existing distributed Kalman filtering algorithms, the information exchanged between nodes includes the estimated target state and the estimated covariance matrix; because of its size, the covariance matrix occupies a large amount of communication resources, which limits the update frequency and the scale to which the algorithm can be extended.
The distributed consensus Kalman filtering algorithm adds a consensus algorithm on top of the original Kalman filtering algorithm to achieve convergence of global information, but the optimality of the global estimate cannot be proved theoretically.
Therefore, the existing means cannot effectively locate and track the unmanned aerial vehicle, and a distributed ground-to-air cooperative method for estimating the state of an unknown unmanned aerial vehicle and an application thereof are needed to solve the problems in the prior art, in particular the series of problems related to target detection and cooperative target state estimation in a panoramic-camera-based multi-node cooperative capture system.
Disclosure of Invention
The embodiment of the application provides a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof, and aims to solve a series of related problems of target detection and cooperative target state estimation in a multi-node cooperative capture system based on a panoramic camera in the prior art.
The core technology of the invention is an algorithm for cooperative target state estimation based on multiple panoramic sensing nodes. One application scenario is that a plurality of distributed sensing devices cover a specific detection area; based on the panoramic cameras alone, aerial targets intruding into the area are detected and their real-time states are cooperatively estimated.
In a first aspect, the application provides a method for estimating states of unknown unmanned aerial vehicles through distributed ground-to-air coordination, the method comprising the following steps:
s00, acquiring panoramic camera image data on the current observation sensing node, and identifying the position of a target object in the panoramic camera image data;
s10, respectively defining an image coordinate system, a camera coordinate system, a node coordinate system and a world coordinate system, and obtaining a pitch angle, a roll angle and a yaw angle of a current observation sensing node and a position of a sensing node center under the world coordinate system to obtain a rotation matrix from the node coordinate system to the world coordinate system;
s20, obtaining the direction of the target object in a camera coordinate system according to the pixel position of the target object in the image coordinate system and camera internal parameters;
s30, projecting the current detected target direction into a node coordinate system, and converting the direction of the current target object into a world coordinate system through a rotation matrix;
s40, after any sensing node detects a target object and at least obtains information of one adjacent sensing node, performing state estimation on the target object;
s50, performing state estimation on the target object through a space-time collaborative state estimation algorithm to obtain an estimated target state of the target object, wherein the space-time collaborative state estimation algorithm adopts an iterative mode to solve and sequentially comprises three parts of prediction, information updating and correction;
each observation sensing node can send the direction of the monitored target object, the estimated target state and the position of the observation sensing node to a communication network and receive information sent by adjacent sensing nodes.
Further, in step S00, the panoramic camera of each observation sensing node is composed of three monocular cameras, and the target with the highest confidence in the observation direction in each monocular camera is taken as the target observed by the current sensing node.
Further, in step S10, a pitch angle and a roll angle from the node coordinate system to the world coordinate system are obtained through the imu attitude sensor on the current sensing node, and a yaw angle and a position of the sensing node center under the world coordinate system are obtained through the rtk position yaw sensor on the current sensing node.
Further, in step S50, the motion of the target object is described by a linear time-invariant system.
Furthermore, the image coordinate system takes the up-down direction of the panoramic camera image data as a y-axis, the horizontal direction as an x-axis, and any one of four corners of the panoramic camera image data as a coordinate system origin; the camera coordinate system takes the camera's focal point as an origin, the direction perpendicular to the imaging plane as an x axis, a y axis parallel to the x axis of the image coordinate system, and a z axis parallel to the y axis of the image coordinate system; the node coordinate system takes one point in the observation sensing nodes as an origin, takes the length direction of the observation sensing nodes as an x axis, takes the width direction of the observation sensing nodes as a y axis, and takes the height direction of the observation sensing nodes as a z axis; the world coordinate system takes preset longitude and latitude altitude coordinates as an origin, takes the longitude as an x axis, takes the latitude as a y axis, and takes the gravity direction as a z axis.
In a second aspect, the present application provides a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation apparatus, including:
the device comprises an airborne computer, a panoramic camera, an imu attitude sensor, an rtk position yaw sensor, a battery, a wireless communication module and a bottom plate;
the airborne computer stores a computer program and is respectively and electrically connected with the panoramic camera, the imu attitude sensor, the rtk position yaw sensor, the battery and the wireless communication module, the computer program stores a program code for controlling a process to execute the process, and the process comprises a method for estimating the state of the unknown unmanned aerial vehicle according to the distributed ground-to-air cooperation;
the airborne computer, the panoramic camera, the imu attitude sensor, the rtk position yaw sensor, the battery and the wireless communication module are all arranged on the bottom plate.
In a third aspect, the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the above method for estimating the state of an unknown unmanned aerial vehicle by distributed ground-to-air coordination.
In a fourth aspect, the present application provides a readable storage medium having stored therein a computer program comprising program code for controlling a process to execute a process, the process comprising the above distributed ground-to-air cooperative method for estimating the state of an unknown unmanned aerial vehicle.
The main contributions and innovation points of the invention are as follows:
1. compared with the prior art (in most existing distributed Kalman filtering algorithms, the information exchanged between nodes comprises the estimated target state and the estimated covariance matrix, and the covariance matrix occupies a large amount of communication resources because of its size, which limits the update frequency and the scale to which the algorithm can be extended), the method performs state estimation from a least-squares perspective, and the communicated information comprises only the measurement information and the estimated states of the adjacent nodes, without communicating a covariance matrix; this greatly reduces the communication burden and improves the scalability of the algorithm and the expandability of the target estimation dimension;
2. compared with the prior art (a consensus algorithm is added on top of the original Kalman filtering algorithm to achieve convergence of global information, but the optimality of the global estimate cannot be proved theoretically), the optimal estimated state is obtained by deriving a designed objective function, so the algorithm has a more compact structure, fewer steps and higher precision;
3. compared with the prior art (the traditional consensus Kalman filtering algorithms, namely the measurement-consensus, estimate-consensus, and hybrid measurement-and-estimate-consensus Kalman filtering algorithms), the method does not need to communicate a covariance matrix and therefore has smaller communication traffic; simulation verifies that it has higher estimation precision and faster response speed, so that each sensing node can better observe and track the target object;
4. compared with the prior art (the prior patent CN114581480B), firstly, the present application no longer performs observation by means of an unmanned aerial vehicle but is based on multiple sensing nodes, so observation can be performed from the ground; meanwhile, the present application does not need weight-switching control, that is, it does not need to switch the values of wself and wneighbor back and forth according to the obtained neighbor information, so this problem does not arise; the prior patent also needs to calculate the position of the target with an additional least-squares step (see formula 11 in CN114581480B), whose computational cost grows as the neighbor information increases, and this problem likewise does not arise here; finally, the communicated information of the prior patent comprises the measurement information, the estimated state and the estimated covariance matrix, whereas here only the measurement information and the estimated state are needed, which effectively reduces the amount of communicated data and thus the communication burden;
5. compared with the prior art (the prior patent CN114581480B), the calculation formulas of the method are significantly reduced, and the calculation time of a single iteration can be greatly shortened, thereby improving the response speed; in addition, the prior patent realizes target estimation through a bias engineering method and cannot theoretically prove that its result is optimal.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a multi-node cooperative target state estimation method based on panoramic machine vision according to an embodiment of the application;
FIG. 2 is a schematic diagram of a single-node coordinate system definition and target detection according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a cooperative target state estimation algorithm according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a single node structure according to an embodiment of the present application;
FIG. 5 is a comparison graph I of simulation results of a distributed co-estimation algorithm according to an embodiment of the application;
FIG. 6 is a comparison graph II of simulation results of a distributed co-estimation algorithm according to an embodiment of the application;
fig. 7 is a schematic hardware structure diagram of an electronic device according to an embodiment of the application.
In the figure, 1. an on-board computer; 2. a panoramic camera; 3. an imu attitude sensor; 4. an rtk position yaw sensor; 5. a battery; 6. a wireless communication module; 7. a bottom plate.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, the methods may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Example one
The application aims to provide a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method in which the target state is estimated cooperatively by multiple panoramic sensing nodes. One preferred application scenario is that a plurality of distributed sensing devices cover a specific detection area; based on the panoramic cameras 2 alone, aerial targets intruding into the area are detected and their real-time states are cooperatively estimated. In this embodiment, each node can communicate with at least one neighboring node and obtain its detection information.
The method mainly comprises two parts, referring to fig. 1. First, the small onboard computer 1 obtains the direction of the target object (target unmanned aerial vehicle) in the camera coordinate system from the detection image, obtains the attitude and position of the node (sensing node) through the imu attitude sensor 3 and the rtk position yaw sensor 4 of the rtk positioning and orientation system, and converts the measured direction information into a pseudo-linear equation; this corresponds to the data acquisition part in fig. 1. Then, the sensing node exchanges information with its adjacent nodes through the communication network, including the pseudo-linear measurements, the pseudo-linear measurement matrices and the estimated target states of the adjacent nodes, and realizes state estimation of the target; this corresponds to the collaborative estimation part in fig. 1.
Specifically, as shown in fig. 4, each observation sensing node includes an onboard computer 1, a panoramic camera 2, an imu attitude sensor 3, an rtk position yaw sensor 4 (rtk positioning and orientation system), a battery 5, a wireless communication module 6 and a bottom plate 7;
wherein, the panoramic camera 2 of each observation sensing node consists of three or more monocular cameras.
The airborne computer 1 stores a computer program and is respectively electrically connected with the panoramic camera 2, the imu attitude sensor 3, the rtk position yaw sensor 4, the battery 5 and the wireless communication module 6, the computer program stores a program code for controlling a process to execute the process, and the process comprises a method for estimating the state of the unknown unmanned aerial vehicle according to the distributed ground-to-air cooperation;
the onboard computer 1, the panoramic camera 2, the imu attitude sensor 3, the rtk position yaw sensor 4, the battery 5 and the wireless communication module 6 are all arranged on the bottom plate 7.
The embodiment of the application provides a method for estimating the state of an unknown unmanned aerial vehicle by distributed ground-to-air coordination, and particularly, with reference to fig. 1, the method comprises the following steps:
s00, acquiring image data of the panoramic camera 2 on the current observation sensing node, and identifying the position of a target object in the image data of the panoramic camera 2;
the following sensing nodes and observation sensing nodes refer to the same feature; target and target drone refer to the same feature.
In this embodiment, the panoramic camera 2 captures an image to obtain image data of the panoramic camera 2, and then the position of the target in the image can be obtained by using an existing unmanned aerial vehicle detection algorithm, such as the algorithm disclosed in the prior patent CN114581480B, which is a conventional operation and is not described herein again.
S10, respectively defining an image coordinate system, a camera coordinate system, a node coordinate system and a world coordinate system, and obtaining a pitch angle, a roll-over angle and a yaw angle of a current sensing node and a position of a center of the sensing node under the world coordinate system to obtain a rotation matrix from the node coordinate system to the world coordinate system;
because the sensing node may not sit completely flat on the ground (the ground may be sloped or unstable), the pitch angle and the roll angle are measured relative to the horizontal plane. In position estimation, the error caused by a direction error grows roughly with the square of the distance, so the direction information in the world coordinate system must be measured accurately.
In the present embodiment, as shown in fig. 2, four coordinate systems are defined: the image coordinate system $S_{picture}$, the camera coordinate system $S_{camera}$, the node coordinate system $S_{Node}$ and the world coordinate system $S_{Earth}$ (which may be a global coordinate system).
$S_{picture}$: the origin is the upper left corner of the image, the x axis points right along the long side of the image, and the y axis points down along the short side of the image;
$S_{camera}$: the x axis is perpendicular to the imaging plane and points outwards, the y axis is parallel to and points in the same direction as the x axis of the image coordinate system, and the z axis is parallel to and points in the same direction as the y axis of the image coordinate system;
$S_{Node}$: the origin is the center of the T-shaped bottom plate 7, the z axis is perpendicular to the bottom plate 7 and points downwards, the y axis points forward along the short side of the T-shaped bottom plate 7, and the x axis lies along the long side in accordance with the right-hand rule;
$S_{Earth}$: the origin is a preset longitude-latitude-altitude coordinate $P_0^{Earth}$, the x axis points north along the meridian, the y axis points east along the parallel, and the z axis points downward in accordance with the right-hand rule.
The definition of the coordinate system is substantially similar to that of the prior patent CN114581480B, except that an improved point (node coordinate system) is added in the present application.
S20, obtaining the direction of the target object in a camera coordinate system according to the pixel position of the target object in the image coordinate system and camera internal parameters;
in the present embodiment, as shown in fig. 4, it is assumed that the pixel position of the target drone (target object) in the image is detected as [ x, y] picture The target unmanned aerial vehicle S can be obtained by means of formula 1 according to camera internal parameters camera Is as follows
Figure DEST_PATH_IMAGE001
The direction is as follows:
equation 1
Figure DEST_PATH_IMAGE002
In the formulax c ,y c Is centered in the imageS picture Coordinates of (5);
Figure DEST_PATH_IMAGE003
is the focal length of the camera; imaging die size for camerad x d y
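For illustration only, the following is a minimal numpy sketch of the pixel-to-direction conversion of Equation 1 under the axis convention defined above (the camera x axis is the optical axis); the numeric intrinsic parameters in the example are assumed values, not values taken from this application.

```python
import numpy as np

def pixel_to_camera_direction(x, y, x_c, y_c, f, d_x, d_y):
    """Direction of the target in S_camera from its pixel position (Equation 1 sketch).

    Axis convention from the text: the camera x axis is perpendicular to the
    imaging plane (optical axis), the y axis is parallel to the image x axis,
    and the z axis is parallel to the image y axis.
    """
    g = np.array([f, (x - x_c) * d_x, (y - y_c) * d_y])
    return g / np.linalg.norm(g)  # unit direction vector

# Example with assumed intrinsics: 4 mm focal length, 3 um pixels,
# a 1920x1080 image centered at (960, 540).
g_cam = pixel_to_camera_direction(x=1200, y=400, x_c=960, y_c=540,
                                  f=4e-3, d_x=3e-6, d_y=3e-6)
print(g_cam)
```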
S30, projecting the current detected target direction into a node coordinate system, and converting the direction of the current target object into a world coordinate system through a rotation matrix;
in the present embodiment, since the panoramic camera 2 of each sensing node is composed of three monocular cameras, the detection direction vector in the camera coordinate system
Figure DEST_PATH_IMAGE004
The transformation to the nodal coordinate system is:
equation 2
Figure DEST_PATH_IMAGE005
In the formula
Figure DEST_PATH_IMAGE006
And
Figure DEST_PATH_IMAGE007
respectively camera coordinate system S camera To the node coordinate system S Node Of the rotation matrix
Figure DEST_PATH_IMAGE008
And translation matrix
Figure DEST_PATH_IMAGE009
. Because the rotation and translation matrixes of the three cameras relative to the node coordinate system are different, the three cameras need to be calibrated respectively under vicon equipment or calculated according to the design size parameters of the cameras. And finally, taking the target with the highest confidence level in all detection directions as the target detected by the node, and converting the target into a world coordinate system.
The confidence degree refers to the score attached to each of the candidate detections obtained in image detection: the candidates are ranked from high to low confidence, and the higher the confidence, the more likely the candidate is the real target.
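A small sketch of the confidence-based selection just described; the candidate directions and confidence values below are made-up illustrative numbers.

```python
import numpy as np

# Candidate detections from the node's monocular cameras: (direction, confidence) pairs.
detections = [
    (np.array([0.90, 0.10, 0.42]), 0.62),
    (np.array([0.80, 0.05, 0.60]), 0.91),
    (np.array([0.70, 0.30, 0.65]), 0.40),
]
best_direction, best_confidence = max(detections, key=lambda d: d[1])  # highest confidence wins
print(best_direction, best_confidence)
```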
For the correspondence between the sensing node and the world coordinate system, the pitch angle $\theta$ and the roll angle $\phi$ from the node coordinate system to the world coordinate system are obtained through the imu attitude sensor 3, and the yaw angle $\psi$ and the position of the sensing node center under the world coordinate system are obtained through the rtk position yaw sensor 4. The rotation matrix from the node coordinate system to the world coordinate system can then be obtained:

Equation 3: $R_{Node2Earth} = R_{Earth2Node}^{\top}$

and the detected direction of the target object is converted into the world coordinate system:

Equation 4: $g_j^{Earth} = R_{Node2Earth}\, g_j^{Node}$

where $g_j^{Earth}$ and $g_j^{Node}$ are respectively the unit vectors from the j-th sensing node to the target unmanned aerial vehicle in the world coordinate system and the node coordinate system, and $R_{Earth2Node}$ is the rotation matrix from the world coordinate system $S_{Earth}$ to the node coordinate system $S_{Node}$, composed of $R_{pitch}$, $R_{roll}$ and $R_{yaw}$, the rotation matrices of the pitch, roll and yaw of the sensing node given by Equation 5, in which $\theta$, $\phi$ and $\psi$ are respectively the pitch angle, roll angle and yaw angle of the sensing node.
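To make the chain of frame transformations of Equations 2–5 concrete, the sketch below rotates a camera-frame direction into the node frame and then into the world frame. The identity extrinsic rotation and the roll-pitch-yaw composition order used here are assumptions for illustration; in practice the extrinsics come from calibration and the composition order follows the node's attitude convention.

```python
import numpy as np

def rot_x(a):  # elementary rotation about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # elementary rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # elementary rotation about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def direction_to_world(g_cam, R_cam2node, roll, pitch, yaw):
    """Rotate a camera-frame unit direction into the world frame (Equations 2-4 sketch).

    R_earth2node is composed from the node's roll/pitch/yaw; the composition
    order used here (roll about x, pitch about y, yaw about z) is an assumption.
    """
    g_node = R_cam2node @ g_cam                              # Equation 2 (rotation part)
    R_earth2node = rot_x(roll) @ rot_y(pitch) @ rot_z(yaw)   # Equation 5 (assumed order)
    R_node2earth = R_earth2node.T                            # Equation 3
    return R_node2earth @ g_node                             # Equation 4

# Example: camera looking along the node x axis, node tilted 2 degrees in pitch.
g_cam = np.array([1.0, 0.0, 0.0])
g_earth = direction_to_world(g_cam, np.eye(3), roll=0.0,
                             pitch=np.deg2rad(2.0), yaw=np.deg2rad(30.0))
print(g_earth)
```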
s40, after any sensing node detects a target object and at least obtains information of one adjacent sensing node, performing state estimation on the target object;
each sensing node can send the direction of the monitored target object, the estimated target state and the position of the sensing node to a communication network and receive information sent by adjacent sensing nodes.
And S50, as shown in FIG. 3, performing state estimation on the target object through a space-time cooperation state estimation algorithm to obtain an estimated target state of the target object, wherein the space-time cooperation state estimation algorithm is solved in an iterative mode and sequentially comprises three parts of prediction, information updating and correction.
In this embodiment, the motion of the target is described by a linear time-invariant system:

Equation 6: $x_{k+1} = A_k\, x_k + w_k$

where $x_k$ is the position and velocity state of the target object at time k, $w_k$ is the system error, and $A_k$ is the state transition matrix of the target at time k, whose value is

$A_k = \begin{bmatrix} I_{3\times 3} & dt\, I_{3\times 3} \\ 0 & I_{3\times 3} \end{bmatrix}$

where $dt$ is the sampling time interval and $I_{3\times 3}$ is the 3 × 3 identity matrix.
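A short numpy sketch of the constant-velocity state transition matrix described above; the sampling interval and the state ordering (position first, then velocity) in the example are assumptions for illustration.

```python
import numpy as np

def transition_matrix(dt):
    """State transition matrix A_k of the constant-velocity model described above."""
    I3 = np.eye(3)
    Z3 = np.zeros((3, 3))
    return np.block([[I3, dt * I3],
                     [Z3, I3]])

A = transition_matrix(dt=0.1)                      # assumed 10 Hz sampling interval
x_k = np.array([0.0, 0.0, 10.0, 1.0, 0.0, 0.0])    # [position; velocity], assumed ordering
print(A @ x_k)                                     # position advances by dt * velocity
```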
In this embodiment, the observation of the linear time-invariant system is direction information. The observation equations are Equations 7–10, in which $g_{i,k}$ is the direction of the target in the world coordinate system observed by the i-th sensing node at time k, $p_i$ is the position of the i-th sensing node in the world coordinate system, and $z_{i,k}$ and $H_{i,k}$ are the measurement and the measurement equation after the pseudo-linear conversion.
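The exact forms of Equations 7–10 are not reproduced here; one standard way of writing the pseudo-linear conversion of a bearing measurement, consistent with the definitions above, is the following illustrative sketch (not necessarily the exact form used in this application):

```latex
% Illustrative pseudo-linear conversion of a bearing (direction) measurement.
% g_{i,k} : unit direction from node i to the target at time k (world frame)
% p_i     : position of node i in the world frame;   x_k = [p_k^T, v_k^T]^T
\begin{align}
P_{i,k} &= I_3 - g_{i,k}\, g_{i,k}^{\top}
  && \text{projector onto the plane orthogonal to } g_{i,k} \\
P_{i,k}\,(p_k - p_i) &\approx 0
  && \text{the target lies along the observed direction} \\
z_{i,k} = P_{i,k}\, p_i, \quad
H_{i,k} &= \begin{bmatrix} P_{i,k} & 0_{3\times 3}\end{bmatrix}
  && \Rightarrow\ H_{i,k}\, x_k \approx z_{i,k}
\end{align}
```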
In the embodiment, the space-time collaborative state estimation algorithm adopts an iterative method to solve. The method comprises three parts of prediction, information updating and correction.
The prediction part is:

Equation 11: $\bar{x}_{i,k} = A_{k-1}\,\hat{x}_{i,k-1}$

where $\hat{x}_{i,k-1}$ is the optimal target estimation state of the i-th observation sensing node at time k−1 and $\bar{x}_{i,k}$ is the predicted target state of the i-th observation sensing node at time k. Equation 12 likewise propagates the estimated covariance matrix $P_{i,k-1}$ at time k−1 through the state transition matrix to obtain the predicted covariance matrix $M_{i,k}$ at time k.

The information updating part is given by Equations 13–15, which involve the measurement error, the estimation error and the updated information matrix; the adjustable parameters in these equations represent, respectively, the confidence level of the measurement, the confidence level of the node's own estimate and the confidence level of the neighboring nodes' estimates.

Finally, the state correction is given by Equations 16–18, which compute the gain, apply a historical attenuation coefficient, and produce the corrected state of the target object.

Equations 11–18 above constitute one iteration of the space-time collaborative state estimation algorithm.
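Because Equations 13–18 are referred to above only by their numbers, the following sketch shows how a prediction / information-update / correction iteration of this kind can be organized, with a simple weighted least-squares style fusion standing in for the exact update rules; every specific formula in this sketch is an assumption, not the algorithm of this application.

```python
import numpy as np

def predict(x_hat, P, A):
    """Prediction: propagate the previous estimate and covariance (Equations 11-12 style)."""
    return A @ x_hat, A @ P @ A.T

def update(x_bar, M, z, H, neighbor_states, w_measure, w_self, w_neighbor):
    """Generic weighted information update and correction (stand-in for Equations 13-18).

    Fuses the node's own pseudo-linear measurement (z, H), its own prediction
    x_bar, and the neighboring nodes' state estimates, weighted by three
    confidence parameters. This is NOT the exact rule of the application,
    only a structurally similar placeholder.
    """
    n = x_bar.size
    M_inv = np.linalg.inv(M)
    S = w_self * M_inv + w_measure * (H.T @ H) \
        + w_neighbor * len(neighbor_states) * np.eye(n)       # information matrix
    b = w_self * (M_inv @ x_bar) + w_measure * (H.T @ z) \
        + w_neighbor * sum(neighbor_states)                   # information vector
    x_hat = np.linalg.solve(S, b)        # corrected state
    P_new = np.linalg.inv(S)             # updated covariance for the next iteration
    return x_hat, P_new

# Tiny usage example with a 6-dimensional [position; velocity] state (all values assumed).
A = np.block([[np.eye(3), 0.1 * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
x_hat, P = np.zeros(6), np.eye(6)
x_bar, M = predict(x_hat, P, A)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # pseudo-linear measurement matrix (assumed)
z = np.array([1.0, 2.0, 30.0])                 # pseudo-linear measurement (assumed)
x_hat, P = update(x_bar, M, z, H, [x_bar], 1.0, 1.0, 0.5)
print(x_hat)
```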
The space-time collaborative state estimation algorithm is obtained by derivation from a designed cost function. The derivation first assumes that the target object moves in a uniform straight line, so that the current state of the target object is related only to its initial state:

Equation 19: $x_k = A^{k} x_0$

where $x_0$ is the initial state of the target and $A$ is the linear state transition matrix. The estimation of the current state of the target can therefore be transformed into an estimation of the initial state of the target. In the time from 0 to k, the information obtained by each sensing node includes all of its own measurements of the target object, all measurements of the target object by its neighboring nodes, its own estimate of the initial state of the target, and all of its neighboring nodes' estimates of the initial state of the target.

An objective function can thus be established as Equation 20, in which $N_{i,k}$ denotes the set of neighbors of the i-th node at time k, $\hat{x}_{0|i,k}$ is the optimal initial target state estimated by the i-th node at time k, k denotes the current time, $z_{j,k}$ and $H_{j,k}$ denote the pseudo-linear measurement and the pseudo-linear measurement matrix of the j-th node at time k, and $A_{0:k}$ denotes the state matrix from time 0 to time k.

Taking the partial derivative of Equation 20 with respect to $x_0$ and setting it to zero yields the solution for $x_0$ given in Equation 21. With the intermediate quantities defined in Equations 22 and 23, their iterative forms are obtained as Equations 24–26, and the iterative form of the initial-state estimate follows as Equation 27; after rearranging, Equations 28–33 are obtained. Finally, substituting Equation 33 into Equation 19 gives an iterative estimation algorithm for the current state; with the further definitions of Equations 34–36, and substituting Equation 24 into Equation 35, the overall derivation of Equations 11–18 is obtained.
As shown in figs. 5-6 and table 1, 10 observation sensing nodes are randomly distributed in the simulation environment, and each observation sensing node can exchange information with its 3 nearest neighboring nodes. The target drone moves around in the air while the sensing nodes observe it and estimate its flight state and trajectory in real time; the direction measurement error of each node follows a Gaussian distribution with a mean value of 5°, and 100 simulation runs are performed in total. Figs. 5-6 show the mean curves over the 100 runs, and table 1 gives the overall mean of each algorithm. Figs. 5 and 6 show the mean square error of the estimated target position and velocity, respectively. As can be seen from table 1, the space-time collaborative state estimation algorithm has a faster response speed and a smaller estimation error, and is greatly improved compared with the existing estimation algorithms.
TABLE 1 Algorithm simulation error mean
Example two
Based on the same concept, as shown in fig. 4, the present application further provides a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation device, including:
the device comprises an airborne computer 1, a panoramic camera 2, an imu attitude sensor 3, an rtk position yaw sensor 4, a battery 5, a wireless communication module 6 and a bottom plate 7;
the airborne computer 1 stores a computer program and is respectively electrically connected with the panoramic camera 2, the imu attitude sensor 3, the rtk position yaw sensor 4, the battery 5 and the wireless communication module 6, the computer program stores a program code for controlling a process to execute the process, and the process comprises a distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method according to the first embodiment;
the airborne computer 1, the panoramic camera 2, the imu attitude sensor 3, the rtk position yaw sensor 4, the battery 5 and the wireless communication module 6 are all arranged on the bottom plate 7.
The onboard computer 1 stores a computer program and obtains a panoramic image through the panoramic camera 2; the onboard computer 1 sends information to other nodes in the network through the wireless communication module 6 and obtains information sent by adjacent nodes; the onboard computer 1 obtains the attitude angle of the panoramic camera 2 through the imu attitude sensor 3; and the onboard computer 1 obtains the position and yaw direction of the panoramic camera 2 through the rtk positioning and orientation system (rtk position yaw sensor 4).
EXAMPLE III
The present embodiment also provides an electronic device, referring to fig. 7, comprising a memory 404 and a processor 402, wherein the memory 404 stores a computer program, and the processor 402 is configured to execute the computer program to perform the steps of any of the method embodiments described above.
Specifically, the processor 402 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured to implement one or more integrated circuits of the embodiments of the present application.
Memory 404 may include mass storage for data or instructions. By way of example, and not limitation, memory 404 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 404 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 404 is non-volatile memory. In particular embodiments, memory 404 includes read-only memory (ROM) and random access memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these, where appropriate. The RAM may be static random access memory (SRAM) or dynamic random access memory (DRAM), where the DRAM may be fast page mode dynamic random access memory (FPM DRAM), extended data output dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), or the like.
Memory 404 may be used to store or cache various data files needed for processing and/or communication purposes, as well as possibly computer program instructions executed by processor 402.
The processor 402, by reading and executing the computer program instructions stored in the memory 404, implements any of the distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation methods in the embodiments described above.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402, and the input/output device 408 is connected to the processor 402.
The transmitting device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include wired or wireless networks provided by communication providers of the electronic devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmitting device 406 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The input-output device 408 is used to input or output information. In the present embodiment, the input information may be panoramic camera 2 image data or the like, and the output information may be target object state estimation data or the like.
Example four
The present embodiments also provide a readable storage medium having stored therein a computer program comprising program code for controlling a process to execute a process, the process comprising the distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method according to embodiment one.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of the mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets and/or macros can be stored in any device-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may comprise one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. Further in this regard it should be noted that any block of the logic flow as in the figures may represent a program step, or an interconnected logic circuit, block and function, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as, for example, DVDs and data variants thereof, CDs. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples are merely illustrative of several embodiments of the present application, and the description is more specific and detailed, but not to be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (6)

1. A method for estimating the state of an unknown unmanned aerial vehicle by distributed ground-to-air coordination is characterized by comprising the following steps:
s00, acquiring panoramic camera image data on a current observation sensing node, and identifying the position of a target object in the panoramic camera image data;
the panoramic camera of each observation sensing node consists of three monocular cameras, and a target with the highest confidence coefficient of the observation direction in each monocular camera is taken as a target object observed by the current observation sensing node;
s10, respectively defining an image coordinate system, a camera coordinate system, a node coordinate system and a world coordinate system, and obtaining a pitch angle, a roll angle and a yaw angle of a current observation sensing node and a position of a sensing node center under the world coordinate system to obtain a rotation matrix from the node coordinate system to the world coordinate system;
the pitch angle and the roll angle from the node coordinate system to the world coordinate system are obtained through an imu attitude sensor on the current observation sensing node, and the yaw angle and the position of the center of the sensing node under the world coordinate system are obtained through an rtk position yaw sensor on the current observation sensing node;
s20, obtaining the direction of the target object in the camera coordinate system according to the pixel position of the target object in the image coordinate system and camera internal parameters;
s30, projecting the current detected target direction into the node coordinate system, and converting the direction of the current target object into the world coordinate system through the rotation matrix;
s40, after any sensing node detects the target object and at least obtains information of one adjacent sensing node, performing state estimation on the target object;
s50, performing state estimation on the target object through a space-time collaborative state estimation algorithm to obtain an estimated target state of the target object, wherein the space-time collaborative state estimation algorithm is solved in an iterative mode and sequentially comprises three parts, namely prediction, information updating and correction;
each observation sensing node can send the direction of the monitored target object, the estimated target state and the position of the observation sensing node to a communication network and receive information sent by adjacent sensing nodes.
2. The method for estimating the state of an unknown unmanned aerial vehicle through distributed ground-to-air coordination according to claim 1, wherein in step S50, the motion of the target object is described by a linear time-invariant system.
3. The method for estimating the state of an unknown unmanned aerial vehicle through distributed ground-to-air coordination according to any one of claims 1-2, wherein the image coordinate system takes the up-down direction of the panoramic camera image data as a y-axis, the horizontal direction as an x-axis, and any one of four corners of the panoramic camera image data as a coordinate system origin; the camera coordinate system takes the camera's focal point as an origin, the direction perpendicular to the imaging plane as an x axis, a y axis parallel to the x axis of the image coordinate system, and a z axis parallel to the y axis of the image coordinate system; the node coordinate system takes one point in the observation sensing nodes as an origin, the length direction of the observation sensing nodes as an x axis, the width direction of the observation sensing nodes as a y axis and the height direction of the observation sensing nodes as a z axis; the world coordinate system uses preset longitude and latitude altitude coordinates as an origin, uses the longitude line as an x axis, uses the latitude line as a y axis, and uses the gravity direction as a z axis.
4. A distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation device, characterized by comprising:
the device comprises an airborne computer, a panoramic camera, an imu attitude sensor, an rtk position yaw sensor, a battery, a wireless communication module and a bottom plate;
the airborne computer stores a computer program and is electrically connected to the panoramic camera, the imu attitude sensor, the rtk position yaw sensor, the battery and the wireless communication module respectively; the computer program comprises program code for controlling a process to execute a process, and the process comprises the distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method according to any one of claims 1 to 3;
the airborne computer, the panoramic camera, the imu attitude sensor, the rtk position yaw sensor, the battery and the wireless communication module are all arranged on the bottom plate.
5. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is configured to execute the computer program to perform the distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method according to any one of claims 1 to 3.
6. A readable storage medium, characterized in that it has stored therein a computer program comprising program code for controlling a process to execute a process, the process comprising the distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method according to any one of claims 1 to 3.
CN202211458585.2A 2022-11-18 2022-11-18 Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof Active CN115511955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211458585.2A CN115511955B (en) 2022-11-18 2022-11-18 Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211458585.2A CN115511955B (en) 2022-11-18 2022-11-18 Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof

Publications (2)

Publication Number Publication Date
CN115511955A CN115511955A (en) 2022-12-23
CN115511955B (en) 2023-03-10

Family

ID=84513741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211458585.2A Active CN115511955B (en) 2022-11-18 2022-11-18 Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof

Country Status (1)

Country Link
CN (1) CN115511955B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042184A1 (en) * 2020-08-31 2022-03-03 深圳市道通智能航空技术股份有限公司 Method and apparatus for estimating position of tracking target, and unmanned aerial vehicle
CN114399528A (en) * 2021-11-29 2022-04-26 深圳先进技术研究院 Three-dimensional space moving target tracking method based on two-dimensional image and related device
CN114581480A (en) * 2022-05-07 2022-06-03 西湖大学 Multi-unmanned aerial vehicle cooperative target state estimation control method and application thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022000127A1 (en) * 2020-06-28 2022-01-06 华为技术有限公司 Target tracking method and device therefor
CN113850126A (en) * 2021-08-20 2021-12-28 武汉卓目科技有限公司 Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042184A1 (en) * 2020-08-31 2022-03-03 深圳市道通智能航空技术股份有限公司 Method and apparatus for estimating position of tracking target, and unmanned aerial vehicle
CN114399528A (en) * 2021-11-29 2022-04-26 深圳先进技术研究院 Three-dimensional space moving target tracking method based on two-dimensional image and related device
CN114581480A (en) * 2022-05-07 2022-06-03 西湖大学 Multi-unmanned aerial vehicle cooperative target state estimation control method and application thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
面向井下无人机自主飞行的人工路标辅助位姿估计方法 (Artificial-landmark-aided pose estimation method for autonomous flight of underground unmanned aerial vehicles); 单春艳 (Shan Chunyan) et al.; 《煤炭学报》 (Journal of China Coal Society); 2019-08-15; full text *

Also Published As

Publication number Publication date
CN115511955A (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN107808407B (en) Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium
CN111025283B (en) Method and device for linking radar and dome camera
US20220371602A1 (en) Vehicle positioning method, apparatus, and controller, intelligent vehicle, and system
CN114581480B (en) Multi-unmanned aerial vehicle cooperative target state estimation control method and application thereof
CN112184824B (en) Camera external parameter calibration method and device
US20110044504A1 (en) Information processing device, information processing method and program
CN112985391B (en) Multi-unmanned aerial vehicle collaborative navigation method and device based on inertia and binocular vision
WO2021195939A1 (en) Calibrating method for external parameters of binocular photographing device, movable platform and system
CN111083633B (en) Mobile terminal positioning system, establishment method thereof and positioning method of mobile terminal
CN108444452B (en) Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device
Tian et al. HiQuadLoc: A RSS fingerprinting based indoor localization system for quadrotors
Ivancsits et al. Visual navigation system for small unmanned aerial vehicles
CN113436267B (en) Visual inertial navigation calibration method, device, computer equipment and storage medium
CN109769206B (en) Indoor positioning fusion method and device, storage medium and terminal equipment
US20210156710A1 (en) Map processing method, device, and computer-readable storage medium
Alves et al. Cost-effective indoor localization for autonomous robots using kinect and wifi sensors
CN108844527A (en) Antenna for base station engineering parameter acquisition methods and system, storage medium and equipment
CN114237275A (en) Multi-unmanned aerial vehicle game collaborative search method based on perception-locking-discovery
CN115511955B (en) Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof
CN109945864B (en) Indoor driving positioning fusion method and device, storage medium and terminal equipment
Paiva et al. Indoor localization algorithm based on fingerprint using a single fifth generation Wi-Fi access point
Kang et al. Development of a peripheral-central vision system for small UAS tracking
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN109407087B (en) Target feature vector diagram matching method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant