CN113189585A - Motion error compensation algorithm based on unmanned aerial vehicle bistatic SAR system - Google Patents


Info

Publication number
CN113189585A
CN113189585A
Authority
CN
China
Legal status
Pending (status is an assumption, not a legal conclusion)
Application number
CN202110188317.2A
Other languages
Chinese (zh)
Inventor
刘飞峰
曾涛
王战泽
高检
何思敏
Current Assignee: Beijing Institute of Technology BIT; Chongqing Innovation Center of Beijing University of Technology (assignee list may be inaccurate)
Original Assignee: Beijing Institute of Technology BIT; Chongqing Innovation Center of Beijing University of Technology
Application filed by Beijing Institute of Technology BIT, Chongqing Innovation Center of Beijing University of Technology filed Critical Beijing Institute of Technology BIT

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88: Radar or analogous systems specially adapted for specific applications
    • G01S 13/89: Radar or analogous systems specially adapted for mapping or imaging
    • G01S 13/90: Radar or analogous systems for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S 13/9021: SAR image post-processing techniques
    • G01S 7/00: Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/02: Details of systems according to group G01S 13/00
    • G01S 7/40: Means for monitoring or calibrating


Abstract

The invention provides a motion error compensation method based on an unmanned aerial vehicle bistatic SAR system, which reduces the difficulty of motion compensation by constructing a motion sensitivity model of the small-UAV BiSAR system and reducing the number of error degrees of freedom. The method first divides the coarse imaging result into sub-images and establishes a motion sensitivity model of the small-UAV bistatic synthetic aperture radar system, in which the second-order and third-order chirp rates of the azimuth time-domain signal are expressed as a linear sum over the error degrees of freedom; this model is used to screen the error degrees of freedom. A simulated annealing algorithm and a weighted least-squares method then yield error estimates of the motion parameters, which are finally used to realize global motion compensation. By reducing the number of error degrees of freedom, the difficulty of motion compensation is lowered and global motion compensation is achieved. The method suits imaging environments with few strong-contrast areas in sparse scenes, and solves the problem that traditional motion error compensation algorithms fail when the motion errors of the transmitter and receiver exhibit large spatial variance in a small-UAV BiSAR system.

Description

Motion error compensation algorithm based on unmanned aerial vehicle bistatic SAR system
Technical Field
The disclosure belongs to the technical field of bistatic synthetic aperture radars, and particularly relates to a motion error compensation algorithm based on an unmanned aerial vehicle bistatic SAR system.
Background
An unmanned-aerial-vehicle-borne bistatic synthetic aperture radar (UAV-BiSAR) system can provide images day and night regardless of weather conditions, and is widely applied in fields such as disaster search, land surveying and mapping, and military monitoring. Under the bistatic condition, however, the spatial variance of the motion errors of a small-UAV system is larger than in a traditional SAR system, the range and azimuth motion errors are severely coupled, and the error estimation accuracy degrades greatly. Therefore, conventional motion error compensation (MOCO) algorithms that account for spatial variance are no longer applicable. In addition, because the transmitter and the receiver are mounted on different UAV platforms and each introduces its own motion errors, the number of degrees of freedom (DOF) of the motion error is twice that of a monostatic system, which makes error parameter estimation difficult; this high-DOF error problem has not yet been solved.
Disclosure of Invention
In view of the above, the present disclosure provides a motion error compensation method based on an unmanned airborne bistatic SAR system. It reduces the difficulty of motion compensation by constructing a motion sensitivity model of the small-UAV bistatic SAR system and reducing the number of error degrees of freedom, realizes global motion compensation, suits imaging environments with sparse scenes and few strong-contrast areas, and solves the problem that conventional motion error compensation algorithms fail when the motion errors of the transmitter and receiver exhibit large spatial variance in a small-UAV bistatic SAR system.
According to an aspect of the present disclosure, the present disclosure proposes a motion error compensation method based on an unmanned airborne bistatic SAR system, the method comprising:
imaging the raw echo data of the unmanned airborne bistatic SAR system with an NCS algorithm, and dividing the result into sub-images;
establishing a motion sensitivity model of the unmanned airborne bistatic SAR system to obtain the relationship between the second-order and third-order chirp rates of the azimuth time-domain signal and the motion error parameters;
screening the motion trajectory degrees of freedom of the transmitter and receiver of the unmanned airborne bistatic SAR system based on the motion sensitivity model;
obtaining estimates of the second-order and third-order chirp rates of the azimuth time-domain signal of each sub-image with a simulated annealing algorithm;
obtaining the motion error parameters of the unmanned airborne bistatic SAR system by weighted estimation with a weighted least-squares method combined with the estimated second-order and third-order chirp rates of the sub-images, and correcting the screened motion trajectory error dimensions of the transmitter and receiver with these parameters to obtain corrected motion parameters of the transmitter and receiver;
and imaging with the NCS algorithm using the corrected motion parameters of the transmitter and receiver to obtain the focused image of the unmanned airborne bistatic SAR system.
In a possible implementation, imaging the raw echo data of the unmanned airborne bistatic SAR system with the NCS algorithm and dividing sub-images includes:
dividing sub-images such that the error introduced by the range cell migration correction of the unmanned airborne bistatic SAR system is at most half the range resolution, and the phase error introduced by the spatial variance of the azimuth Doppler FM rate is at most a preset bound:
[equation image in the original; bound not reproduced]
In one possible implementation, the screening of the degrees of freedom of the motion trajectories of the transmitter and the receiver based on the motion sensitivity model of the unmanned airborne bistatic SAR system includes:
substituting the upper limit of each motion error parameter into the motion sensitivity model of the unmanned airborne bistatic SAR system to obtain the degree of influence of each error degree of freedom on the sub-images, and selecting the error dimensions whose influence on the sub-images exceeds a preset threshold as the screening result for the motion trajectory degrees of freedom of the transmitter and receiver.
In one possible implementation, the motion error parameters include: the initial position errors, velocity errors, and acceleration errors of the transmitter and receiver of the unmanned airborne bistatic SAR system.
The invention provides a motion error compensation method based on an unmanned airborne bistatic SAR system: raw echo data are imaged with an NCS algorithm and divided into sub-images; a motion sensitivity model of the system is established to relate the second-order and third-order chirp rates of the azimuth time-domain signal to the motion parameters; the motion trajectory degrees of freedom of the transmitter and receiver are screened with this model; a simulated annealing algorithm yields estimates of the second-order and third-order chirp rates for each sub-image; a weighted least-squares method then produces the motion error parameters, with which the screened trajectory error dimensions of the transmitter and receiver are corrected; and imaging with the corrected motion parameters via the NCS algorithm gives the focused image.
By reducing the number of error degrees of freedom, the difficulty of motion compensation is lowered and global motion compensation is realized. The method suits imaging environments with few strong-contrast areas in sparse scenes, and solves the problem that traditional motion error compensation algorithms fail when the motion errors of the transmitter and receiver exhibit large spatial variance in a small-UAV BiSAR system.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a motion error compensation method based on an unmanned airborne bistatic SAR system according to an embodiment of the present disclosure;
fig. 2 illustrates a diagram of an unmanned airborne bistatic SAR-based system architecture, according to an embodiment of the present disclosure;
FIG. 3 illustrates a simulated target point profile according to an embodiment of the present disclosure;
FIG. 4 illustrates a graph of the degree of influence of various degrees of freedom of error on imaging results according to an embodiment of the disclosure;
FIG. 5 is a graph illustrating the effect of residual motion error on phase according to an embodiment of the present disclosure;
FIG. 6 is a graph illustrating simulation results of the SA algorithm for the second-order chirp rate according to an embodiment of the disclosure;
FIG. 7 is a graph illustrating simulation results of the SA algorithm for the third-order chirp rate according to an embodiment of the disclosure;
FIG. 8 illustrates the relationship between the second-order chirp rate and the motion error of each degree of freedom according to an embodiment of the present disclosure;
FIG. 9 illustrates the relationship between the third-order chirp rate and the motion error of each degree of freedom according to an embodiment of the present disclosure;
FIG. 10 shows a target 1 compensation results graph according to an embodiment of the present disclosure;
FIG. 11 shows a target 5 compensation results graph according to an embodiment of the present disclosure;
FIG. 12 shows a target 21 compensation result graph according to an embodiment of the present disclosure;
FIG. 13 illustrates a target 25 compensation results graph according to an embodiment of the present disclosure;
FIG. 14 shows a graph of measured imaging results without motion error compensation according to an embodiment of the present disclosure;
FIG. 15 shows a sub-image selection result graph according to an embodiment of the present disclosure;
FIG. 16 shows transmitter trajectory correction results in accordance with an embodiment of the present disclosure;
FIG. 17 is a diagram illustrating receiver trajectory correction results according to an embodiment of the present disclosure;
FIG. 18 shows a graph of imaging results from a motion compensation algorithm according to an embodiment of the present disclosure;
fig. 19 illustrates a graph of imaging results obtained using a conventional error compensation algorithm according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth to provide a better understanding of the present disclosure. Those skilled in the art will understand that the present disclosure may be practiced without some of these specific details. In some instances, methods and instrumentalities well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow chart of a motion error compensation method based on an unmanned airborne bistatic SAR system according to an embodiment of the present disclosure; fig. 2 illustrates a diagram of an unmanned airborne bistatic SAR-based system architecture according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
step S1: and imaging the original echo data based on the unmanned airborne bistatic SAR system by using an NCS algorithm, and dividing subimages.
In one example, sub-images are divided such that the error introduced by the range cell migration correction of the unmanned airborne bistatic SAR system is at most half the range resolution, and the phase error introduced by the spatial variance of the azimuth Doppler FM rate is at most a preset bound:
[equation image in the original; bound not reproduced]
This ensures that the range cell migration (RCM) and the spatial variance of the Doppler parameters within each divided sub-image are negligible.
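As a minimal sketch of this division step (the tile extents below are placeholders; in practice they follow from the RCM and azimuth-phase-error bounds above), the coarse image can be tiled as follows:

```python
import numpy as np

def divide_subimages(image, max_az, max_rg):
    """Tile a coarse image into sub-images no larger than max_az x max_rg.

    max_az / max_rg stand in for the largest tile extents over which residual
    RCM stays below half a range cell and the phase error from the
    space-variant azimuth FM rate stays below the focusing bound.
    """
    tiles = []
    n_az, n_rg = image.shape
    for a0 in range(0, n_az, max_az):
        for r0 in range(0, n_rg, max_rg):
            tiles.append(image[a0:a0 + max_az, r0:r0 + max_rg])
    return tiles

coarse = np.zeros((1024, 2048), dtype=complex)  # placeholder coarse image
subs = divide_subimages(coarse, 256, 512)
print(len(subs))  # 4 * 4 = 16 tiles
```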
Step S2: establishing a motion sensitivity model of the unmanned airborne bistatic SAR system, and obtaining the relationship between the second order modulation frequency and the third order modulation frequency of the azimuth direction time domain signal of the unmanned airborne bistatic SAR system and the motion error parameter.
The motion error parameters may include the initial position errors, velocity errors, and acceleration errors of the transmitter and receiver of the unmanned airborne bistatic SAR system, without limitation here.
Specifically, as shown in FIG. 2, P_T and P_R are the positions of the transmitter and receiver, V_T and V_R are their velocities, and P is the position of the target point. X, Y, and Z are the three axes of the ground-range coordinate system, which may be taken along the north and east directions; both the transmitter and the receiver view the target area at a large squint angle.
Under the unmanned airborne bistatic SAR system, the slant-range expression of the system is relatively complicated. Under error conditions, the Taylor expansion of the slant-range expression can be written as:
R(u) = R_e + V_e·u + (1/2)·a_e·u² + (1/6)·b_e·u³ + o(u³)    (1)
where R is the sum of the actual transmitter-target and receiver-target distances of the unmanned airborne bistatic SAR system, and R_e, V_e, a_e, and b_e are respectively the initial slant range, velocity, acceleration, and jerk obtained from the inertial navigation data of the system. R_e, V_e, a_e, and b_e are expressed as:
[Equation (2): definitions of R_e, V_e, a_e, and b_e in terms of the transmitter and receiver position, velocity, acceleration, and jerk vectors; equation image not reproduced]
where
[Equation (3): equation image not reproduced]
the receiver variables are defined identically to the transmitter variables, P, V, a and b are 3 x 1 vectors, components in the x, y and z directions, respectively, for example:
PT=[PTx PTy PTz]Tformula (4)
The subscript T denotes parameters of the transmitter and R denotes parameters of the receiver, which are directly acquired using INS (Inertial Navigation System) and GPS (Global Positioning System).
According to the derivation of formula (1), the trajectory of each UAV platform can be represented by a third-order expression. For the jerk term b_e, however, measured data show that its error is on the order of 0.01 m/s³, so b_e can be regarded as error-free. That is, the trajectory errors of the transmitting and receiving platforms are concentrated in position, velocity, and acceleration, and can be expressed as:
ΔX = [ΔP_T  ΔV_T  Δa_T  ΔP_R  ΔV_R  Δa_R]^T    (5)
where Δ denotes the error; P, V, and a denote the position, velocity, and acceleration of the system; and subscripts T and R denote the transmitter and receiver, giving 18 error parameter dimensions in total.
Based on the NCS algorithm, after range correction and pulse compression, the azimuth time-domain signal is:
S_a(u) = ω(u) · exp(j2π·u_p·u + jπ·K_r·u² + jπ·K_t·u³)    (6)
The expressions of the parameters are as follows:
[Equation (7): expressions of u_p, K_r, and K_t; equation image not reproduced]
wherein u ispIs the azimuth position of the specific target point, KrAnd KtThe two-order modulation frequency and the three-order modulation frequency of the azimuth time domain signal play a crucial role in azimuth focusing. KrAnd KtThe dependency on the motion parameter, i.e. the motion sensitivity model, can be defined as:
Figure BDA0002939192120000073
based on equation (7), K can be calculatedrAnd KtFor distance parameter Ve、aeAnd bePartial derivative of (c), as shown in equation (9):
Figure BDA0002939192120000081
v of System parameters according to equation (2)e、aeAnd beThe partial derivative of (d) can be calculated as:
Figure BDA0002939192120000082
then, the system parameter K is determinedrAnd KtPartial derivatives of (1), we
Figure BDA0002939192120000083
For example, the following steps are carried out:
Figure BDA0002939192120000084
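The chain-rule sensitivities above can also be evaluated numerically. The sketch below finite-differences K_r and K_t over all 18 motion degrees of freedom; it is an illustrative stand-in, not the closed-form model above: the geometry values only loosely follow Table 1, and extracting K_r and K_t from a cubic fit of the bistatic range history via the phase model φ(u) = -2πR(u)/λ of formula (6) is an assumption.

```python
import numpy as np

WAVELENGTH = 0.02  # carrier wavelength (m), as in Table 1

def bistatic_range(u, x, target):
    """Sum of transmitter-target and receiver-target distances at slow time u.
    x packs [P_T, V_T, a_T, P_R, V_R, a_R], each a 3-vector (18 DOF total)."""
    p_t, v_t, a_t, p_r, v_r, a_r = x.reshape(6, 3)
    pos_t = p_t + v_t * u + 0.5 * a_t * u ** 2
    pos_r = p_r + v_r * u + 0.5 * a_r * u ** 2
    return np.linalg.norm(pos_t - target) + np.linalg.norm(pos_r - target)

def chirp_rates(x, target, t=np.linspace(-0.5, 0.5, 201)):
    """Fit R(u) with a cubic and map its coefficients to the azimuth phase
    pi*K_r*u^2 + pi*K_t*u^3 of formula (6) via phase = -2*pi*R(u)/lambda."""
    r = np.array([bistatic_range(u, x, target) for u in t])
    c3, c2, _, _ = np.polyfit(t, r, 3)  # highest-degree coefficient first
    return -2.0 * c2 / WAVELENGTH, -2.0 * c3 / WAVELENGTH

def sensitivity(x, target, eps=1e-3):
    """Central-difference Jacobian d(K_r, K_t)/dx over all 18 error DOF."""
    jac = np.zeros((18, 2))
    for i in range(18):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        kr_hi, kt_hi = chirp_rates(hi, target)
        kr_lo, kt_lo = chirp_rates(lo, target)
        jac[i] = [(kr_hi - kr_lo) / (2 * eps), (kt_hi - kt_lo) / (2 * eps)]
    return jac

# Geometry loosely following Table 1 (all values illustrative only).
x0 = np.concatenate([
    [-1400.0, 1400.0, 500.0], [0.0, 20.0, 0.0], [0.0, 0.5, 0.0],  # transmitter
    [-870.0, 4900.0, 500.0], [10.0, 0.0, 0.0], [0.0, 0.5, 0.0],   # receiver
])
J = sensitivity(x0, np.zeros(3))
```

Rows of the Jacobian with large magnitude correspond to the error degrees of freedom that most disturb K_r and K_t, which is exactly the quantity the screening in step S3 thresholds.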
step S3: and screening the freedom degrees of the motion tracks of the transmitter and the receiver of the unmanned airborne bistatic SAR system based on the motion sensitivity model of the unmanned airborne bistatic SAR system.
In an example, the upper limit of each motion error parameter is substituted into the motion sensitivity model of the unmanned airborne bistatic SAR system to obtain the degree of influence of each error degree of freedom on the sub-images, and the error dimensions whose influence exceeds a preset threshold are selected as the screening result for the motion trajectory degrees of freedom of the transmitter and receiver.
From the motion sensitivity model established in step S2, if enough accurate samples of the Doppler parameters K_r and K_t are available, i.e. an exact {K_r, K_t} set, the motion error can be estimated accurately. The Doppler parameters can be estimated accurately by adaptively focusing the SAR sub-images. However, in the BiSAR topology the sensitivity model has 18 unknown variables, which makes the solution very difficult. Moreover, since only the image quality of the system is of interest and not the motion error itself, the number of fundamental motion errors, or equations, in the sensitivity model needs to be reduced.
To evaluate the effect of motion errors on the final image quality, the values of the partial derivatives of K_r and K_t with respect to the motion parameters serve as a good indicator. The Doppler phase error can be estimated using equation (8) together with the accuracy of the inertial navigation system. By evaluating the influence of each motion error degree of freedom on the Doppler phase and the image quality, the necessary degrees of freedom can be selected, which greatly simplifies motion error compensation for the BiSAR system.
FIG. 3 illustrates a simulated target point profile according to an embodiment of the present disclosure; FIG. 4 illustrates a graph of the degree of influence of various degrees of freedom of error on imaging results according to an embodiment of the disclosure.
As shown in fig. 3, the simulated targets form a 5 × 5 grid distributed uniformly over a 1600 × 1600 area. The influence of each error degree of freedom of the small-UAV BiSAR system is calculated according to the parameters in Table 1:
TABLE 1
Transmitter range: 2 km            Transmitter height: 500 m
Transmitter squint angle: 45°      Transmitter speed: 20 m/s
Transmitter acceleration: 0.5 m/s² Transmitter jerk: 0.1 m/s³
Receiver range: 5 km               Receiver height: 500 m
Receiver squint angle: 80°         Receiver speed: 10 m/s
Receiver acceleration: 0.5 m/s²    Receiver jerk: 0.1 m/s³
Wavelength: 0.02 m                 Bandwidth: 100 MHz
According to the upper limit of the motion error provided by the inertial navigation system, the average influence of each error dimension on each point is calculated by substitution into the partial-derivative expressions of formula (8); the result is shown in fig. 4, where the horizontal axis is the motion error degree of freedom and the vertical axis is the degree of influence of each error degree of freedom on the imaging result, in rad. The simulation shows that the 4th, 5th, 7th, 8th, and 17th error dimensions, i.e. the transmitter velocities in the X and Y directions, the transmitter accelerations in the X and Y directions, and the receiver acceleration in the Y direction, strongly influence the imaging result, and the subsequent motion compensation mainly considers these dimensions. The motion error dimension is thereby reduced from 18 to 5, which greatly reduces the number of required sub-images and the computation time.
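The screening step can be sketched as follows. The per-DOF impact values below are hypothetical stand-ins (not the values plotted in Fig. 4), and the π/4 threshold is an assumed typical focusing bound:

```python
import numpy as np

def screen_dof(phase_impact, threshold):
    """Return the 1-based indices of the error DOF whose phase impact (rad)
    on the sub-images exceeds the preset threshold."""
    phase_impact = np.asarray(phase_impact)
    return [int(i) + 1 for i in np.flatnonzero(phase_impact > threshold)]

# Hypothetical per-DOF impacts (rad) for the 18 error dimensions; in the
# simulation of Fig. 4 the surviving dimensions are 4, 5, 7, 8, and 17.
impact = np.full(18, 0.1)
impact[[3, 4, 6, 7, 16]] = [2.0, 1.8, 1.5, 1.2, 0.9]  # 0-based for 4,5,7,8,17
print(screen_dof(impact, np.pi / 4))  # -> [4, 5, 7, 8, 17]
```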
Fig. 5 shows a graph of the effect of residual motion error on phase according to an embodiment of the present disclosure.
To illustrate the correctness of the dimension reduction processing operation, the influence of the remaining 13 error dimensions, which are ignored, is simulated.
First, the errors of K_r and K_t caused by these dimensions can be expressed by equation (12):
[Equation (12): equation image not reproduced]
The resulting phase error range of the imaging result is shown in Fig. 5. As shown in Fig. 5, the phase error is around 0.2 rad, below the focusing bound [equation image in the original; bound not reproduced], indicating that these motion error dimensions are negligible. It can therefore be determined that the 5 error dimensions, namely the transmitter velocities in the X and Y directions, the transmitter accelerations in the X and Y directions, and the receiver acceleration in the Y direction, dominate the imaging result, and 5 sets of {K_r, K_t}, i.e. 5 sub-images, are needed to estimate the motion errors.
Step S4: and obtaining estimated values of the second-order tone frequency and the third-order tone frequency of each sub-image in the azimuth time domain signal by using a simulated annealing algorithm.
The image entropy can be selected as an evaluation index of the sub-image focusing effect.
The entropy of a two-dimensional image is expressed as:
E = -Σ_{q=1}^{N_a} Σ_{k=1}^{N_r} D(q,k) · ln D(q,k)    (13)
where N_a is the number of pulses, N_r is the number of range samples, and D(q,k) is the scattering intensity density of the image:
D(q,k) = |I(q,k)|² / S(I)    (14)
where S(I) = Σ_{q=1}^{N_a} Σ_{k=1}^{N_r} |I(q,k)|² is the total energy of the image and I(q,k) is the complex reflection intensity of the synthetic aperture radar image.
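The entropy definition above translates directly into a short routine; a minimal sketch assuming a complex-valued image array:

```python
import numpy as np

def image_entropy(img):
    """Entropy of a (complex) SAR image: D(q,k) = |I(q,k)|^2 / S(I),
    E = -sum(D * ln D), with S(I) the total image energy."""
    power = np.abs(img) ** 2
    d = power / power.sum()
    d = d[d > 0]                       # drop empty cells to avoid log(0)
    return float(-np.sum(d * np.log(d)))

# A focused image (single bright point) has lower entropy than a defocused one.
focused = np.zeros((64, 64))
focused[32, 32] = 1.0
defocused = np.ones((64, 64))
print(image_entropy(focused) < image_entropy(defocused))  # True
```

Lower entropy means the image energy is concentrated in fewer cells, which is why entropy serves as the focusing index minimized in step S4.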
To improve the efficiency of the simulated annealing algorithm, the initial solution of the search is set to:
[Equation (15): equation image not reproduced]
where f_drn is the initial solution of each sub-image and f_dtm is the search result of the previous sub-image.
After each temperature iteration, the search steps should be reduced to improve the search efficiency.
The iterative expression is:
Δ_i = Δ_0 / a^i    (16)
where i is the iteration index and a can be set to the constant 1.5 based on measured UAV data.
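A minimal simulated annealing sketch for one sub-image; everything here is illustrative (synthetic azimuth chirp, assumed true chirp rates, assumed search steps and temperatures), with the step size shrinking by the constant a = 1.5 after each temperature update as described above, and an entropy objective applied to the azimuth spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.linspace(-0.5, 0.5, 512)
KR_TRUE, KT_TRUE = 120.0, 15.0  # assumed "true" chirp rates of the sub-image
sig = np.exp(1j * np.pi * (KR_TRUE * u**2 + KT_TRUE * u**3))

def entropy_after_focus(kr, kt):
    """Entropy of the azimuth spectrum after removing a candidate
    quadratic/cubic phase; minimal when (kr, kt) match the true values."""
    comp = sig * np.exp(-1j * np.pi * (kr * u**2 + kt * u**3))
    p = np.abs(np.fft.fft(comp)) ** 2
    d = p / p.sum()
    d = d[d > 0]
    return -np.sum(d * np.log(d))

def anneal(kr0, kt0, n_temp=4, n_iter=80, step0=(20.0, 5.0), a=1.5, temp0=1.0):
    """Search (K_r, K_t); steps shrink by the factor a per temperature update."""
    kr, kt = kr0, kt0
    e = entropy_after_focus(kr, kt)
    best = (kr, kt, e)
    for i in range(n_temp):
        s_kr, s_kt = step0[0] / a**i, step0[1] / a**i
        temp = temp0 / (i + 1)
        for _ in range(n_iter):
            kr_n = kr + rng.uniform(-s_kr, s_kr)
            kt_n = kt + rng.uniform(-s_kt, s_kt)
            e_n = entropy_after_focus(kr_n, kt_n)
            # Metropolis rule: always accept improvements, sometimes accept worse
            if e_n < e or rng.random() < np.exp((e - e_n) / temp):
                kr, kt, e = kr_n, kt_n, e_n
                if e < best[2]:
                    best = (kr, kt, e)
    return best[0], best[1]

kr_est, kt_est = anneal(100.0, 10.0)
```

Tracking the best-so-far solution guards against the random walk drifting away from the minimum at the end of the schedule.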
Figs. 6 and 7 show simulation results of the SA algorithm for the second-order and third-order chirp rates, respectively, according to an embodiment of the disclosure.
For example, suppose the set motion errors are: position error 5 m, velocity error 1 m/s, and acceleration error 0.1 m/s². The simulation targets are shown in Fig. 3, and the estimation accuracy of K_r and K_t for different iteration counts is shown in Figs. 6 and 7, where the horizontal axis is the number of iterations per temperature and L is the number of temperature updates. The simulation shows that with 2 temperature updates and 60 iterations, the estimation accuracy of K_r reaches 0.05/s² and that of K_t reaches 0.02/s³; setting the parameters this way ensures that the simulated annealing algorithm does not take too much time.
Step S5: performing weighted estimation with a weighted least-squares method combined with the estimated second-order and third-order chirp rates of the sub-images in the azimuth time domain to obtain the motion error parameters of the unmanned airborne bistatic SAR system, and correcting the screened motion trajectory error dimensions of the transmitter and receiver with these parameters to obtain corrected motion parameters of the transmitter and receiver.
Figs. 8 and 9 show the relationships between the second-order and third-order chirp rates, respectively, and the motion error of each degree of freedom according to an embodiment of the disclosure.
For the N sub-images, the input errors of K_r and K_t are:
[Equation (17): stacked vector of the K_r and K_t errors of the N sub-images; equation image not reproduced]
X is the motion error of the system, of dimension M, corresponding to the M selected degrees of freedom:
[Equation (18): equation image not reproduced]
setting the response function of the system as f (X), obtaining the upper limit of the motion error according to the accuracies of the inertial navigation system and the GPS, and obtaining the error pair K under the unmanned aerial vehicle condition in order to prove the accuracy of the formula (8) and gradually increase the percentage of the motion error of each degree of freedomrAnd KtFig. 8 and 9 show the tendency of influence of (a).
In Figs. 8 and 9, the 18 lines represent the relationships between the 18 motion error degrees of freedom and the Doppler parameters K_r and K_t. Under the small-UAV BiSAR condition, these relationships can be regarded as linear, i.e. formula (8) is highly accurate. The response function of the system can thus be approximated as:
f(X) = BX    (19)
where B can be represented as:
[Equation (20): matrix of the partial derivatives of K_r and K_t with respect to the selected error degrees of freedom; equation image not reproduced]
the WLS estimate is:
Figure BDA0002939192120000131
where W is the diagonal weighting matrix. For the UAV raw data, the weighting matrix is determined from the maximum contrast of the sub-images; for regions with rich scene information, the signal-to-noise ratio can also serve as the basis of the weighting matrix. The motion error parameters of the platform are obtained with the WLS, and the motion trajectories of the transmitter and receiver are then corrected to obtain their corrected motion parameters, realizing a global improvement in image quality.
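The WLS estimate above reduces to the standard solve X̂ = (BᵀWB)⁻¹BᵀWΔK. A minimal numpy sketch with a synthetic sensitivity matrix B, diagonal weights W, and simulated chirp-rate errors (all values illustrative):

```python
import numpy as np

def wls_estimate(B, W, dK):
    """Weighted least-squares solution X = (B^T W B)^(-1) B^T W dK."""
    BtW = B.T @ W
    return np.linalg.solve(BtW @ B, BtW @ dK)

rng = np.random.default_rng(1)
N, M = 5, 5                          # 5 sub-images, 5 screened error DOF
B = rng.normal(size=(2 * N, M))      # 2 chirp-rate observations per sub-image
X_true = np.array([0.5, -1.0, 0.2, 0.8, -0.3])  # illustrative motion errors
dK = B @ X_true + 0.01 * rng.normal(size=2 * N)  # noisy K_r/K_t errors
W = np.diag(np.linspace(1.0, 2.0, 2 * N))        # e.g. contrast-based weights
X_hat = wls_estimate(B, W, dK)
print(np.round(X_hat, 2))
```

The recovered X̂ is then used to correct the screened trajectory dimensions of the transmitter and receiver before re-imaging.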
To improve the estimation accuracy of the motion error, multiple rounds of motion error estimation can be performed to increase accuracy and reduce the influence of nonlinearity. Under the bistatic UAV condition, two rounds of motion estimation yield a sufficiently good imaging result, giving the motion error amounts of the screened dimensions.
Step S6: and utilizing the NCS algorithm to perform imaging by using the corrected motion parameters of the transmitter and the receiver to obtain the focusing image based on the unmanned airborne bistatic SAR system.
The original motion trajectories along the screened motion error dimensions of the transmitter and receiver are corrected to obtain new motion parameters, and imaging is then performed with the NCS imaging algorithm to obtain the final imaging result.
FIG. 10 shows a compensation result graph of target 1 according to an embodiment of the present disclosure; FIG. 11 shows a compensation result graph of target 5 according to an embodiment of the present disclosure; FIG. 12 shows a compensation result graph of target 21 according to an embodiment of the present disclosure; FIG. 13 shows a compensation result graph of target 25 according to an embodiment of the present disclosure.
For example, in this simulation with the NCS algorithm, the sub-images of target 3, target 11, target 13, target 16 and target 22 are selected for the estimation; similar compensation results are obtained under repeated random selections. The compensation results of four targets (target 1, target 5, target 21 and target 25) are shown in Fig. 10, Fig. 11, Fig. 12 and Fig. 13; these simulation results show that the motion error is well compensated. The resolution of the target points essentially reaches the theoretical value, and the PSLR is better than -12.5 dB.
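A PSLR figure such as the -12.5 dB quoted above is conventionally measured from a point-target response slice. The sketch below uses a synthetic sinc response (the ideal compressed pulse, whose PSLR is about -13.26 dB), not the patent's data; the null-finding logic is a simple illustrative choice.

```python
# Measure peak-sidelobe ratio (PSLR) from a 1-D point-target profile.
import numpy as np

def pslr_db(profile):
    """PSLR in dB: highest sidelobe level relative to the main-lobe peak."""
    p = np.abs(profile)
    peak = np.argmax(p)
    # walk outward from the peak to the first nulls bounding the main lobe
    left = peak
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = peak
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    sidelobes = np.concatenate([p[:left], p[right + 1:]])
    return 20 * np.log10(sidelobes.max() / p[peak])

x = np.linspace(-8, 8, 4001)
profile = np.sinc(x)          # ideal compressed pulse: PSLR about -13.26 dB
print(round(pslr_db(profile), 2))
```

A measured PSLR near the ideal -13.26 dB (or the quoted -12.5 dB bound) indicates the quadratic and cubic phase errors have been largely removed.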
Application example:
figure 14 shows a graph of measured imaging results without motion error compensation according to an embodiment of the present disclosure.
In one example, with the motion error parameters in Table 1 and a Ku-band system, the imaging scene is located in a suburban area and includes two low factory buildings, a dirt road, a straight road section, and a transponder; both the transmitter and the receiver operate in a squint configuration.
Using the raw INS data, imaging is performed directly with the NCS algorithm, yielding the result shown in Fig. 14. In this imaging result the targets are blurred and no texture can be observed in the factory buildings; the imaging result of the transponder is a bright line, exhibiting strong defocusing.
Fig. 15 illustrates a sub-image selection result diagram according to an embodiment of the present disclosure.
Motion compensation is performed using the algorithm of the present disclosure. First, sub-images with high contrast are selected from the coarse imaging result, as shown in Fig. 15. Then, the influence degrees of the errors in different dimensions are analyzed according to the system configuration, and five error dimensions that mainly require compensation are obtained: the Y-direction velocity of the transmitter, the Y-direction acceleration of the transmitter, the Y-direction velocity of the receiver, the X-direction acceleration of the receiver, and the Y-direction acceleration of the receiver.
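The high-contrast sub-image selection can be sketched as follows. Contrast is taken as the intensity standard deviation over the mean per tile; the tile size, threshold, and synthetic speckle scene are illustrative assumptions, not values from the patent.

```python
# Select high-contrast tiles (candidate sub-images) from a coarse image.
import numpy as np

def tile_contrasts(image, tile):
    """Split the image into tile x tile blocks; return each block's contrast."""
    h, w = image.shape
    out = {}
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            block = np.abs(image[i:i + tile, j:j + tile]) ** 2  # intensity
            out[(i, j)] = block.std() / block.mean()
    return out

rng = np.random.default_rng(2)
img = rng.rayleigh(size=(64, 64))     # speckle-like amplitude background
img[8:12, 8:12] += 20.0               # one strong point-like target
contrasts = tile_contrasts(img, 16)
selected = [k for k, c in contrasts.items() if c > 2.0]  # threshold is illustrative
print(selected)
```

Only the tile containing the strong scatterer exceeds the threshold; in a sparse scene this concentrates the chirp-rate estimation on the few reliable regions.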
FIG. 16 shows transmitter trajectory correction results in accordance with an embodiment of the present disclosure; fig. 17 is a diagram illustrating a receiver trajectory correction result according to an embodiment of the present disclosure.
The trajectory correction results of the transmitter and the receiver obtained by using the simulated annealing algorithm and the weighted least squares method are shown in fig. 16 and 17, respectively.
FIG. 18 shows a graph of imaging results from a motion compensation algorithm according to an embodiment of the present disclosure; fig. 19 illustrates a graph of imaging results obtained using a conventional error compensation algorithm according to an embodiment of the present disclosure.
Finally, the scene is focused and imaged with the NCS algorithm using the trajectory-corrected results; the final result is shown in Fig. 18. Compared with the result of the conventional motion error compensation algorithm shown in Fig. 19, the overall texture information of the imaging result obtained by the proposed algorithm is much clearer.
The invention provides a motion error compensation method based on an unmanned-aerial-vehicle bistatic SAR system, which reduces the difficulty of motion compensation by constructing a motion sensitivity model of the mini-UAV BiSAR system and reducing the error degrees of freedom. First, coarse imaging is performed, and the second-order and third-order chirp rates of the sub-images are expressed as a linear combination of several error degrees of freedom; the estimates of the second-order and third-order chirp rates of the azimuth echo signal of each sub-image are then obtained with a simulated annealing algorithm; finally, the motion-parameter error values are obtained by weighted least squares, realizing global motion compensation. By reducing the error degrees of freedom, the method lowers the difficulty of motion compensation, achieves global motion compensation, is suitable for sparse imaging scenes with few strong-contrast regions, and solves the problem that conventional motion error compensation algorithms are unsuitable when the motion errors of the transceiver exhibit large spatial variance under a mini-UAV BiSAR system.
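The simulated-annealing chirp-rate estimation summarized above can be sketched as follows. The signal model, contrast objective, cooling schedule, and parameter ranges are all illustrative assumptions, not the patent's exact implementation: a second-order phase coefficient of a sub-image's azimuth signal is searched by maximizing image contrast after phase compensation.

```python
# Simulated-annealing search for a quadratic (second-order) chirp-rate
# error, maximizing contrast of the compressed azimuth signal.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(-0.5, 0.5, 256)
ka_true = 40.0                                    # true chirp-rate error (illustrative)
sig = np.exp(1j * np.pi * ka_true * t**2)         # defocused point-target echo

def contrast(ka):
    comp = sig * np.exp(-1j * np.pi * ka * t**2)  # compensate, then compress
    img = np.abs(np.fft.fft(comp)) ** 2
    return img.std() / img.mean()

ka_cur, c_cur = 0.0, contrast(0.0)
ka_best, c_best = ka_cur, c_cur
T = 10.0
while T > 1e-3:
    cand = ka_cur + rng.normal(scale=T)           # perturbation shrinks with temperature
    c = contrast(cand)
    # always accept improvements; accept worse moves with Boltzmann probability
    if c > c_cur or rng.random() < np.exp((c - c_cur) / T):
        ka_cur, c_cur = cand, c
        if c_cur > c_best:
            ka_best, c_best = ka_cur, c_cur       # track the best solution found
    T *= 0.995
print(round(ka_best, 1))
```

The annealing walk explores broadly at high temperature and then locks onto the contrast peak, recovering the chirp-rate error without requiring a gradient of the contrast objective.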
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (4)

1. A motion error compensation method based on an unmanned aerial vehicle bistatic SAR system is characterized by comprising the following steps:
imaging raw echo data of the unmanned airborne bistatic SAR system by using an NCS algorithm, and dividing sub-images;
establishing a motion sensitivity model of the unmanned airborne bistatic SAR system to obtain the relationship between the second-order and third-order chirp rates of the azimuth time-domain signal of the unmanned airborne bistatic SAR system and the motion error parameters;
screening the motion trajectory freedom degrees of a transmitter and a receiver of the unmanned airborne bistatic SAR system based on the motion sensitivity model of the unmanned airborne bistatic SAR system;
obtaining estimates of the second-order and third-order chirp rates of the azimuth time-domain signal of each sub-image by using a simulated annealing algorithm;
performing weighted estimation by a weighted least squares method on the estimates of the second-order and third-order chirp rates of the sub-images in the azimuth time-domain signal to obtain the motion error parameters of the unmanned airborne bistatic SAR system, and correcting the screened motion-trajectory error dimensions of the transmitter and the receiver with the motion error parameters to obtain corrected motion parameters of the transmitter and the receiver;
and performing imaging with the corrected motion parameters of the transmitter and the receiver by using the NCS algorithm to obtain a focused image of the unmanned airborne bistatic SAR system.
2. The motion error compensation method of claim 1, wherein the imaging of the raw echo data of the unmanned airborne bistatic SAR system by the NCS algorithm and the sub-image division comprise:
performing the sub-image division such that the error introduced by the range cell migration correction of the unmanned airborne bistatic SAR system is less than or equal to half of the range resolution, and the phase error introduced by the space-variant azimuth Doppler frequency modulation rate is less than or equal to a preset phase threshold [equation image not reproduced].
3. The motion error compensation method of claim 1, wherein the screening of the degrees of freedom of the motion trajectories of the transmitter and the receiver based on the motion sensitivity model of the unmanned airborne bistatic SAR system comprises:
substituting the upper limit value of each motion error parameter into the motion sensitivity model of the unmanned airborne bistatic SAR system to obtain the degree of influence of each error degree of freedom on the sub-images, and selecting the error dimensions whose degree of influence on the sub-images exceeds a preset threshold as the screened motion trajectory degrees of freedom of the transmitter and the receiver.
4. The motion error compensation method according to claim 1, wherein the motion error parameters include: the initial position errors, velocity errors and acceleration errors of the transmitter and the receiver of the unmanned airborne bistatic SAR system.
CN202110188317.2A 2021-06-23 2021-06-23 Motion error compensation algorithm based on unmanned aerial vehicle bistatic SAR system Pending CN113189585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110188317.2A CN113189585A (en) 2021-06-23 2021-06-23 Motion error compensation algorithm based on unmanned aerial vehicle bistatic SAR system


Publications (1)

Publication Number Publication Date
CN113189585A true CN113189585A (en) 2021-07-30

Family

ID=76972985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110188317.2A Pending CN113189585A (en) 2021-06-23 2021-06-23 Motion error compensation algorithm based on unmanned aerial vehicle bistatic SAR system

Country Status (1)

Country Link
CN (1) CN113189585A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2660623A2 (en) * 2012-09-03 2013-11-06 Institute of Electronics, Chinese Academy of Sciences Imaging method and device in SAB mobile bistatic SAR
CN111443349A (en) * 2020-02-28 2020-07-24 南昌大学 BiSAR echo-based correlation motion error compensation method, system and application


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GUIJIE DIAO et al.: "Study on the Modeling Method of Sip Target for Uav-borne Bi-SAR", IEEE *
JIANLAI CHEN et al.: "A 2-D Space-Variant Motion Estimation and Compensation Method for Ultrahigh-Resolution Airborne Stepped-Frequency SAR With Long Integration Time", IEEE Transactions on Geoscience and Remote Sensing *
MENG KE et al.: "Experiment and Results of bistatic UAV SAR", IEEE *
TAO ZENG et al.: "An Improved Frequency-Domain Image Formation Algorithm for Mini-UAV-Based Forward-Looking Spotlight BiSAR Systems", Remote Sensing *
ZHANZE WANG et al.: "A Novel Motion Compensation Algorithm Based on Motion Sensitivity Analysis for Mini-UAV-Based BiSAR System", IEEE *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113671500A (en) * 2021-08-11 2021-11-19 北京理工大学 Unmanned aerial vehicle-mounted bistatic SAR high-frequency motion error compensation method
CN113671500B (en) * 2021-08-11 2023-07-25 北京理工大学 Unmanned aerial vehicle-mounted bistatic SAR high-frequency motion error compensation method

Similar Documents

Publication Publication Date Title
CN110779518B (en) Underwater vehicle single beacon positioning method with global convergence
CN109782289B (en) Underwater vehicle positioning method based on baseline geometric structure constraint
CN110749891B (en) Self-adaptive underwater single beacon positioning method capable of estimating unknown effective sound velocity
US8816896B2 (en) On-board INS quadratic correction method using maximum likelihood motion estimation of ground scatterers from radar data
CN110794409A (en) Underwater single beacon positioning method capable of estimating unknown effective sound velocity
CN105043392A (en) Aircraft pose determining method and aircraft pose determining device
CN112346104A (en) Unmanned aerial vehicle information fusion positioning method
CN113447924B (en) Unmanned aerial vehicle mapping method and system based on millimeter wave radar
CN111551934A (en) Motion compensation self-focusing method and device for unmanned aerial vehicle SAR imaging
Steiner et al. Ego-motion estimation using distributed single-channel radar sensors
Baek et al. Accurate vehicle position estimation using a Kalman filter and neural network-based approach
CN110646783A (en) Underwater beacon positioning method of underwater vehicle
CN110779519A (en) Underwater vehicle single beacon positioning method with global convergence
Wang et al. A novel motion compensation algorithm based on motion sensitivity analysis for mini-UAV-based BiSAR system
CN113189585A (en) Motion error compensation algorithm based on unmanned aerial vehicle bistatic SAR system
CN114089333A (en) SAR vibration error estimation and compensation method based on helicopter platform
CN113670301A (en) Airborne SAR motion compensation method based on inertial navigation system parameters
Saleh et al. Vehicular positioning using mmWave TDOA with a dynamically tuned covariance matrix
CN111765905A (en) Method for calibrating array elements of unmanned aerial vehicle in air
CN115560757B (en) Unmanned aerial vehicle direct positioning correction method based on neural network under random attitude error condition
CN109471102B (en) Inertial measurement unit error correction method
CN117029840A (en) Mobile vehicle positioning method and system
CN116819524A (en) Motion error compensation method based on unmanned aerial vehicle bistatic SAR system
CN111856464B (en) DEM extraction method of vehicle-mounted SAR (synthetic aperture radar) based on single control point information
CN114763998A (en) Unknown environment parallel navigation method and system based on micro radar array

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210730