CN104251990B - Synthetic aperture radar self-focusing method

Publication number: CN104251990B
Application number: CN201410467978.9A
Authority: CN (China)
Prior art keywords: phase error, phi, lambda, azimuth, vector
Legal status: Active (granted)
Other versions: CN104251990A
Inventors: 李文超, 蒲巍, 杨建宇, 黄钰林, 武俊杰, 李中余, 杨海光
Assignee (original and current): University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority: CN201410467978.9A

Classifications

    • G01S13/9019 — SAR image acquisition techniques: auto-focussing of the SAR signals
    • G01S13/904 — SAR modes
    • G01S7/4052 — Means for monitoring or calibrating by simulation of echoes


Abstract

The invention discloses a synthetic aperture radar (SAR) self-focusing method, comprising the following steps: range pulse compression, range migration correction, azimuth deskew, multidimensional phase error estimation by coordinate descent, estimation of the phase error of each azimuth unit, output of the multidimensional phase error estimate, and phase error compensation. The SAR self-focusing method of the invention uses the coordinate descent method to jointly estimate the phase error at each azimuth moment, and uses a multidimensional projection method to convert the line-search maximum-contrast problem into the problem of solving a quartic polynomial, which greatly reduces the amount of computation; at the same time, the invention estimates the azimuth phase error without any limitation on the model order.

Description

Synthetic aperture radar self-focusing method
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a synthetic aperture radar self-focusing method.
Background
Synthetic Aperture Radar (SAR) is a full-time, all-weather, high-resolution microwave remote sensing imaging radar. It plays an increasingly important role in military reconnaissance, topographic mapping, marine and hydrological observation, environmental and disaster monitoring, resource exploration, crustal micro-deformation detection, and other fields. SAR imaging under ideal conditions assumes that the platform moves in a straight line at constant speed; in practice, however, wind and turbulence often drive the platform off the ideal track, which introduces phase errors and degrades the imaging quality. Motion compensation is therefore a key technique for high-resolution SAR imaging. Current SAR motion compensation falls into two classes: compensation based on sensor information, and compensation based on the echo data itself, i.e. autofocus. As the required imaging resolution keeps rising, so do the demands on motion compensation; the motion information supplied by sensors often cannot meet the required compensation accuracy, and in some situations, such as small unmanned aerial vehicles, no inertial navigation information is available at all, which makes echo-based motion compensation, i.e. self-focusing, all the more important. Among existing self-focusing methods, the document "Subaperture autofocus for synthetic aperture radar, T. M. Calloway and G. W. Donohoe, IEEE Transactions on Aerospace and Electronic Systems, vol. 30, no. 2, pp. 617-621, April 1994" proposes a map-drift (MD) method, which estimates the Doppler modulation rate by aperture division and inter-aperture correlation to realize autofocus. However, this method is based on a second-order approximation model and cannot compensate the influence of high-order phase errors.
Building on the concept of the phase gradient, the phase gradient autofocus (PGA) algorithm was proposed on a phase error model without order limitation; it can estimate phase errors of arbitrary order and thus remedies the MD algorithm's restriction to low-order phase errors. However, the PGA algorithm places high demands on scene contrast, and under the conditions of a uniform scene and low signal-to-noise ratio the focusing effect is often poor, because the phase history of prominent scatterers cannot be extracted reliably. The document "Autofocusing of ISAR images based on entropy minimization, L. Xi, L. Guosui, and J. Ni, IEEE Transactions on Aerospace and Electronic Systems, vol. 35, no. 4, pp. 1240-1252, 1999" proposes a successive-approximation autofocus algorithm that takes an image quality evaluation function as its metric and searches for the phase error in a high-dimensional space with a gradually decreasing step size. This method is suitable for many types of scene, but the high-dimensional search makes its computational load large.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the defects in the background art, a self-focusing algorithm based on maximum image contrast is researched and designed so as to address two problems of existing self-focusing algorithms: that some handle only low-order phase errors, and that those without order limitation require a computationally expensive high-dimensional search in phase error estimation.
For the convenience of describing the contents of the present invention, the following terms are first explained:
the term 1: SAR self-focusing algorithm
The SAR self-focusing algorithm refers to an azimuth phase error estimation and compensation algorithm based on SAR echo data.
The term 2: coordinate descent method
The coordinate descent method is a multivariable non-gradient optimization algorithm. In each iteration of the algorithm, one-dimensional search is carried out at the current point along a coordinate direction to obtain a local minimum value of a function, and different coordinate directions are circularly used in the whole process.
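A minimal Python sketch of the coordinate descent idea described above; the quadratic objective here is only an illustrative stand-in, not the image-contrast objective used later in the method:

```python
import numpy as np

def coordinate_descent(f, x0, n_sweeps=40, step=1.0, shrink=0.5):
    """Minimize f by cycling over coordinate directions (no gradients)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_sweeps):
        for i in range(x.size):
            improved = True
            while improved:                    # keep stepping along coordinate i
                improved = False
                for s in (step, -step):
                    trial = x.copy()
                    trial[i] += s
                    if f(trial) < f(x):        # accept any improving move
                        x, improved = trial, True
        step *= shrink                         # refine the search step each sweep
    return x

# toy objective with its minimum at (1, -2)
f = lambda v: (v[0] - 1.0) ** 2 + 2.0 * (v[1] + 2.0) ** 2
x_min = coordinate_descent(f, [0.0, 0.0])
```

Each iteration performs a one-dimensional search along one coordinate while all others stay fixed, exactly the pattern the patent applies per azimuth unit.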
The term 3: multidimensional projection method
The multidimensional projection method refers to a method of projecting a vector in a multidimensional space onto a specific two-dimensional plane.
The technical scheme of the invention is as follows: a synthetic aperture radar self-focusing method comprises the following steps:
S1. Acquire two-dimensional echo data and perform range pulse compression by matched filtering; the pulse-compressed point-target echo s_0(τ, η) is expressed as:

s_0(\tau, \eta) = \mathrm{sinc}\left[ \tau - \frac{2R(\eta)}{c} \right] w_{az}(\eta) \exp\left[ -j \frac{4\pi}{\lambda} R(\eta) \right]

where c is the speed of light, η is the azimuth slow time, τ is the range fast time, w_{az}(η) is the azimuth time-domain envelope, λ is the wavelength, and R(η) is the scene-center range history;
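The matched-filtering step of S1 can be sketched in Python; the chirp parameters below are invented for the example and are not the patent's simulation values:

```python
import numpy as np

# Illustrative range pulse compression by matched filtering.
fs = 100e6                      # sample rate, Hz (assumed)
T = 10e-6                       # pulse duration, s (assumed)
K = 4e12                        # chirp rate, Hz/s (bandwidth K*T = 40 MHz)
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * K * t ** 2)

# simulated echo: the chirp delayed by n0 samples in a longer receive window
n0 = 300
rx = np.zeros(2048, dtype=complex)
rx[n0:n0 + chirp.size] = chirp

# matched filtering = correlation with the conjugate chirp, done via FFT
Nfft = rx.size
compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(chirp, Nfft)))
peak = int(np.argmax(np.abs(compressed)))   # compressed peak lands at the delay
```

The long chirp collapses to a narrow sinc-like peak at the target's delay, which is the s_0 model above.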
S2. Apply a two-dimensional Fourier transform to the point-target echo s_0(τ, η) of step S1 to obtain the two-dimensional frequency-domain echo S_0(f_τ, f_η); multiply S_0(f_τ, f_η) by the migration-correction phase H_{RCMC}(f_τ, f_η), and apply an inverse Fourier transform to the product back to the two-dimensional time domain to obtain the range-migration-corrected signal s_1(τ, η), expressed as:

s_1(\tau, \eta) = \mathrm{sinc}\left[ \tau - \frac{2R_0}{c} \right] w_{az}(\eta) \exp\left[ -j \frac{4\pi}{\lambda} R(\eta) \right]

where H_{RCMC}(f_τ, f_η) is the migration-correction phase, f_c is the carrier frequency, V is the platform velocity, R_0 is the instantaneous slant range at the Doppler-center-crossing moment, f_τ is the range frequency, and f_η is the azimuth frequency;
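RCMC works by multiplying the spectrum with a phase that pulls each azimuth line back to the reference range bin. The toy below shows only the underlying Fourier shift property for a known one-cell-per-pulse migration; the patent's actual H_RCMC (built from f_c, V, R_0) is not reproduced here:

```python
import numpy as np

M, N = 64, 8
sig = np.zeros((M, N), dtype=complex)
for n in range(N):
    sig[10 + n, n] = 1.0                   # point target migrating across range cells

S = np.fft.fft(sig, axis=0)                # to the range-frequency domain
f_tau = np.fft.fftfreq(M)                  # normalized range frequencies
for n in range(N):
    S[:, n] *= np.exp(2j * np.pi * f_tau * n)   # undo the n-cell shift of pulse n
corrected = np.fft.ifft(S, axis=0)
row = int(np.argmax(np.abs(corrected).sum(axis=1)))  # energy aligned in one row
```

After the phase multiply, the target's energy sits in a single range row across all pulses, which is what allows the subsequent azimuth processing to work on one range cell.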
S3. Construct the azimuth deskew function H_{dechirp}(η) from the platform's ideal velocity and operating range, expressed as:

H_{dechirp}(\eta) = \exp\left( j \frac{2\pi V^2}{\lambda R_0} \eta^2 \right)

Multiply the deskew function H_{dechirp}(η) by the range-migration-corrected signal s_1(τ, η) obtained in step S2 to get y(τ, η), and represent y(τ, η) discretely as y_{m,n}, where m = 1, 2, …, M, n = 1, 2, …, N, M is the number of range samples, and N is the number of azimuth samples. Let the azimuth phase error of the echo data be Φ = (φ_1, φ_2, φ_3, …, φ_N), where φ_n (n = 1, 2, …, N) denotes the phase error of the n-th azimuth unit; then y_{m,n} is expressed as:

y_{m,n} = \tilde{y}_{m,n} \cdot e^{j\varphi_n}

where \tilde{y}_{m,n} is the discrete representation, under ideal conditions, of the product of the range-migration-corrected signal and the azimuth deskew function. Apply an azimuth discrete Fourier transform to y_{m,n} to obtain the two-dimensional image z_{m,n} without self-focusing processing, expressed as:

z_{m,n} = \frac{1}{N} \sum_{k=1}^{N} \exp(j 2\pi k n / N)\, \tilde{y}_{m,k}\, e^{j\varphi_k}

where k = 1, 2, …, N indexes the azimuth units. For the two-dimensional image z_{m,n} without self-focusing processing, compute the image contrast C_0, expressed as:

C_0 = \sum_{q=1}^{N \times M} \left( z_q z_q^* \right)^2

where z_q is the q-th element of the two-dimensional image z_{m,n}, q = 1, 2, …, N×M, N×M is the number of matrix elements, and (·)^* denotes conjugation;
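The contrast metric of S3 is simply the sum of |z|^4 over all pixels, and an uncompensated azimuth phase error lowers it. A small sketch, using a toy single-target azimuth line rather than the patent's echo model:

```python
import numpy as np

def image_contrast(z):
    """Image contrast as defined in step S3: sum over pixels of (z z*)^2 = |z|^4."""
    return float(np.sum(np.abs(z) ** 4))

N = 128
phase_error = 3.0 * np.linspace(-1.0, 1.0, N) ** 2    # quadratic phase error, rad
y_ideal = np.ones(N, dtype=complex)                   # one point target
z_focused = np.fft.fft(y_ideal) / N                   # all energy in one cell
z_defocused = np.fft.fft(y_ideal * np.exp(1j * phase_error)) / N
c_focused = image_contrast(z_focused)
c_defocused = image_contrast(z_defocused)
```

Since the total image energy is fixed by Parseval's theorem, concentrating it into fewer cells maximizes the fourth-power sum, which is why maximizing C drives the image toward focus.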
S4. Set a phase-error estimate vector \hat{Φ} = (\hat{φ}_1, \hat{φ}_2, …, \hat{φ}_N). When i = 1, fix \hat{φ}_2, …, \hat{φ}_N and estimate φ_1; when i > 1, fix \hat{φ}_k for k ≠ i and estimate φ_i, where i is the azimuth-unit index;
S5. For the azimuth-unit index i, construct a phase-error compensation vector from the phase error φ_i to be estimated and the phase-error estimates \hat{φ}_k (k ≠ i) of the remaining azimuth units. Multiply y_{m,n} of step S3 by this compensation vector, then apply an azimuth Fourier transform to the product to obtain z'_{m,n}:

z'_{m,n} = \frac{1}{N} \exp(j 2\pi i n / N)\, y_{m,i}\, e^{-j\varphi_i} + \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k}

Substituting z'_{m,n} into the expression for the image contrast C_0 in step S3 yields an analytic expression for the contrast, which simplifies to:

C(\varphi_i) = \left\| F_0 + A \cos\varphi_i + B \sin\varphi_i \right\|^2
where ‖·‖² is the squared vector two-norm and F_0 is a constant vector satisfying

F_0 = [(f_0)_{1,1}, (f_0)_{1,2}, …, (f_0)_{1,N}, (f_0)_{2,1}, …, (f_0)_{2,N}, …, (f_0)_{M,1}, …, (f_0)_{M,N}]

(f_0)_{m,n} = \left| \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k} \right|^2 + \left| \frac{1}{N} \exp(j 2\pi i n / N)\, y_{m,i} \right|^2

A and B are ellipse parameter vectors satisfying

A = [a_{1,1}, a_{1,2}, …, a_{1,N}, a_{2,1}, …, a_{2,N}, …, a_{M,1}, …, a_{M,N}]
B = [b_{1,1}, b_{1,2}, …, b_{1,N}, b_{2,1}, …, b_{2,N}, …, b_{M,1}, …, b_{M,N}]

a_{m,n} = 2\,\mathrm{Re}\left\{ \frac{1}{N} \exp(-j 2\pi i n / N)\, y_{m,i}^* \left[ \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k} \right] \right\}

b_{m,n} = -2\,\mathrm{Im}\left\{ \frac{1}{N} \exp(-j 2\pi i n / N)\, y_{m,i}^* \left[ \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k} \right] \right\}
Using the multidimensional projection method, project the vector F_0 onto the two-dimensional plane spanned by A and B, and solve for the point on the ellipse farthest from x_0, where x_0 is the projection of the coordinate origin onto the plane spanned by A and B, with coordinates x_0 = -[E_1 E_2]^T · F_0; E_1, E_2 are a unit orthogonal basis of the span of A and B, and [·]^T denotes matrix transposition. Using the geometric condition that the vector from the farthest point to x_0 is parallel to the ellipse normal, obtain a quartic polynomial in the parameter α to be solved, expressed as:

\sum_{p=0}^{4} c_p \alpha^p = 0

where p = 0, 1, 2, 3, 4 indexes the polynomial order, and α is related to the phase error φ_i to be solved by:

\begin{bmatrix} \cos\varphi_i \\ \sin\varphi_i \end{bmatrix} = [A\; B]^{-1} (\alpha_{min} R + I)^{-1} x_0

c_0 = \lambda_1 \beta_1^2 + \lambda_2 \beta_2^2 - 1
c_1 = 2\lambda_1(\lambda_2 \beta_2^2 - 1) + 2\lambda_2(\lambda_1 \beta_1^2 - 1)
c_2 = (\lambda_1 \beta_1^2 - 1)\lambda_2^2 + (\lambda_2 \beta_2^2 - 1)\lambda_1^2 - 4\lambda_1\lambda_2
c_3 = -2\lambda_1\lambda_2(\lambda_1 + \lambda_2)
c_4 = -(\lambda_1\lambda_2)^2

λ_1, λ_2 are the eigenvalues of the ellipse parameter matrix R and v_1, v_2 their corresponding eigenvectors, with [β_1 β_2]^T = [v_1 v_2]^T x_0. Solve the quartic polynomial in α and take its smallest real root α_{min}; substitute α_{min} into the relation between α and φ_i to obtain the phase-error estimate \hat{φ}_i;
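Solving the quartic Σ c_p α^p = 0 and selecting the smallest real root is straightforward numerically; a sketch with arbitrary example coefficients (not values derived from real echo data):

```python
import numpy as np

c = [2.0, -3.0, -1.0, 0.0, 1.0]          # c_0 ... c_4, example values only
roots = np.roots(c[::-1])                 # np.roots takes highest power first
real_roots = roots[np.abs(roots.imag) < 1e-9].real
alpha_min = float(real_roots.min())       # smallest real solution

# alpha_min would then be substituted into
#   [cos(phi_i); sin(phi_i)] = [A B]^{-1} (alpha_min * R + I)^{-1} x0
# and the phase recovered, e.g. via phi_i = atan2(sin_phi, cos_phi).
```

Because the quartic has a small fixed degree, this step costs essentially nothing compared with a one-dimensional search over candidate phases.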
S6. Update the phase-error estimate vector \hat{Φ} of step S4 with \hat{φ}_i. If the azimuth-unit index i estimated in step S5 is not equal to N, set i = i + 1 and return to step S5 to estimate φ_i; if i = N, multiply y_{m,n} of step S3 by the phase-error compensation vector and apply an azimuth Fourier transform to obtain the phase-error-compensated two-dimensional focused image \tilde{z}_{m,n}, expressed as:

\tilde{z}_{m,n} = \frac{1}{N} \sum_{k=1}^{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k}

Using the expression for the image contrast C_0 in step S3, compute the contrast C_1 of \tilde{z}_{m,n} and check whether the image contrast satisfies the convergence condition. If it does, output the phase-error estimate vector \hat{Φ} directly; if not, assign the current contrast C_1 to the initial contrast C_0, return to step S4, and repeat steps S5 and S6 to re-estimate φ_1, φ_2, φ_3, …, φ_N until the condition is satisfied, then output the phase-error estimate vector \hat{Φ};
S7. From the phase-error estimate vector \hat{Φ} output in step S6, construct the phase-error compensation vector; multiply y_{m,n} of step S3 by it and apply an azimuth Fourier transform to obtain the phase-error-compensated two-dimensional focused image \tilde{z}_{m,n}, expressed as:

\tilde{z}_{m,n} = \frac{1}{N} \sum_{k=1}^{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k}.
Further, the ellipse parameter matrix in step S5 is specifically defined as:

R = \begin{bmatrix} r_1 & r_3 \\ r_3 & r_2 \end{bmatrix}

r_1 = (A_2^2 + B_2^2) / (A_2 B_1 - A_1 B_2)
r_2 = (A_1^2 + B_1^2) / (A_2 B_1 - A_1 B_2)
r_3 = -(A_1 A_2 + B_1 B_2) / (A_2 B_1 - A_1 B_2)

[A_1\; A_2] = A^T [E_1\; E_2]^T
[B_1\; B_2] = B^T [E_1\; E_2]^T
the invention has the beneficial effects that: the synthetic aperture radar self-focusing method carries out multi-dimensional joint estimation on phase errors at each azimuth moment by using a coordinate descent method, converts the problem of solving the maximum contrast by one-dimensional search into the problem of solving a quartic polynomial by using a multi-dimensional projection method, and greatly reduces the operation amount; meanwhile, the method has the characteristic of no order limitation on the estimation of the azimuth phase error.
Drawings
Fig. 1 is a schematic flow chart of a synthetic aperture radar self-focusing method of the present invention.
Fig. 2 is a schematic diagram of the bistatic forward-looking SAR geometry of the present invention.
FIG. 3 is a target scenario layout diagram of the present invention.
Fig. 4 is a schematic representation of the results of the pre-autofocus imaging of the invention.
FIG. 5 is a schematic of the azimuthal phase error of the present invention.
FIG. 6 is a graphical representation of the imaging results after autofocusing according to the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention is verified mainly by simulation experiments, and all steps and conclusions have been checked for correctness in MATLAB 2012. Fig. 1 is a schematic flow chart of the synthetic aperture radar self-focusing method of the present invention. A synthetic aperture radar self-focusing method comprises the following steps:
S1. Acquire two-dimensional echo data and perform range pulse compression by matched filtering; the pulse-compressed point-target echo s_0(τ, η) is expressed as:

s_0(\tau, \eta) = \mathrm{sinc}\left[ \tau - \frac{2R(\eta)}{c} \right] w_{az}(\eta) \exp\left[ -j \frac{4\pi}{\lambda} R(\eta) \right]

where c is the speed of light, η is the azimuth slow time, τ is the range fast time, w_{az}(η) is the azimuth time-domain envelope, λ is the wavelength, and R(η) is the scene-center range history.
As shown in fig. 2, is a schematic diagram of the bistatic forward-looking SAR geometry of the present invention. The parameters required for the simulation of the present invention are shown in the following table.
Compute the radar range history of each point target in the central area of the imaging region and generate the SAR point-target simulated echo matrix. Fig. 3 shows the layout of the target scene. Apply a range Fourier transform to the echo, then multiply in the range direction by the matched-filter function H_1(f_τ) to realize range pulse compression.

Here the range frequency f_τ varies over [-24, 24] MHz, and the range time τ varies over [6.13 × 10^{-5}, 7.20 × 10^{-5}] s.
S2, point target echo data S in step S10(tau, η) performing two-dimensional Fourier transform to obtain two-dimensional frequency domain echo signal S0(fτ,fη) Then two-dimensional frequency domain echo signal S0(fτ,fη) Riding deviceCorrecting phase H by migrationRCMC(fτ,fη) And performing Fourier inversion on the result obtained by the multiplication to a two-dimensional time domain to obtain a two-dimensional time domain signal s after range migration correction1(τ, η), expressed as:
s 1 ( τ , η ) = sin c [ τ - 2 R 0 c ] w a z ( η ) exp [ - j 4 π λ R ( η ) ]
wherein the migration corrects the phasefcIs the carrier frequency, V is the platform motion velocity, R0Instantaneous slope distance, f, at the moment of Doppler center crossingτIs a distance to a reference frequency, fηFor the azimuth reference frequency, azimuth frequency fηThe variation range is [ -300,300 [ -300]Hz, azimuth time η range of [ -0.425,0.425]Second, η is 0 seconds when the beam center irradiates the target.
S3, constructing an azimuth declivity function H by using the ideal speed and action distance information of the platformdechirp(η), expressed as:
H d e c h i r p ( η ) = exp ( j 2 πV 2 λR 0 η 2 )
declivity function Hdechirp(η) and the distance migration corrected two-dimensional time domain signal S obtained in step S21(τ, η) to obtain y (τ, η) and discretely represent y (τ, η) as ym,nWherein M is 1,2, 3., M, N is 1,2, 3., N, M is the number of sampling points in the direction of distance, and N is the number of sampling points in the direction of direction; the phase error of the echo data in the azimuth direction is phi ═ phi123,…,φN) Wherein phin(N ═ 1,2, 3.., N) denotes the phase error of the nth azimuth unit in the echo data, then y will bem,nExpressed as:
y m , n = y ~ m , n · e jφ n
wherein,the method is a result of discrete representation after multiplication of a distance migration corrected two-dimensional time domain signal and an azimuth deskew function under an ideal condition; will ym,nPerforming orientation discrete Fourier transform to obtain a two-dimensional image z without self-focusing processingm,nExpressed as:
z m , n = 1 / N Σ k = 1 N exp ( j 2 π k n / N ) y ~ m , k e jφ k
wherein k is 1,2.. N is the number of azimuth units; for two-dimensional image z without self-focusing processingm,nCalculating the image contrast C thereof0Expressed as:
C 0 = Σ q = 1 N × M ( z q × z q * ) 2
wherein z isqAs a two-dimensional image zm,nQ is 1,2, and N × M is the number of matrix elements, [ · q]*Is a conjugate operation.
Here the azimuth phase error in the echo signal is set to Φ(η) = 6η^2 + 3η^3 - 2η^4 + η^5. Discretize Φ(η) into an N-dimensional phase-error vector Φ and add it to the echo signal to obtain y_{m,k}, the product of the azimuth deskew function and s_1(τ, η) under actual conditions. Apply an azimuth Fourier transform to y_{m,k} to obtain the pre-autofocus imaging result z_{m,n}, and compute the image contrast of the SAR imaging result before self-focusing. Fig. 4 shows the imaging result before autofocus. The image contrast before the self-focusing processing is C_0 = 1.19 × 10^{18}.
S4, setting a phase error estimation vectorWhen i is 1, is fixedEstimate phi1When i is>At 1 time, fixEstimate phiiAnd i is the number of azimuth units.
Due to estimating a multi-dimensional phase error phi123,…,φNThe method adopts a coordinate descent method, namely, the phase errors of other azimuth units are kept fixed in each iteration, and one-dimensional search is carried out along the coordinate direction of the phase error of a specific azimuth unit.
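A simplified sketch of this S4–S6 loop: sweep the azimuth units and, for each unit, maximize the image contrast over its phase by a coarse grid search. The patent replaces this inner search with the closed-form quartic solution of S5; the data below are random stand-ins for the deskewed signal y_{m,n}:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 16
y_ideal = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
phi_true = rng.uniform(-1.0, 1.0, N)
y = y_ideal * np.exp(1j * phi_true)              # data corrupted by phase error

def contrast(y, phi_hat):
    z = np.fft.fft(y * np.exp(-1j * phi_hat), axis=1) / N
    return float(np.sum(np.abs(z) ** 4))

phi_hat = np.zeros(N)
grid = np.linspace(-np.pi, np.pi, 181)           # candidate phases (includes 0)
for _ in range(3):                               # a few coordinate-descent sweeps
    for i in range(N):
        trial = phi_hat.copy()
        scores = []
        for p in grid:
            trial[i] = p                         # vary only coordinate i
            scores.append(contrast(y, trial))
        phi_hat[i] = grid[int(np.argmax(scores))]

c_before = contrast(y, np.zeros(N))
c_after = contrast(y, phi_hat)
```

Each coordinate update can only keep or raise the contrast (the current value is always among the candidates), so the sweeps improve focus monotonically; the grid search here costs O(grid size) contrast evaluations per unit, which is exactly the expense the quartic shortcut removes.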
S5, when the number i of azimuth units of the phase error is equal to 1, utilizing the phase error phi to be estimatediAnd the phase error estimates of the remaining azimuth elementsConstructing a phase error compensation vectorWhen i ≠ 1, the phase error phi to be estimated is utilizediAnd the phase error estimates of the remaining azimuth elementsConstructing a phase error compensation vectorWill y in step S3m,nMultiplying the phase error compensation vector, and then performing azimuth Fourier transform on the result obtained after multiplication to obtain
z m , n ′ = 1 N exp ( j 2 π n / N ) y m , i e - jφ i + Σ k ≠ i 1 N exp ( j 2 π k n / N ) y m , k e - j φ ^ k
To z 'obtained'm,nAccording to the image contrast C in step S30The expression (c) leads to an analytical expression of the contrast ratio, and is simplified to obtain:
C(φi)=||F0+A cosφi+B sinφi||2
where C(φ_i) is the squared norm of the vector F_0 + A cos φ_i + B sin φ_i, i.e. the squared distance in the N×M-dimensional space from the coordinate origin to the point of the ellipse A cos φ_i + B sin φ_i translated by F_0; ‖·‖² is the squared vector two-norm, and F_0 is a constant vector satisfying

F_0 = [(f_0)_{1,1}, (f_0)_{1,2}, …, (f_0)_{1,N}, (f_0)_{2,1}, …, (f_0)_{2,N}, …, (f_0)_{M,1}, …, (f_0)_{M,N}]

(f_0)_{m,n} = \left| \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k} \right|^2 + \left| \frac{1}{N} \exp(j 2\pi i n / N)\, y_{m,i} \right|^2

A and B are ellipse parameter vectors satisfying

A = [a_{1,1}, a_{1,2}, …, a_{1,N}, a_{2,1}, …, a_{2,N}, …, a_{M,1}, …, a_{M,N}]
B = [b_{1,1}, b_{1,2}, …, b_{1,N}, b_{2,1}, …, b_{2,N}, …, b_{M,1}, …, b_{M,N}]

a_{m,n} = 2\,\mathrm{Re}\left\{ \frac{1}{N} \exp(-j 2\pi i n / N)\, y_{m,i}^* \left[ \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k} \right] \right\}

b_{m,n} = -2\,\mathrm{Im}\left\{ \frac{1}{N} \exp(-j 2\pi i n / N)\, y_{m,i}^* \left[ \sum_{k \neq i} \frac{1}{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k} \right] \right\}
The ellipse parameter matrix is defined as:

R = \begin{bmatrix} r_1 & r_3 \\ r_3 & r_2 \end{bmatrix}

r_1 = (A_2^2 + B_2^2) / (A_2 B_1 - A_1 B_2)
r_2 = (A_1^2 + B_1^2) / (A_2 B_1 - A_1 B_2)
r_3 = -(A_1 A_2 + B_1 B_2) / (A_2 B_1 - A_1 B_2)

[A_1\; A_2] = A^T [E_1\; E_2]^T
[B_1\; B_2] = B^T [E_1\; E_2]^T
vector F is projected by multi-dimensional projection method0Projected onto a two-dimensional plane spanned by vector A, B, the phase error φ corresponding to the maximum image contrast will be resolved1Conversion to solving an ellipse to x on a two-dimensional planeoIs the farthest point from, where xoIs a projected point with the origin of coordinates in a two-dimensional plane spanned by A, B and having coordinates of x0=-[E1E2]T·F0,E1、E2Is the unit orthogonal basis of vector A, B [ ·]TPerforming matrix transposition operation; using the farthest point to xoThe vector of (2) is parallel to the geometric relation of the normal vector of the ellipse, the maximum distance from a point outside the ellipse to be solved is converted into a fourth-order polynomial about the parameter to be solved α, and the polynomial is expressed as:
Σ p = 0 4 c p α p = 0
where p is 0,1,2,3,4, which is a polynomial order, and the parameter α is related to the phase error phi to be solvediThe relationship is as follows:
c o s φ i sinφ i = [ A B ] - 1 ( α m i n R + I ) - 1 x 0
c 0 = λ 1 β 1 2 + λ 2 β 2 2 - 1 c 1 = 2 λ 1 ( λ 2 β 2 2 - 1 ) + 2 λ 2 ( λ 1 β 1 2 - 1 ) c 2 = ( λ 1 β 1 2 - 1 ) λ 2 2 + ( λ 2 β 2 2 - 1 ) λ 1 2 - 4 λ 1 λ 2 c 3 = - 2 λ 1 λ 2 ( λ 1 + λ 2 ) c 4 = - ( λ 1 λ 2 ) 2
λ1、λ2is the eigenvalue, v, of an elliptic parameter matrix R1、v2For its corresponding feature vector, [ β ]1β2]T=[v1v2]Tx0Solving a quartic polynomial about the parameter to be solved α to obtain a minimum real number solution α of αminα will beminCarry-in parameter α and phase error phi to be solvediIn the relation, the estimated value of the phase error is obtained
S6. Update the phase-error estimate vector \hat{Φ} of step S4 with \hat{φ}_i. If the azimuth-unit index i estimated in step S5 is not equal to N, set i = i + 1 and return to step S5 to estimate φ_i; if i = N, multiply y_{m,n} of step S3 by the phase-error compensation vector and apply an azimuth Fourier transform to obtain the phase-error-compensated two-dimensional focused image \tilde{z}_{m,n}, expressed as:

\tilde{z}_{m,n} = \frac{1}{N} \sum_{k=1}^{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k}

Using the expression for the image contrast C_0 in step S3, compute the contrast C_1 of \tilde{z}_{m,n} and check whether the image contrast satisfies the convergence condition. If it does, output the phase-error estimate vector \hat{Φ} directly; if not, assign the current contrast C_1 to the initial contrast C_0, return to step S4, and repeat steps S5 and S6 to re-estimate φ_1, φ_2, φ_3, …, φ_N until the condition is satisfied, then output the phase-error estimate vector \hat{Φ}. Here the parameter D_0 = 10^{-3}; after three cycles the convergence inequality holds, and the image contrast is then C_1 = 7.37 × 10^{18}. The actual phase error Φ and its estimate \hat{Φ} are shown in Fig. 5.
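The outer stopping test can be sketched as follows, assuming the condition is a relative contrast change below the threshold D_0 (the embodiment uses D_0 = 10^{-3}); the patent's exact inequality is not reproduced, and the contrast sequence below is invented for illustration:

```python
def converged(c_prev, c_curr, d0=1e-3):
    # stop when the relative contrast improvement drops below d0 (assumed form)
    return abs(c_curr - c_prev) / c_prev < d0

contrasts = [1.19e18, 5.0e18, 7.3e18, 7.37e18, 7.37e18]   # contrast per outer cycle
stop_at = None
for k in range(1, len(contrasts)):
    if converged(contrasts[k - 1], contrasts[k]):
        stop_at = k                        # first cycle meeting the condition
        break
```

A relative (rather than absolute) threshold keeps the test meaningful even though the contrast values themselves are on the order of 10^18.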
S7. From the phase-error estimate vector \hat{Φ} output in step S6, construct the phase-error compensation vector; multiply y_{m,n} of step S3 by it and apply an azimuth Fourier transform to obtain the phase-error-compensated two-dimensional focused image \tilde{z}_{m,n}, expressed as:

\tilde{z}_{m,n} = \frac{1}{N} \sum_{k=1}^{N} \exp(j 2\pi k n / N)\, y_{m,k}\, e^{-j\hat{\varphi}_k}.
Fig. 6 shows the imaging result after self-focusing. The invention estimates the azimuth phase error without order limitation and with high accuracy: the mean square error of the estimate is only 0.061. Under the same conditions, the method takes 61.42 seconds to estimate the azimuth phase error, whereas processing the same embodiment with the successive-approximation autofocus method takes 487.89 seconds; the efficiency is thus improved by a factor of 7.9 relative to the successive-approximation method.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the invention is not limited to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (2)

1. A synthetic aperture radar self-focusing method, comprising the steps of:
s1, acquiring two-dimensional echo data, utilizing a matched filtering method to realize range direction pulse compression, and obtaining point target echo data s after pulse compression0(τ, η) is expressed as:
s 0 ( τ , η ) = sin c [ τ - 2 R ( η ) c ] w a z ( η ) exp [ - j 4 π λ R ( η ) ]
where c is the speed of light, η is the azimuth slow time, τ is the distance fast time, waz(η) is azimuth time domain envelope, lambda is wavelength, and R (η) is scene center distance history;
S2. Apply a two-dimensional Fourier transform to the point-target echo data $s_0(\tau,\eta)$ from step S1 to obtain the two-dimensional frequency-domain echo signal $S_0(f_\tau,f_\eta)$; multiply $S_0(f_\tau,f_\eta)$ by the migration-correction phase $H_{RCMC}(f_\tau,f_\eta)$ and apply an inverse Fourier transform to the product back to the two-dimensional time domain, obtaining the range-migration-corrected two-dimensional time-domain signal $s_1(\tau,\eta)$, expressed as:
$$s_1(\tau,\eta)=\operatorname{sinc}\!\left[\tau-\frac{2R_0}{c}\right]w_{az}(\eta)\exp\!\left[-j\frac{4\pi}{\lambda}R(\eta)\right]$$
where $H_{RCMC}(f_\tau,f_\eta)$ is the migration-correction phase, $f_c$ is the carrier frequency, $V$ is the platform motion velocity, $R_0$ is the instantaneous slant range at the Doppler-center crossing time, $f_\tau$ is the range frequency, and $f_\eta$ is the azimuth frequency;
S3. Construct the azimuth deskew function $H_{dechirp}(\eta)$ from the ideal platform velocity and operating-range information, expressed as:
$$H_{dechirp}(\eta)=\exp\!\left(j\frac{2\pi V^2}{\lambda R_0}\eta^2\right)$$
Multiply the deskew function $H_{dechirp}(\eta)$ by the range-migration-corrected two-dimensional time-domain signal $s_1(\tau,\eta)$ obtained in step S2 to get $y(\tau,\eta)$, and represent $y(\tau,\eta)$ discretely as $y_{m,n}$, where $m=1,2,3,\dots,M$, $n=1,2,3,\dots,N$, $M$ is the number of range samples, and $N$ is the number of azimuth samples. Denote the azimuth phase error of the echo data by $\boldsymbol{\phi}=(\phi_1,\phi_2,\phi_3,\dots,\phi_N)$, where $\phi_n$ ($n=1,2,3,\dots,N$) is the phase error of the $n$-th azimuth unit in the echo data; then $y_{m,n}$ is expressed as:
$$y_{m,n}=\tilde{y}_{m,n}\cdot e^{j\phi_n}$$
where $\tilde{y}_{m,n}$ is the discrete representation, under ideal (error-free) conditions, of the product of the range-migration-corrected two-dimensional time-domain signal and the azimuth deskew function. Apply an azimuth discrete Fourier transform to $y_{m,n}$ to obtain the two-dimensional image $z_{m,n}$ without self-focusing processing, expressed as:
$$z_{m,n}=\frac{1}{N}\sum_{k=1}^{N}\exp(j2\pi kn/N)\,\tilde{y}_{m,k}\,e^{j\phi_k}$$
where $k=1,2,\dots,N$ and $N$ is the number of azimuth units. For the two-dimensional image $z_{m,n}$ without self-focusing processing, compute its image contrast $C_0$, expressed as:
$$C_0=\sum_{q=1}^{N\times M}\left(z_q\,z_q^{*}\right)^2$$
where $z_q$ is an element of the two-dimensional image $z_{m,n}$, $q=1,2,\dots,N\times M$, $N\times M$ is the number of matrix elements, and $(\cdot)^{*}$ denotes conjugation;
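As an illustrative sketch (not part of the claims), the contrast measure above, the sum of squared pixel intensities, can be written and sanity-checked in a few lines; the toy images are placeholders:

```python
import numpy as np

def image_contrast(z):
    """C = sum over all pixels of (z * conj(z))^2, i.e. the sum of squared
    intensities; sharper (better-focused) images score higher."""
    return float(np.sum(np.abs(z) ** 4))

# A focused point (all energy in one pixel) scores higher than the same
# total energy spread over many pixels: the property the method maximizes.
focused = np.zeros((4, 4), dtype=complex); focused[0, 0] = 4.0
blurred = np.ones((4, 4), dtype=complex)    # same total energy (16) as `focused`
print(image_contrast(focused) > image_contrast(blurred))   # True
```

Because the two images carry equal energy, the contrast difference comes entirely from how concentrated that energy is, which is why maximizing this quantity drives the phase-error estimate toward focus.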
S4. Set the phase error estimation vector $\hat{\boldsymbol{\phi}}=(\hat{\phi}_1,\hat{\phi}_2,\dots,\hat{\phi}_N)$. When $i=1$, fix the phase error estimates of the remaining azimuth units and estimate $\phi_1$; when $i>1$, fix the phase error estimates of the remaining azimuth units and estimate $\phi_i$, where $i=1,2,\dots,N$ is the azimuth-unit index;
S5. When the azimuth-unit index $i$ of the phase error equals 1, use the phase error $\phi_i$ to be estimated and the phase error estimates $\hat{\phi}_k$ ($k\neq i$) of the remaining azimuth units to construct a phase error compensation vector; when $i\neq 1$, construct the phase error compensation vector in the same way from $\phi_i$ and the remaining estimates $\hat{\phi}_k$. Multiply $y_{m,n}$ from step S3 by the phase error compensation vector, then apply an azimuth Fourier transform to the product to obtain
$$z'_{m,n}=\frac{1}{N}\exp(j2\pi in/N)\,y_{m,i}\,e^{-j\phi_i}+\sum_{k\neq i}\frac{1}{N}\exp(j2\pi kn/N)\,y_{m,k}\,e^{-j\hat{\phi}_k}$$
From the obtained $z'_{m,n}$, derive an analytic expression for the contrast according to the image-contrast expression for $C_0$ in step S3, and simplify to obtain:
$$C(\phi_i)=\left\|F_0+A\cos\phi_i+B\sin\phi_i\right\|_2^2$$
where $\|\cdot\|_2$ denotes the vector two-norm and $F_0$ is a constant vector satisfying
$$F_0=[(f_0)_{1,1},(f_0)_{1,2},\dots,(f_0)_{1,N},(f_0)_{2,1},(f_0)_{2,2},\dots,(f_0)_{2,N},\dots,(f_0)_{M,1},(f_0)_{M,2},\dots,(f_0)_{M,N}]$$
$$(f_0)_{m,n}=\left|\sum_{k\neq i}\frac{1}{N}\exp(j2\pi kn/N)\,y_{m,k}\,e^{-j\hat{\phi}_k}\right|^2+\left|\frac{1}{N}\exp(j2\pi in/N)\,y_{m,i}\right|^2$$
$A$ and $B$ are elliptic parameter vectors satisfying
$$A=[(a)_{1,1},(a)_{1,2},\dots,(a)_{1,N},(a)_{2,1},(a)_{2,2},\dots,(a)_{2,N},\dots,(a)_{M,1},(a)_{M,2},\dots,(a)_{M,N}]$$
$$B=[(b)_{1,1},(b)_{1,2},\dots,(b)_{1,N},(b)_{2,1},(b)_{2,2},\dots,(b)_{2,N},\dots,(b)_{M,1},(b)_{M,2},\dots,(b)_{M,N}]$$
$$a_{m,n}=2\,\mathrm{Re}\left\{\frac{1}{N}\exp(-j2\pi in/N)\,y_{m,i}^{*}\left[\sum_{k\neq i}\frac{1}{N}\exp(j2\pi kn/N)\,y_{m,k}\,e^{-j\hat{\phi}_k}\right]\right\}$$
$$b_{m,n}=-2\,\mathrm{Im}\left\{\frac{1}{N}\exp(-j2\pi in/N)\,y_{m,i}^{*}\left[\sum_{k\neq i}\frac{1}{N}\exp(j2\pi kn/N)\,y_{m,k}\,e^{-j\hat{\phi}_k}\right]\right\}$$
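The decomposition just defined can be checked numerically (an illustrative sketch, not part of the claims): for a toy matrix `y`, the contrast of $z'_{m,n}$ computed directly agrees with $\|F_0+A\cos\phi_i+B\sin\phi_i\|_2^2$. Indices in the exponentials follow the 1-based convention of the formulas; the data and trial value `phi_i` are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, i = 3, 6, 2                                # i: azimuth unit under estimation (1-based)
y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
phi_hat = rng.uniform(-np.pi, np.pi, N)          # current estimates for the other units

n = np.arange(1, N + 1)
u = np.exp(1j * 2 * np.pi * i * n / N) * y[:, [i - 1]] / N        # the k = i term
w = np.zeros((M, N), dtype=complex)               # sum over the remaining units k != i
for k in range(1, N + 1):
    if k != i:
        w += np.exp(1j * 2 * np.pi * k * n / N) * y[:, [k - 1]] * np.exp(-1j * phi_hat[k - 1]) / N

F0 = (np.abs(w) ** 2 + np.abs(u) ** 2).ravel()   # (f0)_{m,n}
A = (2 * np.real(np.conj(u) * w)).ravel()        # a_{m,n}
B = (-2 * np.imag(np.conj(u) * w)).ravel()       # b_{m,n}

phi_i = 0.7                                      # arbitrary trial value of phi_i
z_prime = u * np.exp(-1j * phi_i) + w            # z'_{m,n}
C_direct = np.sum(np.abs(z_prime) ** 4)          # contrast computed directly
C_ellipse = np.sum((F0 + A * np.cos(phi_i) + B * np.sin(phi_i)) ** 2)
print(np.isclose(C_direct, C_ellipse))           # True
```

This holds because each pixel intensity is $|z'|^2 = f_0 + a\cos\phi_i + b\sin\phi_i$, so the contrast reduces exactly to the squared two-norm form above.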
Using a multi-dimensional projection method, project the vector $F_0$ onto the two-dimensional plane spanned by the vectors $A$ and $B$, and find the point on the ellipse in that plane farthest from $x_0$, where $x_0$ is the projection of the coordinate origin onto the plane spanned by $A$ and $B$, with coordinates $x_0=-[E_1\ E_2]^T\cdot F_0$; here $E_1$, $E_2$ are an orthonormal basis for the span of $A$ and $B$, and $[\cdot]^T$ denotes matrix transpose. Using the geometric condition that the line joining the farthest point and $x_0$ is parallel to the ellipse normal vector at that point, solve a fourth-order polynomial in the unknown parameter $\alpha$, expressed as:
$$\sum_{p=0}^{4}c_p\,\alpha^{p}=0$$
where $p=0,1,2,3,4$ indexes the polynomial orders, and the parameter $\alpha$ is related to the phase error $\phi_i$ to be solved by:
$$\begin{bmatrix}\cos\phi_i\\ \sin\phi_i\end{bmatrix}=[A\ B]^{-1}(\alpha_{\min}R+I)^{-1}x_0$$
$$\begin{cases}
c_0=\lambda_1\beta_1^2+\lambda_2\beta_2^2-1\\
c_1=2\lambda_1(\lambda_2\beta_2^2-1)+2\lambda_2(\lambda_1\beta_1^2-1)\\
c_2=(\lambda_1\beta_1^2-1)\lambda_2^2+(\lambda_2\beta_2^2-1)\lambda_1^2-4\lambda_1\lambda_2\\
c_3=-2\lambda_1\lambda_2(\lambda_1+\lambda_2)\\
c_4=-(\lambda_1\lambda_2)^2
\end{cases}$$
where $\lambda_1$, $\lambda_2$ are the eigenvalues of the elliptic parameter matrix $R$, $v_1$, $v_2$ are the corresponding eigenvectors, and $[\beta_1\ \beta_2]^T=[v_1\ v_2]^T x_0$. Solve the fourth-order polynomial in $\alpha$ to obtain its minimum real solution $\alpha_{\min}$, and substitute $\alpha_{\min}$ into the relation between $\alpha$ and the phase error $\phi_i$ to obtain the phase error estimate $\hat{\phi}_i$;
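Solving the quartic and selecting its minimum real root can be sketched with `numpy.roots` (an illustrative sketch, not part of the claims). The coefficients follow the $c_p$ expressions above; the eigenvalues `lam1`, `lam2` and projections `b1`, `b2` standing in for $\lambda_1,\lambda_2,\beta_1,\beta_2$ are arbitrary placeholder numbers:

```python
import numpy as np

lam1, lam2 = 2.0, 0.5        # eigenvalues of R (placeholder values)
b1, b2 = 0.3, -0.8           # beta_1, beta_2 = [v1 v2]^T x0 (placeholder values)

# Coefficients c_0..c_4 of the quartic, per the expressions above.
c0 = lam1 * b1**2 + lam2 * b2**2 - 1
c1 = 2 * lam1 * (lam2 * b2**2 - 1) + 2 * lam2 * (lam1 * b1**2 - 1)
c2 = (lam1 * b1**2 - 1) * lam2**2 + (lam2 * b2**2 - 1) * lam1**2 - 4 * lam1 * lam2
c3 = -2 * lam1 * lam2 * (lam1 + lam2)
c4 = -(lam1 * lam2) ** 2

roots = np.roots([c4, c3, c2, c1, c0])            # numpy wants highest order first
real_roots = roots[np.abs(roots.imag) < 1e-9].real
alpha_min = real_roots.min()                      # the minimum real solution alpha_min
print(real_roots.size > 0)                        # True for these placeholder values
```

`numpy.roots` returns all four (possibly complex) roots via the companion matrix, so the only extra work is filtering for real roots before taking the minimum.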
S6. Update the phase error estimation vector $\hat{\boldsymbol{\phi}}$ from step S4 with the estimate $\hat{\phi}_i$ obtained in step S5, and judge whether the azimuth-unit index $i$ of the estimated phase error equals $N$. If not, let $i=i+1$ and return to step S5 to estimate $\phi_i$; if so, multiply $y_{m,n}$ from step S3 by the phase error compensation vector $(e^{-j\hat{\phi}_1},\dots,e^{-j\hat{\phi}_N})$ and apply an azimuth Fourier transform to obtain the phase-error-compensated two-dimensional focused image $\tilde{z}_{m,n}$, expressed as:
$$\tilde{z}_{m,n}=\frac{1}{N}\sum_{k=1}^{N}\exp(j2\pi kn/N)\,y_{m,k}\,e^{-j\hat{\phi}_k}$$
Compute the contrast $C_1$ of $\tilde{z}_{m,n}$ according to the image-contrast expression for $C_0$ in step S3, and judge whether the image contrast now satisfies the stopping condition. If so, directly output the phase error estimation vector $\hat{\boldsymbol{\phi}}$; if not, update the initial image contrast by assigning the current contrast $C_1$ to $C_0$, return to step S4, and repeat steps S5 and S6 to estimate $\phi_1,\phi_2,\phi_3,\dots,\phi_N$ until the condition is satisfied, then output the phase error estimation vector $\hat{\boldsymbol{\phi}}$ at that point;
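The overall iteration of steps S4 through S6 amounts to coordinate-wise contrast maximization. The sketch below (illustrative only, not part of the claims) substitutes a brute-force grid search over each $\phi_i$ for the closed-form elliptic solution of step S5, and assumes a stopping rule of relative contrast improvement below a small tolerance, since the claim does not spell out the threshold; the toy scene is a single azimuth point target:

```python
import numpy as np

def focus(y, phi_hat):
    """Compensate e^{-j*phi_hat} along azimuth, then azimuth DFT with 1/N scaling."""
    N = y.shape[1]
    idx = np.arange(1, N + 1)
    F = np.exp(1j * 2 * np.pi * np.outer(idx, idx) / N)   # F[k, n] = exp(j*2*pi*k*n/N)
    return (y * np.exp(-1j * phi_hat)) @ F / N

def contrast(z):
    return float(np.sum(np.abs(z) ** 4))

def autofocus(y, n_grid=64, tol=1e-6, max_iter=10):
    """Coordinate-wise contrast maximization over phi_1..phi_N (steps S4-S6)."""
    N = y.shape[1]
    phi_hat = np.zeros(N)
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)   # 0 lies on this grid
    C0 = contrast(focus(y, phi_hat))
    for _ in range(max_iter):
        for i in range(N):                       # fix the others, estimate unit i (S4-S5)
            best = phi_hat.copy()
            for g in grid:
                trial = phi_hat.copy(); trial[i] = g
                if contrast(focus(y, trial)) > contrast(focus(y, best)):
                    best = trial
            phi_hat = best
        C1 = contrast(focus(y, phi_hat))         # step S6: assumed stopping rule
        if abs(C1 - C0) / C0 < tol:
            break
        C0 = C1
    return phi_hat

# Toy scene: one azimuth point target, then a known random phase error applied.
N, n0 = 8, 3
k = np.arange(1, N + 1)
y_ideal = np.tile(np.exp(-1j * 2 * np.pi * k * n0 / N), (2, 1))  # focuses at bin n0
rng = np.random.default_rng(2)
y = y_ideal * np.exp(1j * rng.uniform(-1.0, 1.0, N))             # corrupted data
z = focus(y, autofocus(y))
print(contrast(z) >= contrast(focus(y, np.zeros(N))))            # True: ascent never decreases
```

Because each coordinate update only replaces the current value when the contrast strictly increases, the contrast is monotonically non-decreasing over the sweeps, mirroring the convergence behavior the claim relies on.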
S7. Using the phase error estimation vector $\hat{\boldsymbol{\phi}}$ output in step S6, construct the phase error compensation vector $(e^{-j\hat{\phi}_1},\dots,e^{-j\hat{\phi}_N})$; multiply $y_{m,n}$ from step S3 by this compensation vector and apply an azimuth Fourier transform to obtain the phase-error-compensated two-dimensional focused image $\tilde{z}_{m,n}$, expressed as:
$$\tilde{z}_{m,n}=\frac{1}{N}\sum_{k=1}^{N}\exp(j2\pi kn/N)\,y_{m,k}\,e^{-j\hat{\phi}_k}.$$
2. The synthetic aperture radar self-focusing method of claim 1, wherein the elliptic parameter matrix $R$ in step S5 is defined as follows:
$$R=\begin{bmatrix} r_1 & r_3\\ r_3 & r_2 \end{bmatrix}$$
$$\begin{cases}
r_1=(A_2^2+B_2^2)/(A_2B_1-A_1B_2)\\
r_2=(A_1^2+B_1^2)/(A_2B_1-A_1B_2)\\
r_3=-(A_1A_2+B_1B_2)/(A_2B_1-A_1B_2)
\end{cases}$$
$$[A_1\ A_2]=A^T[E_1\ E_2]^T$$
$$[B_1\ B_2]=B^T[E_1\ E_2]^T$$
CN201410467978.9A 2014-09-15 2014-09-15 Synthetic aperture radar self-focusing method Active CN104251990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410467978.9A CN104251990B (en) 2014-09-15 2014-09-15 Synthetic aperture radar self-focusing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410467978.9A CN104251990B (en) 2014-09-15 2014-09-15 Synthetic aperture radar self-focusing method

Publications (2)

Publication Number Publication Date
CN104251990A CN104251990A (en) 2014-12-31
CN104251990B true CN104251990B (en) 2016-08-24

Family

ID=52187064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410467978.9A Active CN104251990B (en) 2014-09-15 2014-09-15 Synthetic aperture radar self-focusing method

Country Status (1)

Country Link
CN (1) CN104251990B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104730500B (en) * 2015-02-04 2017-05-10 电子科技大学 Synthetic aperture radar residual range migration correction method
CN106405549A (en) * 2016-08-29 2017-02-15 上海无线电设备研究所 Airborne synthetic aperture radar self-focusing method
CN108427115B (en) * 2018-01-29 2020-06-02 电子科技大学 Method for quickly estimating moving target parameters by synthetic aperture radar
CN108061882A (en) * 2018-01-30 2018-05-22 中国人民解放军国防科技大学 ISAR transverse calibration and Doppler-crossing walking correction method based on modified Newton iteration
CN108732555B (en) * 2018-06-04 2022-05-27 内蒙古工业大学 Automatic driving array microwave imaging motion compensation method
CN109917389B (en) * 2019-04-16 2020-07-17 中国人民解放军国防科技大学 Phase correction method in airborne holographic SAR imaging
CN110554385B (en) * 2019-07-02 2022-10-28 中国航空工业集团公司雷华电子技术研究所 Self-focusing imaging method and device for maneuvering trajectory synthetic aperture radar and radar system
CN113176570B (en) * 2021-04-21 2022-11-15 北京航空航天大学 Squint SAR time domain imaging self-focusing method
CN113219457B (en) * 2021-04-25 2023-01-06 中国科学院空天信息创新研究院 Ultra-wideband frequency-modulated continuous wave SAR self-focusing imaging method
CN113447926B (en) * 2021-06-25 2023-02-28 北京航空航天大学 Method and system for detecting foreign matters on airfield runway based on vehicle-mounted sliding rail SAR imaging

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1601297A (en) * 2003-09-28 2005-03-30 清华大学 Self-focusing method adaptive for low contrast scene composite aperture radar imaging
CN101710174A (en) * 2009-12-10 2010-05-19 南京航空航天大学 Self-focusing method for strip synthetic aperture radar images
CN101806892A (en) * 2010-03-19 2010-08-18 南京航空航天大学 Projection approximation subspace tracking technology-based self-focusing method

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN1601297A (en) * 2003-09-28 2005-03-30 清华大学 Self-focusing method adaptive for low contrast scene composite aperture radar imaging
CN101710174A (en) * 2009-12-10 2010-05-19 南京航空航天大学 Self-focusing method for strip synthetic aperture radar images
CN101806892A (en) * 2010-03-19 2010-08-18 南京航空航天大学 Projection approximation subspace tracking technology-based self-focusing method

Non-Patent Citations (3)

Title
Phase Gradient Autofocus: A Robust Tool for High Resolution SAR Phase Correction; D.E. Wahl et al.; IEEE Transactions on Aerospace and Electronic Systems; 1994-07-31; Vol. 30, No. 3; 826-835 *
SAR image autofocus via contrast maximization; Wu Xinwei et al.; Modern Radar (现代雷达); 2002-05-31; No. 3; 20-22 *
Phase gradient autofocus algorithm for SAR images based on an optimal contrast criterion; Zhao Xia et al.; Remote Sensing Technology and Application (遥感技术与应用); 2005-12-31; Vol. 20, No. 6; 606-610 *

Also Published As

Publication number Publication date
CN104251990A (en) 2014-12-31

Similar Documents

Publication Publication Date Title
CN104251990B (en) Synthetic aperture radar self-focusing method
CN105842694B (en) A kind of self-focusing method based on FFBP SAR imagings
CN106802416B (en) Fast factorization back projection SAR self-focusing method
CN104833972B (en) A kind of bistatic CW with frequency modulation synthetic aperture radar frequency becomes mark imaging method
CN103760558B (en) Terahertz radar ISAR imaging method
CN108427115B (en) Method for quickly estimating moving target parameters by synthetic aperture radar
CN104777479B (en) Front side based on multi-core DSP regards SAR realtime imaging methods
CN102662171A (en) Synthetic aperture radar (SAR) tomography three-dimensional imaging method
CN105699969A (en) A maximum posterior estimated angle super-resolution imaging method based on generalized Gaussian constraints
CN105116411B (en) A kind of bidimensional self-focusing method suitable for range migration algorithm
CN106054188A (en) Unmanned aerial vehicle synthetic aperture radar imaging range-dependant map drift method
CN102645651A (en) SAR (synthetic aperture radar) tomography super-resolution imaging method
CN104730520A (en) Circumference SAR back projection self-focusing method based on subaperture synthesis
CN103995260A (en) Synthetic aperture radar SAR imaging method and device
CN110109107B (en) Motion error compensation method of synthetic aperture radar frequency domain BP algorithm
CN105447867B (en) Spatial target posture method of estimation based on ISAR images
CN103235309A (en) Near space low-speed platform SAR (Synthetic Aperture Radar) imaging method
CN104316923A (en) Self-focusing method aiming at synthetic aperture radar (Back Projection) imaging
CN104122549A (en) Deconvolution based radar angle super-resolution imaging method
CN104793196A (en) Real-time SAR (synthetic aperture radar) imaging method based on improved range migration algorithm
CN103792534B (en) SAR two-dimension autofocus method based on prior phase structure knowledge
CN103728617B (en) Double-base synthetic aperture radar time domain fast imaging method
Li et al. Bayesian linear regression with cauchy prior and its application in sparse mimo radar
CN106054190A (en) Bistatic foresight SAR frequency domain imaging method based on frequency spectrum optimization modeling
CN103869315A (en) Near space circular synthetic aperture radar rapid back-direction projection imaging method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant