CN112799064B - Cylindrical aperture nonlinear progressive phase iterative imaging method and device - Google Patents


Info

Publication number
CN112799064B
CN112799064B
Authority
CN
China
Prior art keywords
distance
point
radar
pixel
sampling point
Prior art date
Legal status: Active
Application number
CN202011611757.6A
Other languages
Chinese (zh)
Other versions
CN112799064A (en)
Inventor
谭维贤
赵立欣
乞耀龙
黄平平
徐伟
高志奇
Current Assignee
Inner Mongolia University of Technology
Original Assignee
Inner Mongolia University of Technology
Priority date
Filing date
Publication date
Application filed by Inner Mongolia University of Technology
Priority to CN202011611757.6A
Publication of CN112799064A
Application granted
Publication of CN112799064B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present disclosure relates to a method and apparatus for cylindrical aperture nonlinear progressive phase iterative imaging. The method comprises: moving a one-dimensional arc array along the Z-axis direction to form a synthetic-aperture or real-aperture radar whose sampling points lie on the same cylindrical surface; performing distance error analysis based on a stepped distance compensation factor calculation; and, based on region division of the cylindrical-aperture radar sampling points, iteratively computing the distance history from any point of the observation region to the sampling points via the stepped distance compensation factor, thereby completing three-dimensional reconstruction of the observation region. The apparatus comprises an error analysis module and an imaging module. By constructing a zoned nonlinear progressive Doppler compensation factor to replace the global point-by-point superposition imaging mode, the embodiments of the invention can greatly improve three-dimensional imaging efficiency.

Description

Cylindrical aperture nonlinear progressive phase iterative imaging method and device
Technical Field
The disclosure relates to the field of radar three-dimensional imaging, in particular to a cylindrical aperture nonlinear progressive phase iterative imaging method and device.
Background
Compared with other traditional micro-deformation monitoring means such as GPS, micro-deformation monitoring radar offers all-weather operation, wide coverage and high precision, and is widely applied to micro-deformation detection of high and steep slopes, bridges and other structures. Three-dimensional imaging with a micro-deformation monitoring radar enables three-dimensional resolution imaging and three-dimensional deformation information extraction of an observation region, can effectively suppress layover, top-bottom inversion and other phenomena caused by the observation geometry, and has wide application prospects in slope landslide and building monitoring.
Because the observation region is large, existing imaging algorithms struggle to reconstruct a large scene region in three dimensions. The three-dimensional back-projection algorithm can reconstruct a region of interest point by point and has certain advantages, but because the distance history is calculated point by point, the computational load is high, the imaging efficiency suffers, and real-time monitoring requirements are difficult to meet. In short, fast and accurate reconstruction of a large field-of-view region is not yet possible.
Disclosure of Invention
The disclosure aims to provide a cylindrical aperture nonlinear progressive phase iterative imaging method and device that greatly improve three-dimensional imaging efficiency by constructing a zoned nonlinear progressive Doppler compensation factor to replace the global point-by-point superposition imaging mode.
According to one aspect of the present disclosure, there is provided a method of cylindrical aperture nonlinear progressive phase iterative imaging, comprising:
moving a one-dimensional arc array along the Z-axis direction to form a synthetic-aperture or real-aperture radar whose sampling points lie on the same cylindrical surface;
performing distance error analysis based on a stepped distance compensation factor calculation;
and, based on region division of the cylindrical-aperture radar sampling points, obtaining the distance history from any point of the observation region to the sampling points by iterative calculation with the stepped distance compensation factor, so as to complete the three-dimensional reconstruction of the observation region.
In some embodiments, performing the distance error analysis based on the stepped distance compensation factor calculation comprises:
obtaining an azimuth distance compensation factor;
and selecting a smaller azimuth stepping angle according to the azimuth distance compensation factor so as to reduce the distance error.
In some embodiments,
obtaining the azimuth distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and along the elevation direction relative to the radar start sampling point, to obtain a Maclaurin expression of the distance to the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance.
In some embodiments, wherein performing the distance error analysis comprises:
combining the azimuth distance compensation factors, and solving the distance course from any point of the observation area to the radar sampling point in a stepping way based on the distance from any point of the observation area to the radar initial sampling point;
and solving the distance error generated by the distance process from any point of the observation area to the radar sampling point in a stepping way.
In some embodiments, the method comprises:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing an observation region into a number of pixel regions;
calculating the phase of each pixel point of the imaging region by a zoned stepped phase iteration method;
calculating the size of the sampling point regions according to the stepped distance solving error, and dividing the cylindrical synthetic aperture into a number of sampling point regions;
and calculating the value of each pixel point of the observation region point by point.
In some embodiments, the dividing the observation area into a number of pixel areas includes:
parameter setting, including the number of pixels in an observation area, the size of the pixels, the coordinates of initial pixels in the observation area, the number of pixels in a distance direction and the center frequency of a radar;
calculating a distance compensation factor coefficient;
and calculating the size of the pixel areas and the number of the pixel areas.
In some embodiments, calculating the phase of each pixel point of the imaging region by the zoned stepped phase iteration method includes:
calculating an initial phase and a phase compensation factor based on the number of pixel regions, the number of pixels contained in each pixel region, the radar center frequency, the distance compensation factor coefficients and initial values;
iteratively calculating the phase of the pixel points along the distance direction, including repeating: azimuth iteration, distance iteration, pixel region iteration and elevation iteration.
In some embodiments, wherein iterating comprises:
selecting an observation area pixel point and a radar initial sampling point;
calculating the distance course from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient;
solving the Maclaurin approximation of the distance history;
solving an actual distance course;
and calculating the size of the sampling point region and the number of the azimuth dividing regions, and calculating the sampling point region according to the phase error limiting condition.
In some embodiments, wherein calculating the value of each pixel of the observation region point by point comprises:
and reconstructing each pixel point of the observation area point by point based on an algorithm so as to realize three-dimensional resolution imaging of the observation area.
According to one of the schemes of the present disclosure, a device for cylindrical aperture nonlinear progressive phase iterative imaging is provided, which is used for forming a synthetic aperture or a real aperture radar with sampling points positioned on the same cylindrical surface based on the movement of a one-dimensional arc array along the Z-axis direction; the device comprises:
an error analysis module configured to perform a distance error analysis based on the step-wise distance compensation factor calculation;
the imaging module is configured to be used for carrying out iterative calculation on the distance course from any point of the observation area to the sampling point through a stepping distance compensation factor based on the regional division of the sampling point of the cylindrical aperture radar so as to finish the three-dimensional reconstruction of the observation area.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the cylindrical aperture nonlinear progressive phase iterative imaging method described above.
According to the method and device for cylindrical aperture nonlinear progressive phase iterative imaging in the various embodiments of the disclosure, at least a one-dimensional arc array moves along the Z-axis direction to form a synthetic-aperture or real-aperture radar whose sampling points lie on the same cylindrical surface; distance error analysis is performed based on the stepped distance compensation factor calculation; and, based on region division of the cylindrical-aperture radar sampling points, the distance history from any point of the observation region to the sampling points is obtained by iterative calculation with the stepped distance compensation factor to complete three-dimensional reconstruction of the observation region. The sampling points of the cylindrical synthetic-aperture radar can thus be divided into regions, and within each region the distance from an observation-region pixel point to the sampling points is calculated by stepped iteration, which reduces square-root and exponentiation operations in the distance calculation and improves algorithm efficiency. The embodiments further derive and calculate the distance compensation factor, analyze the distance error caused by solving the distance history through stepped iteration, and calculate the maximum extent of the sampling point region division according to a phase error condition. In addition, the observation region is divided according to the geometric particularity of the cylindrical observation geometry, the pixel point phases of the observation region are solved by a stepped phase iteration method, and a phase compensation factor replaces point-by-point phase solving of the observation region, which further reduces square-root and exponentiation operations during phase solving and further improves algorithm efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.
Drawings
In the drawings, which are not necessarily to scale, like reference numerals in different views may designate like components. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of similar components. The accompanying drawings generally illustrate various embodiments by way of example, and not by way of limitation, and are used in conjunction with the description and claims to explain the disclosed embodiments.
FIG. 1 is a schematic diagram, in parts (a) and (b), of a cylindrical aperture micro-deformation monitoring radar according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a step-wise distance solution according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of an observation area division according to an embodiment of the present disclosure;
FIG. 4 shows the phase matrix PHI_3D of the observation-region pixel points according to an embodiment of the disclosure;
Fig. 5 is a schematic view of three-dimensional reconstruction of a pixel point of an observation area according to an embodiment of the disclosure;
FIG. 6 is a flow chart of a step-wise distance calculation according to an embodiment of the present disclosure;
FIG. 7 is a view of a pixel division of an observation area in an embodiment of the disclosure;
FIG. 8 is a flow chart of phase calculation of a pixel point of an observation area according to an embodiment of the disclosure;
FIG. 9 is a sample point region division flow chart according to an embodiment of the present disclosure;
FIG. 10 is a flow chart of a three-dimensional reconstruction of an observation region according to an embodiment of the present disclosure;
fig. 11 is a flow chart of an imaging method according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present disclosure. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art without the need for inventive faculty, are within the scope of the present disclosure, based on the described embodiments of the present disclosure.
Unless defined otherwise, technical or scientific terms used in this disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items.
In order to keep the following description of the embodiments of the present disclosure clear and concise, the present disclosure omits detailed description of known functions and known components.
As shown in FIG. 1, the cylindrical aperture three-dimensional imaging radar moves a one-dimensional arc array along the Z-axis direction to form a synthetic-aperture or real-aperture radar whose sampling points lie on the same cylindrical surface. The radar can realize three-dimensional resolution imaging of an observation region. The geometric model is shown in FIG. 1, taking FIGS. 1(a) and 1(b) as examples: the arc center of the arc array is O'(0, 0, h) and the arc radius is ρ; a radar sampling point P_Radar(ρ, θ, h) lies on the arc centered at O', where ρ denotes the radial distance from P_Radar to the arc center O' and θ denotes the angle between the positive X-axis direction and the line connecting P_Radar with the arc center O'. Θ_SynA denotes the radar azimuth synthetic aperture angle and L_SynE denotes the radar elevation synthetic aperture length; Δθ denotes the radar azimuth sampling interval and Δh the radar elevation sampling interval; N_θ denotes the number of radar azimuth sampling points and N_h the number of radar elevation sampling points. Note that the azimuth synthetic aperture angle Θ_SynA can be any value from 0 to 360 degrees; the figure shows the case Θ_SynA = 180°.
As one aspect, embodiments of the present disclosure provide a method of cylindrical aperture nonlinear progressive phase iterative imaging, comprising:
moving a one-dimensional arc array along the Z-axis direction to form a synthetic-aperture or real-aperture radar whose sampling points lie on the same cylindrical surface;
performing distance error analysis based on a stepped distance compensation factor calculation;
and, based on region division of the cylindrical-aperture radar sampling points, obtaining the distance history from any point of the observation region to the sampling points by iterative calculation with the stepped distance compensation factor, so as to complete the three-dimensional reconstruction of the observation region.
One of the inventive concepts of the present disclosure aims to realize region division of sampling points of a cylindrical synthetic aperture radar, and in each region, by calculating the distance from a pixel point of an observation region to the sampling point in a stepping iteration manner, root index operation during distance calculation is reduced, and algorithm efficiency is improved.
The present disclosure is directed to two main aspects: the stepped distance compensation factor calculation together with the distance error analysis, and the cylindrical aperture zoned nonlinear progressive phase iterative imaging method carried out on that basis. The executing subject of the present disclosure is not limited, as long as it is a device, apparatus or imaging system capable of cylindrical aperture nonlinear progressive phase iterative imaging.
In some embodiments, the step-based distance compensation factor calculation of the present disclosure, performing a distance error analysis, includes:
obtaining an azimuth distance compensation factor;
and acquiring a smaller azimuth stepping angle according to the azimuth distance compensation factor so as to reduce the distance error.
Specifically, the azimuth distance compensation factor ΔAzi is calculated first, the error Er of the stepped distance calculation is then analyzed, and a method for calculating the maximum value of the error over a sampling point region is given.
As shown in FIG. 2, P_n(R_n, θ_n, z_n) is an arbitrary point of the observation region and P_Radar(ρ, θ, h) is a radar sampling point. R_n is the distance from the observation-region point P_n to the Z axis, θ_n is the angle between the positive X-axis direction and the line connecting the origin with the projection of P_n onto the XOY plane, and z_n is the Z coordinate of P_n; ρ denotes the radial distance from the radar sampling point P_Radar to the arc center O', θ denotes the angle between the positive X-axis direction and the line connecting P_Radar with the arc center O', and h denotes the Z coordinate of the radar sampling point.
The radar starts at θ = θ_0, h = h_0, i.e. at the start sampling point P_Radar-Start(ρ, θ_0, h_0), and samples at equal intervals along the azimuth direction (A) and the elevation direction (E), with azimuth and elevation sampling intervals Δθ and Δh respectively. In the radar sampling point coordinates P_Radar(ρ, θ, h), θ and h can then be expressed as:
θ = θ_0 + n_θ·Δθ (1)
h = h_0 + n_h·Δh (2)
where n_θ denotes the number of steps of the radar sampling point P_Radar along the azimuth direction relative to the radar start sampling point P_Radar-Start, n_θ = 0, 1, …, (N_θ − 1); n_h denotes the number of steps along the elevation direction, n_h = 0, 1, …, (N_h − 1); N_θ is the number of radar azimuth sampling points and N_h the number of radar elevation sampling points. An arbitrary radar sampling point is thus expressed as P_Radar(ρ, θ_0 + n_θ·Δθ, h_0 + n_h·Δh).
FIG. 2 shows only the sampling points stepped in azimuth, i.e. n_h = 0.
To solve, in a stepped manner, the distance from an arbitrary observation-region point P_n to the radar sampling points on the same elevation, P_Radar-nθ(ρ, θ_0 + n_θ·Δθ, h_0), the azimuth distance compensation factor ΔAzi is calculated first, and the distances are then solved step by step using this compensation factor.
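For concreteness, the cylindrical sampling grid defined by formulas (1) and (2) can be generated as in the following minimal NumPy sketch. It uses the example parameter values given later in this description and is an illustration only, not the patented implementation.

```python
import numpy as np

# Minimal sketch (not the patented implementation): generate the cylindrical
# sampling grid of formulas (1) and (2) and convert each radar sampling point
# P_Radar(rho, theta, h) to Cartesian coordinates. Parameter values are the
# example values given later in this description.
rho, theta0, h0 = 0.6, 0.0, -0.8           # arc radius and start sampling point
dtheta, dh = np.deg2rad(0.6), 0.008        # azimuth / elevation sampling intervals
N_theta, N_h = 301, 201                    # numbers of azimuth / elevation sampling points

theta = theta0 + np.arange(N_theta) * dtheta   # formula (1)
h = h0 + np.arange(N_h) * dh                   # formula (2)

T, H = np.meshgrid(theta, h)                   # (N_h, N_theta) grids
radar_xyz = np.stack([rho * np.cos(T), rho * np.sin(T), H], axis=-1)
print(radar_xyz.shape)                         # (201, 301, 3): all points on one cylinder
```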
In some embodiments, obtaining the azimuth distance compensation factor according to the present disclosure comprises:
setting the number of steps of a radar sampling point along the azimuth direction and along the elevation direction relative to the radar start sampling point, to obtain a Maclaurin expression of the distance to the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance.
FIG. 2 shows that the distance from an arbitrary observation-region point P_n(R_n, θ_n, z_n) to the radar start sampling point P_Radar-Start(ρ, θ_0, h_0) is:
R(P_n, P_Radar-Start) = sqrt(R_n^2 + ρ^2 − 2·R_n·ρ·cos(θ_n − θ_0) + (z_n − h_0)^2) (3)
When calculating the azimuth distance compensation factor, the radar sampling point P_Radar-nθ has been stepped n_θ times along the azimuth direction relative to the radar start sampling point P_Radar-Start and 0 times along the elevation direction; its coordinates are expressed as P_Radar-nθ(ρ, θ_0 + n_θ·Δθ, h_0), where n_θ = 0, 1, …, (N_θ − 1). The distance from an arbitrary observation-region point P_n(R_n, θ_n, z_n) to the radar sampling point P_Radar-nθ is:
R(P_n, P_Radar-nθ) = sqrt(R_n^2 + ρ^2 − 2·R_n·ρ·[cos(θ_n − θ_0)·cos(n_θ·Δθ) + sin(θ_n − θ_0)·sin(n_θ·Δθ)] + (z_n − h_0)^2) (4)
where R_n is the distance from the observation-region point P_n to the Z axis, θ_n is the angle between the positive X-axis direction and the line connecting the origin with the projection of P_n onto the XOY plane, and z_n is the Z coordinate of P_n; ρ denotes the radial distance from the radar start sampling point P_Radar-Start to the arc center O', θ_0 denotes the angle between the positive X-axis direction and the line connecting P_Radar-Start with the arc center O', and h_0 denotes the Z coordinate of P_Radar-Start.
The terms cos(n_θ·Δθ) and sin(n_θ·Δθ) in formula (4) are expanded as Maclaurin series:
cos(n_θ·Δθ) = 1 − (n_θ·Δθ)^2/2 + o((n_θ·Δθ)^2) (5)
sin(n_θ·Δθ) = n_θ·Δθ + o(n_θ·Δθ) (6)
where o(·) is the Peano remainder and n_θ·Δθ ∈ [0, 1].
The Peano remainders o(·) in formulas (5) and (6) are ignored, and the distance calculation error caused by this omission is analyzed later.
Ignoring the Peano remainders in formulas (5) and (6) and substituting formulas (3), (5) and (6) into formula (4), the Maclaurin approximate distance from an arbitrary observation-region point P_n(R_n, θ_n, z_n) to the radar sampling point P_Radar-nθ is:
R_Mac(P_n, P_Radar-nθ) ≈ R(P_n, P_Radar-Start) + B_θ·Δθ·n_θ + A_θ·Δθ^2·n_θ^2 (7)
where the azimuth distance compensation factor coefficients A_θ and B_θ are given by formulas (8) and (9) in terms of R_n, ρ, θ_n − θ_0 and the distance R(P_n, P_Radar-Start).
Differentiating formula (7) with respect to n_θ gives the azimuth distance compensation factor:
ΔAzi(n_θ) = 2·A_θ·Δθ^2·n_θ + B_θ·Δθ (10)
where R_n, θ_n and z_n are as defined above; ρ denotes the radial distance from the radar start sampling point P_Radar-Start to the arc center O' and θ_0 the angle between the positive X-axis direction and the line connecting P_Radar-Start with the arc center O'; n_θ is the number of azimuth steps of the radar sampling point P_Radar-nθ relative to the start sampling point P_Radar-Start; Δθ is the radar azimuth sampling interval; R_Mac(P_n, P_Radar-nθ) is the Maclaurin approximate distance from the observation-region point P_n to the radar sampling point P_Radar-nθ; R(P_n, P_Radar-Start) is the distance from the observation-region point P_n to the radar start sampling point P_Radar-Start; and A_θ, B_θ are the azimuth distance compensation factor coefficients.
A flow chart of the algorithm that calculates the distance history step by step using the azimuth distance compensation factor is shown in FIG. 6, for an arbitrary observation-region point P_n(R_n, θ_n, z_n), the radar start sampling point P_Radar-Start(ρ, θ_0, h_0), the radar azimuth sampling interval Δθ and the number of azimuth sampling points of the region N_SubAzi.
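The following NumPy sketch illustrates this stepped solution for one observation point and one sampling point region. The expressions used for A_θ and B_θ are an assumption chosen to be consistent with formulas (7) and (10); the patent's own formulas (8) and (9) are not reproduced in this text, so the sketch is illustrative only.

```python
import numpy as np

# Illustrative sketch of the stepped distance solution of FIG. 6 for one
# observation point and one region of N_SubAzi azimuth steps. The expressions
# used for A_theta and B_theta are an assumption consistent with formulas (7)
# and (10); the patent's formulas (8) and (9) are not reproduced in this text.
rho, theta0, h0 = 0.6, 0.0, -0.8
dtheta = np.deg2rad(0.6)
N_SubAzi = 40                                      # assumed region size (azimuth steps)
Rn, thn, zn = 1000.0, np.deg2rad(90.0), 10.0       # observation point P_n (example values)

def exact_dist(n_theta):                           # formula (4)
    return np.sqrt(Rn**2 + rho**2
                   - 2.0 * Rn * rho * np.cos(thn - theta0 - n_theta * dtheta)
                   + (zn - h0)**2)

R_start = exact_dist(0)                            # formula (3)
A = Rn * rho * np.cos(thn - theta0) / (2.0 * R_start)   # assumed form of A_theta
B = -Rn * rho * np.sin(thn - theta0) / R_start          # assumed form of B_theta

r = R_start
for n in range(N_SubAzi):
    r += 2.0 * A * dtheta**2 * n + B * dtheta      # add DeltaAzi(n), formula (10)

print(r, exact_dist(N_SubAzi), abs(r - exact_dist(N_SubAzi)))   # stepped vs exact, error Er
```

The printed difference corresponds to the stepped distance error Er discussed next.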
In some implementations, the performing distance error analysis of the present disclosure includes:
combining the azimuth distance compensation factors, and solving the distance course from any point of the observation area to the radar sampling point in a stepping way based on the distance from any point of the observation area to the radar initial sampling point;
And solving the distance error generated by the distance process from any point of the observation area to the radar sampling point in a stepping way.
Specifically, in the stepped distance calculation, the distance compensation factor ΔAzi(n_θ) is derived from formula (7) after ignoring the Peano remainder o(·), i.e. after retaining only a finite number of terms; therefore, the smaller the azimuth step angle n_θ·Δθ, the smaller the resulting distance error.
The distance error of the stepped distance calculation caused by ignoring the Peano remainder o(·) in formula (7) is analyzed below.
As shown in FIG. 2, P_n(R_n, θ_n, z_n) is an arbitrary point of the observation region, P_Radar-Start(ρ, θ_0, h_0) is the radar start sampling point, and P_Radar-nθ is the radar sampling point after n_θ azimuth steps.
The distance from the observation-region point P_n(R_n, θ_n, z_n) to the radar start sampling point P_Radar-Start(ρ, θ_0, h_0) is denoted R(P_n, P_Radar-Start), and the actual distance history from P_n to the radar sampling point P_Radar-nθ is denoted R(P_n, P_Radar-nθ), while the distance history from P_n to P_Radar-nθ obtained by stepped solving is:
R_Step(P_n, P_Radar-nθ) = R(P_n, P_Radar-Start) + B_θ·Δθ·n_θ + A_θ·Δθ^2·n_θ^2 (11)
where A_θ and B_θ are the azimuth distance compensation factor coefficients, n_θ is the number of azimuth steps of the radar sampling point P_Radar-nθ relative to the start sampling point P_Radar-Start, and Δθ is the radar azimuth sampling interval.
The distance error Er produced by solving, in a stepped manner, the distance history from an arbitrary observation-region point P_n(R_n, θ_n, z_n) to the radar sampling point P_Radar-nθ is expressed as:
Er = |R_Step(P_n, P_Radar-nθ) − R(P_n, P_Radar-nθ)| (12)
where R_Step(P_n, P_Radar-nθ) is the stepped-solved distance history from the observation-region point P_n to the sampling point P_Radar-nθ, built from the distance history to the radar start sampling point P_Radar-Start; R(P_n, P_Radar-nθ) is the actual distance history from P_n to the sampling point P_Radar-nθ; n_θ is the number of azimuth steps of P_Radar-nθ relative to the start sampling point P_Radar-Start; and Δθ is the radar azimuth sampling interval.
The geometric relationship is:
|R(P_n, P_Radar-Start) − R(P_n, P_Radar-nθ)| ≤ L(P_Radar-Start, P_Radar-nθ) (13)
where R(P_n, P_Radar-Start) and R(P_n, P_Radar-nθ) are the distance histories from the observation-region point P_n to the radar start sampling point P_Radar-Start and to the sampling point P_Radar-nθ, respectively, and L(P_Radar-Start, P_Radar-nθ) is the straight-line distance between the radar start sampling point P_Radar-Start and the sampling point P_Radar-nθ; the bound is reached when the three points are collinear.
In summary, when the observation-region point P_n lies on the extension of the line connecting the radar start sampling point P_Radar-Start and the sampling point P_Radar-nθ, the error Er of the stepped solution of the distance history is largest.
In some embodiments, referring to fig. 11, a cylindrical aperture zoned nonlinear progressive phase iteration method of the present disclosure includes:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing an observation area into a plurality of pixel areas;
calculating the phase of each pixel point in an imaging area by adopting a zonal stepping phase iteration method;
calculating the size of a sampling point region according to the step-by-step distance solving error, and dividing the synthetic aperture of the cylindrical surface into a plurality of sampling point regions;
the value of each pixel point of the observation area is calculated point by point.
By dividing the sampling points of the cylindrical aperture radar into regions in this way, the distance history from any point of the observation region to the sampling points can be obtained by iterating the stepped compensation factor, thereby reducing square-root and exponentiation operations.
As shown in FIG. 2, the radar starts from θ = θ_0, h = h_0, i.e. from the point P_Radar-Start(ρ, θ_0, h_0), and samples at equal intervals along the azimuth and elevation directions, with intervals Δθ and Δh respectively; the sampling point P_Radar-nθ-nh is the radar sampling point after n_θ steps along the azimuth direction and n_h steps along the elevation direction. The radar echo at the sampling point P_Radar-nθ-nh is expressed as:
S(P_Radar-nθ-nh, f) = exp{−j·4π·f·R(P_Radar-nθ-nh, m)/C} (14)
f = {f_min, …, f_max} (15)
where f is the stepped frequency of the radar echo, f_min is the lowest stepped frequency of the radar, f_max is the highest stepped frequency, C is the speed of light, and R(P_Radar-nθ-nh, m) is the distance from the radar sampling point P_Radar-nθ-nh to the observation-region pixel point.
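As an illustration of this echo model and of the range compression used in step S1 below, here is a small simulation sketch for a single point scatterer. The plain IFFT used for compression is an assumption standing in for formula (16), whose exact form is not reproduced in this text.

```python
import numpy as np

# Sketch: stepped-frequency echo of one point scatterer (formula (14)) and a
# generic IFFT range compression standing in for formula (16), whose exact
# form is not reproduced in this text.
C = 3e8
f_min, f_max, gamma = 77e9, 77.25e9, 5001       # example parameters of this description
f = np.linspace(f_min, f_max, gamma)            # stepped frequencies, formula (15)
B = f_max - f_min                               # 250 MHz bandwidth

R = 1000.0                                      # sampling point -> pixel distance (example)
echo = np.exp(-1j * 4.0 * np.pi * f * R / C)    # formula (14)

sr = np.fft.ifft(echo)                          # assumed range compression (step S1)
peak_bin = int(np.argmax(np.abs(sr)))
print(peak_bin, peak_bin * C / (2.0 * B))       # peak lands near R (bin spacing ~ C/(2B))
```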
Specifically, the imaging method of the present embodiment includes:
step S1: range compression. The echo signal at each radar sampling point is compressed along the distance direction, for example by an inverse Fourier transform (IFFT); the range-compressed signal Sr is given by formula (16), where f_c is the center frequency of the radar echo, which can be taken as (f_min + f_max)/2;
Step S2: dividing the observation area into M along azimuth direction, distance direction and elevation direction θ ×M r ×M z As shown in fig. 3, the pixels located at the same height Cheng Xiang are divided into K annular regions, and the observation region is divided into kxm z Each pixel region comprises M pixels θ ×M Sub The pixel point coordinates of the kth pixel region are expressed as
Figure BDA0002874818240000121
wherein mr =1,…,M Sub 、m θ =1,…,M θ 、m z =1,…,M z ;/>
Figure BDA0002874818240000122
Representing pixel dot +.>
Figure BDA0002874818240000123
Distance to Z axis, +.>
Figure BDA0002874818240000124
Representing pixel dot +.>
Figure BDA0002874818240000125
An angle between the projection line of the X-axis and the X-axis positive direction, which is connected with the origin, is +.>
Figure BDA0002874818240000126
Representing pixel dot +.>
Figure BDA0002874818240000127
As shown in fig. 3;
step S3: calculating the phase of each pixel point. Owing to the particular cylindrical geometry, pixel points located at the same distance coordinate and the same elevation coordinate have the same phase; the phase PHI_3D of each pixel point of the imaging region is calculated by the zoned stepped phase iteration method;
step S4: dividing the cylindrical sampling points into regions. The sizes of the sampling point regions are calculated according to the stepped distance solving error Er, and the cylindrical synthetic aperture is divided into I sampling point regions;
step S5: three-dimensional reconstruction of the observation region. The value of each pixel point of the observation region is calculated point by point to complete the three-dimensional reconstruction of the observation region.
Example parameter settings in the various embodiments of the present disclosure are as follows: the lowest stepped frequency f_min of the radar is 77 GHz, the highest frequency f_max is 77.25 GHz, the radar bandwidth B is 250 MHz, and the number of frequency points γ is 5001; the coordinates of the radar start sampling point are P_Radar-Start(0.6, 0, −0.8); the radar sampling intervals Δθ and Δh along the azimuth and elevation directions are 0.6° and 0.008 m respectively, and the numbers of azimuth and elevation sampling points N_θ and N_h are 301 and 201 respectively; the radar azimuth synthetic aperture angle Θ_SynA is 180° and the elevation synthetic aperture length L_SynE is 1.6 m. For the observation-region pixel points, the azimuth start angle θ_1 is 60° and the end angle is 120°, the start distance R_1 is 950 m and the end distance is 1050 m, and the start elevation z_1 is −50 m and the end elevation is 50 m.
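The basic quantities implied by these example parameters can be checked with a few lines (a simple illustration, not part of the patented method):

```python
# Simple check of the quantities implied by the example parameters above.
C = 3e8
f_min, f_max = 77e9, 77.25e9
B = f_max - f_min                # radar bandwidth: 250 MHz
f_c = (f_min + f_max) / 2        # radar echo centre frequency: 77.125 GHz
res_r = C / (2 * B)              # range resolution used below: 0.6 m
wavelength = C / f_c             # about 3.9 mm
print(B, f_c, res_r, wavelength)
```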
Regarding observation area pixel division: the division of the observation area into a number of pixel areas of the present disclosure includes:
parameter setting, including the number of pixels in an observation area, the size of the pixels, the coordinates of initial pixels in the observation area, the number of pixels in a distance direction and the center frequency of a radar;
calculating a distance compensation factor coefficient;
and calculating the size of the pixel areas and the number of the pixel areas.
Specifically, in step S2 the observation region is divided into M_θ × M_r × M_z pixels along the azimuth, distance and elevation directions. From the parameter settings, the radar range resolution is C/(2·B); the radar azimuth angular resolution and the radar elevation resolution are likewise obtained from the radar parameters. According to the calculated radar resolutions, the observation region is divided into M_θ × M_r × M_z pixels, each of size Δm_θ × Δm_r × Δm_z, where the pixel sizes Δm_θ, Δm_r and Δm_z along the azimuth, distance and elevation directions are each smaller than the radar azimuth, range and elevation resolution, respectively. The observation-region pixel division is shown in FIG. 3.
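A sketch of this pixel-grid construction is given below, using the example observation-region extents above; the azimuth and elevation resolutions are treated as assumed inputs, since their expressions are not reproduced in this text.

```python
import numpy as np

# Sketch of step S2: choose pixel sizes below the radar resolutions and build
# the observation-region pixel grid in cylindrical coordinates (R, theta, z).
# The azimuth and elevation resolutions are assumed inputs here.
C, B = 3e8, 250e6
res_r = C / (2 * B)                      # range resolution, 0.6 m
res_az_deg, res_z = 0.5, 0.5             # assumed azimuth (deg) / elevation (m) resolutions

dm_theta = 0.8 * res_az_deg              # pixel sizes chosen smaller than the resolutions
dm_r = 0.8 * res_r
dm_z = 0.8 * res_z

theta_px = np.arange(60.0, 120.0, dm_theta)   # azimuth extent 60..120 deg (example values)
R_px = np.arange(950.0, 1050.0, dm_r)         # distance extent 950..1050 m
z_px = np.arange(-50.0, 50.0, dm_z)           # elevation extent -50..50 m

print(len(theta_px), len(R_px), len(z_px))    # M_theta, M_r, M_z
```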
The pixel region division steps are as follows; the flow chart is shown in FIG. 7:
step S21: parameter setting — the number of pixels of the observation region M_θ × M_r × M_z, the pixel size Δm_θ × Δm_r × Δm_z, the coordinates of the initial pixel point of the observation region m_111(R_1, θ_1, z_1), the number of pixel points m_r along the distance direction, and the radar center frequency f_c;
step S22: calculating the distance compensation factor coefficients a_r and b_r according to formulas (17) and (18);
step S23: calculating the pixel region division size M_Sub and the number of regions K. The progressive phase error E_PHI is given by formula (19); requiring E_PHI ≤ π/4 yields the maximum value M_Sub of m_r, where C is the speed of light, and the number K of regions into which the pixels at the same elevation are divided is then given by formula (20).
Regarding the calculation of each pixel phase: calculating the phase of each pixel point of the imaging region by the zoned stepped phase iteration method according to the present disclosure comprises:
calculating an initial phase and a phase compensation factor based on the number of pixel regions, the number of pixels contained in each pixel region, the radar center frequency, the distance compensation factor coefficients and initial values;
iteratively calculating the phase of the pixel points along the distance direction, including repeating: azimuth iteration, distance iteration, pixel region iteration and elevation iteration.
Specifically, in step S3 the phase of each pixel point is calculated by the stepped phase iteration method; the flow chart is shown in FIG. 8. The specific steps are:
step S31: parameter setting — the number of pixel regions K × M_z, the number of pixels M_θ × M_Sub contained in each pixel region, the radar center frequency f_c, the distance compensation factor coefficients a_r and b_r, and the initial values m_θ = 1, m_r = 1, m_z = 1, k = 1;
step S32: calculating the initial phase and the phase compensation factor. The initial pixel point of the kth pixel region has coordinates m_{k-1,1,1}(R_{k-1}, θ_1, z_{k-1}); the phase PHI_{k-1,1,1} of the initial pixel of the kth pixel region is given by formula (21), and the pixel distance-direction phase compensation factor by formula (22);
step S33: iterating the phase along the distance direction. Let m_r = m_r + 1; if m_r ≤ M_Sub, the phase of the pixel at distance index m_r is calculated iteratively according to formula (23); otherwise let m_r = 1 and go to step S34;
step S34: azimuth iteration. Let m_θ = m_θ + 1; if m_θ ≤ M_θ, calculate according to formula (24); otherwise go to step S35;
step S35: distance-direction iteration. Let m_r = m_r + 1; if m_r ≤ M_Sub, repeat step S34; otherwise go to step S36;
step S36: pixel region iteration. Let m_r = 1, m_θ = 1 and k = k + 1; if k ≤ K, repeat steps S32 to S35; otherwise go to step S37;
step S37: elevation iteration. Let k = 1 and m_z = m_z + 1; if m_z ≤ M_z, repeat steps S32 to S36; otherwise the procedure ends and the phase PHI_3D of each pixel point of the three-dimensional observation region has been obtained, as shown in FIG. 4.
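The nested iteration of steps S31–S37 can be sketched structurally as follows. The actual phase formulas (21)–(24) are not reproduced in this text, so `initial_phase`, `phase_step` and the per-iteration update below are placeholders standing in for them; only the loop structure is illustrated.

```python
import numpy as np

# Structural sketch of steps S31-S37: fill the pixel phase matrix PHI_3D by
# zoned stepped iteration instead of evaluating every pixel from scratch.
# initial_phase() and phase_step() are placeholders standing in for the phase
# and compensation-factor formulas (21)-(24), which are not reproduced here.
def initial_phase(k, m_z):
    return 0.0                                 # placeholder for formula (21)

def phase_step(k):
    return 0.0                                 # placeholder for formula (22)

K, M_Sub, M_theta, M_z = 10, 20, 100, 50       # assumed region / pixel counts
PHI_3D = np.zeros((M_theta, K * M_Sub, M_z))

for m_z in range(M_z):                         # step S37: elevation iteration
    for k in range(K):                         # step S36: pixel-region iteration
        phi = initial_phase(k, m_z)            # step S32: initial phase of region k
        dphi = phase_step(k)                   # step S32: distance-direction factor
        for m_r in range(M_Sub):               # steps S33/S35: distance iteration
            # step S34: pixels at the same distance and elevation share a phase,
            # so the azimuth dimension is filled by broadcasting.
            PHI_3D[:, k * M_Sub + m_r, m_z] = phi
            phi += dphi                        # stepped phase update (cf. formula (23))
```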
Regarding the division of the cylindrical sampling points into regions: according to the present disclosure, this comprises the following steps:
selecting an observation area pixel point and a radar initial sampling point;
calculating the distance course from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient;
solving the Maclaurin approximation of the distance history;
solving an actual distance course;
and calculating the size of the sampling point region and the number of the azimuth dividing regions, and calculating the sampling point region according to the phase error limiting condition.
From the stepped distance error analysis in the first part, the error Er of the stepped solution of the distance history is largest when the observation-region point P_n lies on the extension of the line connecting the radar start sampling point P_Radar-Start and the sampling point P_Radar-nθ; therefore the maximum sampling point region into which the cylindrical synthetic aperture can be divided is calculated from the phase limiting condition at this maximum error.
For ease of calculation, when dividing the sampling point regions, a sampling point region whose number of azimuth sampling points is N_SubAzi is selected starting from the radar start sampling point P_Radar-Start(ρ, θ_0, h_0), and the observation-region pixel point m_111(R_1, θ_1, z_1) is selected to calculate the maximum divisible sampling point region; the flow chart is shown in FIG. 9. The steps are as follows:
step S41: selecting the observation-region pixel point m_111(R_1, θ_1, z_1), the radar start sampling point P_Radar-Start(ρ, θ_0, h_0) and the radar sampling point coordinates P_Radar-nθ(ρ, θ_0 + n_θ·Δθ, h_0);
step S42: calculating the distance from the observation-region pixel point m_111(R_1, θ_1, z_1) to the radar start sampling point P_Radar-Start(ρ, θ_0, h_0), i.e. formula (3) with P_n(R_n, θ_n, z_n) replaced by m_111(R_1, θ_1, z_1):
R(m_111, P_Radar-Start) = sqrt(R_1^2 + ρ^2 − 2·R_1·ρ·cos(θ_1 − θ_0) + (z_1 − h_0)^2) (25)
where ρ is the distance from the radar start sampling point P_Radar-Start to the center O' of the arc array, θ_0 is the angle between the positive X-axis direction and the line connecting P_Radar-Start with the arc-array center, and h_0 is the vertical coordinate of P_Radar-Start; R_1 is the distance from the observation-region pixel point m_111 to the Z axis, θ_1 is the angle between the positive X-axis direction and the line connecting the origin with the projection of m_111 onto the XOY plane, and z_1 is the vertical coordinate of m_111;
step S43: calculating the azimuth distance compensation factor coefficients A_θ and B_θ, i.e. formulas (8) and (9) with P_n(R_n, θ_n, z_n) replaced by m_111(R_1, θ_1, z_1), written as formulas (26) and (27) respectively, where A_θ and B_θ are the distance compensation factor coefficients;
step S44: solving the Maclaurin approximation of the distance history. With P_n(R_n, θ_n, z_n) in formula (11) replaced by m_111(R_1, θ_1, z_1), the stepped-solved distance history from the observation-region pixel point m_111 to the radar sampling point P_Radar-nθ is:
R_Step(m_111, P_Radar-nθ) = R(m_111, P_Radar-Start) + B_θ·Δθ·n_θ + A_θ·Δθ^2·n_θ^2 (28)
where n_θ is the number of azimuth steps of the radar sampling point P_Radar-nθ relative to the start sampling point P_Radar-Start and Δθ is the radar azimuth sampling interval;
step S45: solving the actual distance history. The actual distance from the pixel point m_111 to the radar sampling point P_Radar-nθ is:
R(m_111, P_Radar-nθ) = sqrt(R_1^2 + ρ^2 − 2·R_1·ρ·cos(θ_1 − θ_0 − n_θ·Δθ) + (z_1 − h_0)^2) (29)
step S46: calculating the sampling point region size N_SubAzi and the number of azimuth regions I_Azi. According to the phase error limiting condition (30), which at the radar echo center frequency f_c bounds the phase error caused by the difference between the stepped-solved distance history R_Step(m_111, P_Radar-nθ) and the actual distance R(m_111, P_Radar-nθ), the maximum value of n_θ is obtained as N_SubAzi. The radar sampling points are therefore divided along the azimuth direction into I_Azi = N_θ/N_SubAzi regions, and the total number of sampling point regions is:
I = I_Azi·N_h (31)
The start coordinates of the ith sampling point region are P_i-Radar-Start(ρ, θ_i-0, h_i-0), i = 1, 2, …, I.
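The following sketch illustrates step S46. The quarter-π phase criterion is written here as Er ≤ C/(16·f_c), which is one assumed concrete reading of condition (30), and the coefficient expressions are an assumption (formulas (26) and (27) are not reproduced in this text).

```python
import numpy as np

# Illustrative sketch of step S46: grow n_theta until the stepped distance of
# formula (28) deviates from the exact distance of formula (29) by more than
# Er <= C/(16*f_c), which is one assumed concrete reading of condition (30).
# A_theta and B_theta below are an assumed form (formulas (26)/(27) are not
# reproduced in this text).
C = 3e8
f_c = (77e9 + 77.25e9) / 2
rho, theta0, h0 = 0.6, 0.0, -0.8
dtheta = np.deg2rad(0.6)
N_theta = 301
R1, th1, z1 = 950.0, np.deg2rad(60.0), -50.0     # observation-region pixel m_111

R_start = np.sqrt(R1**2 + rho**2 - 2*R1*rho*np.cos(th1 - theta0) + (z1 - h0)**2)
A = R1 * rho * np.cos(th1 - theta0) / (2 * R_start)
B = -R1 * rho * np.sin(th1 - theta0) / R_start

N_SubAzi = N_theta
for n in range(1, N_theta):
    r_step = R_start + B*dtheta*n + A*dtheta**2*n**2             # cf. formula (28)
    r_true = np.sqrt(R1**2 + rho**2
                     - 2*R1*rho*np.cos(th1 - theta0 - n*dtheta)
                     + (z1 - h0)**2)                             # cf. formula (29)
    if abs(r_step - r_true) > C / (16 * f_c):                    # assumed condition (30)
        N_SubAzi = n - 1
        break

I_Azi = max(1, N_theta // max(N_SubAzi, 1))      # azimuth region count, I_Azi = N_theta/N_SubAzi
print(N_SubAzi, I_Azi)
```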
Three-dimensional reconstruction of the observation region: the point-by-point calculation of the value of each pixel point of the observation area of the present disclosure includes:
and reconstructing each pixel point of the observation area point by point based on an algorithm so as to realize three-dimensional resolution imaging of the observation area.
Specifically, in step S5, as shown in FIG. 5, the algorithm reconstructs each pixel point of the observation region point by point to realize three-dimensional resolution imaging of the observation region; the flow chart is shown in FIG. 10. The steps are as follows:
step S501: parameter setting — the range-compressed radar echo signal Sr, the number of observation-region pixel points M_θ × M_r × M_z, the number of radar azimuth sampling point regions I_Azi, the size N_SubAzi of each sampling point region, the radar sampling points P_i-Radar-nθ-nh of the ith sampling point region, the radar azimuth sampling interval Δθ, the radar elevation sampling interval Δh, the pixel point phase PHI_3D, and the initial values q = 1, i = 1, n_θ = 0, n_h = 0;
step S502: calculating the initial distance and distance compensation factor of the ith sampling point region and the coordinates of the qth pixel point m_q. The start coordinates of the ith sampling point region are P_i-Radar-Start(ρ, θ_i-0, h_i-0); the distance from the pixel point m_q to the start sampling point P_i-Radar-Start is calculated according to formula (3), the compensation factor coefficients A_i-θ and B_i-θ are then calculated according to formulas (8) and (9), and the distance compensation factor ΔAzi_i(n_θ) is further calculated according to formula (10);
step S503: iteratively calculating the distance history of the azimuth sampling points. Let n_θ = n_θ + 1; if n_θ < N_SubAzi, the distance history from the qth pixel point m_q to the sampling point P_i-Radar-nθ of the ith sampling point region is calculated iteratively according to formula (32), i.e. by adding the distance compensation factor ΔAzi_i(n_θ) to the previous distance; otherwise go to step S504;
step S504: iterating over the sampling point regions at the same elevation. Let n_θ = 0 and i = i + 1; if i ≤ I_Azi, repeat steps S502 to S503; otherwise execute step S505;
step S505: iterating along the elevation direction. Let i = 1 and n_h = n_h + 1; if n_h < N_h, repeat steps S502 to S504; otherwise execute step S506, the distance histories r_q from the qth pixel point m_q to all radar sampling points having been obtained (formula (33)), where N_θ and N_h denote the numbers of radar azimuth and elevation sampling points respectively;
step S506: calculating the peak positions τ_q of the pixel point m_q in the range-compressed signals of the sampling points from the distance histories r_q and the radar signal bandwidth B according to formula (34), where B is the radar signal bandwidth, in this example (f_max − f_min), f_min is the lowest stepped frequency, f_max is the highest stepped frequency, and τ_q is an N_h × N_θ matrix;
step S507: calculating the phases φ_q of the sampling points at the pixel point m_q from the distance histories r_q and the radar center frequency f_c according to formula (35), where f_c is the radar center frequency, in this example (f_min + f_max)/2, and φ_q is an N_h × N_θ matrix;
step S508: extracting, at the peak positions calculated in formula (34), the corresponding peak values Sr_q of the range-compressed signal of each sampling point (formula (36)), where r_q denotes the distance histories from the qth pixel point m_q to all radar sampling points and Sr_q is an N_h × N_θ matrix;
step S509: calculating the value of the pixel point m_q:
m_q = χ Sr_q .* exp{j·φ_q} (37)
where ".*" denotes element-wise multiplication of corresponding matrix elements and "χ" denotes summation over all matrix elements;
step S510: observation-region pixel iteration. Let q = q + 1; if q ≤ M_θ × M_r × M_z, let n_h = 0 and repeat steps S502 to S509; otherwise execute step S511, the values of all observation-region pixel points having been obtained as:
m = {m_q} (38)
where m_q denotes the value of the qth pixel point of the observation region and m is an M_θ × M_r × M_z three-dimensional matrix;
step S511: region reconstruction. The observation-region pixel values m are combined with the observation-region pixel phases PHI_3D by element-wise multiplication to complete the reconstruction of the observation-region pixels:
m_complex = m .* exp{−j·PHI_3D} (39)
where m is the observation-region pixel value matrix, PHI_3D is the observation-region pixel phase matrix, and ".*" denotes element-wise multiplication of corresponding matrix elements.
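A compact sketch of the per-pixel accumulation of steps S503–S509 is given below, written in a back-projection style. The peak-bin mapping and the phase mapping are assumptions standing in for formulas (34) and (35), whose exact forms are not reproduced in this text, and the distances are computed exactly here rather than by the stepped iteration, for brevity.

```python
import numpy as np

# Compact sketch of the per-pixel accumulation of steps S503-S509, written in a
# back-projection style. Sr has shape (N_h, N_theta, gamma) and holds the
# range-compressed echo of every sampling point. The peak-bin mapping tau_q and
# the phase phi_q are assumptions standing in for formulas (34) and (35); the
# distances are computed exactly here rather than by the stepped iteration.
def reconstruct_pixel(Sr, radar_xyz, pixel_xyz, f_c, B, C=3e8):
    N_h, N_theta, gamma = Sr.shape
    r_q = np.linalg.norm(radar_xyz - pixel_xyz, axis=-1)        # distance histories, cf. (33)
    tau_q = np.round(2.0 * B * r_q / C).astype(int) % gamma     # assumed peak bins, cf. (34)
    Sr_q = np.take_along_axis(Sr, tau_q[..., None], axis=-1)[..., 0]   # cf. formula (36)
    phi_q = 4.0 * np.pi * f_c * r_q / C                         # assumed phases, cf. (35)
    return np.sum(Sr_q * np.exp(1j * phi_q))                    # formula (37)
```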
As one of the schemes of the present disclosure, the present disclosure also provides a device for cylindrical aperture nonlinear progressive phase iterative imaging, which is used for forming a synthetic aperture or a real aperture radar with sampling points located on the same cylindrical surface based on the movement of a one-dimensional arc array along the Z-axis direction; the device comprises:
an error analysis module configured to perform a distance error analysis based on the step-wise distance compensation factor calculation;
the imaging module is configured to be used for carrying out iterative calculation on the distance course from any point of the observation area to the sampling point through a stepping distance compensation factor based on the regional division of the sampling point of the cylindrical aperture radar so as to finish the three-dimensional reconstruction of the observation area.
In combination with the foregoing examples, in some implementations, the error analysis module of the present disclosure may be further configured to:
Obtaining an azimuth distance compensation factor;
and acquiring a smaller azimuth stepping angle according to the azimuth distance compensation factor so as to reduce the distance error.
In combination with the foregoing examples, in some implementations, the error analysis module of the present disclosure may be further configured to:
setting the number of steps of a radar sampling point along the azimuth direction and the number of steps of the radar sampling point along the elevation direction relative to a radar initial sampling point to obtain a Maclalin expression of the radar sampling point;
obtaining an approximate distance of Maclalin based on the processing of the Maclalin expression;
and obtaining an azimuth distance compensation factor based on the Maclalin approximate distance.
In combination with the foregoing examples, in some implementations, the error analysis module of the present disclosure may be further configured to:
combining the azimuth distance compensation factors, and solving the distance course from any point of the observation area to the radar sampling point in a stepping way based on the distance from any point of the observation area to the radar initial sampling point;
and solving the distance error generated by the distance process from any point of the observation area to the radar sampling point in a stepping way.
In combination with the foregoing examples, in some implementations, the imaging module of the present disclosure may be further configured to:
compressing echo signals at all sampling points of the radar along the distance direction;
Dividing the observation area into a plurality of pixel areas, including: parameter setting, including the number of pixels in an observation area, the size of the pixels, the coordinates of initial pixels in the observation area, the number of pixels in a distance direction and the center frequency of a radar; calculating a distance compensation factor coefficient; calculating the size of the pixel areas and the number of the pixel areas;
the method for calculating the phase of each pixel point in an imaging area by adopting a zonal stepping phase iterative method comprises the following steps: calculating an initial phase and a phase compensation factor based on the number of pixel areas, the number of pixels contained in each pixel area, the radar center frequency, the distance compensation factor coefficient and an initial value; iteratively calculating the phase along the distance to the pixel point includes repeating: azimuth iteration, distance iteration, pixel point area iteration and elevation iteration;
calculating the size of a sampling point region according to the step-by-step distance solving error, and dividing the synthetic aperture of the cylindrical surface into a plurality of sampling point regions;
calculating the value of each pixel point of the observation area point by point comprises the following steps: and reconstructing each pixel point of the observation area point by point based on an algorithm so as to realize three-dimensional resolution imaging of the observation area.
Specifically, one of the inventive concepts of the present disclosure is to move at least a one-dimensional arc array along the Z-axis direction to form a synthetic-aperture or real-aperture radar whose sampling points lie on the same cylindrical surface; to perform distance error analysis based on the stepped distance compensation factor calculation; and, based on region division of the cylindrical-aperture radar sampling points, to obtain the distance history from any point of the observation region to the sampling points by iterative calculation with the stepped distance compensation factor so as to complete three-dimensional reconstruction of the observation region. The sampling points of the cylindrical synthetic-aperture radar can thus be divided into regions, and within each region the distance from an observation-region pixel point to the sampling points is calculated by stepped iteration, which reduces square-root and exponentiation operations in the distance calculation and improves algorithm efficiency. The embodiments further derive and calculate the distance compensation factor, analyze the distance error caused by solving the distance history through stepped iteration, and calculate the maximum extent of the sampling point region division according to the phase error condition. In addition, the observation region is divided according to the geometric particularity of the cylindrical observation geometry, the pixel point phases of the observation region are solved by the stepped phase iteration method, and a phase compensation factor replaces point-by-point phase solving of the observation region, which further reduces square-root and exponentiation operations during phase solving and further improves algorithm efficiency.
As one aspect of the disclosure, the disclosure further provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the method for cylindrical aperture nonlinear progressive phase iterative imaging described above, the method comprising at least:
a one-dimensional arc array moves along the Z-axis direction to form a synthetic aperture or a real aperture radar with sampling points positioned on the same cylindrical surface;
calculating based on a stepping distance compensation factor, and analyzing a distance error;
based on the region division of sampling points of the cylindrical aperture radar, the distance history from any point of the observation area to the sampling points is obtained through iterative calculation with the stepping distance compensation factor, so that the three-dimensional reconstruction of the observation area is completed (a sketch of the corresponding region-size check is given after this list).
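A small, self-contained sketch of the region-size check implied by the distance error analysis is given below; the geometry, the pixel coordinates, the phase-error limit of π/8 and the finite-difference estimates (which stand in for the analytically derived compensation factor coefficients) are all assumptions made for illustration.

import numpy as np

fc, c = 10e9, 3e8
lam = c / fc                              # wavelength, m
R0, dtheta = 1.0, np.deg2rad(0.5)         # assumed arc radius and azimuth step
p = np.array([2.0, 0.1, 0.2])             # assumed observation-area pixel

def exact_dist(n):
    """Exact distance from the pixel to the n-th azimuth sampling point."""
    s = np.array([R0 * np.cos(n * dtheta), R0 * np.sin(n * dtheta), 0.0])
    return np.linalg.norm(p - s)

# Stand-ins for the distance compensation factor: first and second
# derivatives of the distance history, estimated by central differences.
r0 = exact_dist(0)
d1 = (exact_dist(1) - exact_dist(-1)) / (2 * dtheta)
d2 = (exact_dist(1) - 2 * r0 + exact_dist(-1)) / dtheta ** 2

max_phase_err = np.pi / 8                 # assumed phase-error limit, rad
n = 0
while True:
    n += 1
    # Second-order stepped distance about the region's initial sampling point.
    r_step = r0 + d1 * n * dtheta + 0.5 * d2 * (n * dtheta) ** 2
    if abs(4 * np.pi / lam * (r_step - exact_dist(n))) > max_phase_err:
        break
print("maximum azimuth steps per sampling-point region:", n - 1)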
In some embodiments, the processor executing the computer-executable instructions may be a processing device including one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), or the like. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), or the like.
In some embodiments, the computer-readable storage medium may be a memory, such as a read-only memory (ROM), a random-access memory (RAM), a phase-change random-access memory (PRAM), a static random-access memory (SRAM), a dynamic random-access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), other types of RAM, a flash disk or other forms of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a magnetic cassette or other magnetic storage device, or any other possible non-transitory medium that can be used to store information or instructions accessible by a computer device, and the like.
In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules that collectively implement the method of cylindrical aperture nonlinear progressive phase iterative imaging according to any embodiment of the present disclosure.
The present disclosure describes various operations or functions that may be implemented or defined as software code or instructions. Such operations or functions may be implemented as software code or instruction modules stored in a memory that, when executed by a processor, implement the corresponding steps and methods.
Such content may be directly executable ("object" or "executable" form), source code, or differential code ("delta" or "patch" code). The software implementations of the embodiments described herein may be provided by an article of manufacture having code or instructions stored thereon, or by a method of operating a communication interface to transmit data over the communication interface. A machine- or computer-readable storage medium may cause a machine to perform the described functions or operations, and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces with a hard-wired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an internet connection, a disk controller, etc. The communication interface may be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface may be accessed by sending one or more commands or signals to the communication interface.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the present disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other, and other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the disclosure; this should not be interpreted as an intention that an unclaimed disclosed feature is essential to any claim. Rather, the disclosed subject matter may include less than all of the features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which are not intended to limit the present disclosure, the scope of which is defined by the claims. Various modifications and equivalent arrangements of parts may be made by those skilled in the art, which modifications and equivalents are intended to be within the spirit and scope of the present disclosure.

Claims (7)

1. A method for cylindrical aperture nonlinear progressive phase iterative imaging, comprising the following steps:
a one-dimensional arc array moves along the Z-axis direction to form a synthetic aperture or a real aperture radar with sampling points positioned on the same cylindrical surface;
calculating based on a stepping distance compensation factor, and analyzing a distance error;
based on the region division of sampling points of the cylindrical aperture radar, the distance history from any point of the observation area to the sampling points is obtained through iterative calculation of a stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation area;
wherein the calculating based on the stepping distance compensation factor and performing the distance error analysis comprise:
obtaining an azimuth distance compensation factor;
acquiring a smaller azimuth stepping angle according to the azimuth distance compensation factor so as to reduce the distance error;
wherein,
obtaining the azimuth distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and along the elevation direction relative to the radar initial sampling point, so as to obtain a Maclaurin expansion expression of the distance to the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expansion expression;
obtaining the azimuth distance compensation factor based on the Maclaurin approximate distance;
wherein performing the distance error analysis comprises:
solving, in a stepping manner and in combination with the azimuth distance compensation factor, the distance history from any point of the observation area to the radar sampling points based on the distance from that point to the radar initial sampling point;
analyzing the distance error generated by stepwise solving of the distance history from any point of the observation area to the radar sampling points;
wherein the error of the stepwise solved distance history is at its maximum when the observation area point is located on the extension line through the radar initial sampling point and the sampling point.
2. The method according to claim 1, comprising:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing an observation area into a plurality of pixel areas;
calculating the phase of each pixel point in an imaging area by adopting a zonal stepping phase iteration method;
calculating the size of a sampling point region according to the step-by-step distance solving error, and dividing the synthetic aperture of the cylindrical surface into a plurality of sampling point regions;
calculating the value of each pixel point of the observation area point by point.
3. The method of claim 2, wherein dividing the observation area into a plurality of pixel areas comprises:
parameter setting, including the number of pixels in an observation area, the size of the pixels, the coordinates of initial pixels in the observation area, the number of pixels in a distance direction and the center frequency of a radar;
calculating a distance compensation factor coefficient;
and calculating the size of the pixel areas and the number of the pixel areas.
4. The method according to claim 3, wherein calculating the phase of each pixel point in the imaging area by the zonal stepping phase iteration method comprises:
calculating an initial phase and a phase compensation factor based on the number of pixel areas, the number of pixels contained in each pixel area, the radar center frequency, the distance compensation factor coefficient and an initial value;
iteratively calculating the phase along the distance to the pixel point includes repeating: azimuth iteration, distance iteration, pixel point area iteration and elevation iteration.
5. The method of claim 4, wherein the cylindrical aperture zoned nonlinear progressive phase iteration method comprises:
selecting an observation area pixel point and a radar initial sampling point;
calculating the distance history from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient;
solving the Maclaurin approximation of the distance history;
solving an actual distance course;
and calculating the size of the sampling point regions and the number of azimuth division regions, and determining the sampling point regions according to the phase error limit condition.
6. The method of claim 5, wherein calculating the value of each pixel point of the observation area point by point comprises:
and reconstructing each pixel point of the observation area point by point based on an algorithm so as to realize three-dimensional resolution imaging of the observation area.
7. A device for cylindrical aperture nonlinear progressive phase iterative imaging, the device being based on a one-dimensional arc array moving along the Z-axis direction to form a synthetic aperture or a real aperture radar with sampling points positioned on the same cylindrical surface; the device comprises:
an error analysis module configured to perform a distance error analysis based on a step-wise distance compensation factor calculation as described in the method of claim 1;
an imaging module configured to obtain, based on the region division of the sampling points of the cylindrical aperture radar, the distance history from any point of the observation area to the sampling points through iterative calculation of the stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation area.
CN202011611757.6A 2020-12-30 2020-12-30 Cylindrical aperture nonlinear progressive phase iterative imaging method and device Active CN112799064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011611757.6A CN112799064B (en) 2020-12-30 2020-12-30 Cylindrical aperture nonlinear progressive phase iterative imaging method and device

Publications (2)

Publication Number Publication Date
CN112799064A CN112799064A (en) 2021-05-14
CN112799064B true CN112799064B (en) 2023-05-26

Family

ID=75804560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011611757.6A Active CN112799064B (en) 2020-12-30 2020-12-30 Cylindrical aperture nonlinear progressive phase iterative imaging method and device

Country Status (1)

Country Link
CN (1) CN112799064B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610174A (en) * 2021-08-13 2021-11-05 中南大学 Power grid host load prediction method, equipment and medium based on Phik feature selection

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4355311A (en) * 1979-03-06 1982-10-19 Nasa Multibeam single frequency synthetic aperture radar processor for imaging separate range swaths
US4924229A (en) * 1989-09-14 1990-05-08 The United States Of America As Represented By The United States Department Of Energy Phase correction system for automatic focusing of synthetic aperture radar
SE9804417L (en) * 1998-12-18 1999-12-20 Foersvarets Forskningsanstalt A SAR radar system
JP2001141821A (en) * 1999-11-11 2001-05-25 Mitsubishi Electric Corp Radar signal processor
JP2004198275A (en) * 2002-12-19 2004-07-15 Mitsubishi Electric Corp Synthetic aperture radar system, and image reproducing method
US7663529B2 (en) * 2006-08-15 2010-02-16 General Dynamics Advanced Information Systems, Inc. Methods for two-dimensional autofocus in high resolution radar systems
WO2009122328A1 (en) * 2008-03-31 2009-10-08 Koninklijke Philips Electronics N. V. Fast tomosynthesis scanner apparatus and ct-based method based on rotational step-and-shoot image acquisition without focal spot motion during continuous tube movement for use in cone-beam volume ct mammography imaging
CN101581780B (en) * 2008-05-14 2012-02-22 中国科学院电子学研究所 Three-dimensional focus imaging method of side-looking chromatography synthetic aperture radar
US8193967B2 (en) * 2008-12-10 2012-06-05 The United States Of America As Represented By The Secretary Of The Army Method and system for forming very low noise imagery using pixel classification
CN101900812B (en) * 2009-05-25 2012-11-14 中国科学院电子学研究所 Three-dimensional imaging method in widefield polar format for circular synthetic aperture radar
CN102445690B (en) * 2010-10-13 2014-02-12 中国科学院电子学研究所 Three-dimensional imaging QR decomposition method of synthetic aperture radar
US8937570B2 (en) * 2012-09-28 2015-01-20 Battelle Memorial Institute Apparatus for synthetic imaging of an object
CN103630905B (en) * 2013-08-29 2016-05-18 中国科学院电子学研究所 The overlapping sub-aperture imaging method of array antenna SAR polar coordinates
CN105259552B (en) * 2015-09-17 2019-05-17 中国科学院电子学研究所 A kind of synthetic aperture radar image-forming method and apparatus based on NLFM signal
CN106772369B (en) * 2016-12-16 2019-04-30 北京华航无线电测量研究所 A kind of three-D imaging method based on multi-angle of view imaging
CN107238866B (en) * 2017-05-26 2019-05-21 西安电子科技大学 Millimeter wave video imaging system and method based on synthetic aperture technique
EP3639055A4 (en) * 2017-06-14 2021-03-10 BAE SYSTEMS Information and Electronic Systems Integration Inc. Satellite tomography of rain and motion via synthetic aperture
US20200320731A1 (en) * 2019-04-04 2020-10-08 Battelle Memorial Institute Imaging Systems and Related Methods Including Radar Imaging with Moving Arrays or Moving Targets
CN110793543B (en) * 2019-10-21 2023-06-13 国网电力科学研究院有限公司 Positioning navigation precision measuring device and method of electric power inspection robot based on laser scanning
CN110837127B (en) * 2019-11-26 2021-09-10 内蒙古工业大学 Sparse antenna layout method based on cylindrical radar imaging device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001074832A (en) * 1999-09-02 2001-03-23 Mitsubishi Electric Corp Radar device
WO2010119447A1 (en) * 2009-04-16 2010-10-21 Doron Shlomo Imaging system and method
US9176226B1 (en) * 2012-03-14 2015-11-03 The Boeing Company Radar tomography using doppler-based projections
CN104237885A (en) * 2014-09-15 2014-12-24 西安电子科技大学 Synthetic aperture radar image orientation secondary focusing method
WO2017198162A1 (en) * 2016-04-29 2017-11-23 深圳市太赫兹科技创新研究院有限公司 Three-dimensional image rebuilding method and device based on synthetic aperture radar imaging
RU2628997C1 (en) * 2016-06-14 2017-08-24 Российская Федерация, от имени которой выступает Министерство обороны Российской Федерации Method of obtaining two-dimensional radar images of object at multi-frequency pulse sensing and inverse device synthesis with iterative distance reconciliation from equivalent antenna phase center to synthesization point
CN112034460A (en) * 2020-08-17 2020-12-04 宋千 Circular-arc aperture radar imaging method based on antenna phase directional diagram compensation and radar

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nonlinear atmospheric phase compensation method for ground-based interferometric synthetic aperture radar images; Hu Cheng, Deng Yunkai, Tian Weiming, Zeng Tao; Journal of Radars (Issue 06); 167-176 *
Processing and analysis of airborne dual-antenna InSAR opposite-direction flight data; Li Fangfang, Hu Donghui, Ding Chibiao, Qiu Xiaolan; Journal of Radars (Issue 01); 42-52 *

Also Published As

Publication number Publication date
CN112799064A (en) 2021-05-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant