GB2168870A - Imaging system - Google Patents

Imaging system

Info

Publication number
GB2168870A
GB2168870A GB08411913A GB2168870B
Authority
GB
United Kingdom
Prior art keywords
image
impulse response
image data
imaging
data
Prior art date
Legal status
Granted
Application number
GB08411913A
Other versions
GB2168870B (en)
Inventor
Edward Roy Pike
Christopher John Oliver
Current Assignee
UK Secretary of State for Defence
Original Assignee
UK Secretary of State for Defence
Priority date
Filing date
Publication date
Application filed by UK Secretary of State for Defence
Priority to GB08411913A
Publication of GB2168870A
Application granted
Publication of GB2168870B
Expired

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A resolution enhanced imaging system provides improved target resolution by combining image information with system impulse response and a weight function expressing prior knowledge, ie probable target extent. The system may comprise a radar transceiver 10 and a heterodyne signal processor 14 providing in-phase and quadrature image data. The data are processed to provide a conventional diffraction limited image. Pixels 17 above a preset threshold 21 are accorded the weight factor unity, and those below it zero. The weight factors collectively form a weight function 22 which is combined with the system impulse response 23 for use in a singular function 24 decomposition 25 of the original complex image data 15. Object data is reconstructed 28 from this decomposition, noise-corrupted terms being omitted 26. This provides better resolution than the original image data. The invention is applicable to imaging targets in a negligible background, such as in air search radars.

Description

SPECIFICATION

Resolution enhanced imaging system

This invention relates to a resolution enhanced imaging system of the kind employing coherent radiation to illuminate a scene.
Resolution enhancement in imaging systems is known, as set out for example in published United Kingdom Patent Application No 2,113,501A (Reference 1). This reference describes resolution enhancement in an optical microscope. The microscope comprises a laser illuminating a predetermined small area of an object plane, and means for focussing light from the object plane on to an image plane containing a two dimensional array of detectors. Each detector output is processed to derive the complex amplitude and phase of the light or image element incident on it. A mathematical analysis of the image information is employed to reconstruct the illuminated object region. The analysis incorporates the constraints that the object illumination is zero outside the predetermined area referred to as the support, and that a focussing device of known spatial impulse response or optical transfer function is employed to produce the image information. The net effect of combining these constraints with the image data is that the object can be reconstructed with better resolution than that provided by the image data alone. The mathematical analysis is discussed in detail by Bertero and Pike, Optica Acta, 1982, Vol 29, No 6, pp 727-746 (Reference 2).
Reference 1 is applicable to any imaging system, ie to optics, radar and sonar. It can be considered in broad terms as employing a single transmitter with a number of detectors, or alternatively as a transmitter with a movable or scanning detector. This corresponds to a bistatic radar system for example. However, in many important cases imaging systems employ a single coupled transmitter and receiver, as for example in monostatic radar equipment, sonar and laser rangefinders or lidar. In radar and sonar, the transmitter and receiver are commonly the same device, ie a radar antenna or a sonar transducer array. The transmitter/receiver combination may be scanned to provide the effect of equal numbers of transmitters and receivers, as occurs in air traffic control radars and synthetic aperture radars. In these and other analogous optical and sonar equipments it would be expensive and undesirably complex to provide a plurality of detectors per transmitter, or to decouple the transmitter and receiver and to scan the latter. Furthermore, the invention of Reference 1 gives no improvement to range information.
It is an object of the invention to provide an alternative form of resolution enhanced imaging system.
The present invention provides an imaging system for imaging targets in a negligible background, the system having a given impulse response and including: (1) an imaging device arranged to provide complex amplitude image data; (2) comparing means arranged to indicate whether individual image amplitude values are above or below a given threshold; (3) computing means arranged to: (a) provide a support having values equal or equivalent to unity and zero in accordance with image amplitude above and below the threshold respectively, (b) provide a respective set of singular functions for each of the object and image spaces derived from the system impulse response and the support, (c) decompose image data into image space components expressed as singular functions, (d) convert image space components with energy exceeding system noise into corresponding decomposed object data, and (e) reconstruct object data from its decomposition; and (4) means for providing an image from the reconstructed object data.
The invention provides enhanced resolution by combining image information with system impulse response and detection of the probable extent of the object giving rise to the image, expressed as a uniform support. Support is a mathematical term construed as the delimited region within which the target signal amplitude is non-zero, or within which the target is assumed to be localised. The invention may be employed as an imaging system used for ranging targets in a negligible background (less than detector noise), such as monostatic air search radars. Mathematically, any diffraction pattern generated by an imaging system may correspond to an infinity of possible objects imaged by the system. However, experience shows that the main lobe of the diffraction pattern is very likely to correspond to one or more real objects. If this prior knowledge or constraint is expressed as a support and incorporated with the image data and system impulse response, the effect is to improve resolution. The invention is applicable to radar, sonar and optical ranging devices incorporating a coupled receiver and transmitter. It should however be noted that the invention is not applicable to cases such as landscape imaging, where target background is not negligible.
Reference 1 relates to an invention in which the support is fixed or predetermined. In the present invention, the support is determined for each target distribution from image data. It is therefore adaptive and suitable for use as a ranging device, where the support is unknown in range until determined from image data. Reference 1 cannot accomplish this, being non-adaptive.
The computing means may be arranged to calculate singular functions from the system impulse response and the support when determined. Alternatively, where there is a limited number of possible supports, the singular functions appropriate to each support may be precalculated and stored in a look-up table. As any particular support is derived, the corresponding singular functions may be addressed and implemented for image decomposition and object reconstruction.
An image may be provided from reconstructed object data by means including an envelope detector.
In order that the invention might be more fully understood, embodiments thereof will now be described by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a schematic drawing of a radar system of the invention,

Figures 2 and 3 graphically illustrate object reconstruction in accordance with the invention, and

Figure 4 is a schematic drawing of a laser ranging system of the invention.
Referring to Figure 1, there is shown a schematic pulse compression radar system of the invention incorporating a transmitter/receiver unit (Tx/Rx) 10 and a fixed air search antenna 11 generating a radar beam 12. In this example, the radar system is one dimensional. It resolves a target in range in terms of the time taken to receive a return pulse from the target. Two targets or aircraft 13 are present in the beam 12. The unit 10 is connected to a conventional heterodyne radar signal processing (SP) unit 14 producing image data in the form of in-phase and quadrature signals I and Q. The SP unit 14 feeds the I and Q signals to a frame store 15, and thence to an envelope detector 16 which provides their modulus √(I² + Q²). This passes to an image store 17 and display unit 18. The unit 18 has 64 pixels or picture elements (not shown), and displays a single broad diffraction maximum 19 incorporating contributions from both targets 13.
The I and Q signals are also fed to a computer indicated by chain lines 20. The computer 20 also receives information from the image store 17 via a threshold detector 21. The computer is arranged to operate in accordance with the flow diagram within the chain lines 20. The image store information is employed at 22 to generate a support, ie pixel values y1 and y2 between which the support has the value 1, and the value 0 otherwise. The pixel values y1 and y2 are those between which the image signal exceeds a threshold or discrimination level set by the threshold detector 21. The impulse response function K, Kᵀ of the radar antenna is stored at 23 and combined at 24 with the support to produce singular function sets u and v with a common eigenvalue set λ for object and image decomposition. I and Q signals from the frame store 15 are decomposed at 25 into image space singular functions providing coefficients gᵢ. The noise energy value EN of the imaging system is stored at 26. The energy in each singular function is compared at 26 with EN/M, M being the number of image or object space functions. Functions uᵢ, vᵢ for which the corresponding energy Fλᵢ is not greater than EN/M are discarded. The remaining object space singular functions uᵢ are multiplied at 27 by gᵢ/√λᵢ respectively to provide the object decomposition. The object is then reconstructed at 28 from this decomposition. The reconstructed I and Q signals are fed to a second envelope detector 29, and thence to an object store 30 and display device 31. The device 31 displays two peaks 32 over 64 pixels (not shown), indicating resolution of the targets 13.
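The support-generation step at 21 and 22 can be sketched numerically. The fragment below is a minimal illustration, not the patented implementation: the 64-pixel grid, the sinc-shaped image modulus and the threshold value 0.3 are all assumptions chosen to mimic the Figure 2 example.

```python
import numpy as np

def make_support(image_modulus, threshold):
    """0/1 weight function: unity where the image modulus exceeds the threshold."""
    return (image_modulus > threshold).astype(float)

pixels = np.arange(1, 65)                         # pixel numbers 1..64
modulus = np.abs(np.sinc((pixels - 33) / 4.0))    # broad diffraction maximum at pixel 33
support = make_support(modulus, threshold=0.3)    # unity only over the main lobe
```

Only the main lobe survives the threshold, so the support is a short run of ones around pixel 33, zero elsewhere.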
Referring now to Figure 2, there are shown graphs 41 to 44 respectively representing a one dimensional point target, a diffraction pattern corresponding to a radar image of the point target, a support and a target reconstruction. The graphs 41 to 44 have abscissa values from one to sixty-four, corresponding to a display such as 18 or 31 having sixty-four pixels and to a time-bandwidth product of 64 for the radar system. Ordinate values are referred to a scale which is arbitrary but the same for each of the graphs. The graphs 41 to 44 were generated in a computer simulation of the invention.
Graph 41 indicates a point target 45 at pixel number thirty-three, ie at just over half the maximum displayable range of the radar system. Graph 42 is an image of the target, and therefore consists of a diffraction pattern K(sin x)/x, where K is a constant and x is approximately equal to 2π(y − 33)/8, y being the pixel number. This image 42 represents the fundamental resolution, showing a width between the first two nulls of 8 pixels. The image data is oversampled by a factor of 4 in order to demonstrate the form of the impulse response. In practice only 16 samples would be required to represent the image without any loss of information. The one dimensional expression (sin x)/x is the temporal impulse response of the radar system, since system impulse response is defined as the image generated in response to a point source. The threshold detector 21 of Figure 1 is employed to set a threshold or discrimination level 46. Only the main lobe 47 of the diffraction pattern 42 exceeds the threshold 46. The support 43 has the value zero over the regions from pixel numbers one to thirty and thirty-six to sixty-four, and the value unity between pixels thirty and thirty-six. The support expresses the assumption that the main lobe 47 corresponds to target localisation in a negligible background, in accordance with prior knowledge of air search radar systems. Whereas an infinity of target distributions could give rise to the diffraction pattern 42, in practice the most likely target distribution is localised within the main lobe 47.
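The stated diffraction pattern can be reproduced numerically; a sketch follows, assuming K = 1 and using numpy's normalised sinc, which equals (sin x)/x for x = 2π(y − 33)/8.

```python
import numpy as np

y = np.arange(1, 65)              # pixel numbers 1..64
g = np.sinc((y - 33) / 4.0)       # sin(x)/x with x = 2*pi*(y - 33)/8

# the peak sits at pixel 33; the first nulls fall at pixels 29 and 37,
# the 8-pixel width between nulls quoted in the text
```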
Graph 44 illustrates target or object space reconstruction from the image data used to plot graph 42, the weight function 43 and the system impulse response (sin x)/x. It can be seen that the reconstructed target 48 has greatly enhanced resolution as compared to the original image 42.
Referring now to Figure 3, there are shown graphs 50 to 53 analogous to those of Figure 2 but corresponding to two targets within the radar beam, such as targets 13 in Figure 1. Here again, the graphs 50 to 53 were generated in a computer simulation of the invention. Graph 50 indicates two point targets 54 and 55 at pixels thirty-one and thirty-four giving return signals in antiphase. The targets 54 and 55 give rise to an image or diffraction pattern 51 having a central lobe 56 within which the targets are localised but not resolved. A threshold level 57 defines a support 52 having the value unity over the main lobe 56, ie pixels twenty-eight to thirty-seven, and zero elsewhere. Graph 53 indicates the object reconstruction obtained from combining the image data of graph 51 with the support 52 and the system impulse response (sin x)/x. Two peaks 58 and 59 are observable in graph 53, corresponding to resolution of the targets 54 and 55.
Accordingly, the reconstruction process yields resolution of targets not resolved in the original image data.
The degree of resolution enhancement depends on the width or pixel extent of the support. The more narrowly the support 52 can be defined the greater the enhancement achieved. It should be noted that the anti-phase target return assumption is a worst case, since the two returns give rise to destructive interference. For targets not in antiphase, the invention provides still greater resolution enhancement.
The mathematical process of target reconstruction by singular value decomposition will now be described in more detail, following Reference 2.
Consider an image data set g, which may be decomposed into a set of orthonormal functions in image space, v. These functions therefore have the property

vᵢ† vⱼ = δᵢⱼ (1)

where vᵢ† is the Hermitian conjugate of vᵢ and δᵢⱼ is unity for i = j and zero otherwise.
Let this image have arisen from an object, f, undergoing an imaging transformation. Introducing prior knowledge about the object delimits the region in object space within which the object is expected to lie, ie defines a support such that

f(y) = f(y), y1 ≤ y ≤ y2
f(y) = 0, y < y1 or y > y2 (2)

ie the object has a non-zero amplitude only within the region y1 to y2.
Let K be the impulse response of the imaging system, ie the image the system generates of an object having the dimensional properties of a delta function. For a lens, this would be the image of a geometrical point source, ie the optical transfer function which has two spatial dimensions. Reference 1 gives impulse response functions for square and circular lenses. A radar system impulse response has one temporal (range) dimension if fixed, and one temporal and one or two spatial dimensions if scanned. A rotating radar scanner has an angular spatial dimension and a synthetic aperture radar a linear spatial dimension. Impulse responses of this kind may be calculated and/or measured in individual imaging systems by known techniques.
Necessarily, the object space functions must be imaged into image space functions by the imaging system transformation or impulse response K.
If the effect of the delimited object region, or support, is included with the imaging transformation operator or impulse response K, the complete imaging process (including support) is defined by

Kf = g (3)

The symbolic equation (3) represents the imaging of an object f into an image g, as exemplified by the one-dimensional equation:

∫ from y1 to y2 of K(x − y) f(y) dy = g(x) (4)

where the limits of integration y1 and y2 correspond to those pixel numbers between which the image signal exceeds the threshold (eg 46 or 57) set by the threshold detector 21. The limits y1 and y2 accordingly provide mathematical incorporation of the support or delimited object region into the imaging transformation.
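In discrete form, equations (3) and (4) reduce to a matrix-vector product in which the columns of K are restricted to the support. The sketch below assumes the 64-pixel grid, the sinc kernel of Figure 2 and illustrative support limits; none of these numbers are prescribed by the patent itself.

```python
import numpy as np

n_pixels, y1, y2 = 64, 30, 36
xs = np.arange(1, n_pixels + 1)          # image pixels x
ys = np.arange(y1, y2 + 1)               # object pixels y inside the support

# K(x - y) = sin(t)/t with t = 2*pi*(x - y)/8, as in the Figure 2 example
K = np.sinc((xs[:, None] - ys[None, :]) / 4.0)   # shape (64, 7)

f = np.zeros(len(ys))
f[3] = 1.0                # point object at y = 33, mid-support
g = K @ f                 # equation (3): g = K f, the diffraction image
```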
An orthonormal set of functions, uᵢ, may now be defined in object/reconstruction space. These are defined by the eigenfunction equation:

KᵀK uᵢ = λᵢ uᵢ (5)

where the uᵢ are eigenfunctions of the operator KᵀK having eigenvalues λᵢ, and Kᵀ is the adjoint of K.
Moreover, the sets of functions in object and image space, u and v, can be shown (see Reference 2) to be uniquely related by:

K uᵢ = √λᵢ vᵢ (6)

Kᵀ vᵢ = √λᵢ uᵢ (7)

Equations (6) and (7) define the unique set of singular functions in the object and image spaces. For rectangular supports (eg supports 43 and 52 in Figures 2 and 3) in accordance with the present invention these are prolate spheroidal functions. Multiplying equation (7) by K:

K Kᵀ vᵢ = √λᵢ K uᵢ = λᵢ vᵢ (from equation (6)) (8)

Equation (8) shows that the terms λᵢ are eigenvalues for the image space function set v as well as for the object space function set u of equation (5).
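Equations (5) to (8) are exactly the relations delivered by a singular value decomposition of K. A numerical sketch follows; the random stand-in matrix K (in place of the supported sinc kernel) and the use of numpy's SVD are assumptions made purely to keep the fragment self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((64, 7))          # stand-in (image x object) operator

V, s, Ut = np.linalg.svd(K, full_matrices=False)
U = Ut.T                                   # columns u_i: object space functions
lam = s ** 2                               # eigenvalues of K^T K, equation (5)

# verify equation (5) for the first singular function: K^T K u_1 = lambda_1 u_1
lhs = K.T @ K @ U[:, 0]
rhs = lam[0] * U[:, 0]
```

The same decomposition simultaneously satisfies equation (6), K uᵢ = √λᵢ vᵢ, with √λᵢ = sᵢ.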
As indicated at 24 in Figure 1, the computer 20 calculates the object space function set u by solving the eigenfunction equation (5). This determines uᵢ and λᵢ for all i = 1 to M, M being the finite number of functions appropriate for a given number of image-space pixels. From uᵢ and λᵢ, the computer employs equation (6) or (7) to determine the image space function set v. This provides eigenfunction sets u and v with common eigenvalue set λᵢ, and is a well-known computation.
Each eigenvalue λᵢ is an energy term corresponding to a respective eigenfunction or eigenstate of the imaging system in both the object and image spaces, ie uᵢ and vᵢ. In other words, λᵢ is proportional to the object or image energy contributed by the ith eigenfunction uᵢ or vᵢ. It is not equal to that energy, since λᵢ arises from a support having values 1 and 0 unrelated to image energy. To achieve equality, a normalisation coefficient or scaling factor F (to be calculated later) is defined such that Fλᵢ = energy in uᵢ or vᵢ. If Fλᵢ is less than or equal to the proportion of the imaging system noise energy EN(i) appropriate to that function, its value determined by the computer 20 at 24 is small but extremely uncertain. It can however produce large spurious results. Accordingly, as will be described, all functions uᵢ, vᵢ for which Fλᵢ ≤ EN(i) are omitted from the reconstruction process. Provided that several lower-order singular functions have been used for reconstruction, the addition of a contribution from the next, higher, singular function in a noiseless image does not have a marked effect on the apparent resolution. Thus only a small penalty is paid if the reconstruction process is truncated. However, if the new component is largely made up of noise, then it contributes significantly to the reconstruction, as the contribution is inversely proportional to √λᵢ. It is therefore important to truncate the reconstruction process to avoid spurious results. For white noise, EN(i) is a constant, EN/M, for all eigenfunctions, ie the proportion of noise attributable to each eigenfunction is the total system noise EN divided by the number of eigenfunctions M in the set u or v.
Complex image data is represented by a set g having a value g(j) at pixel number j. Decomposition of the set g into the function set v is defined by:

gᵢ = vᵢ† g = Σⱼ vᵢ*(j) g(j) (9)
ie the proportion or fraction gᵢ or vᵢ†g of the image data set g present in the ith image space singular function vᵢ is the summation over all j (pixel number) of the product of the jth point value of vᵢ† and the jth value of g. This calculation is carried out by the computer 20 as indicated at 25 for all M functions in the image space function set v, ie v₁ to vM, and produces a set of coefficients gᵢ. Image data is accordingly decomposed into a linear combination of functions, ie a series of numerical coefficients or function amplitudes of the kind gᵢ each multiplying a respective function vᵢ. This is analogous to decomposition of a signal into its Fourier components, and is a well-known computation.
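Equation (9) in code form is a single inner product per singular function. In the sketch below the orthonormal set v is an arbitrary stand-in built from a QR factorisation, an assumption made only so the fragment is self-contained and the expected coefficients are known in advance.

```python
import numpy as np

rng = np.random.default_rng(1)
V = np.linalg.qr(rng.standard_normal((64, 7)))[0]  # orthonormal columns v_1..v_7

# build an image known to contain 3 units of v_1 and 1 unit of v_7
g = V @ np.array([3.0, 0, 0, 0, 0, 0, 1.0])

coeffs = V.conj().T @ g    # equation (9): g_i = v_i^dagger g, all i at once
```

Orthonormality of the vᵢ guarantees that the decomposition recovers exactly the amplitudes the image was built from.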
Decomposition of the image into functions vᵢ corresponds directly to object decomposition into functions uᵢ, in view of the unique relation between the sets u and v defined in equations (6) and (7). Each image function-coefficient combination gᵢvᵢ corresponds to the respective gᵢuᵢ/√λᵢ. As indicated at 24 and 25, the computer 20 has calculated the sets g, u and λ, and may accordingly generate an object decomposition.
However, as has been mentioned, terms in the object decomposition strongly affected by imaging system noise must be omitted. It can be shown that the effect of a function uᵢ on a reconstructed object is inversely proportional to √λᵢ. Accordingly, it is important to omit small and therefore uncertain terms if spurious results are to be avoided. To ascertain which terms are to be omitted, the scaling factor F previously defined is evaluated to convert each λᵢ into a corresponding energy in the respective uᵢ or vᵢ. F is given by:

F = (Total image energy)/(Sum of all λᵢ) = Σᵢ |gᵢ|² / Σᵢ λᵢ (10)

where the total image energy is the sum of squares of the gᵢ amplitude terms.
As indicated at 26, the computer 20 calculates F from equations (9) and (10) and compares each product Fλᵢ with the imaging system noise fraction EN/M. All λᵢ, vᵢ and uᵢ for which Fλᵢ ≤ EN/M are discarded, and those remaining provide an object decomposition into the remaining functions of the set u, ie gᵢuᵢ/√λᵢ is calculated for all remaining i at 27.
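The truncation test at 26 can be sketched in a few lines. The eigenvalues, coefficients and noise energy below are invented illustrative numbers, not values from the patent; the point is only the mechanics of equation (10) and the Fλᵢ > EN/M comparison.

```python
import numpy as np

lam = np.array([4.0, 2.0, 1.0, 0.5, 0.01])     # eigenvalue set lambda_i
coeffs = np.array([2.0, 1.5, 1.0, 0.5, 0.4])   # g_i from equation (9)
E_N = 0.6                                       # total system noise energy
M = len(lam)

F = np.sum(coeffs ** 2) / np.sum(lam)           # equation (10)
keep = F * lam > E_N / M                        # retain terms above the noise share
```

With these numbers only the last, smallest-λ term falls below the per-function noise share EN/M and is discarded.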
Subsequently, as indicated at 28, the computer 20 reconstructs the jth object pixel value by adding together the jth point values of the decomposition functions gᵢuᵢ(j)/√λᵢ. This provides fr(j), the reconstructed object complex amplitude or I and Q values for pixel number j, and is expressed by:

fr(j) = Σᵢ gᵢ uᵢ(j) / √λᵢ (11)
Equation (11) is evaluated at 28 for all j. The I and Q values from this series of computations pass to the envelope detector 29, providing their modulus √(I² + Q²), and thence to the object store 30 for display at 31 as a reconstructed object 32.
In practice, λᵢ normally decreases monotonically with increasing i, so the summation of equation (11) is performed over limits i = 1 to imax, imax being the maximum value of i for which Fλᵢ exceeds EN/M. This is a simple truncation of the summation to ignore terms strongly affected by noise.
To summarise the computation, the computer 20 calculates the object and image space function sets u and v at 24 from equations (1), (5), (6) and (7), incorporating the known system impulse response K stored at 23 and the support generated at 22. The image data set g is decomposed into a linear combination of the function set v using equation (9). This yields numerical coefficients vᵢ†g or gᵢ, which are divided by √λᵢ to yield corresponding coefficients for the object decomposition in terms of the function set u, ignoring terms strongly affected by system noise. The computer 20 multiplies each function uᵢ by the respective coefficient gᵢ/√λᵢ, producing the object decomposition in terms of u as indicated at 27. It then computes at 28 the value of gᵢuᵢ/√λᵢ at each image point or pixel number j to produce contributions gᵢuᵢ(j)/√λᵢ. These contributions are summed over all remaining i at each pixel number in turn to produce the required reconstructed object data set comprising a complex data value of I and Q for each pixel number. This is analogous to reconstructing a signal from its Fourier spectral components by adding together the contributions from each component to the corresponding points of the signal.
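The summarised computation can be strung together end to end on the Figure 3 geometry. This is a simplified, noise-free sketch: the sinc kernel, support limits 28 to 37, antiphase targets at pixels 31 and 34, and the crude retention rule on the singular values are all illustrative assumptions standing in for the full Fλᵢ > EN/M test.

```python
import numpy as np

n, y1, y2 = 64, 28, 37
xs = np.arange(1, n + 1)
ys = np.arange(y1, y2 + 1)
K = np.sinc((xs[:, None] - ys[None, :]) / 4.0)   # supported (image x object) operator

f_true = np.zeros(len(ys))
f_true[3], f_true[6] = 1.0, -1.0                 # antiphase targets at pixels 31 and 34

g = K @ f_true                                   # unresolved diffraction image
V, s, Ut = np.linalg.svd(K, full_matrices=False)
coeffs = V.T @ g                                 # g_i, equation (9)

keep = s > 1e-3 * s[0]                           # stand-in for the noise truncation
f_rec = Ut.T[:, keep] @ (coeffs[keep] / s[keep]) # equation (11): sum g_i u_i / sqrt(lam_i)
```

Because g = K f, each coefficient satisfies gᵢ = √λᵢ (uᵢᵀf), so dividing by √λᵢ in the final line recovers the object-space amplitudes for the retained terms.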
Referring now to Figure 4, there is shown a schematic drawing of part of a laser ranging or lidar system.
The system comprises a continuous wave (cw) CO2 laser 60 producing a plane polarised output light beam along a path 61 to a first beam splitter 62. Light transmitted by the beam splitter 62 passes via a second beam splitter 63 to a CO2 laser amplifier 64 arranged to produce 10 nsec pulses at a repetition frequency of 10 kHz.
A first lens 65 renders the amplifier output beam 66 parallel for illumination of a scattering object 67 in a remote scene (not shown). Light scattered from the object 67 is focussed by a second lens 68 on to two detectors 69 and 70.
Detector 69 receives a reference light beam 71 from the laser 60 after reflection at the beam splitter 62 and at a partially reflecting mirror 72. In addition, the detector 69 receives light scattered from the object 67 after transmission through a partially reflecting mirror 73 and through the partially reflecting mirror 72 via a path 74. Detector 70 receives a reference beam 75 from reflection of laser light 61 at the beam splitter 63 with subsequent transmission via a π/2 or quarter wavelength delay device 76 and reflection at a partially reflecting mirror 77. Light scattered from the object 67 and reflected at the mirror 73 passes via paths 78 and 79 to the detector 70 after reflection at a mirror 80 and transmission through the partially reflecting mirror 77.
The delay device 76 may be a gas cell having an optical thickness appropriate to delay the beam 75 by (n + 1/4) wavelengths, n being integral but arbitrary. The gas pressure in the cell would be adjusted to produce the correct delay by known interferometric techniques; ie the device 76 would be placed in one arm of an interferometer and the gas pressure varied until fringe pattern movement indicated the correct delay.
The arrangement of Figure 4 operates as follows. The delay unit 76 introduces a π/2 phase shift in the reference beam 75 reaching detector 70 as compared to that reaching detector 69. Each of the detectors 69 and 70 mixes its reference beam 71 or 75 with light 74 or 79 from the scene, acting as a homodyne receiver.
The laser 60 acts as its own local oscillator. In view of the π/2 phase difference between the reference beams 71 and 75, detector outputs with a relative phase difference of π/2 are produced at 81 and 82. These outputs accordingly provide in-phase and quadrature signals I and Q, or complex amplitude image data. These signals are precisely analogous to the I and Q signals appearing at the output of the signal processing unit 14 in Figure 1, and are processed in the same way as previously described to provide resolution enhancement.
In an analogous fashion, a sonar system may be adapted for resolution enhancement in accordance with the invention, since I and Q signals are provided by sonar transducers which may be processed in the same way as radar signals.
Though the example given has been expressed in terms of range (temporal) resolution, a precisely analogous approach could have been employed to yield azimuthal resolution enhancement as the sonar, radar or lidar beam is scanned. The processing then takes place in the orthogonal dimension of the image.
Whereas the foregoing description (with reference to Figure 1 in particular) has referred to calculation of object and image space singular functions from the support and system impulse response, in some cases this is capable of simplification. As indicated in Figures 2 and 3, the support will be zero over some pixels and unity elsewhere. Since the system impulse response is constant, the object and image space singular functions will vary in accordance with the pixels over which the support is unity. Accordingly, rather than computing the singular functions following support determination, the functions may be precalculated and stored in a computer look-up table or memory. The number of possible supports is limited, so that storage of singular functions need not be impracticable. When a particular support is detected, it is then merely necessary to read out the singular functions from the corresponding memory address in order to perform image decomposition and object reconstruction. This procedure should reduce computer time needed for image processing, but at the expense of increasing memory requirements.
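The look-up-table variant might be sketched as below: since each admissible support is just a pair of limits, the singular function sets can be precomputed once per support and fetched by key at run time. The kernel, the set of admissible supports and the dictionary keyed by (y1, y2) are illustrative assumptions.

```python
import numpy as np

def kernel_matrix(n, y1, y2):
    """Supported sinc kernel, columns restricted to object pixels y1..y2."""
    xs = np.arange(n)[:, None]
    ys = np.arange(y1, y2 + 1)[None, :]
    return np.sinc((xs - ys) / 4.0)

# precompute the singular function sets for each admissible support
table = {}
for (y1, y2) in [(30, 36), (28, 37)]:
    table[(y1, y2)] = np.linalg.svd(kernel_matrix(64, y1, y2), full_matrices=False)

V, s, Ut = table[(30, 36)]   # fetched from memory, not recomputed per frame
```

This trades memory for computation exactly as the text describes: one SVD per admissible support at set-up time, a dictionary access thereafter.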
As has been mentioned previously, the invention is applicable only to the case of targets appearing in a negligible background. If the background is not negligible, spurious results can be obtained. Image data would be received from background outside the target region or support, and analysed by the decomposition/reconstruction process as if it came from within that region. It is accordingly important that the assumption of negligible background hold good, and that this be verified for any application of the invention. The assumption is valid in the case of a radar antenna employed to search the sky for aircraft.

Claims (4)

1. An imaging system for imaging targets in a negligible background, the system having a given impulse response and including: (1) an imaging device arranged to provide complex amplitude image data; (2) means for generating from image data a weight function indicating whether individual image features are above or below a given threshold intensity; (3) means for reconstructing object data from a singular function decomposition of image data on the basis of singular functions derived from the weight function and system impulse response, the reconstruction being arranged to omit significantly noise-corrupted terms; and (4) means for generating an image from the reconstructed object data.
2. An imaging system for imaging targets in a negligible background, the system having a given impulse response and including: (1) an imaging device arranged to provide complex amplitude image data; (2) comparing means arranged to indicate whether individual image amplitude values are above or below a given threshold; (3) computing means arranged to: (a) provide a support having values equal or equivalent to unity and zero in accordance with image amplitude above and below the threshold respectively; (b) provide a respective set of singular functions for each of the object and image spaces derived from the system impulse response and the support; (c) decompose image data into image space components expressed as singular functions; (d) convert image space components with energy exceeding system noise into corresponding decomposed object data; and (e) reconstruct object data from its decomposition; and (4) means for providing an image from the reconstructed object data.
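Steps (a) to (e) of claim 2 can be illustrated by a minimal sketch, assuming a discrete one-dimensional model in which the image data is g = A f + noise, with A the system impulse-response matrix. This is not the patented implementation; the names (`A`, `g`, `threshold`, `noise_level`) and the matrix formulation are assumptions made for illustration.

```python
import numpy as np

def reconstruct(g, A, threshold, noise_level):
    # (a) support: unity where image amplitude exceeds the threshold,
    # zero elsewhere
    support = (np.abs(g) > threshold).astype(float)

    # (b) restrict the impulse-response matrix to the support and take
    # its singular value decomposition; the columns of U and rows of Vt
    # play the role of the image- and object-space singular functions,
    # s holds the singular values
    A_s = A * support[:, None]          # zero rows outside the support
    U, s, Vt = np.linalg.svd(A_s, full_matrices=False)

    # (c) decompose the supported image data into image space components
    c = U.conj().T @ (support * g)

    # (d) retain only components whose energy exceeds the noise level;
    # division by the singular values converts them to object data
    keep = (np.abs(c) > noise_level) & (s > 1e-12)
    b = np.where(keep, c / np.where(s > 1e-12, s, 1.0), 0.0)

    # (e) reconstruct the object from the retained terms
    return Vt.conj().T @ b
```

Omitting the noise-dominated terms in step (d) is what stabilises the inversion: small singular values would otherwise amplify noise without bound.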
3. An imaging system substantially as herein described with reference to Figures 1 to 3 or parts 14 to 32 of Figure 1 and Figures 2 to
4.
GB08411913A 1984-05-10 1984-05-10 Imaging system Expired GB2168870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB08411913A GB2168870B (en) 1984-05-10 1984-05-10 Imaging system


Publications (2)

Publication Number Publication Date
GB2168870A true GB2168870A (en) 1986-06-25
GB2168870B GB2168870B (en) 1987-09-03

Family

ID=10560722

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08411913A Expired GB2168870B (en) 1984-05-10 1984-05-10 Imaging system

Country Status (1)

Country Link
GB (1) GB2168870B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1243944A2 (en) * 2001-03-19 2002-09-25 Matsushita Electric Works, Ltd. Distance measuring apparatus
EP1243944A3 (en) * 2001-03-19 2004-10-27 Matsushita Electric Works, Ltd. Distance measuring apparatus
EP1998187A2 (en) * 2001-03-19 2008-12-03 Matsushita Electric Works, Ltd. Distance measuring apparatus
EP1998187A3 (en) * 2001-03-19 2008-12-31 Matsushita Electric Works, Ltd. Distance measuring apparatus
WO2009052663A1 (en) * 2007-10-23 2009-04-30 Jianhua Luo A signal denoising method based on reconstructed signal replacing spectrum data
US8035545B2 (en) 2009-03-13 2011-10-11 Raytheon Company Vehicular surveillance system using a synthetic aperture radar
US7764220B1 (en) * 2009-04-22 2010-07-27 Raytheon Company Synthetic aperture radar incorporating height filtering for use with land-based vehicles
CN103852759A (en) * 2014-04-08 2014-06-11 电子科技大学 Scanning radar super-resolution imaging method
CN103852759B (en) * 2014-04-08 2016-05-25 电子科技大学 Scanning radar super-resolution imaging method
CN111868566A (en) * 2019-10-11 2020-10-30 安徽中科智能感知产业技术研究院有限责任公司 Agricultural machine working area measuring and calculating method based on positioning drift measuring and calculating model
CN111868566B (en) * 2019-10-11 2023-10-03 安徽中科智能感知科技股份有限公司 Agricultural machinery operation area measuring and calculating method based on positioning drift measuring and calculating model


Similar Documents

Publication Publication Date Title
US4716414A (en) Super resolution imaging system
US5734347A (en) Digital holographic radar
US5093563A (en) Electronically phased detector arrays for optical imaging
CN112987024B (en) Imaging device and method based on synthetic aperture laser radar
AU2001297860B2 (en) System and method for adaptive broadcast radar system
Ding et al. THz 3-D image formation using SAR techniques: Simulation, processing and experimental results
Van Zyl Synthetic aperture radar polarimetry
US5394151A (en) Apparatus and method for producing three-dimensional images
US7105820B2 (en) Terahertz imaging for near field objects
Geibig et al. Compact 3D imaging radar based on FMCW driven frequency-scanning antennas
US6643000B2 (en) Efficient system and method for measuring target characteristics via a beam of electromagnetic energy
Blanchard et al. Coherent optical beam forming with passive millimeter-wave arrays
JP2002533685A (en) SAR radar system
CA2428513C (en) Coherent two-dimensional image formation by passive synthetic aperture collection and processing of multi-frequency radio signals scattered by cultural features of terrestrial region
Buell et al. Demonstration of synthetic aperture imaging ladar
JP4966535B2 (en) Interferometric imaging using orthogonal transverse mode diversity
US4011445A (en) Optical array active radar imaging technique
JP2006113584A5 (en)
Vu et al. A comparison between fast factorized backprojection and frequency-domain algorithms in UWB lowfrequency SAR
Liu Optical antenna of telescope for synthetic aperture ladar
GB2168870A (en) Imaging system
CN106170714B (en) Electromagnetic search and identification in near field domain
US4792231A (en) Laser speckle imaging
CN102230963B (en) Multi-sub-aperture optical receiving antenna system of synthetic aperture laser imaging radar
Waldman et al. Submillimeter modeling of millimeter radar systems

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 19960510